US20180077437A1 - Parallel Video Streaming - Google Patents

Parallel Video Streaming

Info

Publication number
US20180077437A1
Authority
US
United States
Prior art keywords
definition video
high definition
clone
video stream
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/434,003
Inventor
Barrie Hansen
Rio Wing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US15/434,003
Publication of US20180077437A1
Legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/222Secondary servers, e.g. proxy server, cable television Head-end
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/29Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/29Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
    • G02F1/291Two-dimensional analogue deflection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365Multiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/242Synchronization processes, e.g. processing of PCR [Program Clock References]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6106Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6125Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/85406Content authoring involving a specific file format, e.g. MP4 format
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/02Casings; Cabinets; Supports therefor; Mountings therein
    • H04R1/028Casings; Cabinets; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4113PC
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/15Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops

Definitions

  • the subject application teaches embodiments that relate generally to streaming audio and video for sports venues, and specifically to video and audio capture, processing, and streaming of sporting events and practices.
  • Cameras used by broadcasters are typically large complicated devices designed for professional camera personnel and include high resolution image capturing elements and expensive lenses with variable zoom. Cameras are typically mounted on tripods, slung from wires above sporting events, or attached to weight-bearing harnesses strapped to camera personnel who position themselves nearby to the action taking place on the field.
  • Cameras, and the expertise required to operate them, create a barrier for new entrants to the market, local small-market producers, schools, and individuals wanting to create audio and video of sporting events, either for their own use or for monetizing their work through third party subscription.
  • Broadcasters can offset the costs of obtaining, maintaining, and operating cameras, editing systems, and other broadcasting expenses through marketing and/or subscription revenues from their larger base of advertisers and/or consumers.
  • the present disclosure presents new modalities for streaming audio and video from sporting venues to viewers.
  • a method includes receiving cloned copies of a number of high definition video streams by a clone-of-a-clone server and streaming parallel video streams to a user computing device, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • the low definition and high definition video streams are substantially identical videos but have different spatial resolutions.
  • the method can include generating a low definition video stream from the high definition video stream by the clone-of-a-clone server.
  • the method can include synchronizing frames of the parallel video streams that are sent to the user computing device.
  • the method can include receiving the parallel streams on the user computing device from the clone-of-a-clone server, displaying each of the low definition video streams on the user computing device, and displaying a selected high definition video stream on the user computing device.
  • the method can include receiving a user selection of one of the low definition video streams on the user computing device, and the high definition video stream that is displayed is based on the user selection.
  • the method can include receiving a second user selection of a second low definition video stream and switching to the high definition video stream associated with the second user selection. The switching is performed substantially seamlessly from the first high definition video stream to the second high definition video stream.
  • Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.
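  • As a non-limiting illustration, the low definition derivation and parallel delivery described above can be sketched with ffmpeg driven from Python: one process passes the high definition stream through untouched while a second output is scaled down to CIF. The stream URLs and bitrate below are hypothetical placeholders, not values from this disclosure.

```python
# Sketch: derive a CIF low definition clone of an incoming 1080p stream
# and publish both renditions in parallel. URLs and keys are hypothetical.
import subprocess

HD_INPUT = "rtsp://clone-server.local/cam1"            # assumed source
HD_OUT = "rtmp://clone-of-a-clone.local/live/cam1_hd"  # assumed sink
LD_OUT = "rtmp://clone-of-a-clone.local/live/cam1_ld"  # assumed sink

subprocess.run([
    "ffmpeg", "-i", HD_INPUT,
    # high definition rendition: pass through without re-encoding
    "-map", "0", "-c", "copy", "-f", "flv", HD_OUT,
    # low definition rendition: re-encode, scaled to CIF (nominally 320x240)
    "-map", "0:v", "-vf", "scale=320:240", "-c:v", "libx264",
    "-b:v", "300k", "-f", "flv", LD_OUT,
], check=True)
```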
  • a system includes a clone-of-a-clone server that is configured to receive a number of high definition video streams and stream parallel video streams to one or more user computing devices, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • the low definition video can be CIF, VGA, 4CIF, or D1 resolution, while the high definition video stream can be 720p, 1 Megapixel, or 1080p. Other resolutions and video encoding standards can be used.
  • the system can include a number of cameras that are configured to stream high definition video streams and a clone server configured to receive the streaming video from the cameras and clone the streaming video onto the clone-of-a-clone server.
  • the clone-of-a-clone server can generate a low definition video stream from each of the high definition video streams that the clone-of-a-clone server receives.
  • the clone-of-a-clone server can synchronize frames of the parallel video streams that are streamed to the user computing device.
  • the system can include a user computing device that is configured to receive the parallel high definition and low definition video streams.
  • the user computing device is configured to display each of the low definition video streams and a selected high definition video stream.
  • the user computing device can be configured to receive a user selection of one of the displayed low definition video streams, display an associated high definition video stream, receive a second user selection, and seamlessly switch to displaying the high definition video stream associated with the second user selection.
  • Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.
  • a system includes a clone-of-a-clone server that is configured to receive a number of cloned high definition video streams, generate low definition video streams from the high definition video streams, and selectively stream parallel high and low definition video streams to a user computing device.
  • the clone-of-a-clone server synchronizes the parallel video streams so as to enable seamless switching on the display of the user computing device between different high resolution video streams.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
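  • One way to realize the frame synchronization described above is to buffer each rendition and release frames only when every parallel stream has reached the same presentation timestamp; a minimal sketch follows, with Frame standing in for whatever demuxed unit the server actually handles.

```python
# Sketch: keep parallel renditions aligned by presentation timestamp (PTS)
# so the client can switch between high definition streams without a jump.
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    pts: float    # presentation timestamp in seconds
    data: bytes

class SyncedFanout:
    def __init__(self, stream_names):
        self.queues = {name: deque() for name in stream_names}

    def push(self, name, frame):
        self.queues[name].append(frame)

    def pop_synced(self):
        # Emit one frame per stream only when every queue has data,
        # aligned to the latest "earliest" timestamp across streams.
        if any(not q for q in self.queues.values()):
            return None
        target = max(q[0].pts for q in self.queues.values())
        out = {}
        for name, q in self.queues.items():
            while q and q[0].pts < target:
                q.popleft()          # drop frames the slowest stream missed
            if not q:
                return None
            out[name] = q.popleft()
        return out
```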
  • FIG. 1 is a diagram of an audio/video system for sporting venues according to an embodiment of the disclosure.
  • FIG. 2 is a diagram of an impact-resistant camera housing according to an embodiment of the disclosure.
  • FIG. 3 is a diagram of a sports helmet with integrated audio/video system according to an embodiment of the disclosure.
  • FIG. 4 is a diagram of example audio/video and network components according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of example operations for networking audio/video components according to an embodiment of the disclosure.
  • FIG. 6 is a flow diagram of example data connections according to an embodiment of the disclosure.
  • FIG. 7 is a diagram of an example screen for selecting from multiple audio and video feeds according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of example operations for custom content creation according to an embodiment of the disclosure.
  • FIG. 9 is a diagram of components of an example computing device configured for audio/video operations according to an embodiment of the disclosure.
  • FIG. 10 is a functional block diagram of example modules of an audio/video streaming system.
  • FIG. 11 is a diagram of example video resolutions.
  • FIG. 12 is a diagram of an example clone streaming system for parallel streams.
  • the systems and methods disclosed herein describe various aspects of real-time video for sporting venues. Although the disclosed system and method are described below with regard to one or more computing devices, and in particular mobile computing devices, the system and method can be used with any suitable computing device, including but not limited to mobile phones, smart phones, pad computing devices, laptops, personal computers, desktops, servers, embedded controllers, and so forth, among other possibilities.
  • the system 100 includes one or more audio/video streaming devices illustrated as cameras 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13.
  • cameras 1, 11, 12, and 13 can be fixed cameras in an arena
  • cameras 2, 5, 6, and 9 can be movable cameras that follow players or the action in the arena
  • camera 4 can be a camera ideally positioned to point at a scoreboard
  • cameras 3, 7, and 8 can be helmet cameras mounted to the helmets of certain players
  • camera 10 can be a pair of helmet cameras configured to provide a 3D virtual reality view from a player's perspective, such as a goalie's view.
  • a wired microphone can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital audio format such as MP3.
  • wired microphones are analog devices that are connected via cables to a head end unit; long cables require sufficient electrical insulation to avoid interference and a substantial wire gauge, which makes them expensive and heavy. Even with quality electrical insulation and properly gauged wire, purely analog solutions are subject to attenuation losses and noise, affecting the signal-to-noise ratio of the signal received at the head end unit.
  • Power over Ethernet advantageously can be used to both provide power to devices and to provide a wired communications medium for the devices.
  • Wireless communications can be effected using Wi-Fi or other wireless protocols, including but not limited to Bluetooth or Li-Fi.
  • the system 100 can include a private network, shown as intranet 110 , configured for data communications between the devices and a streaming system 120 .
  • the streaming system 120 is configured to support audio and video streams from the devices, and convert them as required, as described below in greater detail.
  • the streaming system 120 can include storage 130 for storing the audio and video streams.
  • the streaming system 120 can allow users 150 to stream audio and video from the devices or from storage 130 .
  • the camera housing 200 is configured to withstand vibrations, shocks, and impact to a camera mounted within the camera housing 200 .
  • the camera housing 200 can protect the camera from the impact, and also ensure that parts from a damaged camera, such as glass or electronics, do not end up on spectators or players or on the ice where sharp or heavy pieces might cause injury.
  • the camera housing 200 is structurally configured to protect the camera while allowing connection to electrical components such as cables or wires for power and data communications.
  • the camera housing 200 can also be configured to provide clean air for the camera, and remove heat dissipated by the camera.
  • the camera housing 200 comprises a dome assembly that attaches at one end of a drum 201 .
  • the dome assembly comprises a transparent dome cover 203 and a retainer ring 204 .
  • the dome assembly can be coupled to the drum 201 using complementary threading, screws, nuts, bolts, washers, (not shown) and the like as would be understood in the art.
  • a camera can be mounted inside the camera housing 200 , for example on a support structure having support members (not shown) that contact the interior wall of the drum 201 .
  • the support members can be configured to dampen vibrations as would be understood in the art.
  • An example support structure can be a disk that rests against pliable dampeners that act as support members and that seat the disk along a cross section of the drum 201 .
  • a camera can be mounted to the disk, for example using screws or other suitable fasteners.
  • the drum 201 can include threaded holes 202A, 202B to attach camera angle travel limiters inside the drum 201, thereby limiting the camera rotation to a predetermined angle.
  • Camera angle travel limiters work by limiting camera rotation angle to prevent the camera from becoming damaged during rotation, or to ensure that the camera is always pointed at a certain area of the arena. For example, it may be desirable to use angle travel limiters to ensure that a camera cannot be pointed at spectators accidentally.
  • the threaded holes 202A, 202B do not penetrate the drum 201 and are accessible only from the inside of the drum 201.
  • a mounting cover comprises retainer ring 205 and cover plate 206 .
  • Cover plate 206 can include collar 207 configured to accept a support rod 210 that connects to a support structure 211 and mounting plate 212 .
  • the mounting plate 212 can be attached to a structure in the arena such as a wall, ceiling, support beam, and so forth.
  • a quick link 208 can be used as a backup failsafe to further anchor the camera housing 200 to a wall or support structure, for example using metal cable or rope. This can be used to ensure that the camera housing 200 does not fall onto spectators, players, or the arena if the mounting plate 212 were to become detached for any reason.
  • the support rod 210 can be hollow, providing for passage for electrical components such as wires, cables, and so forth.
  • the cover plate 206 can include threaded screw holes 209A, 209B, 209C, 209D for connecting the cover plate 206 and ring 205 to the drum 201.
  • long screws can be used that pass through the drum 201 and also connect the dome assembly to the drum 201 .
  • the helmet 300 can include one or more cameras 302 and/or microphones 304 .
  • the camera 302 can use a standard definition or high definition frame size and frame rate, such as 720p, 1080i, 1080p, 2K, or 4K at 30 frames per second (fps), 60 fps, or 120 fps, or at lower frame rates.
  • a helmet cam for providing a 3D virtual reality video feed can include two spatially separated cameras 302 as would be understood in the art.
  • the microphone 304 can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3.
  • the camera 302 and microphone 304 can be a single unit.
  • the camera 302 and microphone 304 are in communication with an embedded controller 306 .
  • the embedded controller 306 can include custom designed electronics, for example a chip or microcontroller with a Wi-Fi or other antenna.
  • the embedded controller 306 can include a modified smartphone.
  • the camera element and microphone element from a smartphone can be separated from the modified smartphone and used as camera 302 and microphone 304.
  • the embedded controller 306 can stream one or more video or audio streams from the camera 302 and/or microphone 304 .
  • Data communications from the embedded controller 306 can include Wi-Fi.
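  • For a rough sense of how an embedded controller might serve camera frames over Wi-Fi, the sketch below streams MJPEG over HTTP, chosen only for brevity; the disclosure itself contemplates formats such as H.264 and protocols such as RTSP. It assumes the opencv-python package and a camera at device index 0.

```python
# Sketch: minimal MJPEG-over-HTTP streamer for an embedded controller.
# Illustrative only; requires opencv-python.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import cv2

capture = cv2.VideoCapture(0)   # helmet camera, assumed at index 0

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            _, jpeg = cv2.imencode(".jpg", frame)
            self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n\r\n")
            self.wfile.write(jpeg.tobytes())
            self.wfile.write(b"\r\n")

ThreadingHTTPServer(("", 8080), StreamHandler).serve_forever()
```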
  • a microphone 450, for example a wired microphone configured to be placed near the glass surrounding a hockey rink, can be connected to a proxy server 410 via an Ethernet cable such as a CAT 6 cable.
  • the communications protocol between the proxy server 410 and microphone 450 can be USB over Ethernet, among other possible protocols as would be understood in the art.
  • the Ethernet cable can provide power to the microphone 450 .
  • the microphone can use a wireless network such as Wi-Fi or Li-Fi.
  • An IP camera 460, for example an IP camera configured to be placed inside of the impact resistant camera housing 200 of FIG. 2, can be connected to a PoE switch 430 using a CAT 6 cable.
  • the PoE switch 430 can provide power to the IP camera 460 .
  • the communications protocol between the proxy server 410 and IP camera 460 can be RTSP or real-time streaming protocol, among other possible protocols as would be understood in the art.
  • a wireless helmet camera 470 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440 .
  • Wi-Fi router 440 can be connected to the proxy server 410 via PoE switch 430 or by a direct connection to the proxy server 410 .
  • the communications protocol between the proxy server 410 and wireless helmet camera 470 can be RTSP (real-time streaming protocol), H.264, or H.265, among other possible protocols as would be understood in the art.
  • a wireless microphone 480 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440 .
  • the proxy server 410 can receive digitized audio, for example an MP3 stream, by establishing a connection with the wireless microphone, for example using hypertext transfer protocol, or HTTP. Other communication protocols could also be used as would be understood in the art.
  • the proxy server 410 receives audio and video streams from microphones 450 , 480 and cameras 460 , 470 .
  • the proxy server 410 can store the streams to a memory, such as data store 420 for archiving or temporary storage.
  • the proxy server 410 and data store 420 can reside on the same hardware.
  • the proxy server 410 can convert each video or audio stream to one or more common formats, sampling or compression rates, and frame sizes.
  • the proxy server 410 can receive a video stream and convert it to a standard H.264 or MPEG video stream prior to storing in data store 420 .
  • the proxy server 410 can store two or more different video streams from the same received video stream.
  • the proxy server 410 can convert a received video stream into a small thumbnail-sized video stream and a full size video stream.
  • two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
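  • A compact way to realize the store-and-clone behavior described above is ffmpeg's tee muxer, which writes one received stream to the data store and to a second proxy simultaneously without re-encoding; the paths and hostnames below are illustrative assumptions.

```python
# Sketch: proxy server 410 archiving an incoming feed while cloning it to
# a second proxy in one pass. All endpoints are hypothetical.
import subprocess

SOURCE = "rtsp://192.168.1.60/stream1"            # IP camera 460 (assumed)
ARCHIVE = "/data/store/cam460.mp4"                # data store 420 (assumed)
CLONE = "rtmp://second-proxy.local/live/cam460"   # clone target (assumed)

subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c", "copy",                 # no re-encode: archive and clone as-is
    "-f", "tee", "-map", "0",
    f"{ARCHIVE}|[f=flv]{CLONE}",  # tee muxer: both outputs at once
], check=True)
```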
  • Operation commences at start block 500 labeled “START” and proceeds to process block 502 .
  • At process block 502, the wireless device is powered on. Processing continues to process block 504.
  • the wireless device detects a Wi-Fi network.
  • the wireless device can be preconfigured to connect to a specific Wi-Fi network by name, or service set identifier (SSID).
  • the Wi-Fi network may be configured not to broadcast the SSID, for example to prevent the wireless network from being visible on spectators' mobile devices in the arena.
  • the wireless device may detect the Wi-Fi network by querying for the Wi-Fi network using the preconfigured SSID. Processing continues to decision block 506 .
  • At decision block 506, if the wireless device has previously received an IP address, then processing continues to process block 514; otherwise processing continues to process block 508.
  • At process block 508, the wireless device requests an IP address using the Dynamic Host Configuration Protocol, or DHCP. Processing continues to process block 510.
  • a DHCP server receives the DHCP request from the wireless device and provides an IP address to the wireless device.
  • the DHCP server reserves a fixed IP address for each wireless device.
  • reserving a fixed IP address for each wireless device facilitates determining which video or audio feed belongs to each wireless device.
  • a fixed or reserved IP address simplifies the process of allowing multiple users to receive video feeds from specific wireless devices, as players have helmet cams that may disconnect and reconnect to the Wi-Fi network as they move about the arena during game play. Without fixed or reserved IP addresses, the IP addresses of helmet cams could change during game play, forcing live streams to disconnect and reconnect. Processing continues to process block 512.
  • At process block 512, the wireless device receives the IP address from the DHCP server. Processing continues to process block 514.
  • the wireless device streams audio and/or video to the proxy server using the configured IP address. Processing continues to decision block 516 .
  • At decision block 516, if the connection to the wireless device drops, then processing continues to decision block 518; otherwise processing continues back to process block 514 to continue streaming the audio and/or video.
  • At decision block 518, if the connection has dropped due to a power off event or a signal to end streaming, then processing terminates at end block 520; otherwise processing continues back to process block 504 to attempt to reconnect to the Wi-Fi network.
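  • The flow of FIG. 5 reduces to a reconnect loop; the sketch below mirrors the blocks above, with stub functions standing in for the device's actual Wi-Fi stack and streamer (the stubs are illustrative, not part of the disclosure).

```python
# Sketch: FIG. 5 control flow as a reconnect loop with stubbed helpers.
import time

def join_network(ssid):          # block 504: query for the hidden SSID
    print(f"joining {ssid}")

def have_ip():                   # decision block 506
    return False

def request_dhcp_lease():        # blocks 508-512: request/receive address
    print("requesting reserved IP via DHCP")

def stream_once():               # block 514: stream until the link drops
    return "power_off"

def run_device(ssid):
    while True:
        join_network(ssid)
        if not have_ip():
            request_dhcp_lease()
        reason = stream_once()
        if reason in ("power_off", "end_stream"):  # decision block 518
            return                                  # end block 520
        time.sleep(1)      # link dropped: retry the Wi-Fi network (504)

run_device("ARENA-CAMS")   # hypothetical SSID
```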
  • example data connections are illustrated for an embodiment of the audio/video system 600 .
  • an arena 602 such as a hockey arena, a sporting venue, or an entertainment venue in general
  • one or more fixed or moveable cameras 604 , helmet cams 606 , and microphones 608 are in data communication with a proxy server 612 through data communications equipment represented by wireless hub 610 .
  • the proxy server 612 provides one or more ports through which video and audio data streams can be accessed by users 630 , either in real-time or through viewing stored data streams.
  • a firewall 614 such as a specially configured router or dedicated piece of data communications equipment, prevents unauthorized users 630 from accessing data streams from the proxy server 612 .
  • users 630 first access a website system 620, which provides authentication information for accessing the data streams through the firewall. Authenticated users 630 connect through the firewall to the proxy server 612, and selected data streams are obtained from the proxy server 612 and presented on the screens of users 630.
  • the website system 620 is able to connect through the firewall 614 and connect to the proxy server 612 that streams to the website system 620 . Users 630 that are authenticated on the website system 620 receive data streams that pass through the website system 620 from the proxy server 612 .
  • two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
  • Multiple end users 630 can simultaneously use the audio/video system 600 .
  • the audio/video system 600 can simultaneously support multiple events occurring in different venues.
  • the audio/video system 600 can allow users 630 to create their own customized streams. For example, a first end user 632 can view different live streams from the audio/video system 600 during a particular sporting event.
  • a second end user 634 can generate a customized stream based on a current live stream, or stored data streams of a previous sporting event.
  • a third end user 636 can stream the customized stream of the second end user 634 .
  • Each end user 630 can use a different kind of computing device, for example a mobile device such as a smartphone or tablet, a laptop, a desktop, and so forth.
  • the first end user 632 can be streaming to a mobile computing device that is using a dedicated application or app that has been downloaded to the device.
  • the second end user 634 can be using a high end workstation with a fast Internet connection for editing and generating their customized stream.
  • the third end user 636 can be using an Internet browser and clicking a link to access the customized stream of the second end user 634.
  • the bit rate, frame rate, and frame size of the video and audio streams can be optimized for the type of end user computing device and connection speed.
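  • From the user's side, the authenticate-then-stream flow described above might look like the sketch below; the login endpoint, token scheme, and stream path are assumptions, since the disclosure specifies only that the website system provides authentication information for firewall access.

```python
# Sketch: end user 630 authenticating with website system 620, then
# pulling a stream through firewall 614. All endpoints are hypothetical.
import requests

SITE = "https://website-system.example"

session = requests.Session()
login = session.post(f"{SITE}/login",
                     json={"user": "fan1", "password": "secret"})
token = login.json()["token"]          # assumed token scheme

resp = session.get(f"{SITE}/streams/camera/3",
                   headers={"Authorization": f"Bearer {token}"},
                   stream=True)
for chunk in resp.iter_content(chunk_size=65536):
    pass                               # hand each chunk to the player here
```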
  • thumbnail views 710 from each of the cameras and microphones.
  • Some thumbnail views 710 may not include audio or video, either because the feed does not include audio or video, or due to a lost connection.
  • Some thumbnail views, such as thumbnail view 10, may include a left and right view, allowing a user with a 3D viewing device connected to their video device to view a sporting event as a virtual reality experience from one or more of the players' perspectives.
  • the user can select from one or more of the thumbnail views 710 , for example by clicking on a particular thumbnail view 710 or dragging a thumbnail view to a focus window 720 .
  • the currently selected video is presented in a focus window 720 that typically is larger than the thumbnail views.
  • Clicking a camera icon associated with each thumbnail view 710 allows a user to select whether video, audio, or both are to be presented to the user, for example via the focus window 720 .
  • a user can select video from one device and audio from another device.
  • the user can customize the screen 700 , for example to reorganize the order or size of the thumbnail views 710 , to have two or more focus windows. Different user controls and window arrangements can be presented to the user as would be understood in the art.
  • the focus window 720 can be selected by the user and clicked to toggle between full screen and the illustrated split screen that includes both the focus window 720 and the thumbnail view 710 .
  • clicking on the focus window 720 will cycle between a group of selected thumbnail views 710 . This can be particularly useful to a user viewing the event using VR or 3D viewing devices.
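  • The screen behavior just described (independent audio and video selection, a focus window, full-screen toggling, and cycling through a group of views) can be captured in a small client-side model; the sketch below is purely illustrative.

```python
# Sketch: client-side state for the FIG. 7 selection screen.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Screen:
    thumbnails: list                  # device ids shown as thumbnails 710
    video_source: int = None          # device feeding focus window 720
    audio_source: int = None          # audio may come from another device
    full_screen: bool = False
    _cycler: object = field(default=None, repr=False)

    def select(self, device, video=True, audio=True):
        if video:
            self.video_source = device
        if audio:
            self.audio_source = device

    def toggle_full_screen(self):     # click focus window 720
        self.full_screen = not self.full_screen

    def cycle_group(self, group):     # step through selected thumbnails
        if self._cycler is None:
            self._cycler = cycle(group)
        self.video_source = next(self._cycler)

screen = Screen(thumbnails=list(range(1, 14)))
screen.select(3, audio=False)         # video from helmet cam 3
screen.select(8, video=False)         # audio from device 8
```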
  • Operation commences at start block 800 labeled “START” and proceeds to process block 802 .
  • the streaming system receives streams from devices such as cameras and microphones. Processing continues to process block 804 .
  • the streaming system streams one or more device streams to users 808 , for example through the selection screen 700 of FIG. 7 .
  • users 808 can join a live stream of a sporting event or view a saved stream in process block 806 .
  • Processing continues to decision block 810 .
  • At decision block 810, if the streaming system is configured to auto-select the focus window, then processing continues to process block 812; otherwise processing continues to process block 814.
  • the streaming system selects a feature that is used to determine the focus window.
  • the streaming system can select the feature to be the camera where the puck is located, or the microphone that is loudest.
  • the selected feature can change dynamically during the game or practice.
  • the selected feature can be the penalty box subsequent to determining that an official has blown a whistle and the clock has been stopped, or the scoreboard after a change to a score on the scoreboard, or a particular player when that player enters the ice in the arena.
  • the streaming system attempts to select devices to present the best user experience of the sporting event. Processing continues to process block 818 where the streaming system determines the focus window based on the selected feature.
  • At decision block 814, if a user manually selects a feature to use as the selected feature, then processing continues to process block 816 to receive the user selection; otherwise processing continues to decision block 820.
  • the streaming system receives a selection of a feature to use for selecting the focus window from the available devices.
  • a user who is a scout may desire to follow one particular athlete, and thus use the streaming system in a scouting mode.
  • the scout may select as the feature a jersey number of the particular athlete, in which case the streaming system in process block 818 will determine which camera shows the athlete's jersey number best.
  • an avid fan of a particular player may desire to have that player as the focus of attention while still watching the game in progress, in which case the camera could be selected that displays both the player and the puck the majority of the time, while the selected audio device could be the player's helmet microphone or the audio device closest to the player. Processing continues to process block 818.
  • the streaming system determines the focus window from the available cameras and microphones.
  • the streaming system can track players on the ice, or on other playing surfaces for other sports, and use player position and motion data to determine the best camera and microphone to use in the focus window.
  • the streaming system can use the selected feature from process block 812 and/or process block 816 in determining the best device to display in the focus window.
  • the streaming system can determine when a particular device is not streaming, or has a connection issue, and switch to the next best device. Processing continues to decision block 820 .
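  • The feature-driven selection of process blocks 812-818 amounts to scoring the connected devices and picking the best one; a minimal sketch follows, in which the per-device metrics and weights are assumptions rather than values from the disclosure.

```python
# Sketch: auto-selecting the focus window by scoring candidate devices.
def pick_focus_window(devices):
    best_id, best_score = None, float("-inf")
    for dev in devices:
        if not dev["connected"]:
            continue                         # skip devices with link issues
        score = (3.0 * dev["feature_score"]  # e.g. puck visibility
                 + 1.0 * dev["audio_level"])
        if score > best_score:
            best_id, best_score = dev["id"], score
    return best_id                           # next-best device if one drops

cams = [
    {"id": 1, "connected": True,  "feature_score": 0.9, "audio_level": 0.2},
    {"id": 2, "connected": False, "feature_score": 0.8, "audio_level": 0.9},
    {"id": 5, "connected": True,  "feature_score": 0.4, "audio_level": 0.7},
]
print(pick_focus_window(cams))   # -> 1
```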
  • At decision block 820, if a user selects a particular device to use in the focus window, for example to override a device selected by the streaming system at process block 818, then processing continues to process block 822; otherwise processing continues to decision block 824.
  • At process block 822, the streaming system changes the focus window to the user selected device or devices. Processing continues to decision block 824.
  • At decision block 824, if the user adds user-created content to the content stream, then processing continues to process block 826; otherwise processing continues to decision block 828.
  • a user adds user-created content to the content stream.
  • the user may have a microphone connected to their computing device and can add live commentary, such as player analysis or real-time play-by-play announcing such as is performed by professional announcers and commentators.
  • sophisticated users can include user-created video such as replay clips or on-screen annotation. Processing continues to decision block 828 .
  • At decision block 828, if the stream is offered to users, then processing continues to process block 830; otherwise processing continues to decision block 834.
  • a custom stream can be saved.
  • metadata is saved that includes time-stamped tracking of which device(s) were selected for the focus window(s). In this way, the custom stream can be recreated as needed from saved video streams.
  • a new stream can be saved separately for each custom created stream.
  • the original sources or streams can be saved for a configurable period of time, and then purged at a particular expiration date to recover storage space.
  • custom streams can be saved and stored for a period of time before being purged. For example, a single custom stream created by the streaming system might be stored indefinitely, while the remaining streams are purged. Processing continues to process block 832 .
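  • The metadata-based save described above can be as simple as a time-stamped log of focus-window changes that is later replayed against the archived source streams; the JSON shape below is an illustrative assumption.

```python
# Sketch: recreating a custom stream from time-stamped focus metadata.
import json, time

class CustomStreamLog:
    def __init__(self):
        self.events = []

    def record_focus_change(self, device_id, t=None):
        self.events.append({"t": time.time() if t is None else t,
                            "focus_device": device_id})

    def save(self, path):
        with open(path, "w") as fh:
            json.dump({"events": self.events}, fh)

    @staticmethod
    def replay(path):
        # Yields (timestamp, device): switch the focus window to `device`
        # at `timestamp` within the saved source streams.
        with open(path) as fh:
            for ev in json.load(fh)["events"]:
                yield ev["t"], ev["focus_device"]

log = CustomStreamLog()
log.record_focus_change(3, t=0.0)     # open on helmet cam 3
log.record_focus_change(4, t=72.5)    # cut to the scoreboard camera
log.save("custom_stream.json")
```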
  • users can be invited to view a custom stream.
  • a stream automatically generated by the streaming system can be shown on a schedule of available live or saved games for viewing by users.
  • the streaming system can also include user-created custom streams in the schedule, and allow other users to rate user-created streams.
  • a user that creates custom content can generate a link to their custom stream that can be forwarded to other users, for example through social media.
  • a link can be placed on a FACEBOOK page, a clip and link uploaded to the user's INSTAGRAM or TWITTER account, or a link can be emailed to potentially interested parties, for example using an email list and advertisement.
  • Other uses of social media either currently extant or yet to be developed, can be utilized as would be understood by one of skill in the art. Processing continues to decision block 834 .
  • At decision block 834, if the sports event is determined to be over or if the saved stream has concluded, then processing terminates at end block 836; otherwise processing continues back to process block 804 to continue streaming content to users.
  • the costs of creating audio-video content are substantially reduced by allowing users, or the streaming system itself, to determine which video and audio stream to use as the focus window(s), especially when compared to the costs incurred by professional broadcast services such as the major television networks.
  • the use of relatively inexpensive cameras, microphones, and networking equipment allows that equipment to be more or less permanently placed in a sporting venue and used for whatever events occur in the venue, whether they are sporting events, entertainment events, or other events. This opens the opportunity to allow streaming of practices, pre-season games, minor-league games, club-level events, and even high-school events to interested parties.
  • the present system democratizes the capture, production, and distribution of content from all levels of sporting venues.
  • Example computing devices 900 can be servers, desktop systems, mobile computing devices, embedded controllers, wireless cams and microphones, and so forth. Included are one or more processors, such as that illustrated by processor 904 . Each processor is suitably associated with non-volatile memory, such as read only memory (ROM) 910 and random access memory (RAM) 912 , via a data bus 914 .
  • Processor 904 is also in data communication with a storage interface 916 for reading or writing to a data storage system 918 , suitably comprised of a hard disk, memory or solid-state disk, or any other suitable data storage as will be appreciated by one of ordinary skill in the art.
  • Processor 904 is also in data communication with a network interface controller (NIC) 930 , which provides a data path to any suitable wired or physical network connection via physical network interface 934 , or to any suitable wireless data connection via wireless network interface 938 or cellular interface 936 , such as one or more of the networks detailed above.
  • Processor 904 is also in data communication with an input/output (I/O) interface 940 which provides data communication with devices such as a microphone 946 or camera 948 or user peripherals, such as a touchscreen display 944 , keyboard, or mouse or any other suitable user interface.
  • functional units are suitably comprised of intelligent units, including any suitable hardware or software platform.
  • a user interface module 1002 serves web pages to users and administrators that provide a graphical user interface for logging into the system, viewing camera and microphone locations, viewing calendars of upcoming sporting events and archived streams, selecting sporting events or recorded streams to view, receiving video and audio streams from the proxy server through the firewall, customizing the user's thumbnail and focus window views, and interacting with the system in general.
  • User accounts, configuration data, calendar information, stream information, and other data can be stored in a database 1010 or other suitable memory.
  • a scheduler engine 1004 can schedule recordings of sporting events by the proxy server.
  • An analytics engine 1006 can analyze video and audio streams. For example, the analytics engine 1006 can determine when a video or audio feed has disconnected, and switch a user's focus window to another available stream and switch back once the video or audio feed reconnects. Similarly, the analytics engine 1006 can monitor video or audio streams and either blackout some or all of a stream in real-time, or switch the focus window to a different stream. The analytics engine 1006 can be used to detect objectionable language in audio, or objectionable images in a video feed, for example nudity, political messages, unauthorized advertising, excessive violence, and so forth. In a configuration, the analytics engine 1006 can be rules-based or use heuristics or other suitable analytics to perform an analysis of one or more streams.
  • the analytics engine 1006 can receive a copy of the streams from a proxy server, or clone-of-a-clone system of FIG. 12 , and the analytics engine 1006 can be executing on any suitable system as would be understood in the art.
  • the analytics engine 1006 can also track selected features for determining which stream to use in a focus window. For example, when the website system is being used by a user who is a scout, or if the system is set to use a scout mode, an individual player can be tracked in multiple video streams, for example by jersey number. The analytics engine 1006 can determine the optimal video and audio streams to use to track the selected player or feature being tracked.
  • the analytics engine 1006 can also perform analysis of helmet cam video and/or audio, for example to track where a player is looking or to determine how the player is moving the helmet.
  • the analytics engine 1006 can determine if rapid helmet movements are suggestive of violent impacts which could cause concussions.
  • the analytics engine 1006 can monitor a helmet cam for video and/or audio that might indicate a concussion, injury, or exhaustion of the player. Examples include movements of the helmet that are atypical for the player, such as looking down more often, looking up, not turning the head in one particular direction, not following the puck (or a ball, as might be used in other sporting events) or following it with a delay, not looking where other players are looking, and so forth.
  • a player's typical pattern of helmet movements can be analyzed and saved for reference and comparison.
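  • In the spirit of the monitoring just described, one possible check compares recent helmet angular speed against the player's saved baseline; the 3-sigma threshold and units in the sketch below are illustrative assumptions, not values from the disclosure.

```python
# Sketch: flag atypical helmet motion against a player's saved baseline.
import statistics

def atypical_motion(recent_samples, baseline_samples):
    """recent_samples: recent angular speeds (deg/s) from the helmet cam.
    baseline_samples: the player's saved typical angular speeds."""
    mean = statistics.fmean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    recent = statistics.fmean(recent_samples)
    # Flag motion more than three deviations from the player's own norm,
    # e.g. a violent impact, or an unusually still or downcast head.
    return abs(recent - mean) > 3 * stdev

baseline = [40, 55, 48, 52, 45, 60, 50, 47]
if atypical_motion([210, 180, 195], baseline):
    print("alert coach or medical professional")   # e.g. text or email
```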
  • the analytics engine 1006 can send an alert to a coach or medical professional via a text message, email, or other suitable alert, for example using the user interface 1002 .
  • a tracking engine 1008 can track one or more players' movements in the arena.
  • the tracking engine 1008 can turn a player's movements into vector data, or any other suitable position data.
  • the tracking engine 1008 can work in conjunction with the analytics engine 1006 .
  • the tracking engine 1008 can provide player position or vector data to the analytics engine 1006 that is used to determine which camera and audio feed to use in the focus window(s).
  • each player can be analyzed to create a digital representation of that player.
  • Example data that can be determined can include position, speed, direction, acceleration, deceleration, linearity, non-linearity, circularity, time, and other measurements as would be understood in the art.
  • the tracking engine 1008 and analytics engine 1006 can determine the correct camera frame to provide to a user based on the player data. For example, the system can sum all of the vectors or kinetic energy for each frame and/or camera stream and switch the focus window to a particular camera stream based on that calculation.
  • tracking data can be combined with video data to provide a visual representation of players' movements during practice or a game.
  • tracking data and/or analytics data can be combined with video and/or audio data to provide player performance information to coaches, scouts, and interested viewers and fans.
  • the tracking engine 1008 can receive position data from helmet cams, for example position data derived from GPS or radio signal triangulation. Tracking and analytics data can be stored in the database 1010 or any other suitable memory.
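  • The vector-summation approach above can be sketched directly: sum the kinetic energy of the players visible to each camera and switch the focus window to the camera with the most action. The masses, speeds, and visibility sets are assumed inputs from the tracking engine.

```python
# Sketch: pick the focus camera by summed player kinetic energy per frame.
def camera_energy(players_in_frame):
    # kinetic energy 1/2 * m * v^2 summed over players in one camera view
    return sum(0.5 * p["mass"] * p["speed"] ** 2 for p in players_in_frame)

def busiest_camera(visibility):
    """visibility: camera id -> players tracked in that camera's frame."""
    return max(visibility, key=lambda cam: camera_energy(visibility[cam]))

tracking = {
    1: [{"mass": 90, "speed": 7.5}, {"mass": 85, "speed": 6.0}],
    2: [{"mass": 88, "speed": 2.0}],
}
print(busiest_camera(tracking))   # -> camera 1
```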
  • Standard resolutions can include common intermediate format, or CIF, at 352×240 or 352×288 pixels, VGA at 640×480 pixels, and 4CIF/D1 at 704×480, 704×576, or 720×480 pixels.
  • High definition resolutions can include 720p at 1280×720 pixels, 1 Megapixel at 1280×1024 pixels, and 1080p at 1920×1080 pixels.
  • Ultra high resolution formats are also contemplated, for example QHD at 2560×1440 pixels, UHD or 4K at 3840×2160 pixels, and so forth.
  • Standard resolution can also include QCIF at 176×120 or 176×144 pixels. Streaming video can be interlaced or progressive scan as appropriate for the resolution.
  • Audio can similarly be encoded, for example as 19.2 kb/s PCM, 9.6 kb/s ADPCM, MP3, or any other suitable encoding or compression as would be understood in the art.
  • the disclosed resolutions are presented as non-limiting examples only. Other suitable resolutions can also be used as would be understood in the art.
  • a plurality of cameras 1202 are configured to stream video across one or more local network connections 1204 to a clone server 1206 .
  • the cameras 1202 can be configured to stream a high definition video stream, such as 1080p at 1920 ⁇ 1080 pixels.
  • one or more cameras 1202 can be configured to stream both a low definition video stream, such as CIF at 320 ⁇ 240 pixels, and a high definition video stream.
  • different cameras 1202 can stream in different resolutions. For example, camera 1 could stream in 1080p, while camera 2 streams in 4k and camera n streams using 1 megapixel streaming.
  • Each camera 1202 streams across a local network connection 1204 , such as a LAN, WiFi, LiFi, Power over Ethernet, or any other suitable network for example as described with respect to the devices of FIG. 1 .
  • the clone server 1206 receives each of the streams from the cameras 1202 .
  • the clone server 1206 can store each of the streams from each of the cameras 1202 .
  • the streams are stored temporarily, or ephemerally, before being streamed to one or more clone-of-a-clone servers 1210 and/or to cloud storage 1213 .
  • the clone server 1206 can store each stream for a longer period of time, for example as permanent storage.
  • the clone server 1206 is in network communication, for example using a VPN or virtual private network, with one or more clone-of-a-clone servers 1210 through firewall 1207 , which can be a suitable router or other suitable network element.
  • the clone server 1206 clones the live video streams 1208 from the cameras 1202 onto the clone-of-a-clone server 1210 .
  • Each clone-of-a-clone server 1210 receives live video streams 1208 associated with each of the cameras 1202 .
  • each clone-of-a-clone server 1210 can receive live video streams 1208 from a subset of all of the available cameras 1202 associated with the clone server 1206 .
  • each clone-of-a-clone server 1210 can receive live video streams 1208 from multiple clone servers 1206 and associated cameras 1202 .
  • the clone-of-a-clone servers 1210 can be anywhere in the network, for example in the cloud 1216 as shown, at an ISP or Internet Service Provider, in a colocation premises, in the arena 1214 or any other suitable place.
  • the clone-of-a-clone server 1210 can be hosted by a service company that provides high speed cloud hosting services, such as AMAZON, as would be understood in the art.
  • the clone server 1206 also sends recorded video streams 1209 to cloud storage 1213 .
  • Cloud storage 1213 can include network servers, redundant network storage hosted by third party companies, and other suitable cloud storage as would be understood in the art.
  • the recorded video streams 1209 can include live video streams.
  • the clone server 1206 , the clone-of-a-clone server 1210 , and cloud storage 1213 allow the system architecture to easily scale to support any number of cameras 1202 and users.
  • the clone server 1206 aggregates video streams from multiple cameras 1202 . Additional clone servers 1206 can be used to accommodate more cameras 1202 as needed.
  • Each clone-of-a-clone server 1210 receives cloned video streams from one or more clone servers 1206 and supports forwarding video streams to multiple users. Additional clone-of-a-clone servers 1210 can be used to accommodate more users when needed.
  • Cloud storage 1213 can be scaled as necessary to support automated recording of live video streams and playback of video streams by users.
  • a web server 1211 can provide front end web services for users to interact with the system and gain access to the live video streams and recorded video from the clone-of-a-clone servers 1210 and cloud storage 1213 .
  • Clone-of-a-clone servers 1210 can be configured to perform other services, for example archiving video, providing user video editing functions, and so forth.
  • one or more cameras 1202 stream only a single stream of video, for example a single high definition 1080p stream.
  • the clone-of-a-clone server 1210 receives a clone of each high definition stream from the clone server 1206 and the clone-of-a-clone server 1210 creates an additional low definition video stream such as a CIF stream based on the received high definition stream.
  • the clone server 1206 receives the high definition stream and creates the additional low definition video stream such as a CIF stream based on the high definition stream received from the cameras 1202 .
  • the clone server 1206 receives a single stream from some cameras 1202 and multiple streams from other cameras 1202 ; the clone server 1206 or the clone-of-a-clone server 1210 generates a second stream, for example a second low definition stream, for cameras 1202 that only provide a single stream.
  • a consumer for example a user or business located in a consumer premises 1218 such as a home or business office, uses a computing device 1220 that establishes a network connection, for example over the Internet 1212 , with the web server 1211 .
  • the user interacts with the web server 1211 to view live video streams or recorded video from the clone-of-a-clone servers 1210 or cloud storage 1213 .
  • the computing device 1220 can be a personal computer, a laptop, a tablet device, a smartphone, a smart TV, a video game device, a television set top box, or any other suitable computing device as would be known in the art.
  • the computing device receives parallel streams from each of the cameras 1202 , or parallel streams from a subset of the cameras 1202 over the network connection. For example, as illustrated in FIG. 12 , the computing device 1220 receives both a low definition CIF stream and a high definition HD stream as parallel streams from each of the cameras 1202 over a network connection via the Internet 1212 .
  • the computing device 1220 can be configured to display the received streams in any suitable or desired configuration or format.
  • the computing device 1220 can run software that displays multiple streams from the cameras 1202 in low definition in smaller preview windows 1222 and a high definition stream of one of the cameras 1202 in a large focus window 1224 .
  • a user can select any one of the smaller preview windows 1222 to display the high definition stream of the selected camera 1202 in the large focus window 1224 .
  • the computing device receives both a low definition stream and a high definition stream associated with each of the cameras 1202 , there is no delay, or minimal delay, that is perceived by the user as the user switches between streams from different cameras 1202 in the large focus window 1224 . Also advantageously, because the low definition streams are displayed in smaller preview windows 1222 , the user does not perceive that those streams are presented in low resolution because of the smaller size of the smaller preview windows 1222 .
  • Both the low definition stream and the high definition stream from each camera can be synchronized, such that the start of each frame of video for both streams is in sync. This advantageously allows the picture displayed in both the smaller preview window 1222 and the large focus window 1224 to be in perfect sync, preventing the user from perceiving temporal differences. Also, the parallel streams from each of the cameras 1202 can be in sync so that the picture in the large focus window 1224 can smoothly switch between high definition streams from different cameras 1202 without displaying partial frames or experiencing temporal delays during a switch between video sources (see the sketch following this list).
  • the smaller preview windows 1222 can have the same pixel resolution as the pixel size of the low definition streams from cameras 1202 . This can reduce the computation load on the computing device 1220 , which does not have to remap each of the pixels of the low definition streams into a different pixel size of the smaller preview windows 1222 .
  • the pixel size of the high definition stream can be the same as the pixel size of the large focus windows 1224 .
  • the smaller preview windows 1222 or large focus windows 1224 can have different pixel sizes than the low definition streams or high definition streams, respectively, and the computing device 1220 can remap the streams onto the screen as would be understood in the art.
  • the computing device 1220 can receive the low definition streams and the high definition streams in a desired resolution and/or frame rate from the clone-of-a-clone server 1210 .
  • the streaming system 1200 presented herein provides the user with a seamless visual experience as the user switches between the different views from each of the cameras 1202 .
  • video compression can be used to reduce the overall bandwidth required.
  • video streams can be compressed using compression algorithms such as MP4, H.264, H.265 or other forms of compression as would be understood in the art.
  • the low definition and high definition streams can share a common audio stream to further reduce bandwidth.
  • the low definition streams and high definition streams can be separately streamed in distinct network connections to the computing device 1220 .
  • streams can be combined into a single network connection.
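  • As an illustration of the synchronized switching described in this list, the short Python sketch below switches the focus window between cameras only at a frame boundary. It is a minimal sketch only, not the disclosed implementation: the Stream class, frame naming, and print-based rendering are assumptions standing in for real decoders and display windows.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Stream:
    """Toy stand-in for one decoded video stream; frames are keyed by a
    timestamp shared across all parallel streams."""
    frames: Dict[int, str] = field(default_factory=dict)

    def frame_at(self, ts: int) -> str:
        return self.frames.get(ts, "<missing frame>")

class ParallelStreamPlayer:
    """Switches the focus window between cameras only at frame
    boundaries so that no partial frames are displayed."""

    def __init__(self, low: Dict[int, Stream], high: Dict[int, Stream]):
        self.low, self.high = low, high
        self.focus = next(iter(high))        # camera shown in the focus window
        self.pending: Optional[int] = None

    def select_camera(self, cam: int) -> None:
        self.pending = cam                   # applied at the next frame boundary

    def render_tick(self, ts: int) -> None:
        if self.pending is not None:         # frame boundary: apply the switch
            self.focus, self.pending = self.pending, None
        for cam, s in self.low.items():      # small preview windows
            print(f"preview[{cam}] <- {s.frame_at(ts)}")
        print(f"focus <- {self.high[self.focus].frame_at(ts)}")

# Two cameras, each streaming a synchronized CIF/HD pair of frames.
low = {c: Stream({t: f"cif-cam{c}-frame{t}" for t in (0, 1)}) for c in (1, 2)}
high = {c: Stream({t: f"hd-cam{c}-frame{t}" for t in (0, 1)}) for c in (1, 2)}
player = ParallelStreamPlayer(low, high)
player.render_tick(0)                        # focus on camera 1
player.select_camera(2)                      # user clicks preview window 2
player.render_tick(1)                        # switch lands cleanly on frame 1
```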

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Nonlinear Science (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Acoustics & Sound (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The system and method includes a clone of a clone server that receives cloned copies of high definition video streams. The clone of a clone server generates low definition video streams from the high definition video streams. The clone of a clone server streams parallel video streams, including both the high definition video streams and the low definition video streams, to user computing devices. The clone of a clone server synchronizes the parallel video streams that are sent to the user computing device. The user computing device displays the received low definition video streams, and a user selects one or more of the low definition video streams to be displayed in high definition on the user computing device.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/385,605, filed Sep. 9, 2016, which is incorporated herein by reference.
  • TECHNICAL FIELD
  • The subject application teaches embodiments that relate generally to streaming audio and video for sports venues, and specifically to video and audio capture, processing, and streaming of sporting events and practices.
  • BACKGROUND
  • Professional broadcasters capture live action events at sporting venues and broadcast live or recorded video to subscribers and television viewing audiences. When sporting events are broadcast, viewers generally are limited to viewing an event through the viewpoint of a single camera selected by producers from one or more cameras that capture the sporting event. Most practices and some pre-season games are not broadcast, and minor league games, club level events, and high school sporting events are rarely broadcast or recorded at all. Cameras used by broadcasters are typically large, complicated devices designed for professional camera personnel and include high resolution image capturing elements and expensive lenses with variable zoom. Cameras are typically mounted on tripods, slung from wires above sporting events, or attached to weight-bearing harnesses strapped to camera personnel who position themselves near the action taking place on the field. The cameras, and the expertise required to operate them, create a barrier for new entrants to the market, local small-market producers, schools, and individuals wanting to create audio and video of sporting events, either for their own use or for monetizing their work through third-party subscriptions. Broadcasters can offset the costs of obtaining, maintaining, and operating cameras, editing systems, and other broadcasting expenses through marketing and/or subscription revenues from their larger base of advertisers and/or consumers. The present disclosure presents new modalities for streaming audio and video from sporting venues to viewers.
  • SUMMARY
  • A method includes receiving cloned copies of a number of high definition video streams by a clone-of-a-clone server and streaming parallel video streams to a user computing device, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams. The low definition video stream can use the common intermediate format or CIF nominally at 320×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels. The low definition and high definition video streams are substantially identical videos but have different spatial resolutions. The method can include generating a low definition video stream from the high definition video stream by the clone-of-a-clone server. The method can include synchronizing frames of the parallel video streams that are sent to the user computing device. The method can include receiving the parallel streams on the user computing device from the clone-of-a-clone server, displaying each of the low definition video streams on the user computing device, and displaying a selected high definition video stream on the user computing device. The method can include receiving a user selection of one of the low definition video streams on the user computing device, and the high definition video stream that is displayed is based on the user selection. The method can include receiving a second user selection of a second low definition video stream and switching from the displayed high definition video stream to the high definition video stream associated with the second user selection. The switching is performed substantially seamlessly from the first high definition video stream to the second high definition video stream. Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.
  • A system includes a clone-of-a-clone server that is configured to receive a number of high definition video streams and stream parallel video streams to one or more user computing devices, where the parallel video streams include both the high definition video streams and low definition video streams based on the high definition video streams. The low definition video stream can use the common intermediate format or CIF nominally at 320×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels. The low definition video can be CIF, VGA, 4CIF, or D1 resolution, while the high definition video stream can be 720p, 1 Megapixel, or 1080p. Other resolutions and video encoding standards can be used. The system can include a number of cameras that are configured to stream high definition video streams and a clone server configured to receive the streaming video from the cameras and clone the streaming video onto the clone-of-a-clone server. The clone-of-a-clone server can generate a low definition video stream from each of the high definition video streams that the clone-of-a-clone server receives. The clone-of-a-clone server can synchronize frames of the parallel video streams that are streamed to the user computing device. The system can include a user computing device that is configured to receive the parallel high definition and low definition video streams. The user computing device is configured to display each of the low definition video streams and a selected high definition video stream. The user computing device can be configured to receive a user selection of one of the displayed low definition video streams, display an associated high definition video stream, receive a second user selection, and seamlessly switch to displaying the high definition video stream associated with the second user selection. Each of the low definition video streams can be displayed in a low resolution small window and the selected high definition video stream can be displayed in a high resolution large window on the user computing device.
  • A system includes a clone-of-a-clone server that is configured to receive a number of cloned high definition video streams, generate low definition video streams from the high definition video streams, and selectively stream parallel high and low definition video streams to a user computing device. The clone-of-a-clone server synchronizes the parallel video streams so as to enable seamless switching on the display of the user computing device between different high resolution video streams. The low definition video stream can use the common intermediate format or CIF nominally at 320×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of an audio/video system for sporting venues according to an embodiment of the disclosure.
  • FIG. 2 is a diagram of an impact-resistant camera housing according to an embodiment of the disclosure.
  • FIG. 3 is a diagram of a sports helmet with integrated audio/video system according to an embodiment of the disclosure.
  • FIG. 4 is a diagram of example audio/video and network components according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of example operations for networking audio/video components according to an embodiment of the disclosure.
  • FIG. 6 is a flow diagram of example data connections according to an embodiment of the disclosure.
  • FIG. 7 is a diagram of an example screen for selecting from multiple audio and video feeds according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of example operations for custom content creation according to an embodiment of the disclosure.
  • FIG. 9 is a diagram of components of an example computing device configured for audio/video operations according to an embodiment of the disclosure.
  • FIG. 10 is a functional block diagram of example modules of an audio/video streaming system.
  • FIG. 11 is a diagram of example video resolutions.
  • FIG. 12 is a diagram of an example clone streaming system for parallel streams.
  • DETAILED DESCRIPTION
  • The systems and methods disclosed herein are described in detail by way of examples and with reference to the figures. It will be appreciated that modifications to disclosed and described examples, arrangements, configurations, components, elements, apparatuses, devices, methods, systems, etc. can suitably be made and may be desired for a specific application. In this disclosure, any identification of specific techniques, arrangements, etc. is either related to a specific example presented or is merely a general description of such a technique, arrangement, etc. Identifications of specific details or examples are not intended to be, and should not be, construed as mandatory or limiting unless specifically designated as such.
  • The systems and methods disclosed herein describe various aspects of real-time video for sporting venues. Although the disclosed system and method are described below with regard to one or more computing devices, and in particular mobile computing devices, the system and method can be used with any suitable computing device, including but not limited to mobile phones, smart phones, pad computing devices, laptops, personal computers, desktops, servers, embedded controllers, and so forth, among other various possibilities.
  • Turning to FIG. 1, an audio/video system 100 for sporting venues is presented. The system 100 includes one or more audio/video streaming devices illustrated as cameras 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13. For example, cameras 1, 11, 12, and 13 can be fixed cameras in an arena, cameras 2, 5, 6, and 9 can be movable cameras that follow players or the action in the arena, camera 4 can be a camera positioned ideally to point at a scoreboard, cameras 3, 7, and 8 can be helmet cameras mounted to the helmets of certain players, and camera 10 can be a pair of helmet cameras configured to provide a 3D virtual reality view from a player's perspective, such as a goalie's view.
  • The devices can be cameras, microphones, wireless cameras, wireless microphones, helmet cams, and so forth. Wired communications can be provided over Ethernet, for example using UDP or TCP protocols as would be understood in the art. In a configuration, a wired microphone can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. Typically, wired microphones are analog devices that are connected via cables to a head end unit; long cables require sufficient electrical insulation to avoid interference and substantial-gauge wire, which makes them expensive and heavy. Even with quality electrical insulation and properly gauged wire, purely analog solutions are subject to attenuation losses and noise, affecting the signal-to-noise ratio of the signal received at the head end unit. By immediately converting the signal from the analog transducer into a digital signal, the digital signal can be carried on less expensive, longer cables without the attenuation losses and lower signal-to-noise ratio of a purely analog system. Power over Ethernet (PoE) advantageously can be used both to provide power to devices and to provide a wired communications medium for the devices. Wireless communications can be effected using Wi-Fi or other wireless protocols, including but not limited to Bluetooth or Li-Fi.
  • The system 100 can include a private network, shown as intranet 110, configured for data communications between the devices and a streaming system 120. The streaming system 120 is configured to support audio and video streams from the devices, and convert them as required, as described below in greater detail. The streaming system 120 can include storage 130 for storing the audio and video streams. The streaming system 120 can allow users 150 to stream audio and video from the devices or from storage 130.
  • Turning now to FIG. 2, an example impact resistant camera housing 200 is presented. The camera housing 200 is configured to withstand vibrations, shocks, and impact to a camera mounted within the camera housing 200. For example, in a hockey arena it is possible for cameras to come into contact with a flying hockey puck, or be impacted by a hockey stick or a player. The camera housing 200 can protect the camera from the impact, and also ensure that parts from a damaged camera, such as glass or electronics, do not end up on spectators or players or on the ice where sharp or heavy pieces might cause injury.
  • The camera housing 200 is structurally configured to protect the camera while allowing connection to electrical components such as cables or wires for power and data communications. The camera housing 200 can also be configured to provide clean air for the camera, and remove heat dissipated by the camera.
  • The camera housing 200 comprises a dome assembly that attaches at one end of a drum 201. The dome assembly comprises a transparent dome cover 203 and a retainer ring 204. The dome assembly can be coupled to the drum 201 using complementary threading, screws, nuts, bolts, washers, (not shown) and the like as would be understood in the art.
  • A camera can be mounted inside the camera housing 200, for example on a support structure having support members (not shown) that contact the interior wall of the drum 201. The support members can be configured to dampen vibrations as would be understood in the art. An example support structure can be a disk that rests against pliable dampeners that act as support members and that seat the disk along a cross section of the drum 201. A camera can be mounted to the disk, for example using screws or other suitable fasteners.
  • The drum 201 can include threaded holes 202A, 202B to attach camera angle travel limiters inside the drum 201 thereby limiting the camera rotation to a predetermined angle. Camera angle travel limiters work by limiting camera rotation angle to prevent the camera from becoming damaged during rotation, or to ensure that the camera is always pointed at a certain area of the arena. For example, it may be desirable to use angle travel limiters to ensure that a camera cannot be pointed at spectators accidentally. In a configuration, the threaded holes 202A, 202B do not penetrate the drum 201 and are accessible only from the inside of the drum 201.
  • A mounting cover comprises retainer ring 205 and cover plate 206. Cover plate 206 can include collar 207 configured to accept a support rod 210 that connects to a support structure 211 and mounting plate 212. The mounting plate 212 can be attached to a structure in the arena such as a wall, ceiling, support beam, and so forth. A quick link 208 can be used as a backup failsafe to further anchor the camera housing 200 to a wall or support structure, for example using metal wire, string, or rope. This can be used to ensure that the camera housing 200 does not fall onto spectators, players, or the arena if the mounting plate 212 were to become detached for any reason. The support rod 210 can be hollow, providing a passage for electrical components such as wires, cables, and so forth. The cover plate 206 can include threaded screw holes 209A, 209B, 209C, 209D for connecting the cover plate 206 and ring 205 to the drum 201. In a configuration, long screws can be used that pass through the drum 201 and also connect the dome assembly to the drum 201.
  • Referring now to FIG. 3, a helmet 300 that includes a helmet cam is presented. The helmet 300 can include one or more cameras 302 and/or microphones 304. The camera 302 can use a standard definition or high definition frame size and frame rate such as a 720p, 1080i, 1080p, 2k, or 4k at 30 frames per second (fps), 60 fps, or 120 fps, or lower frame rates. A helmet cam for providing a 3D virtual reality video feed can include two spatially separated cameras 302 as would be understood in the art. In a configuration, the microphone 304 can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. In a configuration the camera 302 and microphone 304 can be a single unit. The camera 302 and microphone 304 are in communication with an embedded controller 306. The embedded controller 306 can include custom designed electronics, for example a chip or microcontroller with a Wi-Fi or other antenna. In a configuration, the embedded controller 306 can include a modified smartphone. In one such configuration, the camera element and microphone element from a smartphone can be displaced from the modified smartphone and used as camera 302 and microphone 304. The embedded controller 306 can stream one or more video or audio streams from the camera 302 and/or microphone 304. Data communications from the embedded controller 306 can include Wi-Fi.
  • Referring now to FIG. 4, example audio/video and network components 400 are presented. A microphone 450, for example a wired microphone configured to be placed near the glass surrounding a hockey rink, can be connected to a proxy server 410 via an Ethernet cable such as a CAT 6 cable. The communications protocol between the proxy server 410 and microphone 450 can be USB over Ethernet, among other possible protocols as would be understood in the art. The Ethernet cable can provide power to the microphone 450. In another configuration, the microphone can use a wireless network such as Wi-Fi or Li-Fi.
  • An IP camera 460, for example an IP camera configured to be placed inside of the impact resistant camera housing 200 of FIG. 2, can be connected to a PoE switch 430 using a CAT 6 cable. The PoE switch 430 can provide power to the IP camera 460. The communications protocol between the proxy server 410 and IP camera 460 can be RTSP or real-time streaming protocol, among other possible protocols as would be understood in the art.
  • A wireless helmet camera 470, for example as described in helmet 300 of FIG. 3, can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. Wi-Fi router 440 can be connected to the proxy server 410 via PoE switch 430 or by a direct connection to the proxy server 410. The communications protocol between the proxy server 410 and wireless helmet camera 470 can be RTSP (i.e., real-time streaming protocol), H.264, or H.265, among other possible protocols as would be understood in the art.
  • Similarly, a wireless microphone 480 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. The proxy server 410 can receive digitized audio, for example an MP3 stream, by establishing a connection with the wireless microphone, for example using hypertext transfer protocol, or HTTP. Other communication protocols could also be used as would be understood in the art.
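  • As a concrete illustration of the ingestion side described above, the short Python sketch below pulls frames from an RTSP device using OpenCV, which can open RTSP URLs through its FFmpeg backend. This is a minimal sketch only; the address, port, and stream path are hypothetical placeholders, not values from the disclosure.

```python
import cv2  # OpenCV, assuming a build with FFmpeg/RTSP support

# Hypothetical camera endpoint on the private arena network.
cap = cv2.VideoCapture("rtsp://192.168.1.50:554/stream1")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real proxy would attempt to reconnect
    # Hand the decoded frame to storage, conversion, or analytics.
    print("received frame with shape", frame.shape)

cap.release()
```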
  • The proxy server 410 receives audio and video streams from microphones 450, 480 and cameras 460, 470. The proxy server 410 can store the streams to a memory, such as data store 420 for archiving or temporary storage. In a configuration, the proxy server 410 and data store 420 reside in the same hardware. In a configuration, the proxy server 410 can convert each video or audio stream to one or more common formats, sampling or compression rates, and frame sizes. For example, the proxy server 410 can receive a video stream and convert it to a standard H.264 or MPEG video stream prior to storing in data store 420. In a configuration, the proxy server 410 can store two or more different video streams from the same received video stream. For example, the proxy server 410 can convert a received video stream into a small thumbnail-sized video stream and a full size video stream. In an embodiment, two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
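  • One plausible way to implement the conversion just described is to shell out to ffmpeg, producing a standard H.264 full-size rendition and a thumbnail-sized rendition from the same received stream. This is a sketch under assumptions: the source URL and output names are placeholders, and a production proxy would more likely publish renditions to a streaming endpoint than to local files.

```python
import subprocess

SOURCE = "rtsp://192.168.1.50:554/stream1"  # hypothetical camera feed

# One input, two outputs: ffmpeg applies the options preceding each
# output file to that output only.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    # Full-size H.264 rendition for the focus window and archive.
    "-c:v", "libx264", "-c:a", "aac", "full.mp4",
    # Thumbnail-sized rendition for the preview windows.
    "-vf", "scale=352:240", "-c:v", "libx264", "-c:a", "aac", "thumb.mp4",
], check=True)
```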
  • Referring now to FIG. 5, example operations for networking wireless audio and video devices are presented. Operation commences at start block 500 labeled “START” and proceeds to process block 502.
  • In process block 502, the wireless device is powered on. Processing continues to process block 504.
  • In process block 504, the wireless device detects a Wi-Fi network. The wireless device can be preconfigured to connect to a specific Wi-Fi network by name, or service set identifier (SSID). The Wi-Fi network may be configured not to broadcast the SSID, for example to prevent the wireless network from being visible on spectators' mobile devices in the arena. In this configuration, the wireless device may detect the Wi-Fi network by querying for the Wi-Fi network using the preconfigured SSID. Processing continues to decision block 506.
  • In decision block 506, if the wireless device has previously received an IP address, then processing continues to process block 514, otherwise processing continues to process block 508.
  • In process block 508, the wireless device requests an IP address using the dynamic host configuration protocol or DHCP. Processing continues to process block 510.
  • In process block 510, a DHCP server receives the DHCP request from the wireless device and provides an IP address to the wireless device. The DHCP server reserves a fixed IP address for each wireless device. Advantageously, reserving a fixed IP address for each wireless device facilitates determining which video or audio feed belongs to each wireless device. A fixed or reserved IP address simplifies the process of allowing multiple users to receive video feeds from specific wireless devices, as players have helmet cams that may disconnect and reconnect to the Wi-Fi network as they move about the arena during game play. Without fixed or reserved IP addresses, the IP addresses of helmet cams could change during game play and force live streams to disconnect and reconnect (a sketch of this reconnect flow follows the flowchart). Processing continues to process block 512.
  • In process block 512, the wireless device receives the IP address from the DHCP server. Processing continues to process block 514.
  • In process block 514, the wireless device streams audio and/or video to the proxy server using the configured IP address. Processing continues to decision block 516.
  • In decision block 516, if the connection to the wireless device drops, then processing continues to decision block 518, otherwise processing continues back to process block 514 to continue streaming the audio and/or video.
  • In decision block 518, if the connection has dropped due to a power off event or a signal to end streaming, then processing terminates at end block 520, otherwise processing continues back to process block 504 to attempt to reconnect to the Wi-Fi network.
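  • The Python sketch below mirrors this flowchart from the device's point of view; the proxy endpoint, payloads, and retry policy are illustrative assumptions. The operating system handles the actual DHCP exchange; on a dnsmasq-based DHCP server, for example, a reserved address can be expressed as dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50.

```python
import socket
import time

PROXY = ("192.168.1.10", 8554)  # hypothetical proxy server endpoint

def stream_until_drop() -> None:
    """Stream chunks to the proxy until the connection fails."""
    with socket.create_connection(PROXY, timeout=5) as sock:
        while True:
            sock.sendall(b"<encoded audio/video chunk>")  # placeholder payload
            time.sleep(1 / 30)  # ~30 chunks per second pacing

for attempt in range(5):  # bounded retries for the sketch
    try:
        stream_until_drop()
    except OSError:
        # Connection dropped, e.g. a helmet cam skated out of Wi-Fi
        # coverage; because the DHCP lease is reserved, the device
        # rejoins with the same IP and the proxy reassociates the feed.
        time.sleep(1)
```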
  • Referring now to FIG. 6, example data connections are illustrated for an embodiment of the audio/video system 600. In an arena 602, such as a hockey arena, a sporting venue, or an entertainment venue in general, one or more fixed or moveable cameras 604, helmet cams 606, and microphones 608 are in data communication with a proxy server 612 through data communications equipment represented by wireless hub 610. The proxy server 612 provides one or more ports through which video and audio data streams can be accessed by users 630, either in real-time or through viewing stored data streams. A firewall 614, such as a specially configured router or dedicated piece of data communications equipment, prevents unauthorized users 630 from accessing data streams from the proxy server 612.
  • In an embodiment, users 630 first access a website system 620 which provides authentication information for accessing the data streams through the firewall. Authenticated users 630 connect through the firewall to the proxy server 612 and selected data streams are obtained from the proxy server 612 and presented on the users 630 screens. In another embodiment, the website system 620 is able to connect through the firewall 614 and connect to the proxy server 612 that streams to the website system 620. Users 630 that are authenticated on the website system 620 receive data streams that pass through the website system 620 from the proxy server 612. In another embodiment, two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
  • Multiple end users 630 can simultaneously use the audio/video system 600. The audio/video system 600 can simultaneously support multiple events occurring in different venues. The audio/video system 600 can allow users 630 to create their own customized streams. For example, a first end user 632 can view different live streams from the audio/video system 600 during a particular sporting event. A second end user 634 can generate a customized stream based on a current live stream, or stored data streams of a previous sporting event. A third end user 636 can stream the customized stream of the second end user 634. Each end user 630 can use a different kind of computing device, for example a mobile device such as a smartphone or tablet, a laptop, a desktop, and so forth. For example, the first end user 632 can be streaming to a mobile computing device that is using a dedicated application or app that has been downloaded to a mobile computing device. The second end user 634 can be using a high end workstation with a fast Internet connection for editing and generating their customized stream. The third end user 636 can be using an Internet browser and clicking a link to access the customized stream of the second end user 634. In a configuration, the bit rate, frame rate, and frame size of the video and audio streams can be optimized for the type of end user computing device and connection speed.
  • Referring also to FIG. 7, an example screen 700 for selecting from multiple audio and video feeds is presented. The screen 700 includes thumbnail views 710 from each of the cameras and microphones. Some thumbnail views 710 may not include audio or video, either because the feed does not include audio or video, or due to a lost connection. Some thumbnail views, such as thumbnail view 10, may include a left and right view, allowing a user with a 3D viewing device connected to their video device to view a sporting event as a virtual reality experience from one or more of the players' perspectives.
  • The user can select from one or more of the thumbnail views 710, for example by clicking on a particular thumbnail view 710 or dragging a thumbnail view to a focus window 720. The currently selected video is presented in a focus window 720 that typically is larger than the thumbnail views. Clicking a camera icon associated with each thumbnail view 710 allows a user to select whether video, audio, or both are to be presented to the user, for example via the focus window 720. A user can select video from one device and audio from another device. In an embodiment, the user can customize the screen 700, for example to reorganize the order or size of the thumbnail views 710, to have two or more focus windows. Different user controls and window arrangements can be presented to the user as would be understood in the art. For example, in one configuration the focus window 720 can be selected by the user and clicked to toggle between full screen and the illustrated split screen that includes both the focus window 720 and the thumbnail view 710. In another configuration, clicking on the focus window 720 will cycle between a group of selected thumbnail views 710. This can be particularly useful to a user viewing the event using VR or 3D viewing devices.
  • Referring now to FIG. 8, example operations of a system for creating custom content are presented. Users and/or the streaming system itself can choose which devices to display in the focus window or focus windows. Other users can be invited to view the custom created content. Operation commences at start block 800 labeled “START” and proceeds to process block 802.
  • In process block 802, the streaming system receives streams from devices such as cameras and microphones. Processing continues to process block 804.
  • In process block 804, the streaming system streams one or more device streams to users 808, for example through the selection screen 700 of FIG. 7. At any time, users 808 can join a live stream of a sporting event or view a saved stream in process block 806. Processing continues to decision block 810.
  • In decision block 810, if the streaming system is configured to auto-select the focus window, then processing continues to process block 812, otherwise processing continues to process block 814.
  • In process block 812, the streaming system selects a feature that is used to determine the focus window. For example, the streaming system can select the feature to be the camera where the puck is located, or the microphone that is loudest. The selected feature can change dynamically during the game or practice. For example, the selected feature can be the penalty box subsequent to determining that an official has blown a whistle and the clock has been stopped, or the scoreboard after a change to a score on the scoreboard, or a particular player when that player enters the ice in the arena. In this mode, the streaming system attempts to select devices to present the best user experience of the sporting event (an illustrative selection sketch follows this flowchart). Processing continues to process block 818 where the streaming system determines the focus window based on the selected feature.
  • In decision block 814, if a user manually selects a feature to use as the selected feature, then processing continues to process block 816 to receive the user selection, otherwise processing continues to decision block 820.
  • In process block 816, the streaming system receives a selection of a feature to use for selecting the focus window from the available devices. For example, a user who is a scout may desire to follow one particular athlete, and thus use the streaming system in a scouting mode. The scout may select as the feature a jersey number of the particular athlete, in which case the streaming system in process block 818 will determine which camera shows the athlete's jersey number best. In another example, an avid fan of a particular player may desire to have that player as the focus of attention while still watching the game in progress, in which case the camera could be selected that displays both the player and the puck the majority of the time while the selected audio device could be from the helmet of the player or the audio device closest to the player. Processing continues to process block 818.
  • In process block 818, the streaming system determines the focus window from the available cameras and microphones. The streaming system can track players on the ice, or other playing surfaces for other sports, and use player position and motion data to determine the best camera and microphone to use in the focus window. The streaming system can use the selected feature from process block 812 and/or process block 816 in determining the best device to display in the focus window. The streaming system can determine when a particular device is not streaming, or has a connection issue, and switch to the next best device. Processing continues to decision block 820.
  • In decision block 820, if a user selects a particular device to use in the focus window, for example to override a device selected by the streaming system in process block 818, then processing continues to process block 822, otherwise processing continues to decision block 824.
  • In process block 822, the streaming system changes the focus window to the user selected device or devices. Processing continues to decision block 824.
  • In decision block 824, if the user adds user-created content to the content stream, then processing continues to process block 826, otherwise processing continues to decision block 828.
  • In process block 826, a user adds user-created content to the content stream. For example, the user may have a microphone connected to their computing device and can add live commentary, such as player analysis or real-time play-by-play announcing such as is performed by professional announcers and commentators. In another example, sophisticated users can include user-created video such as replay clips or on-screen annotation. Processing continues to decision block 828.
  • In decision block 828, if the stream is offered to users, then processing continues to process block 830, otherwise processing continues to decision block 834.
  • In process block 830, a custom stream can be saved. In one configuration, metadata is saved that includes time-stamped tracking of which device(s) were selected for the focus window(s). In this way, the custom stream can be recreated as needed from saved video streams. In another configuration, a new stream can be saved separately for each custom created stream. In another configuration, the original sources or streams can be saved for a configurable period of time, and then purged at a particular expiration date to recover storage space. Similarly, custom streams can be saved and stored for a period of time before being purged. For example, a single custom stream created by the streaming system might be stored indefinitely, while the remaining streams are purged. Processing continues to process block 832.
  • In process block 832, users can be invited to view a custom stream. For example, a stream automatically generated by the streaming system can be shown on a schedule of available live or saved games for viewing by users. The streaming system can also include user-created custom streams in the schedule, and allow other users to rate user-created streams. In another example, a user that creates custom content can generate a link to their custom stream that can be forwarded to other users, for example through social media. For example, a link can be placed on a FACEBOOK page, a clip and link uploaded to the user's INSTAGRAM or TWITTER account, or a link can be emailed to potentially interested parties, for example using an email list and advertisement. Other uses of social media, either currently extant or yet to be developed, can be utilized as would be understood by one of skill in the art. Processing continues to decision block 834.
  • In decision block 834, if the sports event is determined to be over or if the saved stream has concluded, then processing terminates at end block 836, otherwise processing continues back to process block 804 to continue streaming content to users.
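  • As an illustration of decision blocks 810 through 822 and the metadata saved in process block 830, the sketch below auto-selects the focus window from the loudest microphone and records time-stamped selections from which a custom stream could later be recreated. The sample data and the loudest-device policy are assumptions for illustration, not the disclosed implementation.

```python
import json

def pick_focus(audio_levels: dict) -> str:
    """Select the device with the loudest audio as the focus feed."""
    return max(audio_levels, key=audio_levels.get)

selections = []  # time-stamped focus metadata (process block 830)
samples = [
    {"cam3": 0.2, "cam7": 0.9, "cam8": 0.4},  # whistle near camera 7
    {"cam3": 0.8, "cam7": 0.1, "cam8": 0.3},  # action moves to camera 3
]
for t, levels in enumerate(samples):
    selections.append({"t": t, "focus": pick_focus(levels)})

# Saving only this metadata lets the custom stream be recreated from
# the archived per-camera streams rather than storing a new video.
print(json.dumps(selections))
```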
  • The costs of creating audio-video content are substantially reduced by allowing users, or the streaming system itself, to determine which video and audio stream to use as the focus window(s), especially when compared to the costs incurred by professional broadcast services such as the major television networks. Further, the use of relatively inexpensive cameras, microphones, and networking equipment allows that equipment to be more or less permanently placed in a sporting venue and used for whatever events occur in the venue, whether they are sporting events, entertainment events, or other events. This opens the opportunity to allow streaming of practices, pre-season games, minor-league games, club-level events, and even high-school events to interested parties. In effect, the present system democratizes the capture, production, and distribution of content from all levels of sporting venues.
  • Referring now to FIG. 9, an example computing device 900 is presented. Example computing devices 900 can be servers, desktop systems, mobile computing devices, embedded controllers, wireless cams and microphones, and so forth. Included are one or more processors, such as that illustrated by processor 904. Each processor is suitably associated with non-volatile memory, such as read only memory (ROM) 910 and random access memory (RAM) 912, via a data bus 914.
  • Processor 904 is also in data communication with a storage interface 916 for reading or writing to a data storage system 918, suitably comprised of a hard disk, memory or solid-state disk, or any other suitable data storage as will be appreciated by one of ordinary skill in the art.
  • Processor 904 is also in data communication with a network interface controller (NIC) 930, which provides a data path to any suitable wired or physical network connection via physical network interface 934, or to any suitable wireless data connection via wireless network interface 938 or cellular interface 936, such as one or more of the networks detailed above.
  • Processor 904 is also in data communication with an input/output (I/O) interface 940 which provides data communication with devices such as a microphone 946 or camera 948 or user peripherals, such as a touchscreen display 944, keyboard, or mouse or any other suitable user interface. It will be understood that functional units are suitably comprised of intelligent units, including any suitable hardware or software platform.
  • Referring now to FIG. 10, presented are example software modules of an embodiment of the website system of FIG. 6. A user interface module 1002 serves web pages to users and administrators that provide a graphical user interface for logging into the system, viewing camera and audio microphone locations, viewing calendars of upcoming sporting events and archived streams, selecting sporting events or recorded streams to view, receiving video and audio streams from the proxy server through the firewall, customizing the user's thumbnail and focus window views, and interacting with the system in general. User accounts, configuration data, calendar information, stream information, and other data can be stored in a database 1010 or other suitable memory. A scheduler engine 1004 can schedule recordings of sporting events by the proxy server.
  • An analytics engine 1006 can analyze video and audio streams. For example, the analytics engine 1006 can determine when a video or audio feed has disconnected, and switch a user's focus window to another available stream and switch back once the video or audio feed reconnects. Similarly, the analytics engine 1006 can monitor video or audio streams and either black out some or all of a stream in real-time, or switch the focus window to a different stream. The analytics engine 1006 can be used to detect objectionable language in audio, or objectionable images in a video feed, for example nudity, political messages, unauthorized advertising, excessive violence, and so forth. In a configuration, the analytics engine 1006 can be rules-based or use heuristics or other suitable analytics to perform an analysis of one or more streams. In a configuration, the analytics engine 1006 can receive a copy of the streams from a proxy server, or from the clone-of-a-clone system of FIG. 12, and the analytics engine 1006 can be executing on any suitable system as would be understood in the art.
  • The analytics engine 1006 can also track selected features for determining which stream to use in a focus window. For example, when the website system is being used by a user that is a scout, or if the system is set to use a scout mode, an individual player can be tracked in multiple video streams, for example by jersey number. The analytics engine 1006 can determine the optimal video and audio streams to use to track the selected player or feature being tracked.
  • The analytics engine 1006 can also perform analysis of helmet cam video and/or audio, for example to track where a player is looking or to determine how the player is moving the helmet. The analytics engine 1006 can determine if rapid helmet movements are suggestive of violent impacts which could cause concussions. The analytics engine 1006 can monitor a helmet cam for video and/or audio that might indicate a concussion, injury, or exhaustion of the player. For example, the analytics engine 1006 can detect movements of the helmet that are atypical for the player, such as looking down more often, looking up, not turning the head in one particular direction, not following the puck (or a ball as might be used in other sporting events), a delay in following the puck or action of the game, not looking where other players are looking, and so forth. In a configuration, a player's typical pattern of helmet movements can be analyzed and saved for reference and comparison. In a configuration, the analytics engine 1006 can send an alert to a coach or medical professional via a text message, email, or other suitable alert, for example using the user interface 1002.
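  • A minimal sketch of the comparison just described follows; the feature vector (fraction of time looking down, head-turn rate, puck-tracking delay), the saved baseline, and the alert threshold are all illustrative assumptions rather than values from the disclosure.

```python
import math

def deviation(sample: list, baseline: list) -> float:
    """Euclidean distance between helmet-motion feature vectors."""
    return math.dist(sample, baseline)

BASELINE = [0.10, 0.50, 0.20]  # player's saved typical pattern
ALERT_THRESHOLD = 0.5          # tuning parameter, assumed for the sketch

current = [0.45, 0.15, 0.60]   # looking down more, turning less, slow tracking
if deviation(current, BASELINE) > ALERT_THRESHOLD:
    # The system would route this through the user interface 1002 as a
    # text message or email to a coach or medical professional.
    print("ALERT: atypical helmet movement pattern detected")
```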
  • A tracking engine 1008 can track one or more players' movements in the arena. The tracking engine 1008 can turn a player's movements into vector data, or any other suitable position data. The tracking engine 1008 can work in conjunction with the analytics engine 1006. For example, the tracking engine 1008 can provide player position or vector data to the analytics engine 1006 that is used to determine which camera and audio feed to use in the focus window(s). In a configuration, each player can be analyzed to create a digital representation of the players. Example data that can be determined can include position, speed, direction, acceleration, deceleration, linearity, non-linearity, circularity, time, and other measurements as would be understood in the art. In a configuration, the tracking engine 1008 and analytics engine 1006 can determine the correct camera frame to provide to a user based on the player data. For example, the system can sum all of the vectors or kinetic energy for each frame and/or camera stream and switch the focus window to a particular camera stream based on that calculation.
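  • The vector-summing idea can be sketched as follows; player masses, speeds, and per-camera visibility are invented values used only to show the calculation (kinetic energy = ½mv²) and the resulting focus choice.

```python
players = {
    "p1": {"mass": 85.0, "speed": 7.2},   # kg, m/s (illustrative values)
    "p2": {"mass": 92.0, "speed": 1.1},
    "p3": {"mass": 78.0, "speed": 6.5},
}
visible = {                               # players each camera currently sees
    "cam1": ["p1", "p2"],
    "cam2": ["p2"],
    "cam5": ["p1", "p3"],
}

def kinetic_energy(p: dict) -> float:
    return 0.5 * p["mass"] * p["speed"] ** 2

# Sum the kinetic energy visible in each camera's frame and focus on
# the camera with the most action.
scores = {cam: sum(kinetic_energy(players[name]) for name in names)
          for cam, names in visible.items()}
focus = max(scores, key=scores.get)
print(focus, round(scores[focus], 1))     # cam5 wins in this example
```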
  • In a configuration, tracking data can be combined with video data to provide a visual representation of players' movements during practice or a game. Similarly, tracking data and/or analytics data can be combined with video and/or audio data to provide player performance information to coaches, scouts, and interested viewers and fans.
  • In a configuration, the tracking engine 1008 can receive position data from helmet cams, for example position data derived from GPS or radio signal triangulation. Tracking and analytics data can be stored in the database 1010 or any other suitable memory.
  • Referring to FIG. 11, example video resolutions are presented. Standard resolutions can include common intermediate format or CIF at 352×240 or 352×288 pixels, VGA at 640×480 pixels, and 4CIF/D1 at 704×480, 704×576, or 720×480 pixels. High definition resolutions can include 720p at 1280×720 pixels, 1 Megapixel at 1280×1024 pixels, and 1080p at 1920×1080 pixels. Ultra high resolution formats are also contemplated, for example QHD at 2560×1440 pixels, UHD or 4K at 3840×2160 pixels, and so forth. Standard resolution can also include QCIF at 176×120 or 176×144 pixels. Streaming video can be interlaced or progressive scan as appropriate for the resolution. Audio can similarly be encoded, for example as 19.2 kb/s PCM, 9.6 kb/s ADPCM, MP3, or any other suitable encoding or compression as would be understood in the art. The disclosed resolutions are presented as non-limiting examples only. Other suitable resolutions can also be used as would be understood in the art.
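  • A quick back-of-envelope calculation (an illustration, not from the disclosure) shows why pairing low definition previews with a single high definition focus stream saves bandwidth: uncompressed bit rate scales with pixel count, and compression preserves roughly the same ratio between resolutions.

```python
def raw_mbps(width: int, height: int, fps: int = 30,
             bits_per_pixel: int = 24) -> float:
    """Uncompressed video bit rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

cif = raw_mbps(352, 240)      # ~60.8 Mb/s uncompressed
hd = raw_mbps(1920, 1080)     # ~1493.0 Mb/s uncompressed
print(cif, hd, round(hd / cif, 1))  # the HD stream is ~24.5x larger
```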
  • Referring now to FIG. 12, an example streaming system 1200 is presented. In a venue, such as arena 1214, a plurality of cameras 1202 are configured to stream video across one or more local network connections 1204 to a clone server 1206. The cameras 1202, such as camera 1 through camera n as illustrated, can be configured to stream a high definition video stream, such as 1080p at 1920×1080 pixels. In an embodiment, one or more cameras 1202 can be configured to stream both a low definition video stream, such as CIF at 320×240 pixels, and a high definition video stream. In a configuration, different cameras 1202 can stream in different resolutions. For example, camera 1 could stream in 1080p, while camera 2 streams in 4k and camera n streams using 1 megapixel streaming.
  • Each camera 1202 streams across a local network connection 1204, such as a LAN, WiFi, LiFi, Power over Ethernet, or any other suitable network for example as described with respect to the devices of FIG. 1. The clone server 1206 receives each of the streams from the cameras 1202. The clone server 1206 can store each of the streams from each of the cameras 1202. In a configuration, the streams are stored temporarily, or ephemerally, before being streamed to one or more clone-of-a-clone servers 1210 and/or to cloud storage 1213. In another configuration, the clone server 1206 can store each stream for a longer period of time, for example as permanent storage.
  • The clone server 1206 is in network communication, for example using a VPN or virtual private network, with one or more clone-of-a-clone servers 1210 through firewall 1207, which can be a suitable router or other suitable network element. The clone server 1206 clones the live video streams 1208 from the cameras 1202 onto the clone-of-a-clone server 1210. Each clone-of-a-clone server 1210 receives live video streams 1208 associated with each of the cameras 1202. In an embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from a subset of all of the available cameras 1202 associated with the clone server 1206. In another embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from multiple clone servers 1206 and associated cameras 1202. The clone-of-a-clone servers 1210 can be anywhere in the network, for example in the cloud 1216 as shown, at an ISP or Internet Service Provider, in a colocation premises, in the arena 1214 or any other suitable place. The clone-of-a-clone server 1210 can be hosted by a service company that provides high speed cloud hosting services, such as AMAZON, as would be understood in the art.
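  • The clone / clone-of-a-clone relationship can be sketched as a simple fan-out: each inbound camera stream is received once and re-published to every subscriber, so adding clone-of-a-clone servers scales the user-facing side without adding load on the cameras or the arena uplink. The class and callback shapes below are assumptions for illustration, not the disclosed implementation.

```python
from typing import Callable, List

Sink = Callable[[str, bytes], None]   # (camera id, stream chunk)

class CloneServer:
    def __init__(self) -> None:
        self.subscribers: List[Sink] = []

    def subscribe(self, sink: Sink) -> None:
        # A sink may be a clone-of-a-clone server, cloud storage, etc.
        self.subscribers.append(sink)

    def ingest(self, camera_id: str, chunk: bytes) -> None:
        # Clone each inbound chunk to every subscriber.
        for sink in self.subscribers:
            sink(camera_id, chunk)

# Arena clone server feeding a cloud clone-of-a-clone plus storage.
arena = CloneServer()
cloud = CloneServer()
arena.subscribe(cloud.ingest)                        # live streams 1208
arena.subscribe(lambda cam, c: print(f"archive {cam}: {len(c)} bytes"))
cloud.subscribe(lambda cam, c: print(f"to user, {cam}: {len(c)} bytes"))
arena.ingest("camera1", b"\x00" * 188)               # one stream chunk
```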
  • The clone server 1206 also sends recorded video streams 1209 to cloud storage 1213. Cloud storage 1213 can include network servers, redundant network storage hosted by third party companies, and other suitable cloud storage as would be understood in the art. In a configuration, the recorded video streams 1209 can include live video streams.
  • Advantageously, the clone server 1206, the clone-of-a-clone server 1210, and cloud storage 1213 allow the system architecture to easily scale to support any number of cameras 1202 and users. The clone server 1206 aggregates video streams from multiple cameras 1202. Additional clone servers 1206 can be used to accommodate more cameras 1202 as needed. Each clone-of-a-clone server 1210 receives cloned video streams from one or more clone servers 1206 and supports forwarding video streams to multiple users. Additional clone-of-a-clone servers 1210 can be used to accommodate more users when needed. Cloud storage 1213 can be scaled as necessary to support automated recording of live video streams and playback of video streams by users. A web server 1211 can provide front end web services for users to interact with the system and gain access to the live video streams and recorded video from the clone-of-a-clone servers 1210 and cloud storage 1213.
  • Clone-of-a-clone servers 1210 can be configured to perform other services, for example archiving video, providing user video editing functions, and so forth. In an embodiment, one or more cameras 1202 stream only a single stream of video, for example a single high definition 1080p stream. In this embodiment, the clone-of-a-clone server 1210 receives a clone of each high definition stream from the clone server 1206 and the clone-of-a-clone server 1210 creates an additional low definition video stream such as a CIF stream based on the received high definition stream. Alternatively, the clone server 1206 receives the high definition stream and creates the additional low definition video stream such as a CIF stream based on the high definition stream received from the cameras 1202. In yet another embodiment, the clone server 1206 receives a single stream from some cameras 1202 and multiple streams from other cameras 1202; the clone server 1206 or the clone-of-a-clone server 1210 generates a second stream, for example a second low definition stream, for cameras 1202 that only provide a single stream.
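  • Deriving the companion low definition stream from a single high definition input is, in essence, a live downscaling transcode. A minimal sketch using the ffmpeg command line tool follows; the stream URLs are placeholders and the codec choices are assumptions rather than requirements of this disclosure:

    import subprocess

    def derive_low_def(src_url: str, dst_url: str) -> subprocess.Popen:
        """Downscale a live high definition stream into a CIF companion stream."""
        return subprocess.Popen([
            "ffmpeg",
            "-i", src_url,           # incoming high definition stream
            "-vf", "scale=352:240",  # downscale to CIF
            "-c:v", "libx264",       # H.264 video compression
            "-c:a", "copy",          # reuse the original audio track unchanged
            "-f", "mpegts",
            dst_url,                 # outgoing low definition stream
        ])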
  • A consumer, for example a user or business located in a consumer premises 1218 such as a home or business office, uses a computing device 1220 that establishes a network connection, for example over the Internet 1212, with the web server 1211. The user interacts with the web server 1211 to view live video streams or recorded video from the clone-of-a-clone servers 1210 or cloud storage 1213.
  • The computing device 1220 can be a personal computer, a laptop, a tablet device, a smartphone, a smart TV, a video game device, a television set top box, or any other suitable computing device as would be known in the art. The computing device 1220 receives parallel streams from each of the cameras 1202, or parallel streams from a subset of the cameras 1202, over the network connection. For example, as illustrated in FIG. 12, the computing device 1220 receives both a low definition CIF stream and a high definition HD stream as parallel streams from each of the cameras 1202 over a network connection via the Internet 1212.
  • The computing device 1220 can be configured to display the received streams in any suitable or desired configuration or format. For example, the computing device 1220 can run software that displays multiple low definition streams from the cameras 1202 in smaller preview windows 1222 and a high definition stream of one of the cameras 1202 in a large focus window 1224. A user can select any one of the smaller preview windows 1222 to display the high definition stream of the selected camera 1202 in the large focus window 1224. In an embodiment, there can be two or more large focus windows 1224, each of which can display a different selected stream. Advantageously, because the computing device 1220 receives both a low definition stream and a high definition stream associated with each of the cameras 1202, the user perceives little or no delay when switching between streams from different cameras 1202 in the large focus window 1224. Also advantageously, because the low definition streams are displayed in smaller preview windows 1222, their reduced resolution is masked by the small size of those windows and is not apparent to the user.
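  • The preview and focus window behavior can be sketched as follows. The draw_preview and draw_focus stubs stand in for whatever rendering the client software actually performs; all names are illustrative:

    def draw_preview(camera_id: str, frame) -> None:
        """Stub: a real client would blit the frame into a small preview window."""

    def draw_focus(frame) -> None:
        """Stub: a real client would blit the frame into the large focus window."""

    class MultiViewClient:
        def __init__(self, camera_ids):
            self.camera_ids = list(camera_ids)
            self.focused = self.camera_ids[0]

        def render(self, low_frames: dict, high_frames: dict) -> None:
            for cam in self.camera_ids:
                draw_preview(cam, low_frames[cam])  # small windows, low definition
            draw_focus(high_frames[self.focused])   # large window, high definition

        def select(self, camera_id: str) -> None:
            # The high definition stream is already arriving in parallel, so
            # switching focus is only a pointer change, not a new stream setup.
            self.focused = camera_id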
  • The low definition stream and the high definition stream from each camera can be synchronized, such that the start of each frame of video in both streams is in sync. This advantageously allows the pictures displayed in the smaller preview window 1222 and the large focus window 1224 to be in perfect sync, preventing the user from perceiving temporal differences. Also, the parallel streams from each of the cameras 1202 can be in sync so that the picture in the large focus window 1224 can smoothly switch between high definition streams from different cameras 1202 without displaying partial frames or experiencing temporal delays during a switch between video sources.
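  • One way to realize this synchronization is to stamp both streams from a common clock and align frames by presentation timestamp at the client. In the sketch below, the pts_ms attribute on each frame is an assumption; whichever stream lags is advanced until the pair agrees:

    from collections import deque

    def next_synced_pair(low: deque, high: deque, tolerance_ms: int = 5):
        """Return the next low/high frame pair whose presentation timestamps
        agree within tolerance_ms, discarding lagging frames; return None if
        either queue empties before a matching pair is found."""
        while low and high:
            lo, hi = low[0], high[0]
            if abs(lo.pts_ms - hi.pts_ms) <= tolerance_ms:
                return low.popleft(), high.popleft()
            # Drop the older of the two head frames and try again.
            (low if lo.pts_ms < hi.pts_ms else high).popleft()
        return None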
  • In a configuration, the smaller preview windows 1222 can have the same pixel dimensions as the low definition streams from the cameras 1202. This can reduce the computational load on the computing device 1220, which then does not have to remap the pixels of each low definition stream onto a window of different dimensions. Similarly, the pixel dimensions of the high definition stream can be the same as those of the large focus window 1224. In other configurations, the smaller preview windows 1222 or large focus windows 1224 can have different pixel dimensions than the low definition streams or high definition streams, respectively, and the computing device 1220 can remap the streams onto the screen as would be understood in the art. In an embodiment, the computing device 1220 can receive the low definition streams and the high definition streams in a desired resolution and/or frame rate from the clone-of-a-clone server 1210.
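  • The saving described here is simply that a stream whose pixel dimensions equal the window's can be copied directly, while any mismatch forces a per-frame rescale. In this hypothetical sketch, the frame and window objects and their methods are assumptions for illustration:

    def blit(frame, window) -> None:
        """Copy a decoded frame into a window, rescaling only when needed."""
        if (frame.width, frame.height) == (window.width, window.height):
            window.draw(frame)  # direct copy; no per-pixel remapping required
        else:
            # Mismatched dimensions cost a per-frame rescale on the client.
            window.draw(frame.scaled(window.width, window.height))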
  • Advantageously, the streaming system 1200 presented herein provides the user with a seamless visual experience as the user switches between the different views from each of the cameras 1202. Although the streaming system 1200 sends both high definition and low definition streams for each camera 1202 to the user, video compression can be used to reduce the overall bandwidth required. For example, video streams can be compressed using compression standards such as MPEG-4, H.264, H.265, or other forms of compression as would be understood in the art. In an embodiment, the low definition and high definition streams can share a common audio stream to further reduce bandwidth. In an embodiment, the low definition streams and high definition streams can be streamed separately over distinct network connections to the computing device 1220. In another embodiment, the streams can be combined into a single network connection.
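  • To put the bandwidth claim in rough numbers, consider eight cameras, each sending one compressed high definition stream, one compressed low definition stream, and one shared audio track. The bitrates below are editorial assumptions for typical compressed streams, not figures from this disclosure:

    N_CAMERAS  = 8
    HIGH_KBPS  = 5000  # assumed compressed 1080p (H.264) video bitrate
    LOW_KBPS   = 300   # assumed compressed CIF video bitrate
    AUDIO_KBPS = 96    # one shared audio track per camera

    total_kbps = N_CAMERAS * (HIGH_KBPS + LOW_KBPS + AUDIO_KBPS)
    print(f"{total_kbps / 1000:.1f} Mb/s")  # 43.2 Mb/s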
  • While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the spirit and scope of the inventions.

Claims (20)

What is claimed is:
1. A method, comprising:
receiving, by a clone-of-a-clone server, cloned copies of a plurality of high definition video streams; and
streaming, by the clone-of-a-clone server, parallel video streams to a user computing device,
wherein the parallel video streams comprise the high definition video streams and a plurality of low definition video streams based on each of the high definition video streams.
2. The method of claim 1, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 352 by 240 pixels, and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.
3. The method of claim 1, wherein each low definition video stream and associated high definition video stream are substantially identical videos that differ primarily by spatial resolution.
4. The method of claim 1, further comprising:
generating, by the clone-of-a-clone server, each low definition video stream from an associated high definition video stream.
5. The method of claim 1, further comprising:
synchronizing frames of the parallel video streams sent to the user computing device.
6. The method of claim 1, further comprising:
receiving, by the user computing device, a plurality of parallel streams from the clone-of-a-clone server;
displaying, by the user computing device, each low definition video stream from the parallel video streams; and
displaying, by the user computing device, a selected high definition video stream from the parallel video streams.
7. The method of claim 6, further comprising:
receiving, by the user computing device, a user selection of one of the displayed low definition video streams, and
wherein the selected high definition video stream is based on the user selection.
8. The method of claim 7, further comprising:
receiving, by the user computing device, a second user selection of a second low definition video stream; and
switching, by the user computing device, from displaying the high definition video stream to displaying a second high definition video stream based on the second user selection, and
wherein the switching is performed substantially seamlessly between the high definition video stream and the second high definition video stream.
9. The method of claim 6, wherein each of the low definition video streams is displayed on the user computing device in a low resolution small window and wherein the selected high definition video stream is displayed in a high resolution large window.
10. A system, comprising:
a clone-of-a-clone server configured to
receive a plurality of high definition video streams, and
stream parallel video streams to a user computing device,
wherein the parallel video streams comprise the high definition video streams and a plurality of low definition video streams based on each of the high definition video streams.
11. The system of claim 10, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 352 by 240 pixels and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.
12. The system of claim 10, wherein each low definition video stream is selected from the group consisting of CIF, VGA, 4CIF, and D1, and wherein each high definition video stream is selected from the group consisting of 720p, 1 Megapixel, and 1080p.
13. The system of claim 10, further comprising:
a plurality of cameras each configured to stream video comprising a high definition video stream; and
a clone server configured to receive streaming video from a camera and clone the streaming video onto the clone-of-a-clone server.
14. The system of claim 10, wherein the clone-of-a-clone server is further configured to generate a low definition video stream from each of the received high definition video streams.
15. The system of claim 10, wherein the clone-of-a-clone server is configured to synchronize frames of the parallel video streams streamed to the user computing device.
16. The system of claim 10, further comprising:
a user computing device configured to
receive the parallel video streams,
display each low definition video stream from the parallel video streams, and
display a selected high definition video stream from the parallel video streams.
17. The system of claim 16, wherein the user computing device is further configured to
receive a first user selection of one of the displayed low definition video streams,
display a high definition video stream associated with the first user selection,
receive a second user selection associated with a second displayed low definition video stream, and
switch from displaying the high definition video stream to displaying a second high definition video stream associated with the second user selection,
wherein the switch is performed substantially seamlessly between the high definition video stream and the second high definition video stream.
18. The system of claim 17, wherein each of the low definition video streams is displayed on the user computing device in a low resolution small window and wherein a selected high definition video stream is displayed in a high resolution large window.
19. A system, comprising:
a clone-of-a-clone server configured to
receive a plurality of cloned high definition video streams,
generate a low definition video stream from each high definition video stream, and
selectively stream parallel video streams to a plurality of user computing devices each configured to display a plurality of low definition video streams and at least one selected high definition video stream, and
wherein each of the parallel video streams comprises one of the high definition video streams and an associated low definition video stream, and
wherein the clone-of-a-clone server is further configured to synchronize the parallel video streams to enable seamless switching between the display of a first selected high definition video stream and the display of a second selected high definition video stream on the user computing device.
20. The system of claim 19, wherein each low definition video stream has a common intermediate format (CIF) resolution of 352 by 240 pixels and wherein each high definition video stream has a 1080p resolution of 1920 by 1080 pixels.
US15/434,003 2016-09-09 2017-02-15 Parallel Video Streaming Abandoned US20180077437A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/434,003 US20180077437A1 (en) 2016-09-09 2017-02-15 Parallel Video Streaming

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662385685P 2016-09-09 2016-09-09
US15/434,003 US20180077437A1 (en) 2016-09-09 2017-02-15 Parallel Video Streaming

Publications (1)

Publication Number Publication Date
US20180077437A1 true US20180077437A1 (en) 2018-03-15

Family

ID=59930787

Family Applications (5)

Application Number Title Priority Date Filing Date
US15/434,003 Abandoned US20180077437A1 (en) 2016-09-09 2017-02-15 Parallel Video Streaming
US15/433,984 Abandoned US20180077430A1 (en) 2016-09-09 2017-02-15 Cloned Video Streaming
US15/699,651 Active US10327014B2 (en) 2016-09-09 2017-09-08 Three-dimensional telepresence system
US16/443,481 Active US10750210B2 (en) 2016-09-09 2019-06-17 Three-dimensional telepresence system
US16/946,826 Active US10880582B2 (en) 2016-09-09 2020-07-08 Three-dimensional telepresence system

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/433,984 Abandoned US20180077430A1 (en) 2016-09-09 2017-02-15 Cloned Video Streaming
US15/699,651 Active US10327014B2 (en) 2016-09-09 2017-09-08 Three-dimensional telepresence system
US16/443,481 Active US10750210B2 (en) 2016-09-09 2019-06-17 Three-dimensional telepresence system
US16/946,826 Active US10880582B2 (en) 2016-09-09 2020-07-08 Three-dimensional telepresence system

Country Status (7)

Country Link
US (5) US20180077437A1 (en)
EP (1) EP3510768B1 (en)
JP (2) JP7001675B2 (en)
KR (3) KR102256707B1 (en)
CN (2) CN112584080B (en)
DE (1) DE202017105484U1 (en)
WO (1) WO2018049201A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180077437A1 (en) 2016-09-09 2018-03-15 Barrie Hansen Parallel Video Streaming
GB201621879D0 (en) * 2016-12-21 2017-02-01 Branston Ltd A crop monitoring system and method
TWI665649B (en) * 2018-02-27 2019-07-11 鴻海精密工業股份有限公司 Micro led array, display and electronic device
US10785422B2 (en) * 2018-05-29 2020-09-22 Microsoft Technology Licensing, Llc Face recognition using depth and multi-spectral camera
WO2020030989A1 (en) * 2018-08-09 2020-02-13 Corephotonics Ltd. Multi-cameras with shared camera apertures
US10764533B2 2018-11-09 2020-09-01 Google Llc Computer workstation with curved lenticular display
CN110149510B (en) * 2019-01-17 2023-09-08 深圳市光鉴科技有限公司 3D camera module and electronic equipment used under screen
US20220217301A1 (en) * 2019-04-15 2022-07-07 Shanghai New York University Systems and methods for interpolative three-dimensional imaging within the viewing zone of a display
US11516374B2 (en) 2019-06-05 2022-11-29 Synaptics Incorporated Under-display image sensor
US11057549B2 (en) * 2019-08-16 2021-07-06 Lenovo (Singapore) Pte. Ltd. Techniques for presenting video stream next to camera
US11153513B2 (en) 2019-08-19 2021-10-19 Synaptics Incorporated Light source for camera
CN110850599A (en) * 2019-08-19 2020-02-28 上海鲲游光电科技有限公司 Infrared floodlighting assembly
US11184605B2 (en) * 2019-09-27 2021-11-23 Apple Inc. Method and device for operating a lenticular display
US11076080B2 (en) 2019-12-05 2021-07-27 Synaptics Incorporated Under-display image sensor for eye tracking
US20210409893A1 (en) * 2020-06-25 2021-12-30 Microsoft Technology Licensing, Llc Audio configuration for displayed features
WO2022059981A1 (en) * 2020-09-18 2022-03-24 문명일 3d image acquisition device
WO2022076020A1 (en) * 2020-10-08 2022-04-14 Google Llc Few-shot synthesis of talking heads
WO2022115119A1 (en) * 2020-11-30 2022-06-02 Google Llc Three-dimensional (3d) facial feature tracking for autostereoscopic telepresence systems
CN114567767A (en) * 2022-02-23 2022-05-31 京东方科技集团股份有限公司 Display device, light field acquisition method, image data transmission method and related equipment
CN114827465A (en) * 2022-04-19 2022-07-29 京东方科技集团股份有限公司 Image acquisition method and device and electronic equipment

Family Cites Families (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335011A (en) * 1993-01-12 1994-08-02 Bell Communications Research, Inc. Sound localization system for teleconferencing using self-steering microphone arrays
CN1172267A (en) * 1996-07-29 1998-02-04 冯有纲 New stereoscopic visual image technique and device
US6208373B1 (en) * 1999-08-02 2001-03-27 Timothy Lo Fong Method and apparatus for enabling a videoconferencing participant to appear focused on camera to corresponding users
JP2003506973A (en) * 1999-08-10 2003-02-18 ホワイト・ピーター・マクダフィー Communications system
GB2411735A (en) * 2004-03-06 2005-09-07 Sharp Kk Control of liquid crystal alignment in an optical device
JP2005303683A (en) * 2004-04-12 2005-10-27 Sony Corp Image transceiver
US7535468B2 (en) * 2004-06-21 2009-05-19 Apple Inc. Integrated sensing display
JP5090337B2 (en) * 2005-04-08 2012-12-05 リアルディー インコーポレイテッド Autostereoscopic display with planar pass-through
WO2008132724A1 (en) * 2007-04-26 2008-11-06 Mantisvision Ltd. A method and apparatus for three dimensional interaction with autosteroscopic displays
US20090146915A1 (en) 2007-12-05 2009-06-11 Marathe Madhav V Multiple view display device
CN101472133B (en) * 2007-12-28 2010-12-08 鸿富锦精密工业(深圳)有限公司 Apparatus and method for correcting image
US9684380B2 (en) * 2009-04-02 2017-06-20 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
JP2010171573A (en) * 2009-01-21 2010-08-05 Epson Imaging Devices Corp Three-dimensional image display-imaging device, communication system, and display device
US8570423B2 (en) * 2009-01-28 2013-10-29 Hewlett-Packard Development Company, L.P. Systems for performing visual collaboration between remotely situated participants
JP5732064B2 (en) * 2009-11-17 2015-06-10 エーテーハー チューリヒ Transparent autostereoscopic image display apparatus and method
US8823782B2 (en) 2009-12-31 2014-09-02 Broadcom Corporation Remote control with integrated position, viewer identification and optical and audio test
KR101725044B1 (en) * 2010-05-27 2017-04-11 삼성전자주식회사 Imaging display apparatus
CN101866056A (en) * 2010-05-28 2010-10-20 中国科学院合肥物质科学研究院 3D imaging method and system based on LED array common lens TOF depth measurement
JP5494283B2 (en) * 2010-06-24 2014-05-14 ソニー株式会社 3D display device and 3D display device control method
KR101280636B1 (en) * 2010-07-29 2013-07-01 주식회사 팬택 Active type display apparatus for stereographic image and driving method thereof
US8624960B2 (en) * 2010-07-30 2014-01-07 Silicon Image, Inc. Multi-view display system
KR101732135B1 (en) 2010-11-05 2017-05-11 삼성전자주식회사 3dimension video communication apparatus and method for video processing of 3dimension video communication apparatus
US20120139906A1 (en) * 2010-12-03 2012-06-07 Qualcomm Incorporated Hybrid reality for 3d human-machine interface
US8823769B2 (en) * 2011-01-05 2014-09-02 Ricoh Company, Ltd. Three-dimensional video conferencing system with eye contact
US20130286010A1 (en) * 2011-01-30 2013-10-31 Nokia Corporation Method, Apparatus and Computer Program Product for Three-Dimensional Stereo Display
JP2012169822A (en) * 2011-02-14 2012-09-06 Nec Personal Computers Ltd Image processing method and image processing device
US20120223885A1 (en) * 2011-03-02 2012-09-06 Microsoft Corporation Immersive display experience
US20120257004A1 (en) * 2011-04-05 2012-10-11 Polycom, Inc. Direct Eye-Contact Enhancing Videoconferencing Unit
JP5834533B2 (en) * 2011-06-23 2015-12-24 沖電気工業株式会社 Communication system and communication device
JP2013125985A (en) * 2011-12-13 2013-06-24 Sharp Corp Display system
JP2013128181A (en) * 2011-12-16 2013-06-27 Fujitsu Ltd Display device, display method, and display program
US9024844B2 (en) * 2012-01-25 2015-05-05 Microsoft Technology Licensing, Llc Recognition of image on external display
JPWO2013115024A1 (en) * 2012-01-31 2015-05-11 ソニー株式会社 Image processing apparatus, image processing method, program, and recording medium
WO2013153464A1 (en) * 2012-04-13 2013-10-17 Nokia Corporation Method, apparatus and computer program for generating an spatial audio output based on an spatial audio input
WO2013159114A1 (en) 2012-04-20 2013-10-24 Total 3rd Dimension Systems, Inc. Systems and methods for real-time conversion of video into three-dimensions
US20130321564A1 (en) * 2012-05-31 2013-12-05 Microsoft Corporation Perspective-correct communication window with motion parallax
KR101350996B1 (en) * 2012-06-11 2014-01-13 재단법인 실감교류인체감응솔루션연구단 3d video-teleconferencing apparatus capable of eye contact and method using the same
US20140063198A1 * 2012-08-30 2014-03-06 Microsoft Corporation Changing perspectives of a microscopic-image device based on a viewer's perspective
US8976224B2 (en) * 2012-10-10 2015-03-10 Microsoft Technology Licensing, Llc Controlled three-dimensional communication endpoint
KR101977711B1 (en) * 2012-10-12 2019-05-13 삼성전자주식회사 Depth sensor, image capturing method thereof and image processing system having the depth sensor
US20140146394A1 (en) * 2012-11-28 2014-05-29 Nigel David Tout Peripheral display for a near-eye display device
BR112015014629A2 (en) * 2012-12-18 2020-09-15 Eyesmatch Ltd method for operating a system that has a monitor, a camera and a processor
US20140176684A1 (en) * 2012-12-24 2014-06-26 Alejandro Varela Techniques for multiple viewer three-dimensional display
JP6199619B2 (en) * 2013-06-13 2017-09-20 株式会社ニューフレアテクノロジー Vapor growth equipment
KR20140147376A (en) * 2013-06-19 2014-12-30 삼성전자주식회사 Layered type color-depth sensor and 3D image acquisition apparatus employing the sensor
US20140375541A1 (en) * 2013-06-25 2014-12-25 David Nister Eye tracking via depth camera
US9325936B2 (en) * 2013-08-09 2016-04-26 Samsung Electronics Co., Ltd. Hybrid visual communication
CN104427049A (en) * 2013-08-30 2015-03-18 深圳富泰宏精密工业有限公司 Portable electronic device
US20150097925A1 (en) * 2013-10-04 2015-04-09 Electronics And Telecommunications Research Institute Apparatus and method for displaying hologram based on pupil tracking using hybrid camera
US20150235408A1 (en) * 2014-02-14 2015-08-20 Apple Inc. Parallax Depth Rendering
CN104866261B (en) * 2014-02-24 2018-08-10 联想(北京)有限公司 A kind of information processing method and device
US9344748B2 (en) * 2014-03-31 2016-05-17 Arris Enterprises, Inc. Adaptive streaming transcoder synchronization
US20150324646A1 (en) * 2014-05-08 2015-11-12 Brown University Navigation methods and apparatus for the visually impaired
WO2016025962A1 (en) * 2014-08-15 2016-02-18 The University Of Akron Device and method for three-dimensional video communication
KR102269318B1 (en) * 2014-09-15 2021-06-28 삼성디스플레이 주식회사 Display device and display system including the same
US10248192B2 (en) * 2014-12-03 2019-04-02 Microsoft Technology Licensing, Llc Gaze target application launcher
KR102396289B1 (en) * 2015-04-28 2022-05-10 삼성디스플레이 주식회사 Three dimensional image display device and driving method thereof
JP6509027B2 (en) * 2015-05-12 2019-05-08 キヤノン株式会社 Object tracking device, optical apparatus, imaging device, control method of object tracking device, program
US20170070804A1 (en) * 2015-09-03 2017-03-09 Monster, Llc Multifunction Wireless Adapter
KR20170035608A (en) * 2015-09-23 2017-03-31 삼성전자주식회사 Videotelephony System, Image Display Apparatus, Driving Method of Image Display Apparatus, Method for Generation Realistic Image and Computer Readable Recording Medium
US10203566B2 (en) * 2015-12-21 2019-02-12 Facebook Technologies, Llc Enhanced spatial resolution using a segmented electrode array
US20180077437A1 (en) 2016-09-09 2018-03-15 Barrie Hansen Parallel Video Streaming

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020069265A1 (en) * 1999-12-03 2002-06-06 Lazaros Bountour Consumer access systems and methods for providing same
US20110063500A1 (en) * 2009-09-15 2011-03-17 Envysion, Inc. Video Streaming Method and System
US8576271B2 (en) * 2010-06-25 2013-11-05 Microsoft Corporation Combining direct and routed communication in a video conference
US20120254933A1 (en) * 2011-03-31 2012-10-04 Hunt Electronic Co., Ltd. Network video server and video control method thereof
US9307217B1 (en) * 2013-06-12 2016-04-05 Ambarella, Inc. Portable video camera/recorder having video security feature
US20150128174A1 (en) * 2013-11-04 2015-05-07 Broadcom Corporation Selecting audio-video (av) streams associated with an event
US20170013233A1 (en) * 2015-07-08 2017-01-12 Google Inc. Single-stream transmission method for multi-user video conferencing

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD838739S1 (en) * 2016-01-08 2019-01-22 Apple Inc. Display screen or portion thereof with graphical user interface
USD936689S1 (en) 2016-01-08 2021-11-23 Apple Inc. Display screen or portion thereof with graphical user interface
US20210195258A1 (en) * 2017-03-10 2021-06-24 Sling Media Pvt. Ltd. Media session management
US11595707B2 (en) * 2017-03-10 2023-02-28 Dish Network Technologies India Private Limited Media session management
US11973997B2 (en) * 2017-03-10 2024-04-30 Dish Network Technologies India Private Limited Media session management
US11064226B2 (en) * 2017-03-16 2021-07-13 Echo-Sense, Inc. System and method for concurrent data streams from a singular sensor with remotely selectable parameters
US11140455B1 (en) * 2017-06-09 2021-10-05 Amazon Technologies, Inc. Video encoder network sandboxing
US20220400364A1 (en) * 2021-06-10 2022-12-15 Getac Technology Corporation Providing alternate communication proxies for media collection devices
US11818637B2 (en) * 2021-06-10 2023-11-14 Getac Technology Corporation Providing alternate communication proxies for media collection devices
IL296044A (en) * 2022-08-29 2024-03-01 Abu Freh Ismael System and method for streaming video in real-time via virtual reality headset using a camera network
WO2024047634A1 (en) * 2022-08-29 2024-03-07 Abu Freh Ismael System and method for streaming video in real-time via virtual reality headset using a camera network
IL296044B1 (en) * 2022-08-29 2024-04-01 Abu Freh Ismael System and method for streaming video in real-time via virtual reality headset using a camera network

Also Published As

Publication number Publication date
US20190306541A1 (en) 2019-10-03
US10327014B2 (en) 2019-06-18
CN109565567A (en) 2019-04-02
KR20200078703A (en) 2020-07-01
CN112584080A (en) 2021-03-30
US10880582B2 (en) 2020-12-29
WO2018049201A1 (en) 2018-03-15
KR20190026804A (en) 2019-03-13
CN112584080B (en) 2023-10-24
KR102142643B1 (en) 2020-08-07
JP2022009242A (en) 2022-01-14
EP3510768A1 (en) 2019-07-17
DE202017105484U1 (en) 2018-01-09
US10750210B2 (en) 2020-08-18
EP3510768B1 (en) 2023-05-24
US20180077384A1 (en) 2018-03-15
US20200344500A1 (en) 2020-10-29
US20180077430A1 (en) 2018-03-15
JP2019533324A (en) 2019-11-14
CN109565567B (en) 2020-12-08
KR102256707B1 (en) 2021-05-26
JP7443314B2 (en) 2024-03-05
KR20200096322A (en) 2020-08-11
JP7001675B2 (en) 2022-01-19

Similar Documents

Publication Publication Date Title
US20180077437A1 (en) Parallel Video Streaming
US20180077438A1 (en) Streaming audio and video for sporting venues
US11871088B2 (en) Systems, apparatus, and methods for providing event video streams and synchronized event information via multiple Internet channels
US11770591B2 (en) Systems, apparatus, and methods for rendering digital content streams of events, and synchronization of event information with rendered streams, via multiple internet channels
EP3459252B1 (en) Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback
US9894323B2 (en) Systems and methods for providing interactive video services
US11153615B2 (en) Method and apparatus for streaming panoramic video
US11838563B2 (en) Switching between transmitting a preauthored video frame and a composited video frame
Liu et al. LIME: understanding commercial 360 live video streaming services
US20230379531A1 (en) Systems, apparatus and methods for rendering digital content
US11924397B2 (en) Generation and distribution of immersive media content from streams captured via distributed mobile devices
CN107800946A (en) A kind of live broadcasting method and system
CN108093300B (en) Animation capture management system
US20160182930A1 (en) Systems and methods for enabling simultaneous second screen data access during ongoing primary screen programming
Niamut et al. Live event experiences-interactive UHDTV on mobile devices
US20180063253A1 (en) Method, system and device for providing live data streams to content-rendering devices
KR102276636B1 (en) Method and Apparatus for Automatic Tracking and Replaying Images Based on Artificial Intelligence
US10623803B2 (en) Essence content creation, modification and/or delivery methods and systems
Niamut et al. Immersive live event experiences-interactive UHDTV on mobile devices
Tunturipuro Building a low-cost streaming system: Streaming and camera operating system for live internet productions
Schreurs et al. Delivering Multicamera Content to Smart Devices through Cloud Platforms
Thomas et al. Report on final demonstration. Fascinate deliverable D6.3.1.
DTO et al. Deliverable D6.

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION