US20180077430A1 - Cloned Video Streaming - Google Patents

Cloned Video Streaming

Info

Publication number
US20180077430A1
Authority
US
United States
Prior art keywords
clone
definition video
video stream
high definition
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/433,984
Inventor
Barrie Hansen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US15/433,984
Publication of US20180077430A1
Current legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/222Secondary servers, e.g. proxy server, cable television Head-end
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/29Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
    • GPHYSICS
    • G02OPTICS
    • G02FOPTICAL DEVICES OR ARRANGEMENTS FOR THE CONTROL OF LIGHT BY MODIFICATION OF THE OPTICAL PROPERTIES OF THE MEDIA OF THE ELEMENTS INVOLVED THEREIN; NON-LINEAR OPTICS; FREQUENCY-CHANGING OF LIGHT; OPTICAL LOGIC ELEMENTS; OPTICAL ANALOGUE/DIGITAL CONVERTERS
    • G02F1/00Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
    • G02F1/29Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
    • G02F1/291Two-dimensional analogue deflection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365Multiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/242Synchronization processes, e.g. processing of PCR [Program Clock References]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6106Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N21/6125Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/85406Content authoring involving a specific file format, e.g. MP4 format
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/157Conference systems defining a virtual conference space and using avatars or agents
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/02Casings; Cabinets ; Supports therefor; Mountings therein
    • H04R1/028Casings; Cabinets ; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4113PC
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/15Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops

Definitions

  • the subject application teaches embodiments that relate generally to streaming audio and video for sports venues, and specifically to video and audio capture, processing, and streaming of sporting events and practices.
  • Cameras used by broadcasters are typically large, complicated devices designed for professional camera personnel, and they include high resolution image capturing elements and expensive lenses with variable zoom. Cameras are typically mounted on tripods, slung from wires above sporting events, or attached to weight-bearing harnesses strapped to camera personnel who position themselves close to the action taking place on the field.
  • The cost of cameras and the expertise needed to operate them create a barrier for new entrants to the market, local small-market producers, schools, and individuals wanting to create audio and video of sporting events, either for their own use or for monetizing their work through third party subscription.
  • Broadcasters can offset the costs of obtaining, maintaining, and operating cameras, editing systems, and other broadcasting expenses through marketing and/or subscription revenues from their larger base of advertisers and/or consumers.
  • the present disclosure presents new modalities for streaming audio and video from sporting venues to viewers.
  • a method includes receiving streaming video from a number of cameras on a clone server and cloning at least a high definition video stream from each of the cameras to a clone-of-a-clone server.
  • the method includes streaming at least one high definition and one low definition video stream associated with the same camera from the clone-of-a-clone server to a user computing device such as a mobile phone or personal computer.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • the streaming video from one of the cameras can include both low definition and high definition video streams, or only a high definition video stream.
  • the method can include generating a low definition video stream from the high definition video stream; the low definition stream can be generated by either the clone server or the clone-of-a-clone server.
  • the method can include synchronizing the frames of the video streams that are sent to the user computing device, as sketched below.
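  • As a concrete illustration of the two preceding items, the following is a minimal Python sketch, assuming ffmpeg is installed and the clone server exposes the camera over RTSP; every URL and host name here is a hypothetical placeholder, not part of the disclosure. Because both outputs derive from one input, their frames carry the same timestamps, which is one way the parallel streams can stay synchronized.

      # Minimal sketch (not from the disclosure): fan one high definition feed
      # out as two parallel streams -- an untouched HD clone and a downscaled
      # low definition clone -- with a single ffmpeg process.
      import subprocess

      def clone_streams(source_url: str, hd_sink: str, ld_sink: str) -> subprocess.Popen:
          cmd = [
              "ffmpeg", "-i", source_url,
              # Output 1: clone the high definition stream without re-encoding.
              "-map", "0:v", "-c:v", "copy", "-f", "rtsp", hd_sink,
              # Output 2: generate the low definition clone at the nominal
              # CIF size of 320x240 used in this disclosure.
              "-map", "0:v", "-vf", "scale=320:240", "-c:v", "libx264",
              "-f", "rtsp", ld_sink,
          ]
          return subprocess.Popen(cmd)  # both outputs share the input timestamps

      if __name__ == "__main__":
          clone_streams(
              "rtsp://clone-server.local/cam1",    # assumed camera clone source
              "rtsp://coc-server.local/cam1/hd",   # assumed HD endpoint
              "rtsp://coc-server.local/cam1/ld",   # assumed LD endpoint
          )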
  • a system includes a clone server and a clone-of-a-clone server.
  • the clone server is configured to receive streaming video from a number of cameras and transmit clones of the streaming video from some or all of the cameras to the clone-of-a-clone server.
  • the clone-of-a-clone server is configured to receive clones of the streaming video and stream both a high definition and a low definition video stream from some or all of the received clones of streaming video to a user computing device.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • the low definition video can be CIF, VGA, 4CIF, or D1 resolution.
  • the high definition video stream can be 720p, 1 Megapixel, or 1080p. Other resolutions and video encoding standards can be used.
  • the system can include the cameras which are configured to stream video to one or more clone servers, and the streaming video can include both high definition and low definition video streams or only a high definition video stream.
  • if the clone server receives only a high definition video stream, a clone of the high definition video stream can be transmitted to the clone-of-a-clone server, and the clone-of-a-clone server can be configured to generate an associated low definition video stream from the high definition video stream.
  • the clone-of-a-clone server can synchronize frames of the high definition and low definition video streams streamed to the user computing device.
  • the system can include a plurality of clone servers, and each clone-of-a-clone server can be configured to receive video streams from one or more clone servers.
  • the system can include a plurality of clone-of-a-clone servers, and each clone server can be configured to send video streams to one or more of the clone-of-a-clone servers.
  • the system can include a user computing device that is configured to receive the high definition and low definition video streams. The user computing device is configured to display each of the low definition video streams, receive a user selection of one of the displayed low definition video streams, and display the high definition video stream associated with that selection.
  • a system includes a clone server and a clone-of-a-clone server.
  • the clone server is configured to receive a high definition video stream from one or more cameras and clone the high definition video stream to the clone-of-a-clone server.
  • the clone-of-a-clone server is configured to receive the cloned high definition video streams, generate a low definition video stream for each received high definition video stream, and selectively stream parallel video streams to a user computing device, where each parallel video stream comprises one of the high definition video streams and the low definition video stream generated from it.
  • the low definition video stream can use the common intermediate format, or CIF, nominally at 320×240 pixels.
  • the high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • FIG. 1 is a diagram of an audio/video system for sporting venues according to an embodiment of the disclosure.
  • FIG. 2 is a diagram of an impact-resistant camera housing according to an embodiment of the disclosure.
  • FIG. 3 is a diagram of a sports helmet with integrated audio/video system according to an embodiment of the disclosure.
  • FIG. 4 is a diagram of example audio/video and network components according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of example operations for networking audio/video components according to an embodiment of the disclosure.
  • FIG. 6 is a flow diagram of example data connections according to an embodiment of the disclosure.
  • FIG. 7 is a diagram of an example screen for selecting from multiple audio and video feeds according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of example operations for custom content creation according to an embodiment of the disclosure.
  • FIG. 9 is a diagram of components of an example computing device configured for audio/video operations according to an embodiment of the disclosure.
  • FIG. 10 is a functional block diagram of example modules of an audio/video streaming system.
  • FIG. 11 is a diagram of example video resolutions.
  • FIG. 12 is a diagram of an example clone streaming system for parallel streams.
  • the systems and methods disclosed herein describe various aspects of real-time video for sporting venues. Although the disclosed system and method are described below with regard to one or more computing devices, and in particular mobile computing devices, the system and method can be used with any suitable computing device, including but not limited to mobile phones, smart phones, pad computing devices, laptops, personal computers, desktops, servers, embedded controllers, and so forth, among other possibilities.
  • the system 100 includes one or more audio/video streaming devices illustrated as cameras 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , and 13 .
  • cameras 1 , 11 , 12 , and 13 can be fixed cameras in an arena
  • cameras 2 , 5 , 6 , and 9 can be movable cameras that follow players or the action in the arena
  • camera 4 can be a camera ideally positioned to point at a scoreboard
  • cameras 3 , 7 , and 8 can be helmet cameras mounted to the helmets of certain players
  • camera 10 can be a pair of helmet cameras configured to provide a 3D virtual reality view from a player's perspective, such as a goalie's view.
  • a wired microphone can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3.
  • wired microphones are analog devices that are connected via cables to a head end unit; long cables require sufficient electrical insulation to avoid interference and a substantial wire gauge, which makes them expensive and heavy. Even with quality electrical insulation and properly gauged wire, purely analog solutions are subject to attenuation losses and noise, affecting the signal-to-noise ratio of the signal received at the head end unit.
  • Power over Ethernet advantageously can be used to both provide power to devices and to provide a wired communications medium for the devices.
  • Wireless communications can be effected using Wi-Fi or other wireless protocols, including but not limited to Bluetooth or Li-Fi.
  • the system 100 can include a private network, shown as intranet 110 , configured for data communications between the devices and a streaming system 120 .
  • the streaming system 120 is configured to support audio and video streams from the devices, and convert them as required, as described below in greater detail.
  • the streaming system 120 can include storage 130 for storing the audio and video streams.
  • the streaming system 120 can allow users 150 to stream audio and video from the devices or from storage 130 .
  • the camera housing 200 is configured to withstand vibrations, shocks, and impact to a camera mounted within the camera housing 200 .
  • the camera housing 200 can protect the camera from the impact, and also ensure that parts from a damaged camera, such as glass or electronics, do not end up on spectators or players or on the ice where sharp or heavy pieces might cause injury.
  • the camera housing 200 is structurally configured to protect the camera while allowing connection to electrical components such as cables or wires for power and data communications.
  • the camera housing 200 can also be configured to provide clean air for the camera, and remove heat dissipated by the camera.
  • the camera housing 200 comprises a dome assembly that attaches at one end of a drum 201 .
  • the dome assembly comprises a transparent dome cover 203 and a retainer ring 204 .
  • the dome assembly can be coupled to the drum 201 using complementary threading, screws, nuts, bolts, washers (not shown), and the like, as would be understood in the art.
  • a camera can be mounted inside the camera housing 200 , for example on a support structure having support members (not shown) that contact the interior wall of the drum 201 .
  • the support members can be configured to dampen vibrations as would be understood in the art.
  • An example support structure can be a disk that rests against pliable dampeners that act as support members and that seat the disk along a cross section of the drum 201 .
  • a camera can be mounted to the disk, for example using screws or other suitable fasteners.
  • the drum 201 can include threaded holes 202 A, 202 B to attach camera angle travel limiters inside the drum 201 thereby limiting the camera rotation to a predetermined angle.
  • Camera angle travel limiters limit the camera's rotation angle to prevent the camera from being damaged during rotation, or to ensure that the camera always points at a certain area of the arena. For example, it may be desirable to use angle travel limiters to ensure that a camera cannot accidentally be pointed at spectators.
  • the threaded holes 202 A, 202 B do not penetrate the drum 201 and are accessible only from the inside of the drum 201 .
  • a mounting cover comprises retainer ring 205 and cover plate 206 .
  • Cover plate 206 can include collar 207 configured to accept a support rod 210 that connects to a support structure 211 and mounting plate 212 .
  • the mounting plate 212 can be attached to a structure in the arena such as a wall, ceiling, support beam, and so forth.
  • a quick link 208 can be used as a backup failsafe to further anchor the camera housing 200 to a wall or support structure, for example using metal cables or rope. This ensures that the camera housing 200 does not fall onto spectators, players, or the arena if the mounting plate 212 were to become detached for any reason.
  • the support rod 210 can be hollow, providing for passage for electrical components such as wires, cables, and so forth.
  • the cover plate 206 can include threaded screw holes 209 A, 209 B, 209 C, 209 D for connecting the cover plate 206 and ring 205 to the drum 201 .
  • long screws can be used that pass through the drum 201 and also connect the dome assembly to the drum 201 .
  • the helmet 300 can include one or more cameras 302 and/or microphones 304 .
  • the camera 302 can use a standard definition or high definition frame size and frame rate, such as 720p, 1080i, 1080p, 2K, or 4K at 30 frames per second (fps), 60 fps, or 120 fps, or lower frame rates.
  • a helmet cam for providing a 3D virtual reality video feed can include two spatially separated cameras 302 as would be understood in the art.
  • the microphone 304 can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3.
  • the camera 302 and microphone 304 can be a single unit.
  • the camera 302 and microphone 304 are in communication with an embedded controller 306 .
  • the embedded controller 306 can include custom designed electronics, for example a chip or microcontroller with a Wi-Fi or other antenna.
  • the embedded controller 306 can include a modified smartphone.
  • the camera element and microphone element from a smartphone can be separated from the modified smartphone and used as camera 302 and microphone 304.
  • the embedded controller 306 can stream one or more video or audio streams from the camera 302 and/or microphone 304 .
  • Data communications from the embedded controller 306 can include Wi-Fi.
  • a microphone 450, for example a wired microphone configured to be placed near the glass surrounding a hockey rink, can be connected to a proxy server 410 via an Ethernet cable such as a CAT 6 cable.
  • the communications protocol between the proxy server 410 and microphone 450 can be USB over Ethernet, among other possible protocols as would be understood in the art.
  • the Ethernet cable can provide power to the microphone 450 .
  • the microphone can use a wireless network such as Wi-Fi or Li-Fi.
  • An IP camera 460, for example an IP camera configured to be placed inside the impact-resistant camera housing 200 of FIG. 2, can be connected to a PoE switch 430 using a CAT 6 cable.
  • the PoE switch 430 can provide power to the IP camera 460 .
  • the communications protocol between the proxy server 410 and IP camera 460 can be RTSP (real-time streaming protocol), among other possible protocols as would be understood in the art.
  • a wireless helmet camera 470 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440 .
  • Wi-Fi router 440 can be connected to the proxy server 410 via PoE switch 430 or by a direct connection to the proxy server 410 .
  • the communications protocol between the proxy server 410 and wireless helmet camera 470 can be RTSP (real-time streaming protocol), H.264, or H.265, among other possible protocols as would be understood in the art.
  • a wireless microphone 480 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440 .
  • the proxy server 410 can receive digitized audio, for example an MP3 stream, by establishing a connection with the wireless microphone, for example using hypertext transfer protocol, or HTTP. Other communication protocols could also be used as would be understood in the art.
  • the proxy server 410 receives audio and video streams from microphones 450 , 480 and cameras 460 , 470 .
  • the proxy server 410 can store the streams to a memory, such as data store 420 for archiving or temporary storage.
  • the proxy server 410 and data store 420 reside in the same hardware.
  • the proxy server 410 can convert each video or audio stream to one or more common formats, sampling or compression rates, and frame sizes.
  • the proxy server 410 can receive a video stream and convert it to a standard H.264 or MPEG video stream prior to storing in data store 420 .
  • the proxy server 410 can store two or more different video streams from the same received video stream.
  • the proxy server 410 can convert a received video stream into a small thumbnail-sized video stream and a full size video stream.
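  • One way to picture this conversion step is the sketch below, which pulls a single RTSP feed with OpenCV and writes both a full-size archive copy and a thumbnail-sized copy; the URL, file names, and thumbnail size are illustrative assumptions, not details from the disclosure.

      # Sketch of the proxy server's ingest-and-convert step: archive one
      # camera feed at full size and as a thumbnail-sized clone.
      import cv2

      def ingest(rtsp_url: str, full_path: str, thumb_path: str,
                 thumb_size: tuple = (320, 240)) -> None:
          cap = cv2.VideoCapture(rtsp_url)
          fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
          size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                  int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
          fourcc = cv2.VideoWriter_fourcc(*"mp4v")
          full = cv2.VideoWriter(full_path, fourcc, fps, size)
          thumb = cv2.VideoWriter(thumb_path, fourcc, fps, thumb_size)
          while True:
              ok, frame = cap.read()
              if not ok:                 # dropped connection or end of stream
                  break
              full.write(frame)                           # full-size copy
              thumb.write(cv2.resize(frame, thumb_size))  # thumbnail copy
          cap.release(); full.release(); thumb.release()

      ingest("rtsp://camera.local/stream",   # assumed camera URL
             "archive/cam1_full.mp4", "archive/cam1_thumb.mp4")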
  • two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
  • Operation commences at start block 500 labeled “START” and proceeds to process block 502 .
  • In process block 502, the wireless device is powered on. Processing continues to process block 504.
  • In process block 504, the wireless device detects a Wi-Fi network.
  • the wireless device can be preconfigured to connect to a specific Wi-Fi network by name, or service set identifier (SSID).
  • the Wi-Fi network may be configured not to broadcast the SSID, for example to prevent the wireless network from being visible on spectators' mobile devices in the arena.
  • the wireless device may detect the Wi-Fi network by querying for the Wi-Fi network using the preconfigured SSID. Processing continues to decision block 506 .
  • In decision block 506, if the wireless device has previously received an IP address, processing continues to process block 514; otherwise processing continues to process block 508.
  • In process block 508, the wireless device requests an IP address using the Dynamic Host Configuration Protocol, or DHCP. Processing continues to process block 510.
  • In process block 510, a DHCP server receives the DHCP request from the wireless device and provides an IP address to the wireless device.
  • the DHCP server reserves a fixed IP address for each wireless device.
  • reserving a fixed IP address for each wireless device facilitates determining which video or audio feed belongs to each wireless device.
  • a fixed or reserved IP address simplifies allowing multiple users to receive video feeds from specific wireless devices, because players' helmet cams may disconnect and reconnect to the Wi-Fi network as the players move about the arena during game play. Without fixed or reserved IP addresses, the IP addresses of helmet cams could change during game play, forcing live streams to disconnect and reconnect. Processing continues to process block 512.
  • In process block 512, the wireless device receives the IP address from the DHCP server. Processing continues to process block 514.
  • In process block 514, the wireless device streams audio and/or video to the proxy server using the configured IP address. Processing continues to decision block 516.
  • In decision block 516, if the connection to the wireless device drops, processing continues to decision block 518; otherwise processing returns to process block 514 to continue streaming the audio and/or video.
  • In decision block 518, if the connection dropped due to a power-off event or a signal to end streaming, processing terminates at end block 520; otherwise processing returns to process block 504 to attempt to reconnect to the Wi-Fi network.
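  • The control flow of FIG. 5 amounts to a retry loop, sketched below in Python; join_network and stream_once are hypothetical stand-ins for platform-specific Wi-Fi and streaming calls, and the SSID is an assumed example.

      # Sketch of the FIG. 5 flow for a wireless camera or microphone.
      import time

      PRECONFIGURED_SSID = "arena-av"   # assumed hidden network name (block 504)

      def join_network(ssid):
          """Stand-in for associating with the preconfigured, hidden SSID."""
          return True

      def stream_once():
          """Stream until stopped; report why: 'power_off', 'stop', or 'drop'."""
          return "power_off"

      def run():
          while True:
              if not join_network(PRECONFIGURED_SSID):   # process block 504
                  time.sleep(1.0)                        # retry the SSID query
                  continue
              # Blocks 506-512: the OS obtains an IP address via DHCP; a
              # reserved address keeps this device's feed identifiable
              # across reconnects.
              reason = stream_once()                     # process block 514
              if reason in ("power_off", "stop"):        # decision block 518
                  return                                 # end block 520
              # a dropped connection loops back to rejoin the network

      run()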
  • example data connections are illustrated for an embodiment of the audio/video system 600 .
  • in an arena 602, such as a hockey arena or another sporting or entertainment venue, one or more fixed or moveable cameras 604 , helmet cams 606 , and microphones 608 are in data communication with a proxy server 612 through data communications equipment represented by wireless hub 610 .
  • the proxy server 612 provides one or more ports through which video and audio data streams can be accessed by users 630 , either in real-time or through viewing stored data streams.
  • a firewall 614 such as a specially configured router or dedicated piece of data communications equipment, prevents unauthorized users 630 from accessing data streams from the proxy server 612 .
  • users 630 first access a website system 620, which provides authentication information for accessing the data streams through the firewall. Authenticated users 630 connect through the firewall to the proxy server 612, and selected data streams are obtained from the proxy server 612 and presented on the screens of the users 630.
  • the website system 620 is able to connect through the firewall 614 to the proxy server 612, which streams to the website system 620. Users 630 that are authenticated on the website system 620 receive data streams that pass through the website system 620 from the proxy server 612.
  • two or more proxy servers can be used, for example a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see for example, FIG. 12 and associated description.)
  • Multiple end users 630 can simultaneously use the audio/video system 600 .
  • the audio/video system 600 can simultaneously support multiple events occurring in different venues.
  • the audio/video system 600 can allow users 630 to create their own customized streams. For example, a first end user 632 can view different live streams from the audio/video system 600 during a particular sporting event.
  • a second end user 634 can generate a customized stream based on a current live stream, or stored data streams of a previous sporting event.
  • a third end user 636 can stream the customized stream of the second end user 634 .
  • Each end user 630 can use a different kind of computing device, for example a mobile device such as a smartphone or tablet, a laptop, a desktop, and so forth.
  • the first end user 632 can be streaming to a mobile computing device that is using a dedicated application or app that has been downloaded to a mobile computing device.
  • the second end user 634 can be using a high end workstation with a fast Internet connection for editing and generating their customized stream.
  • the third end user 636 can be using an Internet browser and clicking a link to access the customized stream of the second end user 634.
  • the bit rate, frame rate, and frame size of the video and audio streams can be optimized for the type of end user computing device and connection speed.
  • the selection screen 700 presents thumbnail views 710 from each of the cameras and microphones.
  • Some thumbnail views 710 may not include audio or video, either because the feed does not include audio or video, or due to a lost connection.
  • Some thumbnail views, such as the thumbnail view for camera 10, may include left and right views, allowing a user with a 3D viewing device connected to their video device to view a sporting event as a virtual reality experience from one or more of the players' perspectives.
  • the user can select from one or more of the thumbnail views 710 , for example by clicking on a particular thumbnail view 710 or dragging a thumbnail view to a focus window 720 .
  • the currently selected video is presented in a focus window 720 that typically is larger than the thumbnail views.
  • Clicking a camera icon associated with each thumbnail view 710 allows a user to select whether video, audio, or both are to be presented to the user, for example via the focus window 720 .
  • a user can select video from one device and audio from another device.
  • the user can customize the screen 700 , for example to reorganize the order or size of the thumbnail views 710 , to have two or more focus windows. Different user controls and window arrangements can be presented to the user as would be understood in the art.
  • the focus window 720 can be selected by the user and clicked to toggle between full screen and the illustrated split screen that includes both the focus window 720 and the thumbnail view 710 .
  • clicking on the focus window 720 will cycle between a group of selected thumbnail views 710 . This can be particularly useful to a user viewing the event using VR or 3D viewing devices.
  • Operation commences at start block 800 labeled “START” and proceeds to process block 802 .
  • In process block 802, the streaming system receives streams from devices such as cameras and microphones. Processing continues to process block 804.
  • In process block 804, the streaming system streams one or more device streams to users 808, for example through the selection screen 700 of FIG. 7.
  • users 808 can join a live stream of a sporting event or view a saved stream in process block 806 .
  • Processing continues to decision block 810 .
  • In decision block 810, if the streaming system is configured to auto-select the focus window, processing continues to process block 812; otherwise processing continues to decision block 814.
  • In process block 812, the streaming system selects a feature that is used to determine the focus window.
  • the streaming system can select the feature to be the camera where the puck is located, or the microphone that is loudest.
  • the selected feature can change dynamically during the game or practice.
  • the selected feature can be the penalty box subsequent to determining that an official has blown a whistle and the clock has been stopped, or the scoreboard after a change to a score on the scoreboard, or a particular player when that player enters the ice in the arena.
  • the streaming system attempts to select devices to present the best user experience of the sporting event. Processing continues to process block 818 where the streaming system determines the focus window based on the selected feature.
  • In decision block 814, if a user manually selects a feature to use as the selected feature, processing continues to process block 816 to receive the user selection; otherwise processing continues to decision block 820.
  • In process block 816, the streaming system receives a selection of a feature to use for selecting the focus window from the available devices.
  • a user who is a scout may desire to follow one particular athlete, and thus use the streaming system in a scouting mode.
  • the scout may select as the feature a jersey number of the particular athlete, in which case the streaming system in process block 818 will determine which camera shows the athlete's jersey number best.
  • an avid fan of a particular player may desire to have that player as the focus of attention while still watching the game in progress, in which case the camera could be selected that displays both the player and the puck the majority of the time while the selected audio device could be from the helmet of the player or the audio device closest to the player. Processing continues to process block 818 .
  • In process block 818, the streaming system determines the focus window from the available cameras and microphones.
  • the streaming system can track players on the ice, or on other playing surfaces for other sports, and use player position and motion data to determine the best camera and microphone to use in the focus window.
  • the streaming system can use the selected feature from process block 812 and/or process block 816 in determining the best device to display in the focus window.
  • the streaming system can determine when a particular device is not streaming, or has a connection issue, and switch to the next best device, as sketched below. Processing continues to decision block 820.
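  • As a rough illustration of process block 818, the sketch below ranks candidate devices by one of the example features named above (microphone loudness, computed as RMS over the latest audio chunk) and skips devices with connection issues; the data shapes are assumptions for illustration.

      # Sketch: pick the focus-window device by loudness, falling back to
      # the next-best device when a feed is down (process block 818).
      import math

      def rms(samples):
          """Root-mean-square loudness of one audio chunk."""
          return math.sqrt(sum(s * s for s in samples) / len(samples)) if samples else 0.0

      def pick_focus(latest_audio, alive):
          """latest_audio: device id -> recent samples; alive: connected ids."""
          ranked = sorted(latest_audio, key=lambda d: rms(latest_audio[d]), reverse=True)
          for device in ranked:
              if device in alive:        # skip devices with connection issues
                  return device
          return None

      print(pick_focus({"mic1": [0.1, -0.2], "mic2": [0.7, -0.9]}, {"mic1", "mic2"}))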
  • In decision block 820, if a user selects a particular device to use in the focus window, for example to override a device selected by the streaming system in process block 818, processing continues to process block 822; otherwise processing continues to decision block 824.
  • In process block 822, the streaming system changes the focus window to the user-selected device or devices. Processing continues to decision block 824.
  • In decision block 824, if the user adds user-created content to the content stream, processing continues to process block 826; otherwise processing continues to decision block 828.
  • In process block 826, a user adds user-created content to the content stream.
  • the user may have a microphone connected to their computing device and can add live commentary, such as player analysis or real-time play-by-play announcing such as is performed by professional announcers and commentators.
  • sophisticated users can include user-created video such as replay clips or on-screen annotation. Processing continues to decision block 828 .
  • In decision block 828, if the stream is offered to other users, processing continues to process block 830; otherwise processing continues to decision block 834.
  • In process block 830, a custom stream can be saved.
  • in a configuration, metadata is saved that includes time-stamped tracking of which device(s) were selected for the focus window(s); in this way, the custom stream can be recreated as needed from saved video streams (see the sketch below).
  • a new stream can be saved separately for each custom created stream.
  • the original sources or streams can be saved for a configurable period of time, and then purged at a particular expiration date to recover storage space.
  • custom streams can be saved and stored for a period of time before being purged. For example, a single custom stream created by the streaming system might be stored indefinitely, while the remaining streams are purged. Processing continues to process block 832 .
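  • One way to realize the time-stamped metadata described above is sketched below: each focus-window switch is appended as a record, and the saved file can later drive recreation of the custom stream from the archived source streams. The field names and JSON layout are illustrative assumptions.

      # Sketch: record focus-window switches as time-stamped metadata so a
      # custom stream can be recreated from saved source streams.
      import json, time

      selections = []

      def record_selection(video_device, audio_device):
          selections.append({
              "t": time.time(),        # when the switch happened
              "video": video_device,   # device shown in the focus window
              "audio": audio_device,   # device whose audio is played
          })

      def save(path):
          with open(path, "w") as f:
              json.dump({"version": 1, "selections": selections}, f)

      record_selection("camera3", "mic_helmet7")
      save("custom_stream_meta.json")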
  • In process block 832, users can be invited to view a custom stream.
  • a stream automatically generated by the streaming system can be shown on a schedule of available live or saved games for viewing by users.
  • the streaming system can also include user-created custom streams in the schedule, and allow other users to rate user-created streams.
  • a user that creates custom content can generate a link to their custom stream that can be forwarded to other users, for example through social media.
  • a link can be placed on a FACEBOOK page, a clip and link uploaded to the user's INSTAGRAM or TWITTER account, or a link can be emailed to potentially interested parties, for example using an email list and advertisement.
  • Other uses of social media, either currently extant or yet to be developed, can be utilized as would be understood by one of skill in the art. Processing continues to decision block 834.
  • In decision block 834, if the sports event is determined to be over or if the saved stream has concluded, processing terminates at end block 836; otherwise processing returns to process block 804 to continue streaming content to users.
  • the costs of creating audio-video content are substantially reduced by allowing users, or the streaming system itself, to determine which video and audio stream to use as the focus window(s), especially when compared to the costs incurred by professional broadcast services such as the major television networks.
  • the use of relatively inexpensive cameras, microphones, and networking equipment allows that equipment to be more or less permanently placed in a sporting venue and used for whatever events occur in the venue, whether they are sporting events, entertainment events, or other events. This opens the opportunity to allow streaming of practices, pre-season games, minor-league games, club-level events, and even high-school events to interested parties.
  • the present system democratizes the capture, production, and distribution of content from all levels of sporting venues.
  • Example computing devices 900 can be servers, desktop systems, mobile computing devices, embedded controllers, wireless cams and microphones, and so forth. Included are one or more processors, such as that illustrated by processor 904 . Each processor is suitably associated with non-volatile memory, such as read only memory (ROM) 910 and random access memory (RAM) 912 , via a data bus 914 .
  • Processor 904 is also in data communication with a storage interface 916 for reading or writing to a data storage system 918 , suitably comprised of a hard disk, memory or solid-state disk, or any other suitable data storage as will be appreciated by one of ordinary skill in the art.
  • Processor 904 is also in data communication with a network interface controller (NIC) 930 , which provides a data path to any suitable wired or physical network connection via physical network interface 934 , or to any suitable wireless data connection via wireless network interface 938 or cellular interface 936 , such as one or more of the networks detailed above.
  • Processor 904 is also in data communication with an input/output (I/O) interface 940 which provides data communication with devices such as a microphone 946 or camera 948 or user peripherals, such as a touchscreen display 944 , keyboard, or mouse or any other suitable user interface.
  • functional units are suitably comprised of intelligent units, including any suitable hardware or software platform.
  • a user interface module 1002 serves web pages to users and administrators that provide a graphical user interface for logging into the system, viewing camera and microphone locations, viewing calendars of upcoming sporting events and archived streams, selecting sporting events or recorded streams to view, receiving video and audio streams from the proxy server through the firewall, customizing the user's thumbnail and focus window views, and interacting with the system in general.
  • User accounts, configuration data, calendar information, stream information, and other data can be stored in a database 1010 or other suitable memory.
  • a scheduler engine 1004 can schedule recordings of sporting events by the proxy server.
  • An analytics engine 1006 can analyze video and audio streams. For example, the analytics engine 1006 can determine when a video or audio feed has disconnected, switch a user's focus window to another available stream, and switch back once the video or audio feed reconnects. Similarly, the analytics engine 1006 can monitor video or audio streams and either black out some or all of a stream in real-time, or switch the focus window to a different stream. The analytics engine 1006 can be used to detect objectionable language in audio, or objectionable images in a video feed, for example nudity, political messages, unauthorized advertising, excessive violence, and so forth. In a configuration, the analytics engine 1006 can be rules-based or use heuristics or other suitable analytics to perform an analysis of one or more streams.
  • the analytics engine 1006 can receive a copy of the streams from a proxy server, or clone-of-a-clone system of FIG. 12 , and the analytics engine 1006 can be executing on any suitable system as would be understood in the art.
  • the analytics engine 1006 can also track selected features for determining which stream to use in a focus window. For example, when the website system is being used by a user who is a scout, or if the system is set to use a scout mode, an individual player can be tracked in multiple video streams, for example by jersey number. The analytics engine 1006 can determine the optimal video and audio streams to use to track the selected player or feature being tracked.
  • the analytics engine 1006 can also perform analysis of helmet cam video and/or audio, for example to track where a player is looking or to determine how the player is moving the helmet.
  • the analytics engine 1006 can determine if rapid helmet movements are suggestive of violent impacts which could cause concussions.
  • the analytics engine 1006 can monitor a helmet cam for video and/or audio that might indicate a concussion, injury, or exhaustion of the player, for example movements of the helmet that are atypical for the player, such as looking down more often, looking up, not turning the head in one particular direction, not following the puck (or a ball, as might be used in other sporting events), a delay in following the puck or the action of the game, not looking where other players are looking, and so forth.
  • a player's typical pattern of helmet movements can be analyzed and saved for reference and comparison.
  • the analytics engine 1006 can send an alert to a coach or medical professional via a text message, email, or other suitable alert, for example using the user interface 1002 .
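  • A minimal sketch of the screening idea above, assuming helmet motion has already been reduced to per-interval movement magnitudes; the three-sigma threshold and the alert hook are assumptions, not values from the disclosure.

      # Sketch: flag helmet motion that is atypical for a player by
      # comparing recent movement magnitudes against a saved baseline.
      from statistics import mean, stdev

      def atypical(recent, baseline, k=3.0):
          """True when recent motion deviates more than k sigma from baseline."""
          mu, sigma = mean(baseline), stdev(baseline)
          return abs(mean(recent) - mu) > k * sigma

      def send_alert(message):
          print("ALERT:", message)   # stand-in for the text/email alert

      def screen(player, recent, baseline):
          if atypical(recent, baseline):
              send_alert(f"Review helmet-cam motion for {player}")

      screen("player7", recent=[9.0, 8.5, 9.2], baseline=[2.0, 2.2, 1.9, 2.1])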
  • a tracking engine 1008 can track one or more players' movements in the arena.
  • the tracking engine 1008 can turn a player's movements into vector data, or any other suitable position data.
  • the tracking engine 1008 can work in conjunction with the analytics engine 1006 .
  • the tracking engine 1008 can provide player position or vector data to the analytics engine 1006 that is used to determine which camera and audio feed to use in the focus window(s).
  • each player can be analyzed to create a digital representation of that player.
  • Example data that can be determined can include position, speed, direction, acceleration, deceleration, linearity, non-linearity, circularity, time, and other measurements as would be understood in the art.
  • the tracking engine 1008 and analytics engine 1006 can determine the correct camera frame to provide to a user based on the player data. For example, the system can sum all of the vectors or the kinetic energy for each frame and/or camera stream and switch the focus window to a particular camera stream based on that calculation (see the sketch below).
  • tracking data can be combined with video data to provide a visual representation of players' movements during practice or a game.
  • tracking data and/or analytics data can be combined with video and/or audio data to provide player performance information to coaches, scouts, and interested viewers and fans.
  • the tracking engine 1008 can receive position data from helmet cams, for example position data derived from GPS or radio signal triangulation. Tracking and analytics data can be stored in the database 1010 or any other suitable memory.
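  • The vector-summation example above can be made concrete with a small sketch: sum a kinetic-energy-like quantity (1/2 m v^2) over the players visible in each camera and focus on the busiest view. Player masses, speeds, and the data shapes are illustrative assumptions.

      # Sketch: choose the focus camera by summing kinetic energy
      # (1/2 * m * v^2) over the tracked players visible in each view.
      def frame_energy(players):
          """players: list of (mass_kg, speed_m_per_s) tuples in one view."""
          return sum(0.5 * m * v * v for m, v in players)

      def busiest_camera(views):
          """views: camera id -> players currently visible in that camera."""
          return max(views, key=lambda cam: frame_energy(views[cam]))

      print(busiest_camera({
          "cam1": [(90, 3.0), (85, 1.0)],   # two slow players
          "cam2": [(80, 7.5)],              # one fast player near the puck
      }))   # -> "cam2"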
  • Standard resolutions can include common intermediate format (CIF) at 352×240 or 352×288 pixels, VGA at 640×480 pixels, and 4CIF/D1 at 704×480, 704×576, or 720×480 pixels.
  • High definition resolutions can include 720p at 1280×720 pixels, 1 Megapixel at 1280×1024 pixels, and 1080p at 1920×1080 pixels.
  • Ultra high resolution formats are also contemplated, for example QHD at 2560×1440 pixels, UHD or 4K at 3840×2160 pixels, and so forth.
  • Standard resolution can also include QCIF at 176×120 or 176×144 pixels. Streaming video can be interlaced or progressive scan as appropriate for the resolution.
  • Audio can similarly be encoded, for example as 19.2 kb/s PCM, 9.6 kb/s ADPCM, MP3, or any other suitable encoding or compression as would be understood in the art.
  • the disclosed resolutions are presented as non-limiting examples only. Other suitable resolutions can also be used as would be understood in the art.
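  • For reference, the resolutions enumerated above can be collected into a small registry that a server might consult when pairing a low definition stream with a high definition one; the helper and its 720-line cutoff are a sketch mirroring this disclosure's split between standard and high definition.

      # Registry of the resolutions enumerated above (width, height in pixels).
      RESOLUTIONS = {
          "QCIF": (176, 144), "CIF": (352, 288), "VGA": (640, 480),
          "4CIF/D1": (704, 576), "720p": (1280, 720), "1MP": (1280, 1024),
          "1080p": (1920, 1080), "QHD": (2560, 1440), "4K/UHD": (3840, 2160),
      }

      def is_high_definition(name):
          """720-line-or-taller formats count as high definition here."""
          _, height = RESOLUTIONS[name]
          return height >= 720

      assert is_high_definition("1080p") and not is_high_definition("CIF")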
  • a plurality of cameras 1202 are configured to stream video across one or more local network connections 1204 to a clone server 1206 .
  • the cameras 1202 can be configured to stream a high definition video stream, such as 1080p at 1920×1080 pixels.
  • one or more cameras 1202 can be configured to stream both a low definition video stream, such as CIF at 320×240 pixels, and a high definition video stream.
  • different cameras 1202 can stream in different resolutions. For example, camera 1 could stream in 1080p, while camera 2 streams in 4K and camera n streams at 1 Megapixel.
  • Each camera 1202 streams across a local network connection 1204 , such as a LAN, WiFi, LiFi, Power over Ethernet, or any other suitable network for example as described with respect to the devices of FIG. 1 .
  • the clone server 1206 receives each of the streams from the cameras 1202 .
  • the clone server 1206 can store each of the streams from each of the cameras 1202 .
  • the streams are stored temporarily, or ephemerally, before being streamed to one or more clone-of-a-clone servers 1210 and/or to cloud storage 1213 .
  • the clone server 1206 can store each stream for a longer period of time, for example as permanent storage.
  • the clone server 1206 is in network communication, for example using a VPN or virtual private network, with one or more clone-of-a-clone servers 1210 through firewall 1207 , which can be a suitable router or other suitable network element.
  • the clone server 1206 clones the live video streams 1208 from the cameras 1202 onto the clone-of-a-clone server 1210 .
  • Each clone-of-a-clone server 1210 receives live video streams 1208 associated with each of the cameras 1202 .
  • each clone-of-a-clone server 1210 can receive live video streams 1208 from a subset of all of the available cameras 1202 associated with the clone server 1206 .
  • each clone-of-a-clone server 1210 can receive live video streams 1208 from multiple clone servers 1206 and associated cameras 1202 .
  • the clone-of-a-clone servers 1210 can be anywhere in the network, for example in the cloud 1216 as shown, at an ISP or Internet Service Provider, in a colocation premises, in the arena 1214 or any other suitable place.
  • the clone-of-a-clone server 1210 can be hosted by a service company that provides high speed cloud hosting services, such as AMAZON, as would be understood in the art.
  • the clone server 1206 also sends recorded video streams 1209 to cloud storage 1213 .
  • Cloud storage 1213 can include network servers, redundant network storage hosted by third party companies, and other suitable cloud storage as would be understood in the art.
  • the recorded video streams 1209 can include live video streams.
  • the clone server 1206 , the clone-of-a-clone server 1210 , and cloud storage 1213 allow the system architecture to easily scale to support any number of cameras 1202 and users.
  • the clone server 1206 aggregates video streams from multiple cameras 1202 . Additional clone servers 1206 can be used to accommodate more cameras 1202 as needed.
  • Each clone-of-a-clone server 1210 receives cloned video streams from one or more clone servers 1206 and supports forwarding video streams to multiple users. Additional clone-of-a-clone servers 1210 can be used to accommodate more users when needed.
  • Cloud storage 1213 can be scaled as necessary to support automated recording of live video streams and playback of video streams by users.
  • a web server 1211 can provide front end web services for users to interact with the system and gain access to the live video streams and recorded video from the clone-of-a-clone servers 1210 and cloud storage 1213 .
  • Clone-of-a-clone servers 1210 can be configured to perform other services, for example archiving video, providing user video editing functions, and so forth.
  • one or more cameras 1202 stream only a single stream of video, for example a single high definition 1080p stream.
  • the clone-of-a-clone server 1210 receives a clone of each high definition stream from the clone server 1206 and the clone-of-a-clone server 1210 creates an additional low definition video stream such as a CIF stream based on the received high definition stream.
  • the clone server 1206 receives the high definition stream and creates the additional low definition video stream such as a CIF stream based on the high definition stream received from the cameras 1202 .
  • the clone server 1206 receives a single stream from some cameras 1202 and multiple streams from other cameras 1202 ; the clone server 1206 or the clone-of-a-clone server 1210 generates a second stream, for example a second low definition stream, for cameras 1202 that only provide a single stream.
  • a consumer for example a user or business located in a consumer premises 1218 such as a home or business office, uses a computing device 1220 that establishes a network connection, for example over the Internet 1212 , with the web server 1211 .
  • the user interacts with the web server 1211 to view live video streams or recorded video from the clone-of-a-clone servers 1210 or cloud storage 1213 .
  • the computing device 1220 can be a personal computer, a laptop, a tablet device, a smartphone, a smart TV, a video game device, a television set top box, or any other suitable computing device as would be known in the art.
  • the computing device receives parallel streams from each of the cameras 1202 , or parallel streams from a subset of the cameras 1202 over the network connection. For example, as illustrated in FIG. 12 , the computing device 1220 receives both a low definition CIF stream and a high definition HD stream as parallel streams from each of the cameras 1202 over a network connection via the Internet 1212 .
  • the computing device 1220 can be configured to display the received streams in any suitable or desired configuration or format.
  • the computing device 1220 can run software that displays multiple streams from the cameras 1202 in low definition in smaller preview windows 1222 and a high definition stream of one of the cameras 1202 in a large focus window 1224 .
  • a user can select any one of the smaller preview windows 1222 to display the high definition stream of the selected camera 1202 in the large focus window 1224 .
  • the computing device receives both a low definition stream and a high definition stream associated with each of the cameras 1202 , there is no delay, or minimal delay, that is perceived by the user as the user switches between streams from different cameras 1202 in the large focus window 1224 . Also advantageously, because the low definition streams are displayed in smaller preview windows 1222 , the user does not perceive that those streams are presented in low resolution because of the smaller size of the smaller preview windows 1222 .
  • Both the low definition stream and high definition stream from each camera can be synchronized, such that the start of each frame of video for both streams are in sync. This advantageously allows the picture displayed in both the smaller preview window 1222 and large focus window 1224 to be in perfect sync, preventing the user for perceiving temporal differences. Also, the parallel streams from each of the cameras 1202 can be in sync so that the picture in the large focus window 1224 can smoothly switch between high definition streams from different cameras 1202 without displaying partial frames or experiencing temporal delays during a switch between video sources.
  • the smaller preview windows 1222 can have the same pixel resolution as the pixel size of the low definition streams from cameras 1202 . This can reduce the computation load on the computing device 1220 which does not have to remap each of pixels of the low definition streams into a different pixel size of the smaller preview windows 1222 .
  • the pixel size of the high definition stream can be the same as the pixel size the large focus windows 1224 .
  • the smaller preview windows 1222 or large focus windows 1224 can have different pixels sizes that the low definition streams or high definition streams respectively and the computing device 1220 can remap the streams onto the screen as would be understood in the art.
  • the computing device 1220 can receive the low definition streams and the high definition streams in a desired resolution and/or frame rate from the clone-of-a-clone server 1210 .
  • the streaming system 1200 presented herein provides the user with a seamless visual experience as the user switches between the different views from each of the cameras 1202 .
  • video compression can be used to reduce the overall bandwidth required.
  • video streams can be compressed using compression algorithms such as MP4, H.264, H.265 or other forms of compression as would be understood in the art.
  • the low definition and high definition streams can share a common audio stream to further reduce bandwidth.
  • the low definition streams and high definition streams can be separately streamed in distinct network connections to the computing device 1220 .
  • streams can be combined into a single network connection.

Abstract

The system and method include a clone server that receives high definition video streams from cameras. The clone server clones the high definition video streams to a clone of a clone server. The clone of a clone server generates low definition video streams from the high definition video streams. The clone of a clone server streams the high definition video streams and the low definition video streams in parallel to a user computing device. The user computing device displays the received low definition video streams. A user selects one or more of the low definition video streams, and the user computing device displays a high definition video stream corresponding to the user selection.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/385,605, filed Sep. 9, 2016, which is incorporated herein by reference.
  • TECHNICAL FIELD
  • The subject application teaches embodiments that relate generally to streaming audio and video for sports venues, and specifically to video and audio capture, processing, and streaming of sporting events and practices.
  • BACKGROUND
  • Professional broadcasters capture live action events at sporting venues and broadcast live or recorded video to subscribers and television viewing audiences. When sporting events are broadcast, viewers are generally limited to viewing an event through the viewpoint of a single camera selected by producers from one or more cameras that capture the sporting event. Most practices and some pre-season games are not broadcast, and minor league games, club level events, and high school sporting events are rarely broadcast or recorded at all. Cameras used by broadcasters are typically large, complicated devices designed for professional camera personnel and include high resolution image capturing elements and expensive lenses with variable zoom. Cameras are typically mounted on tripods, slung from wires above sporting events, or attached to weight-bearing harnesses strapped to camera personnel who position themselves near the action taking place on the field. The cameras, and the expertise required to operate them, create a barrier for new entrants to the market, local small-market producers, schools, and individuals wanting to create audio and video of sporting events, either for their own use or for monetizing their work through third party subscription. Broadcasters can offset the costs of obtaining, maintaining, and operating cameras, editing systems, and other broadcasting expenses through marketing and/or subscription revenues from their larger base of advertisers and/or consumers. The present disclosure presents new modalities for streaming audio and video from sporting venues to viewers.
  • SUMMARY
  • A method includes receiving streaming video from a number of cameras on a clone server and cloning at least a high definition video stream from each of the cameras to a clone-of-a-clone server. The method includes streaming at least one high definition and one low definition video stream associated with the same camera from the clone-of-a-clone server to a user computing device such as a mobile phone or personal computer. The low definition video stream can use the common intermediate format or CIF, nominally at 352×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels. The streaming video from one of the cameras can include both low definition and high definition video streams, or only a high definition video stream. The method can include generating a low definition video stream based on the high definition video stream; the low definition video stream can be generated by the clone server or the clone-of-a-clone server. The method can include synchronizing frames of the video streams that are sent to the user computing device.
  • A system includes a clone server and a clone-of-a-clone server. The clone server is configured to receive streaming video from a number of cameras and transmit clones of the streaming video from some or all of the cameras to the clone-of-a-clone server. The clone-of-a-clone server is configured to receive the clones of the streaming video and stream both a high definition and a low definition video stream from some or all of the received clones of streaming video to a user computing device. The low definition video stream can use the common intermediate format or CIF, nominally at 352×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels. The low definition video stream can be CIF, VGA, 4CIF, or D1 resolution, while the high definition video stream can be 720p, 1 Megapixel, or 1080p. Other resolutions and video encoding standards can be used. The system can include the cameras, which are configured to stream video to one or more clone servers, and the streaming video can include both high definition and low definition video streams or only a high definition video stream. When the clone server receives only a high definition video stream, a clone of the high definition video stream can be transmitted to the clone-of-a-clone server, and the clone-of-a-clone server can be configured to generate an associated low definition video stream from the high definition video stream. The clone-of-a-clone server can synchronize frames of the high definition and low definition video streams streamed to the user computing device. The system can include a plurality of clone servers, and each clone-of-a-clone server can be configured to receive video streams from one or a number of clone servers. The system can include a plurality of clone-of-a-clone servers, and each clone server can be configured to send video streams to one or more of the clone-of-a-clone servers. The system can include a user computing device that is configured to receive the high definition and low definition video streams. The user computing device is configured to display each of the low definition video streams, receive a user selection of one of the displayed low definition video streams, and display the high definition video stream associated with the user selection.
  • A system includes a clone server and a clone-of-a-clone server. The clone server is configured to receive a high definition video stream from one or more cameras and clone the high definition video stream to the clone-of-a-clone server. The clone-of-a-clone server is configured to receive the cloned high definition video streams, generate a low definition video stream for each received high definition video stream, and selectively stream parallel video streams to a user computing device, where each parallel video stream pairs one of the high definition video streams with the low definition video stream generated from it. The low definition video stream can use the common intermediate format or CIF, nominally at 352×240 pixels. The high definition video stream can use the 1080p resolution with 1920×1080 pixels.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of an audio/video system for sporting venues according to an embodiment of the disclosure.
  • FIG. 2 is a diagram of an impact-resistant camera housing according to an embodiment of the disclosure.
  • FIG. 3 is a diagram of a sports helmet with integrated audio/video system according to an embodiment of the disclosure.
  • FIG. 4 is a diagram of example audio/video and network components according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of example operations for networking audio/video components according to an embodiment of the disclosure.
  • FIG. 6 is a flow diagram of example data connections according to an embodiment of the disclosure.
  • FIG. 7 is a diagram of an example screen for selecting from multiple audio and video feeds according to an embodiment of the disclosure.
  • FIG. 8 is a flowchart of example operations for custom content creation according to an embodiment of the disclosure.
  • FIG. 9 is a diagram of components of an example computing device configured for audio/video operations according to an embodiment of the disclosure.
  • FIG. 10 is a functional block diagram of example modules of an audio/video streaming system.
  • FIG. 11 is a diagram of example video resolutions.
  • FIG. 12 is a diagram of an example clone streaming system for parallel streams.
  • DETAILED DESCRIPTION
  • The systems and methods disclosed herein are described in detail by way of examples and with reference to the figures. It will be appreciated that modifications to disclosed and described examples, arrangements, configurations, components, elements, apparatuses, devices, methods, systems, etc. can suitably be made and may be desired for a specific application. In this disclosure, any identification of specific techniques, arrangements, etc. is either related to a specific example presented or is merely a general description of such a technique, arrangement, etc. Identifications of specific details or examples are not intended to be, and should not be, construed as mandatory or limiting unless specifically designated as such.
  • The systems and methods disclosed herein describe various aspects of real-time video for sporting venues. Although the disclosed system and method are described below with regard to one or more computing devices, and in particular mobile computing devices, the system and method can be used with any suitable computing device, including but not limited to mobile phones, smart phones, pad computing devices, laptops, personal computers, desktops, servers, embedded controllers, and so forth, among other various possibilities.
  • Turning to FIG. 1, an audio/video system 100 for sporting venues is presented. The system 100 includes one or more audio/video streaming devices illustrated as cameras 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13. For example, cameras 1, 11, 12, and 13 can be fixed cameras in an arena, cameras 2, 5, 6, and 9 can be movable cameras that follow players or the action in the arena, camera 4 can be a camera ideally positioned to point at a scoreboard, cameras 3, 7, and 8 can be helmet cameras mounted to the helmets of certain players, and camera 10 can be a pair of helmet cameras configured to provide a 3D virtual reality view from a player's perspective, such as a goalie's view.
  • The devices can be cameras, microphones, wireless cameras, wireless microphones, helmet cams, and so forth. Wired communications can be provided over Ethernet, for example using UDP or TCP protocols as would be understood in the art. In a configuration, a wired microphone can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. Typically, wired microphones are analog devices that are connected via cables to a head end unit; long cables require sufficient electrical insulation to avoid interference and substantial gauge wire, which makes them expensive and heavy. Even with quality electrical insulation and properly gauged wire, purely analog solutions are subject to attenuation losses and noise, affecting the signal-to-noise ratio of the signal received at the head end unit. By immediately converting the signal from the analog transducer into a digital signal, the signal can be carried on less expensive, longer cables without the attenuation losses and degraded signal-to-noise ratio of a purely analog system. Power over Ethernet (PoE) advantageously can be used both to provide power to devices and to provide a wired communications medium for the devices. Wireless communications can be effected using Wi-Fi or other wireless protocols, including but not limited to Bluetooth or Li-Fi.
  • The system 100 can include a private network, shown as intranet 110, configured for data communications between the devices and a streaming system 120. The streaming system 120 is configured to support audio and video streams from the devices, and convert them as required, as described below in greater detail. The streaming system 120 can include storage 130 for storing the audio and video streams. The streaming system 120 can allow users 150 to stream audio and video from the devices or from storage 130.
  • Turning now to FIG. 2, an example impact resistant camera housing 200 is presented. The camera housing 200 is configured to withstand vibrations, shocks, and impact to a camera mounted within the camera housing 200. For example, in a hockey arena it is possible for cameras to come into contact with a flying hockey puck, or be impacted by a hockey stick or a player. The camera housing 200 can protect the camera from the impact, and also ensure that parts from a damaged camera, such as glass or electronics, do not end up on spectators or players or on the ice where sharp or heavy pieces might cause injury.
  • The camera housing 200 is structurally configured to protect the camera while allowing connection to electrical components such as cables or wires for power and data communications. The camera housing 200 can also be configured to provide clean air for the camera, and remove heat dissipated by the camera.
  • The camera housing 200 comprises a dome assembly that attaches at one end of a drum 201. The dome assembly comprises a transparent dome cover 203 and a retainer ring 204. The dome assembly can be coupled to the drum 201 using complementary threading, screws, nuts, bolts, washers, (not shown) and the like as would be understood in the art.
  • A camera can be mounted inside the camera housing 200, for example on a support structure having support members (not shown) that contact the interior wall of the drum 201. The support members can be configured to dampen vibrations as would be understood in the art. An example support structure can be a disk that rests against pliable dampeners that act as support members and that seat the disk along a cross section of the drum 201. A camera can be mounted to the disk, for example using screws or other suitable fasteners.
  • The drum 201 can include threaded holes 202A, 202B to attach camera angle travel limiters inside the drum 201, thereby limiting the camera rotation to a predetermined angle. Camera angle travel limiters prevent the camera from being damaged during rotation, or ensure that the camera is always pointed at a certain area of the arena. For example, it may be desirable to use angle travel limiters to ensure that a camera cannot accidentally be pointed at spectators. In a configuration, the threaded holes 202A, 202B do not penetrate the drum 201 and are accessible only from the inside of the drum 201.
  • A mounting cover comprises retainer ring 205 and cover plate 206. Cover plate 206 can include collar 207 configured to accept a support rod 210 that connects to a support structure 211 and mounting plate 212. The mounting plate 212 can be attached to a structure in the arena such as a wall, ceiling, support beam, and so forth. A quick link 208 can be used as a backup failsafe to further anchor the camera housing 200 to a wall or support structure, for example using metal wire or rope. This can ensure that the camera housing 200 does not fall onto spectators, players, or the arena if the mounting plate 212 were to become detached for any reason. The support rod 210 can be hollow, providing a passage for electrical components such as wires, cables, and so forth. The cover plate 206 can include threaded screw holes 209A, 209B, 209C, 209D for connecting the cover plate 206 and retainer ring 205 to the drum 201. In a configuration, long screws can be used that pass through the drum 201 and also connect the dome assembly to the drum 201.
  • Referring now to FIG. 3, a helmet 300 that includes a helmet cam is presented. The helmet 300 can include one or more cameras 302 and/or microphones 304. The camera 302 can use a standard definition or high definition frame size and frame rate, such as 720p, 1080i, 1080p, 2K, or 4K at 30 frames per second (fps), 60 fps, or 120 fps, or lower frame rates. A helmet cam for providing a 3D virtual reality video feed can include two spatially separated cameras 302 as would be understood in the art. In a configuration, the microphone 304 can include an analog transducer that is coupled to a digitizer; the digitizer converts the analog signal into a suitable digital format such as MP3. In a configuration, the camera 302 and microphone 304 can be a single unit. The camera 302 and microphone 304 are in communication with an embedded controller 306. The embedded controller 306 can include custom designed electronics, for example a chip or microcontroller with a Wi-Fi or other antenna. In a configuration, the embedded controller 306 can include a modified smartphone. In one such configuration, the camera element and microphone element can be separated from the modified smartphone and used as camera 302 and microphone 304. The embedded controller 306 can stream one or more video or audio streams from the camera 302 and/or microphone 304. Data communications from the embedded controller 306 can include Wi-Fi.
  • Referring now to FIG. 4, example audio/video and network components 400 are presented. A microphone 450, for example a wired microphone configured to be placed near the glass surrounding a hockey rink, can be connected to a proxy server 410 via an Ethernet cable such as a CAT 6 cable. The communications protocol between the proxy server 410 and microphone 450 can be USB over Ethernet, among other possible protocols as would be understood in the art. The Ethernet cable can provide power to the microphone 450. In another configuration, the microphone can use a wireless network such as Wi-Fi or Li-Fi.
  • An IP camera 460, for example an IP camera configured to be placed inside of the impact resistant camera housing 200 of FIG. 2, can be connected to a PoE switch 430 using a CAT 6 cable. The PoE switch 430 can provide power to the IP camera 460. The communications protocol between the proxy server 410 and IP camera 460 can be RTSP or real-time streaming protocol, among other possible protocols as would be understood in the art.
  • A wireless helmet camera 470, for example as described in helmet 300 of FIG. 3, can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. Wi-Fi router 440 can be connected to the proxy server 410 via PoE switch 430 or by a direct connection to the proxy server 410. The communications protocol between the proxy server 410 and wireless helmet camera 470 can be RTSP (i.e., real-time streaming protocol) carrying H.264 or H.265 video, among other possible protocols as would be understood in the art.
  • Similarly, a wireless microphone 480 can be configured for wireless data communications with the proxy server 410 via Wi-Fi router 440. The proxy server 410 can receive digitized audio, for example an MP3 stream, by establishing a connection with the wireless microphone, for example using hypertext transfer protocol, or HTTP. Other communication protocols could also be used as would be understood in the art.
  • The proxy server 410 receives audio and video streams from microphones 450, 480 and cameras 460, 470. The proxy server 410 can store the streams to a memory, such as data store 420, for archiving or temporary storage. In a configuration, the proxy server 410 and data store 420 reside in the same hardware. In a configuration, the proxy server 410 can convert each video or audio stream to one or more common formats, sampling or compression rates, and frame sizes. For example, the proxy server 410 can receive a video stream and convert it to a standard H.264 or MPEG video stream prior to storing it in data store 420. In a configuration, the proxy server 410 can store two or more different video streams derived from the same received video stream. For example, the proxy server 410 can convert a received video stream into a small thumbnail-sized video stream and a full size video stream. In an embodiment, two or more proxy servers can be used; for example, a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see, for example, FIG. 12 and the associated description).
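  • As a concrete illustration of the conversion described above, the following is a minimal sketch of how a proxy server might use FFmpeg to turn one received camera stream into a full size rendition and a thumbnail-sized rendition. The RTSP address, output naming, and encoder settings are hypothetical placeholders, not part of the disclosure.

```python
import subprocess

def normalize_stream(rtsp_url: str, out_base: str) -> subprocess.Popen:
    """Pull one camera feed and write a full-size and a thumbnail-size
    H.264 copy, approximating the proxy server behavior sketched above.
    rtsp_url and out_base are illustrative placeholders."""
    cmd = [
        "ffmpeg", "-rtsp_transport", "tcp", "-i", rtsp_url,
        # Full-size rendition, re-encoded to a common H.264 format.
        "-map", "0:v", "-c:v", "libx264", "-preset", "veryfast",
        "-s", "1920x1080", "-b:v", "4000k", f"{out_base}_full.mp4",
        # Thumbnail-sized rendition derived from the same input.
        "-map", "0:v", "-c:v", "libx264", "-preset", "veryfast",
        "-s", "352x240", "-b:v", "300k", f"{out_base}_thumb.mp4",
    ]
    return subprocess.Popen(cmd)
```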
  • Referring now to FIG. 5, example operations for networking wireless audio and video devices are presented. Operation commences at start block 500 labeled “START” and proceeds to process block 502.
  • In process block 502, the wireless device is powered on. Processing continues to process block 504.
  • In process block 504, the wireless device detects a Wi-Fi network. The wireless device can be preconfigured to connect to a specific Wi-Fi network by name, or service set identifier (SSID). The Wi-Fi network may be configured not to broadcast the SSID, for example to prevent the wireless network from being visible on spectators' mobile devices in the arena. In this configuration, the wireless device may detect the Wi-Fi network by querying for the Wi-Fi network using the preconfigured SSID. Processing continues to decision block 506.
  • In decision block 506, if the wireless device has previously received an IP address, then processing continues to process block 514, otherwise processing continues to process block 508.
  • In process block 508, the wireless device requests an IP address using the dynamic host configuration protocol or DHCP. Processing continues to process block 510.
  • In process block 510, a DHCP server receives the DHCP request from the wireless device and provides an IP address to the wireless device. The DHCP server reserves a fixed IP address for each wireless device. Advantageously, reserving a fixed IP address for each wireless device facilitates determining which video or audio feed belongs to each wireless device. A fixed or reserved IP address simplifies the process of allowing multiple users to receive video feeds from specific wireless devices, because players have helmet cams that may disconnect and reconnect to the Wi-Fi network as they move about the arena during game play. Without fixed or reserved IP addresses, the IP addresses of helmet cams could change during game play, forcing live streams to disconnect and reconnect. Processing continues to process block 512.
  • In process block 512, the wireless device receives the IP address from the DHCP server. Processing continues to process block 514.
  • In process block 514, the wireless device streams audio and/or video to the proxy server using the configured IP address. Processing continues to decision block 516.
  • In decision block 516, if the connection to the wireless device drops, then processing continues to decision block 518, otherwise processing continues back to process block 514 to continue streaming the audio and/or video.
  • In decision block 518, if the connection has dropped due to a power off event or a signal to end streaming, then processing terminates at end block 520, otherwise processing continues back to process block 504 to attempt to reconnect to the Wi-Fi network.
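  • The device-side logic of FIG. 5 can be summarized in a short sketch. The sketch below assumes a hypothetical device object exposing Wi-Fi, DHCP, and streaming helpers; the method names are illustrative only.

```python
import time

def run_device(device, ssid: str) -> None:
    """Follow the flow of FIG. 5: join the (possibly hidden) Wi-Fi network,
    reuse or request an IP address, stream, and reconnect after drops.
    `device` and its methods are hypothetical placeholders."""
    while True:
        device.connect_wifi(ssid)        # hidden SSIDs are queried by name
        if not device.has_ip():
            device.request_dhcp_lease()  # DHCP server returns a reserved IP
        try:
            device.stream_to_proxy()     # blocks while streaming (block 514)
        except ConnectionError:
            # Blocks 516/518: end on power-off or stop signal, else retry.
            if device.powered_off or device.stop_requested:
                break
            time.sleep(1)
```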
  • Referring now to FIG. 6, example data connections are illustrated for an embodiment of the audio/video system 600. In an arena 602, such as a hockey arena, a sporting venue, or an entertainment venue in general, one or more fixed or moveable cameras 604, helmet cams 606, and microphones 608 are in data communication with a proxy server 612 through data communications equipment represented by wireless hub 610. The proxy server 612 provides one or more ports through which video and audio data streams can be accessed by users 630, either in real-time or through viewing stored data streams. A firewall 614, such as a specially configured router or dedicated piece of data communications equipment, prevents unauthorized users 630 from accessing data streams from the proxy server 612.
  • In an embodiment, users 630 first access a website system 620 which provides authentication information for accessing the data streams through the firewall. Authenticated users 630 connect through the firewall to the proxy server 612, and selected data streams are obtained from the proxy server 612 and presented on the screens of the users 630. In another embodiment, the website system 620 is able to connect through the firewall 614 to the proxy server 612, which streams to the website system 620. Users 630 that are authenticated on the website system 620 receive data streams that pass through the website system 620 from the proxy server 612. In another embodiment, two or more proxy servers can be used; for example, a first proxy server can receive the audio and video streams from devices and clone the streams to a second proxy server, and the second proxy server can convert and then stream audio and video to users (see, for example, FIG. 12 and the associated description).
  • Multiple end users 630 can simultaneously use the audio/video system 600. The audio/video system 600 can simultaneously support multiple events occurring in different venues. The audio/video system 600 can allow users 630 to create their own customized streams. For example, a first end user 632 can view different live streams from the audio/video system 600 during a particular sporting event. A second end user 634 can generate a customized stream based on a current live stream, or on stored data streams of a previous sporting event. A third end user 636 can stream the customized stream of the second end user 634. Each end user 630 can use a different kind of computing device, for example a mobile device such as a smartphone or tablet, a laptop, a desktop, and so forth. For example, the first end user 632 can be streaming to a mobile computing device using a dedicated application or app downloaded to the device. The second end user 634 can be using a high end workstation with a fast Internet connection for editing and generating their customized stream. The third end user 636 can be using an Internet browser and clicking a link to access the customized stream of the second end user 634. In a configuration, the bit rate, frame rate, and frame size of the video and audio streams can be optimized for the type of end user computing device and connection speed.
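  • One way such optimization could work is a simple mapping from device type and measured connection speed to a rendition. The tiers below are illustrative assumptions, not values taken from the disclosure.

```python
def pick_rendition(device_type: str, downlink_kbps: int) -> dict:
    """Choose a stream rendition for an end user device; the cutoffs and
    tiers are hypothetical and would be tuned for a real deployment."""
    if device_type == "mobile" or downlink_kbps < 1500:
        return {"resolution": "352x240", "fps": 30, "bitrate_kbps": 300}
    if downlink_kbps < 6000:
        return {"resolution": "1280x720", "fps": 30, "bitrate_kbps": 2500}
    return {"resolution": "1920x1080", "fps": 60, "bitrate_kbps": 6000}
```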
  • Referring also to FIG. 7, an example screen 700 for selecting from multiple audio and video feeds is presented. The screen 700 includes thumbnail views 710 from each of the cameras and microphones. Some thumbnail views 710 may not include audio or video, either because the feed does not include audio or video, or due to a lost connection. Some thumbnail views, such as thumbnail view 10 may include a left and right view, allowing a user with a 3D viewing device connected to their video device to view a sporting event as a virtual reality experience from one or more of the players' perspectives.
  • The user can select from one or more of the thumbnail views 710, for example by clicking on a particular thumbnail view 710 or dragging a thumbnail view to a focus window 720. The currently selected video is presented in a focus window 720 that typically is larger than the thumbnail views. Clicking a camera icon associated with each thumbnail view 710 allows a user to select whether video, audio, or both are to be presented to the user, for example via the focus window 720. A user can select video from one device and audio from another device. In an embodiment, the user can customize the screen 700, for example to reorganize the order or size of the thumbnail views 710, or to have two or more focus windows. Different user controls and window arrangements can be presented to the user as would be understood in the art. For example, in one configuration the focus window 720 can be selected by the user and clicked to toggle between full screen and the illustrated split screen that includes both the focus window 720 and the thumbnail views 710. In another configuration, clicking on the focus window 720 will cycle between a group of selected thumbnail views 710. This can be particularly useful to a user viewing the event using VR or 3D viewing devices.
  • Referring now to FIG. 8, example operations of a system for creating custom content are presented. Users and/or the streaming system itself can choose which devices to display in the focus window or focus windows. Other users can be invited to view the custom created content. Operation commences at start block 800 labeled “START” and proceeds to process block 802.
  • In process block 802, the streaming system receives streams from devices such as cameras and microphones. Processing continues to process block 804.
  • In process block 804, the streaming system streams one or more device streams to users 808, for example through the selection screen 700 of FIG. 7. At any time, users 808 can join a live stream of a sporting event or view a saved stream in process block 806. Processing continues to decision block 810.
  • In decision block 810, if the streaming system is configured to auto-select the focus window, then processing continues to process block 812, otherwise processing continues to process block 814.
  • In process block 812, the streaming system selects a feature that is used to determine the focus window. For example, the streaming system can select the feature to be the camera where the puck is located, or the microphone that is loudest. The selected feature can change dynamically during the game or practice. For example, the selected feature can be the penalty box subsequent to determining that an official has blown a whistle and the clock has been stopped, or the scoreboard after a change to a score on the scoreboard, or a particular player when that player enters the ice in the arena. In this mode, the streaming system attempts to select devices to present the best user experience of the sporting event. Processing continues to process block 818 where the streaming system determines the focus window based on the selected feature.
  • In decision block 814, if a user manually selects a feature to use as the selected feature, then processing continues to process block 816 to receive the user selection, otherwise processing continues to decision block 820.
  • In process block 816, the streaming system receives a selection of a feature to use for selecting the focus window from the available devices. For example, a user who is a scout may desire to follow one particular athlete, and thus use the streaming system in a scouting mode. The scout may select as the feature a jersey number of the particular athlete, in which case the streaming system in process block 818 will determine which camera shows the athlete's jersey number best. In another example, an avid fan of a particular player may desire to have that player as the focus of attention while still watching the game in progress, in which case the camera could be selected that displays both the player and the puck the majority of the time while the selected audio device could be from the helmet of the player or the audio device closest to the player. Processing continues to process block 818.
  • In process block 818, the streaming system determines the focus window from the available cameras and microphones. The streaming system can track players on the ice, or on other playing surfaces for other sports, and use player position and motion data to determine the best camera and microphone to use in the focus window. The streaming system can use the selected feature from process block 812 and/or process block 816 in determining the best device to display in the focus window. The streaming system can determine when a particular device is not streaming, or has a connection issue, and switch to the next best device, as illustrated in the sketch below. Processing continues to decision block 820.
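  • A minimal sketch of the selection in process block 818, assuming a hypothetical list of feed handles and a scoring function for whatever feature was chosen in blocks 812 or 816 (puck visibility, jersey-number detection confidence, microphone loudness, and so on):

```python
def choose_focus_device(devices, score):
    """Pick the healthiest, highest-scoring feed for the focus window.
    `devices` is a hypothetical list of feed handles; `score` maps a
    feed to a numeric rating of the selected feature."""
    live = [d for d in devices if d.is_streaming() and not d.connection_issue()]
    return max(live, key=score, default=None)  # None if no healthy feed
```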
  • In decision block 820, if a user selects a particular device to use in the focus window, for example to override a device selected by the streaming system in process block 818, then processing continues to process block 822, otherwise processing continues to decision block 824.
  • In process block 822, the streaming system changes the focus window to the user selected device or devices. Processing continues to decision block 824.
  • In decision block 824, if the user adds user-created content to the content stream, then processing continues to process block 826, otherwise processing continues to decision block 828.
  • In process block 826, a user adds user-created content to the content stream. For example, the user may have a microphone connected to their computing device and can add live commentary, such as player analysis or real-time play-by-play announcing such as is performed by professional announcers and commentators. In another example, sophisticated users can include user-created video such as replay clips or on-screen annotation. Processing continues to decision block 828.
  • In decision block 828, if the stream is offered to users, then processing continues to process block 830, otherwise processing continues to decision block 834.
  • In process block 830, a custom stream can be saved. In one configuration, metadata is saved that includes time-stamped tracking of which device(s) were selected for the focus window(s). In this way, the custom stream can be recreated as needed from saved video streams. In another configuration, a new stream can be saved separately for each custom created stream. In another configuration, the original sources or streams can be saved for a configurable period of time, and then purged at a particular expiration date to recover storage space. Similarly, custom streams can be saved and stored for a period of time before being purged. For example, a single custom stream created by the streaming system might be stored indefinitely, while the remaining streams are purged. Processing continues to process block 832.
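  • The metadata-only configuration described above can be quite compact: it suffices to log which device held each focus window and when. A sketch of such a log, with illustrative field names, follows; replaying it against the archived source streams recreates the custom stream.

```python
import json
import time

class FocusLog:
    """Record time-stamped focus-window selections so a custom stream can
    be re-rendered later from saved source streams. Field names are
    illustrative placeholders."""
    def __init__(self):
        self.events = []

    def select(self, device_id: str) -> None:
        self.events.append({"t": time.time(), "focus": device_id})

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(self.events, f)
```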
  • In process block 832, users can be invited to view a custom stream. For example, a stream automatically generated by the streaming system can be shown on a schedule of available live or saved games for viewing by users. The streaming system can also include user-created custom streams in the schedule, and allow other users to rate user-created streams. In another example, a user that creates custom content can generate a link to their custom stream that can be forwarded to other users, for example through social media. For example, a link can be placed on a FACEBOOK page, a clip and link uploaded to the user's INSTAGRAM or TWITTER account, or a link can be emailed to potentially interested parties, for example using an email list and advertisement. Other uses of social media, either currently extant or yet to be developed, can be utilized as would be understood by one of skill in the art. Processing continues to decision block 834.
  • In decision block 834, if the sports event is determined to be over or if the saved stream has concluded, then processing terminates at end block 836, otherwise processing continues back to process block 804 to continue streaming content to users.
  • The costs of creating audio-video content are substantially reduced by allowing users, or the streaming system itself, to determine which video and audio stream to use as the focus window(s), especially when compared to the costs incurred by professional broadcast services such as the major television networks. Further, the use of relatively inexpensive cameras, microphones, and networking equipment allows that equipment to be more or less permanently placed in a sporting venue and used for whatever events occur in the venue, whether they are sporting events, entertainment events, or other events. This opens the opportunity to allow streaming of practices, pre-season games, minor-league games, club-level events, and even high-school events to interested parties. In effect, the present system democratizes the capture, production, and distribution of content from all levels of sporting venues.
  • Referring now to FIG. 9, an example computing device 900 is presented. Example computing devices 900 can be servers, desktop systems, mobile computing devices, embedded controllers, wireless cams and microphones, and so forth. Included are one or more processors, such as that illustrated by processor 904. Each processor is suitably associated with non-volatile memory, such as read only memory (ROM) 910 and random access memory (RAM) 912, via a data bus 914.
  • Processor 904 is also in data communication with a storage interface 916 for reading or writing to a data storage system 918, suitably comprised of a hard disk, memory or solid-state disk, or any other suitable data storage as will be appreciated by one of ordinary skill in the art.
  • Processor 904 is also in data communication with a network interface controller (NIC) 930, which provides a data path to any suitable wired or physical network connection via physical network interface 934, or to any suitable wireless data connection via wireless network interface 938 or cellular interface 936, such as one or more of the networks detailed above.
  • Processor 904 is also in data communication with an input/output (I/O) interface 940 which provides data communication with devices such as a microphone 946 or camera 948 or user peripherals, such as a touchscreen display 944, keyboard, or mouse or any other suitable user interface. It will be understood that functional units are suitably comprised of intelligent units, including any suitable hardware or software platform.
  • Referring now to FIG. 10, presented are example software modules of an embodiment of the website system of FIG. 6. A user interface module 1002 serves web pages to users and administrators that provide a graphical user interface for logging into the system, viewing camera and audio microphone locations, viewing calendars of upcoming sporting events and archived streams, selecting sporting events or recorded streams to view, receiving video and audio streams from the proxy server through the firewall, customizing the user's thumbnail and focus window views, and interacting with the system in general. User accounts, configuration data, calendar information, stream information, and other data can be stored in a database 1010 or other suitable memory. A scheduler engine 1004 can schedule recordings of sporting events by the proxy server.
  • An analytics engine 1006 can analyze video and audio streams. For example, the analytics engine 1006 can determine when a video or audio feed has disconnected, and switch a user's focus window to another available stream and switch back once the video or audio feed reconnects. Similarly, the analytics engine 1006 can monitor video or audio streams and either blackout some or all of a stream in real-time, or switch the focus window to a different stream. The analytics engine 1006 can be used to detect objectionable language in audio, or objectionable images in a video feed, for example nudity, political messages, unauthorized advertising, excessive violence, and so forth. In a configuration, the analytics engine 1006 can be rules-based or use heuristics or other suitable analytics to perform an analysis of one or more streams. In a configuration, the analytics engine 1006 can receive a copy of the streams from a proxy server, or clone-of-a-clone system of FIG. 12, and the analytics engine 1006 can be executing on any suitable system as would be understood in the art.
  • The analytics engine 1006 can also track selected features for determining which stream to use in a focus window. For example, when the website system is being used by a user that is a scout, or if the system is set to use a scout mode, an individual player can be tracked in multiple video streams, for example by jersey number. The analytics engine 1006 can determine the optimal video and audio streams to use to track the selected player or feature being tracked.
  • The analytics engine 1006 can also perform analysis of helmet cam video and/or audio, for example to track where a player is looking or to determine how the player is moving the helmet. The analytics engine 1006 can determine if rapid helmet movements are suggestive of violent impacts which could cause concussions. The analytics engine 1006 can monitor a helmet cam for video and/or audio that might indicate a concussion, injury, or exhaustion of the player. Examples include movements of the helmet that are atypical for the player, such as looking down more often, looking up, not turning the head in one particular direction, not following the puck (or a ball, as might be used in other sporting events), a delay in following the puck or the action of the game, not looking where other players are looking, and so forth. In a configuration, a player's typical pattern of helmet movements can be analyzed and saved for reference and comparison. In a configuration, the analytics engine 1006 can send an alert to a coach or medical professional via a text message, email, or other suitable alert, for example using the user interface 1002.
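  • As a rough sketch of the baseline comparison described above, the fragment below flags helmet-motion samples that deviate sharply from a player's own saved baseline. The units and the threshold are hypothetical; a real analytics engine would use richer features such as gaze direction and puck-following delay.

```python
from statistics import mean, stdev

def helmet_alert(samples, baseline, threshold=3.0) -> bool:
    """Return True if any current motion sample deviates from the player's
    baseline by more than `threshold` standard deviations. `samples` and
    `baseline` are illustrative angular-speed readings."""
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    return any(abs(s - mu) > threshold * max(sigma, 1e-9) for s in samples)
```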
  • A tracking engine 1008 can track one or more players' movements in the arena. The tracking engine 1008 can turn a player's movements into vector data, or any other suitable position data. The tracking engine 1008 can work in conjunction with the analytics engine 1006. For example, the tracking engine 1008 can provide player position or vector data to the analytics engine 1006 that is used to determine which camera and audio feed to use in the focus window(s). In a configuration, each player can be analyzed to create a digital representation of the players. Example data that can be determined can include position, speed, direction, acceleration, deceleration, linearity, non-linearity, circularity, time, and other measurements as would be understood in the art. In a configuration, the tracking engine 1008 and analytics engine 1006 can determine the correct camera frame to provide to a user based on the player data. For example, the system can sum all of the vectors or kinetic energy for each frame and/or camera stream and switch the focus window to a particular camera stream based on that calculation.
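  • A minimal sketch of that calculation, assuming a hypothetical mapping from camera id to the per-player velocity vectors visible in the current frame (player mass treated as constant, so speed squared serves as the kinetic-energy proxy):

```python
def pick_camera_by_energy(tracks_by_camera):
    """Sum a kinetic-energy proxy over the players visible in each camera
    and return the id of the busiest view. `tracks_by_camera` maps a
    camera id to a list of (vx, vy) velocity vectors; the data layout is
    an illustrative assumption."""
    if not tracks_by_camera:
        return None
    def energy(tracks):
        return sum(vx * vx + vy * vy for vx, vy in tracks)
    return max(tracks_by_camera, key=lambda cam: energy(tracks_by_camera[cam]))
```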
  • In a configuration, tracking data can be combined with video data to provide a visual representation of players' movements during practice or a game. Similarly, tracking data and/or analytics data can be combined with video and/or audio data to provide player performance information to coaches, scouts, and interested viewers and fans.
  • In a configuration, the tracking engine 1008 can receive position data from helmet cams, for example position data derived from GPS or radio signal triangulation. Tracking and analytics data can be stored in the database 1010 or any other suitable memory.
  • Referring to FIG. 11, example video resolutions are presented. Standard resolutions can include common intermediate format or CIF at 352×240 or 352×288 pixels, VGA at 640×480 pixels, and 4CIF/D1 at 704×480, 704×576, or 720×480 pixels. High definition resolutions can include 720p at 1280×720 pixels, 1 Megapixel at 1280×1024 pixels, and 1080p at 1920×1080 pixels. Ultra high resolution formats are also contemplated, for example QHD at 2560×1440 pixels, UHD or 4K at 3840×2160 pixels, and so forth. Standard resolution can also include QCIF at 176×120 or 176×144 pixels. Streaming video can be interlaced or progressive scan as appropriate for the resolution. Audio can similarly be encoded, for example as 19.2 kb/s PCM, 9.6 kb/s ADPCM, MP3, or any other suitable encoding or compression as would be understood in the art. The disclosed resolutions are presented as non-limiting examples only. Other suitable resolutions can also be used as would be understood in the art.
  • Referring now to FIG. 12, an example streaming system 1200 is presented. In a venue, such as arena 1214, a plurality of cameras 1202 are configured to stream video across one or more local network connections 1204 to a clone server 1206. The cameras 1202, such as camera 1 through camera n as illustrated, can be configured to stream a high definition video stream, such as 1080p at 1920×1080 pixels. In an embodiment, one or more cameras 1202 can be configured to stream both a low definition video stream, such as CIF at 352×240 pixels, and a high definition video stream. In a configuration, different cameras 1202 can stream in different resolutions. For example, camera 1 could stream in 1080p, while camera 2 streams in 4K and camera n streams at 1 Megapixel.
  • Each camera 1202 streams across a local network connection 1204, such as a LAN, WiFi, LiFi, Power over Ethernet, or any other suitable network for example as described with respect to the devices of FIG. 1. The clone server 1206 receives each of the streams from the cameras 1202. The clone server 1206 can store each of the streams from each of the cameras 1202. In a configuration, the streams are stored temporarily, or ephemerally, before being streamed to one or more clone-of-a-clone servers 1210 and/or to cloud storage 1213. In another configuration, the clone server 1206 can store each stream for a longer period of time, for example as permanent storage.
  • The clone server 1206 is in network communication, for example using a VPN or virtual private network, with one or more clone-of-a-clone servers 1210 through firewall 1207, which can be a suitable router or other suitable network element. The clone server 1206 clones the live video streams 1208 from the cameras 1202 onto the clone-of-a-clone server 1210. Each clone-of-a-clone server 1210 receives live video streams 1208 associated with each of the cameras 1202. In an embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from a subset of all of the available cameras 1202 associated with the clone server 1206. In another embodiment, each clone-of-a-clone server 1210 can receive live video streams 1208 from multiple clone servers 1206 and associated cameras 1202. The clone-of-a-clone servers 1210 can be anywhere in the network, for example in the cloud 1216 as shown, at an ISP or Internet Service Provider, in a colocation premises, in the arena 1214 or any other suitable place. The clone-of-a-clone server 1210 can be hosted by a service company that provides high speed cloud hosting services, such as AMAZON, as would be understood in the art.
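  • A sketch of the cloning step, using FFmpeg's tee muxer to copy one incoming camera stream, without re-encoding, to both a clone-of-a-clone endpoint and an archive target. The URLs and paths are hypothetical placeholders for the VPN endpoints and storage described above.

```python
import subprocess

def clone_stream(src_url: str, clone_url: str, archive_path: str) -> subprocess.Popen:
    """Duplicate one live stream to a clone-of-a-clone server (RTMP here,
    as an assumption) and to an archive file, with no transcoding."""
    cmd = [
        "ffmpeg", "-i", src_url,
        "-c", "copy", "-map", "0",
        "-f", "tee", f"[f=flv]{clone_url}|[f=mpegts]{archive_path}",
    ]
    return subprocess.Popen(cmd)
```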
  • The clone server 1206 also sends recorded video streams 1209 to cloud storage 1213. Cloud storage 1213 can include network servers, redundant network storage hosted by third party companies, and other suitable cloud storage as would be understood in the art. In a configuration, the recorded video streams 1209 can include live video streams.
  • Advantageously, the clone server 1206, the clone-of-a-clone server 1210, and cloud storage 1213 allow the system architecture to easily scale to support any number of cameras 1202 and users. The clone server 1206 aggregates video streams from multiple cameras 1202. Additional clone servers 1206 can be used to accommodate more cameras 1202 as needed. Each clone-of-a-clone server 1210 receives cloned video streams from one or more clone servers 1206 and supports forwarding video streams to multiple users. Additional clone-of-a-clone servers 1210 can be used to accommodate more users when needed. Cloud storage 1213 can be scaled as necessary to support automated recording of live video streams and playback of video streams by users. A web server 1211 can provide front end web services for users to interact with the system and gain access to the live video streams and recorded video from the clone-of-a-clone servers 1210 and cloud storage 1213.
  • Clone-of-a-clone servers 1210 can be configured to perform other services, for example archiving video, providing user video editing functions, and so forth. In an embodiment, one or more cameras 1202 stream only a single stream of video, for example a single high definition 1080p stream. In this embodiment, the clone-of-a-clone server 1210 receives a clone of each high definition stream from the clone server 1206, and the clone-of-a-clone server 1210 creates an additional low definition video stream, such as a CIF stream, based on the received high definition stream. Alternatively, the clone server 1206 receives the high definition stream and creates the additional low definition video stream, such as a CIF stream, based on the high definition stream received from the cameras 1202. In yet another embodiment, the clone server 1206 receives a single stream from some cameras 1202 and multiple streams from other cameras 1202; the clone server 1206 or the clone-of-a-clone server 1210 generates a second stream, for example a second low definition stream, for cameras 1202 that only provide a single stream.
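Deriving the additional low definition stream from a high definition stream is a transcoding step. A minimal sketch, shelling out to ffmpeg (an implementation choice not named in the patent), with the 320×240 target matching the CIF-style size used in the examples:

```python
import subprocess

def derive_low_def(hd_input_url: str, ld_output_url: str) -> None:
    """Downscale a high definition stream into a CIF-style low definition stream."""
    subprocess.run([
        "ffmpeg",
        "-i", hd_input_url,      # incoming high definition stream
        "-vf", "scale=320:240",  # downscale to the low definition size
        "-c:v", "libx264",       # re-encode the scaled video
        "-f", "mpegts",          # transport-stream container for network streaming
        ld_output_url,
    ], check=True)
```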
  • A consumer, for example a user or business located in a consumer premises 1218 such as a home or business office, uses a computing device 1220 that establishes a network connection, for example over the Internet 1212, with the web server 1211. The user interacts with the web server 1211 to view live video streams or recorded video from the clone-of-a-clone servers 1210 or cloud storage 1213.
  • The computing device 1220 can be a personal computer, a laptop, a tablet device, a smartphone, a smart TV, a video game device, a television set top box, or any other suitable computing device as would be known in the art. The computing device 1220 receives parallel streams from each of the cameras 1202, or from a subset of the cameras 1202, over the network connection. For example, as illustrated in FIG. 12, the computing device 1220 receives both a low definition CIF stream and a high definition HD stream as parallel streams from each of the cameras 1202 over a network connection via the Internet 1212.
  • The computing device 1220 can be configured to display the received streams in any suitable or desired configuration or format. For example, the computing device 1220 can run software that displays multiple streams from the cameras 1202 in low definition in smaller preview windows 1222 and a high definition stream of one of the cameras 1202 in a large focus window 1224. A user can select any one of the smaller preview windows 1222 to display the high definition stream of the selected camera 1202 in the large focus window 1224. In an embodiment, there can be two or more large focus windows 1224, each of which can display a different selected stream. Advantageously, because the computing device receives both a low definition stream and a high definition stream associated with each of the cameras 1202, the user perceives no delay, or only minimal delay, when switching between streams from different cameras 1202 in the large focus window 1224. Also advantageously, because the low definition streams are displayed in smaller preview windows 1222, the reduced window size prevents the user from perceiving that those streams are presented in low resolution.
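The switching behavior can be sketched as follows: because the streams for every camera are already arriving in parallel, selecting a preview merely re-binds the focus window to an already-received high definition stream rather than opening a new connection, which is what keeps the perceived switching delay minimal. The class below is an illustrative assumption, not code from the patent:

```python
class Viewer:
    """Illustrative viewer-side state for previews plus one focus window."""

    def __init__(self, camera_ids):
        self.camera_ids = list(camera_ids)
        self.focus_camera = self.camera_ids[0]

    def on_preview_selected(self, camera_id: int) -> None:
        # Every HD stream is already arriving in parallel, so switching is a
        # re-bind of the focus window, not a new network connection.
        self.focus_camera = camera_id

    def render(self, frames_ld: dict, frames_hd: dict):
        """Map low definition frames to previews and one HD frame to focus."""
        previews = {cid: frames_ld[cid] for cid in self.camera_ids}
        focus = frames_hd[self.focus_camera]
        return previews, focus
```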
  • Both the low definition stream and high definition stream from each camera can be synchronized, such that the start of each video frame in both streams is in sync. This advantageously allows the picture displayed in both the smaller preview window 1222 and the large focus window 1224 to be in perfect sync, preventing the user from perceiving temporal differences. Also, the parallel streams from each of the cameras 1202 can be in sync so that the picture in the large focus window 1224 can smoothly switch between high definition streams from different cameras 1202 without displaying partial frames or experiencing temporal delays during a switch between video sources.
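One common way to achieve this frame-level synchronization is to pair frames across streams by presentation timestamp. The sketch below assumes timestamped frames, which the patent does not specify, and drops any frame that has no counterpart within a small tolerance:

```python
def paired_frames(ld_frames, hd_frames, tolerance: float = 0.001):
    """Yield (ld, hd) frame pairs whose timestamps match within tolerance.

    Both inputs are iterables of (timestamp_seconds, frame) in ascending order.
    """
    ld_iter, hd_iter = iter(ld_frames), iter(hd_frames)
    ld, hd = next(ld_iter, None), next(hd_iter, None)
    while ld is not None and hd is not None:
        if abs(ld[0] - hd[0]) <= tolerance:
            yield ld[1], hd[1]
            ld, hd = next(ld_iter, None), next(hd_iter, None)
        elif ld[0] < hd[0]:
            ld = next(ld_iter, None)  # no HD counterpart: drop the LD frame
        else:
            hd = next(hd_iter, None)  # no LD counterpart: drop the HD frame
```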
  • In a configuration, the smaller preview windows 1222 can have the same pixel resolution as the pixel size of the low definition streams from the cameras 1202. This can reduce the computational load on the computing device 1220, which does not have to remap each of the pixels of the low definition streams into a different pixel size for the smaller preview windows 1222. Similarly, the pixel size of the high definition stream can be the same as the pixel size of the large focus windows 1224. In other configurations, the smaller preview windows 1222 or large focus windows 1224 can have different pixel sizes than the low definition streams or high definition streams, respectively, and the computing device 1220 can remap the streams onto the screen as would be understood in the art. In an embodiment, the computing device 1220 can receive the low definition streams and the high definition streams in a desired resolution and/or frame rate from the clone-of-a-clone server 1210.
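The optimization amounts to a size check before display: a frame whose native size equals the window size can be copied to the screen directly, while any mismatch requires a rescale pass. A minimal sketch, with `rescale` standing in for whatever scaling routine the device provides (an assumption):

```python
def present(frame, frame_size, window_size, rescale):
    """Blit directly when sizes match; otherwise fall back to a rescale pass."""
    if frame_size == window_size:
        return frame  # direct copy, no per-pixel remapping
    return rescale(frame, window_size)  # remap onto the window's pixel grid
```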
  • Advantageously, the streaming system 1200 presented herein provides the user with a seamless visual experience as the user switches between the different views from each of the cameras 1202. Although the streaming system 1200 sends both high definition and low definition streams for each camera 1202 to the user, video compression can be used to reduce the overall bandwidth required. For example, the video streams can be compressed using codecs such as MPEG-4, H.264, or H.265, or other forms of compression as would be understood in the art. In an embodiment, the low definition and high definition streams can share a common audio stream to further reduce bandwidth. In an embodiment, the low definition streams and high definition streams can be streamed separately over distinct network connections to the computing device 1220. In an embodiment, the streams can be combined into a single network connection.
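To make the bandwidth trade-off concrete, the rough arithmetic below uses assumed H.264 and AAC bitrates (the patent gives no figures) and shows the saving from sharing one audio track between the paired streams:

```python
# Assumed typical bitrates; the patent provides no figures.
HD_VIDEO_KBPS = 5000  # 1080p H.264 video
LD_VIDEO_KBPS = 300   # CIF-style H.264 video
AUDIO_KBPS = 128      # AAC audio

separate_audio = HD_VIDEO_KBPS + LD_VIDEO_KBPS + 2 * AUDIO_KBPS  # 5556 kbps
shared_audio = HD_VIDEO_KBPS + LD_VIDEO_KBPS + AUDIO_KBPS        # 5428 kbps
print(f"per camera: {separate_audio} kbps separate vs {shared_audio} kbps shared audio")
```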
  • While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the spirit and scope of the inventions.

Claims (20)

What is claimed is:
1. A method, comprising:
receiving, by a clone server, streaming video from a plurality of cameras;
cloning, between the clone server and a clone-of-a-clone server, at least a high definition video stream associated with each of the plurality of cameras; and
streaming, by the clone-of-a-clone server to a user computing device, at least one high definition video stream and at least one low definition video stream associated with a common camera.
2. The method of claim 1, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 320 by 240 pixels, and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.
3. The method of claim 1, wherein the streaming video sent by at least one of the plurality of cameras comprises a high definition video stream and a low definition video stream.
4. The method of claim 1, wherein the streaming video sent by at least one of the plurality of cameras consists of only a high definition video stream.
5. The method of claim 4, further comprising:
generating, by at least one of the clone server or the clone-of-a-clone server, a low definition video stream from the high definition video stream.
6. The method of claim 1, further comprising:
synchronizing frames of each of the video streams sent to the user computing device.
7. A system, comprising:
a clone server configured to
receive streaming video from each of a plurality of cameras, and
transmit, to a clone-of-a-clone server, clones of the streaming video from
at least a subset of the plurality of cameras; and
a clone-of-a-clone server configured to
receive clones of the streaming video, and
stream, to a user computing device, a high definition video stream and a low definition video stream associated with each of at least a subset of the received clones of streaming video.
8. The system of claim 7, wherein each low definition video stream uses a common intermediate format (CIF) resolution of 320 by 240 pixels and wherein each high definition video stream uses a 1080p resolution of 1920 by 1080 pixels.
9. The system of claim 7, wherein each low definition video stream is selected from the group consisting of CIF, VGA, 4CIF, and D1, and wherein each high definition video stream is selected from the group consisting of 720p, 1 Megapixel, and 1080p.
10. The system of claim 7, further comprising:
a plurality of cameras each configured to stream video to a clone server.
11. The system of claim 10, wherein at least one camera is configured to stream a high definition video stream and a low definition video stream to the clone server.
12. The system of claim 10, wherein at least one camera is configured to stream only a high definition video stream to the clone server.
13. The system of claim 12, wherein the clone server is further configured to receive a high definition video stream from the at least one camera, and transmit a clone of the high definition video stream to the clone-of-a-clone server, and wherein the clone-of-a-clone server is configured to generate an associated low definition video stream from the clone of the high definition video stream.
14. The system of claim 7, wherein each streaming video received from a camera comprises a high definition video stream and wherein the clone-of-a-clone server is further configured to generate a low definition video stream from each high definition video stream.
15. The system of claim 7, wherein the clone-of-a-clone server is configured to synchronize frames of each high definition video stream and low definition video stream that are streamed to the user computing device.
16. The system of claim 7, further comprising:
a plurality of clone servers, and
wherein the clone-of-a-clone server is configured to receive video streams from at least a subset of the plurality of clone servers.
17. The system of claim 7, further comprising:
a plurality of clone-of-a-clone servers, and
wherein each clone server is configured to send video streams to at least one of the plurality of clone-of-a-clone servers.
18. The system of claim 7, further comprising:
a user computing device configured to
receive a plurality of low definition video streams and an associated plurality of high definition video streams,
display each of the received low definition video streams,
receive a user selection associated with one of the displayed low definition video streams, and
display a high definition video stream associated with the user selection.
19. A system, comprising:
a clone server configured to
receive a high definition video stream from a camera, and
clone the high definition video stream onto a clone-of-a-clone server; and
a clone-of-a-clone server configured to
receive a plurality of cloned high definition video streams,
generate an associated low definition video stream for each high definition video stream, and
selectively stream parallel video streams to a user computing device, and
wherein each parallel video stream comprises one of the plurality of high definition video streams and the associated low definition video stream.
20. The system of claim 19, wherein each low definition video stream has a common intermediate format (CIF) resolution of 320 by 240 pixels and wherein each high definition video stream has a 1080p resolution of 1920 by 1080 pixels.
US15/433,984 2016-09-09 2017-02-15 Cloned Video Streaming Abandoned US20180077430A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/433,984 US20180077430A1 (en) 2016-09-09 2017-02-15 Cloned Video Streaming

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662385685P 2016-09-09 2016-09-09
US15/433,984 US20180077430A1 (en) 2016-09-09 2017-02-15 Cloned Video Streaming

Publications (1)

Publication Number Publication Date
US20180077430A1 true US20180077430A1 (en) 2018-03-15

Family

ID=59930787

Family Applications (5)

Application Number Title Priority Date Filing Date
US15/434,003 Abandoned US20180077437A1 (en) 2016-09-09 2017-02-15 Parallel Video Streaming
US15/433,984 Abandoned US20180077430A1 (en) 2016-09-09 2017-02-15 Cloned Video Streaming
US15/699,651 Active US10327014B2 (en) 2016-09-09 2017-09-08 Three-dimensional telepresence system
US16/443,481 Active US10750210B2 (en) 2016-09-09 2019-06-17 Three-dimensional telepresence system
US16/946,826 Active US10880582B2 (en) 2016-09-09 2020-07-08 Three-dimensional telepresence system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/434,003 Abandoned US20180077437A1 (en) 2016-09-09 2017-02-15 Parallel Video Streaming

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/699,651 Active US10327014B2 (en) 2016-09-09 2017-09-08 Three-dimensional telepresence system
US16/443,481 Active US10750210B2 (en) 2016-09-09 2019-06-17 Three-dimensional telepresence system
US16/946,826 Active US10880582B2 (en) 2016-09-09 2020-07-08 Three-dimensional telepresence system

Country Status (7)

Country Link
US (5) US20180077437A1 (en)
EP (1) EP3510768B1 (en)
JP (2) JP7001675B2 (en)
KR (3) KR20190026804A (en)
CN (2) CN109565567B (en)
DE (1) DE202017105484U1 (en)
WO (1) WO2018049201A1 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD778941S1 (en) 2016-01-08 2017-02-14 Apple Inc. Display screen or portion thereof with graphical user interface
US20180077437A1 (en) 2016-09-09 2018-03-15 Barrie Hansen Parallel Video Streaming
GB201621879D0 (en) * 2016-12-21 2017-02-01 Branston Ltd A crop monitoring system and method
EP3593525A1 (en) * 2017-03-10 2020-01-15 Sling Media PVT Ltd Media session management
US11064226B2 (en) * 2017-03-16 2021-07-13 Echo-Sense, Inc. System and method for concurrent data streams from a singular sensor with remotely selectable parameters
US11140455B1 (en) * 2017-06-09 2021-10-05 Amazon Technologies, Inc. Video encoder network sandboxing
TWI665649B (en) * 2018-02-27 2019-07-11 鴻海精密工業股份有限公司 Micro led array, display and electronic device
US10785422B2 (en) * 2018-05-29 2020-09-22 Microsoft Technology Licensing, Llc Face recognition using depth and multi-spectral camera
WO2020030989A1 (en) * 2018-08-09 2020-02-13 Corephotonics Ltd. Multi-cameras with shared camera apertures
US10764533B2 (en) 2018-11-09 2020-09-01 Google Llc Computer workstation with curved lenticular display
CN210168142U (en) * 2019-01-17 2020-03-20 深圳市光鉴科技有限公司 Display device and electronic equipment with 3D camera module
WO2020210937A1 (en) * 2019-04-15 2020-10-22 Shanghai New York University Systems and methods for interpolative three-dimensional imaging within the viewing zone of a display
US11516374B2 (en) 2019-06-05 2022-11-29 Synaptics Incorporated Under-display image sensor
US11057549B2 (en) * 2019-08-16 2021-07-06 Lenovo (Singapore) Pte. Ltd. Techniques for presenting video stream next to camera
US11153513B2 (en) * 2019-08-19 2021-10-19 Synaptics Incorporated Light source for camera
CN112394527A (en) * 2019-08-19 2021-02-23 上海鲲游光电科技有限公司 Multi-dimensional camera device and application terminal and method thereof
CN116506715A (en) * 2019-09-27 2023-07-28 苹果公司 Method and apparatus for operating a lenticular display
US11076080B2 (en) 2019-12-05 2021-07-27 Synaptics Incorporated Under-display image sensor for eye tracking
US20210409893A1 (en) * 2020-06-25 2021-12-30 Microsoft Technology Licensing, Llc Audio configuration for displayed features
US20230341557A1 (en) * 2020-09-18 2023-10-26 Myung Il MOON Three-dimensional image obtainment device
JP7386888B2 (en) * 2020-10-08 2023-11-27 グーグル エルエルシー Two-shot composition of the speaker on the screen
WO2022115119A1 (en) * 2020-11-30 2022-06-02 Google Llc Three-dimensional (3d) facial feature tracking for autostereoscopic telepresence systems
US11818637B2 (en) * 2021-06-10 2023-11-14 Getac Technology Corporation Providing alternate communication proxies for media collection devices
SE545897C2 (en) * 2022-02-04 2024-03-05 Livearena Tech Ab System and method for producing a shared video stream
CN114567767B (en) * 2022-02-23 2024-06-18 京东方科技集团股份有限公司 Display device, light field acquisition method, image data transmission method and related equipment
CN114827465B (en) * 2022-04-19 2024-09-27 京东方科技集团股份有限公司 Image acquisition method and device and electronic equipment
US12039140B2 (en) 2022-04-25 2024-07-16 Zoom Video Communications, Inc. Configuring a graphical user interface for display at an output interface during a video conference
IL296044B2 (en) * 2022-08-29 2024-08-01 Abu Freh Ismael System and method for streaming video in real-time via virtual reality headset using a camera network
WO2024210419A1 (en) * 2023-04-04 2024-10-10 엘지이노텍 주식회사 Information device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020069265A1 (en) * 1999-12-03 2002-06-06 Lazaros Bountour Consumer access systems and methods for providing same
US20110063500A1 (en) * 2009-09-15 2011-03-17 Envysion, Inc. Video Streaming Method and System
US20120254933A1 (en) * 2011-03-31 2012-10-04 Hunt Electronic Co., Ltd. Network video server and video control method thereof
US8576271B2 (en) * 2010-06-25 2013-11-05 Microsoft Corporation Combining direct and routed communication in a video conference
US20150128174A1 (en) * 2013-11-04 2015-05-07 Broadcom Corporation Selecting audio-video (av) streams associated with an event
US20150281746A1 (en) * 2014-03-31 2015-10-01 Arris Enterprises, Inc. Adaptive streaming transcoder synchronization

Family Cites Families (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335011A (en) * 1993-01-12 1994-08-02 Bell Communications Research, Inc. Sound localization system for teleconferencing using self-steering microphone arrays
CN1172267A (en) * 1996-07-29 1998-02-04 冯有纲 New stereoscopic visual image technique and device
US6208373B1 (en) * 1999-08-02 2001-03-27 Timothy Lo Fong Method and apparatus for enabling a videoconferencing participant to appear focused on camera to corresponding users
AU6171200A (en) * 1999-08-10 2001-03-05 Peter Mcduffie White Communications system
GB2411735A (en) * 2004-03-06 2005-09-07 Sharp Kk Control of liquid crystal alignment in an optical device
JP2005303683A (en) 2004-04-12 2005-10-27 Sony Corp Image transceiver
US7535468B2 (en) * 2004-06-21 2009-05-19 Apple Inc. Integrated sensing display
JP5090337B2 (en) * 2005-04-08 2012-12-05 リアルディー インコーポレイテッド Autostereoscopic display with planar pass-through
WO2008132724A1 (en) * 2007-04-26 2008-11-06 Mantisvision Ltd. A method and apparatus for three dimensional interaction with autosteroscopic displays
US20090146915A1 (en) 2007-12-05 2009-06-11 Marathe Madhav V Multiple view display device
CN101472133B (en) * 2007-12-28 2010-12-08 鸿富锦精密工业(深圳)有限公司 Apparatus and method for correcting image
US9684380B2 (en) * 2009-04-02 2017-06-20 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
JP2010171573A (en) 2009-01-21 2010-08-05 Epson Imaging Devices Corp Three-dimensional image display-imaging device, communication system, and display device
US8570423B2 (en) * 2009-01-28 2013-10-29 Hewlett-Packard Development Company, L.P. Systems for performing visual collaboration between remotely situated participants
JP5732064B2 (en) 2009-11-17 2015-06-10 エーテーハー チューリヒ Transparent autostereoscopic image display apparatus and method
US8823782B2 (en) * 2009-12-31 2014-09-02 Broadcom Corporation Remote control with integrated position, viewer identification and optical and audio test
KR101725044B1 (en) * 2010-05-27 2017-04-11 삼성전자주식회사 Imaging display apparatus
CN101866056A (en) 2010-05-28 2010-10-20 中国科学院合肥物质科学研究院 3D imaging method and system based on LED array common lens TOF depth measurement
JP5494283B2 (en) 2010-06-24 2014-05-14 ソニー株式会社 3D display device and 3D display device control method
KR101280636B1 (en) * 2010-07-29 2013-07-01 주식회사 팬택 Active type display apparatus for stereographic image and driving method thereof
US8624960B2 (en) * 2010-07-30 2014-01-07 Silicon Image, Inc. Multi-view display system
KR101732135B1 (en) 2010-11-05 2017-05-11 삼성전자주식회사 3dimension video communication apparatus and method for video processing of 3dimension video communication apparatus
US20120139906A1 (en) * 2010-12-03 2012-06-07 Qualcomm Incorporated Hybrid reality for 3d human-machine interface
US8823769B2 (en) * 2011-01-05 2014-09-02 Ricoh Company, Ltd. Three-dimensional video conferencing system with eye contact
WO2012100434A1 (en) * 2011-01-30 2012-08-02 Nokia Corporation Method, apparatus and computer program product for three-dimensional stereo display
JP2012169822A (en) 2011-02-14 2012-09-06 Nec Personal Computers Ltd Image processing method and image processing device
US20120223885A1 (en) * 2011-03-02 2012-09-06 Microsoft Corporation Immersive display experience
US20120257004A1 (en) * 2011-04-05 2012-10-11 Polycom, Inc. Direct Eye-Contact Enhancing Videoconferencing Unit
JP5834533B2 (en) * 2011-06-23 2015-12-24 沖電気工業株式会社 Communication system and communication device
JP2013125985A (en) 2011-12-13 2013-06-24 Sharp Corp Display system
JP2013128181A (en) 2011-12-16 2013-06-27 Fujitsu Ltd Display device, display method, and display program
US9024844B2 (en) * 2012-01-25 2015-05-05 Microsoft Technology Licensing, Llc Recognition of image on external display
CN104081780A (en) * 2012-01-31 2014-10-01 索尼公司 Image processing apparatus and image processing method
US9591418B2 (en) * 2012-04-13 2017-03-07 Nokia Technologies Oy Method, apparatus and computer program for generating an spatial audio output based on an spatial audio input
HUE045628T2 (en) 2012-04-20 2020-01-28 Affirmation Llc Systems and methods for real-time conversion of video into three-dimensions
US20130321564A1 (en) * 2012-05-31 2013-12-05 Microsoft Corporation Perspective-correct communication window with motion parallax
KR101350996B1 (en) * 2012-06-11 2014-01-13 재단법인 실감교류인체감응솔루션연구단 3d video-teleconferencing apparatus capable of eye contact and method using the same
US20140063198A1 (en) * 2012-08-30 2014-03-06 Microsoft Corporation Changing perspectives of a microscopic-image device based on a viewer's perspective
US8976224B2 (en) 2012-10-10 2015-03-10 Microsoft Technology Licensing, Llc Controlled three-dimensional communication endpoint
KR101977711B1 (en) * 2012-10-12 2019-05-13 삼성전자주식회사 Depth sensor, image capturing method thereof and image processing system having the depth sensor
US20140146394A1 (en) * 2012-11-28 2014-05-29 Nigel David Tout Peripheral display for a near-eye display device
CN109288333B (en) * 2012-12-18 2021-11-30 艾斯适配有限公司 Apparatus, system and method for capturing and displaying appearance
US20140176684A1 (en) * 2012-12-24 2014-06-26 Alejandro Varela Techniques for multiple viewer three-dimensional display
US9307217B1 (en) * 2013-06-12 2016-04-05 Ambarella, Inc. Portable video camera/recorder having video security feature
JP6199619B2 (en) * 2013-06-13 2017-09-20 株式会社ニューフレアテクノロジー Vapor growth equipment
KR20140147376A (en) * 2013-06-19 2014-12-30 삼성전자주식회사 Layered type color-depth sensor and 3D image acquisition apparatus employing the sensor
US20140375541A1 (en) * 2013-06-25 2014-12-25 David Nister Eye tracking via depth camera
US9325936B2 (en) * 2013-08-09 2016-04-26 Samsung Electronics Co., Ltd. Hybrid visual communication
CN104427049A (en) * 2013-08-30 2015-03-18 深圳富泰宏精密工业有限公司 Portable electronic device
US20150097925A1 (en) 2013-10-04 2015-04-09 Electronics And Telecommunications Research Institute Apparatus and method for displaying hologram based on pupil tracking using hybrid camera
US20150235408A1 (en) * 2014-02-14 2015-08-20 Apple Inc. Parallax Depth Rendering
CN104866261B (en) * 2014-02-24 2018-08-10 联想(北京)有限公司 A kind of information processing method and device
US20150324646A1 (en) * 2014-05-08 2015-11-12 Brown University Navigation methods and apparatus for the visually impaired
US10171792B2 (en) * 2014-08-15 2019-01-01 The University Of Akron Device and method for three-dimensional video communication
KR102269318B1 (en) * 2014-09-15 2021-06-28 삼성디스플레이 주식회사 Display device and display system including the same
US10248192B2 (en) * 2014-12-03 2019-04-02 Microsoft Technology Licensing, Llc Gaze target application launcher
KR102396289B1 (en) * 2015-04-28 2022-05-10 삼성디스플레이 주식회사 Three dimensional image display device and driving method thereof
JP6509027B2 (en) * 2015-05-12 2019-05-08 キヤノン株式会社 Object tracking device, optical apparatus, imaging device, control method of object tracking device, program
US9609275B2 (en) * 2015-07-08 2017-03-28 Google Inc. Single-stream transmission method for multi-user video conferencing
US20170070804A1 (en) * 2015-09-03 2017-03-09 Monster, Llc Multifunction Wireless Adapter
KR20170035608A (en) * 2015-09-23 2017-03-31 삼성전자주식회사 Videotelephony System, Image Display Apparatus, Driving Method of Image Display Apparatus, Method for Generation Realistic Image and Computer Readable Recording Medium
US10203566B2 (en) * 2015-12-21 2019-02-12 Facebook Technologies, Llc Enhanced spatial resolution using a segmented electrode array
US20180077437A1 (en) 2016-09-09 2018-03-15 Barrie Hansen Parallel Video Streaming

Also Published As

Publication number Publication date
CN109565567B (en) 2020-12-08
US20180077384A1 (en) 2018-03-15
JP7001675B2 (en) 2022-01-19
KR20190026804A (en) 2019-03-13
EP3510768B1 (en) 2023-05-24
EP3510768A1 (en) 2019-07-17
KR20200078703A (en) 2020-07-01
JP2022009242A (en) 2022-01-14
WO2018049201A1 (en) 2018-03-15
US10880582B2 (en) 2020-12-29
DE202017105484U1 (en) 2018-01-09
US20180077437A1 (en) 2018-03-15
JP7443314B2 (en) 2024-03-05
US20200344500A1 (en) 2020-10-29
US10750210B2 (en) 2020-08-18
US20190306541A1 (en) 2019-10-03
KR102142643B1 (en) 2020-08-07
US10327014B2 (en) 2019-06-18
CN109565567A (en) 2019-04-02
JP2019533324A (en) 2019-11-14
KR20200096322A (en) 2020-08-11
CN112584080B (en) 2023-10-24
CN112584080A (en) 2021-03-30
KR102256707B1 (en) 2021-05-26

Similar Documents

Publication Publication Date Title
US20180077430A1 (en) Cloned Video Streaming
US20180077438A1 (en) Streaming audio and video for sporting venues
US11770591B2 (en) Systems, apparatus, and methods for rendering digital content streams of events, and synchronization of event information with rendered streams, via multiple internet channels
US11871088B2 (en) Systems, apparatus, and methods for providing event video streams and synchronized event information via multiple Internet channels
EP3459252B1 (en) Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback
US11838563B2 (en) Switching between transmitting a preauthored video frame and a composited video frame
US10187609B2 (en) Systems and methods for providing interactive video services
US11153615B2 (en) Method and apparatus for streaming panoramic video
US20230379531A1 (en) Systems, apparatus and methods for rendering digital content
Liu et al. LIME: understanding commercial 360 live video streaming services
CN106817628B (en) Network live broadcast platform
US20240223781A1 (en) Systems and methods for multiple bit rate content encoding
CN108093300B (en) Animation capture management system
US20160182930A1 (en) Systems and methods for enabling simultaneous second screen data access during ongoing primary screen programming
KR102276636B1 (en) Method and Apparatus for Automatic Tracking and Replaying Images Based on Artificial Intelligence
US20230239525A1 (en) Server, method and terminal
Niamut et al. Live event experiences-interactive UHDTV on mobile devices
US20180063253A1 (en) Method, system and device for providing live data streams to content-rendering devices
KR101295002B1 (en) Panoramic Video Interface Providing System and Method using SNS Information
US10623803B2 (en) Essence content creation, modification and/or delivery methods and systems
Tunturipuro Building a low-cost streaming system: Streaming and camera operating system for live internet productions
EP4035399A2 (en) Systems and methods for providing content based on multiple angles

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION