WO2017180050A1 - System and method for providing virtual pan-tilt-zoom, ptz, video functionality to a plurality of users over a data network - Google Patents


Info

Publication number
WO2017180050A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
network
tilt
camera
metadata
Prior art date
Application number
PCT/SE2017/050364
Other languages
French (fr)
Inventor
Magnus LINDEROTH
Håkan ARDÖ
Klas JOSEPHSON
Original Assignee
Spiideo Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spiideo Ab filed Critical Spiideo Ab
Priority to US16/092,467 priority Critical patent/US10834305B2/en
Priority to CN201780021360.1A priority patent/CN108886583B/en
Priority to EP17782745.8A priority patent/EP3443737B1/en
Priority to CN202111207059.4A priority patent/CN114125264B/en
Publication of WO2017180050A1 publication Critical patent/WO2017180050A1/en
Priority to US17/092,971 priority patent/US11283983B2/en


Classifications

    • H04N 21/6587: Control parameters, e.g. trick play commands, viewpoint selection
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H04N 21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N 21/65: Transmission of management data between client and server
    • H04N 21/816: Monomedia components involving special video data, e.g. 3D video
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/661: Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 5/04: Synchronising
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
    • H04N 5/2628: Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N 17/002: Diagnosis, testing or measuring for television cameras

Definitions

  • This patent application describes an inventive system and an associated method of providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
  • Sports clubs, such as football clubs, are becoming increasingly aware of the benefits of using video recordings of athletes and teams in practice and matches. This is done in order to increase team and athlete performance, develop individual athlete skills and improve overall sports results.
  • PTZ is an abbreviation of pan-tilt-zoom.
  • PTZ-cameras are often used in television production for creating video streams of, for example, sporting events and matches.
  • this is, however, also a costly camera arrangement and requires a plurality of expensive PTZ-cameras to generate good video quality from all angles. Installing PTZ-cameras in the training area is thus not an option for most smaller clubs, youth teams etc., due to the great costs.
  • a first aspect of the present invention is a system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
  • the system comprises one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
  • the system also comprises a back-end camera calibration unit which is adapted to obtain a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field and save the obtained relative position and orientation as metadata for the respective network video camera.
  • the system also comprises a back-end video processing unit, connected to the data network and adapted to receive the respective raw video segment(s) recorded by the network video camera(s), and store or buffer video contents of the respective raw video segment(s) in a back-end video storage.
  • the system comprises a back-end video streaming unit adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network.
  • the system comprises a plurality of client devices for use by the plurality of users. Each client device is adapted to receive the streamed video contents and the metadata, determine a chosen virtual pan-tilt-zoom value and present, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
  • the invention provides highly automated video recording and playback services which may be conveniently used, for instance, as a tool for coaches and athletes to analyze and discuss athletic performance on a real-world target field such as, for instance, a soccer pitch, hockey rink, handball court, basketball court, floorball court, volleyball court, etc.
  • a system is provided where one or more cameras are used to capture a real-world target field (e.g. sports field), possibly with a very wide field of view.
  • the viewer (e.g. a coach or an athlete) can choose where to look himself.
  • inventive video recording and playback services may be enjoyed by remote viewers in the form of live streaming of, for instance, sports events of any of the types referred to above.
  • a second aspect of the present invention is a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
  • the method involves providing one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
  • the method also involves obtaining a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field, and saving the obtained relative position and orientation as metadata for the respective network video camera.
  • the method also involves receiving over the data network the respective raw video segment(s) recorded by the network video camera(s), and storing or buffering video contents of the respective raw video segment(s) in a cloud-based video storage. Furthermore, the method involves streaming the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network. Finally, the method involves, in respective client devices for use by a respective one of the plurality of users: receiving the streamed video contents and the metadata, determining a chosen virtual pan-tilt-zoom value, and presenting, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
  • Fig. 1 is a schematic view of a system for providing virtual pan-tilt-zoom video functionality to a plurality of users over a data network according to one embodiment.
  • Fig. 2 is a schematic front view schematically illustrating the virtual pan-tilt-zoom function according to an embodiment.
  • Fig. 3 is a schematic front view schematically illustrating the interface of a client device according to an embodiment.
  • Fig. 4 is a schematic front view schematically illustrating the interface of a client device according to an embodiment.
  • Figs. 5a-c are schematic front views showing stacking of frames according to an embodiment.
  • Figs. 6a-b are schematic front views showing blacked-out pixels according to an embodiment.
  • Figs. 7a-b are schematic front views schematically illustrating the interface of a client device when using the functions of tagging and drawing according to an embodiment.
  • Figs. 8a-b are flowchart diagrams illustrating a method for providing virtual pan-tilt-zoom video functionality according to an embodiment.
  • Figure 1 discloses a system 100 for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users 60a-60c over a data network 30.
  • the data network may, for instance, be a wide-area network or aggregation of networks which form part of the Internet (commonly also referred to as the Cloud).
  • the system 100 comprises one or more network video cameras 10a-10e which are positioned to make a video recording of a real-world target field 20, or a respective subarea thereof, and output a respective raw video segment onto the data network 30.
  • Each such raw video segment may be of a given length, such as for instance n seconds, where n is any value suitably chosen in consideration of implementation details, and contain recorded video contents, for instance, in the form of H.264 or H.265 encoded video data.
  • each raw video segment comprises recorded video contents in the form of H.264 encoded video data.
  • the real-world target field 20 is a soccer pitch, but the invention has no specific limitations in this regard.
  • each network video camera 10a-10e may be mounted at a respective stationary position with respect to the real-world target field 20, and it need not have any real (physical) pan, tilt or zoom capabilities. Hence, a very efficient solution is provided in terms of component cost, installation and/or maintenance.
  • each network video camera 10a-10e is mounted at a respective position which is movable with respect to the real-world target field 20.
  • the real-world target field 20 is recorded by a single ultra-wide view network video camera 10a, essentially covering the entire target field 20.
  • the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e.
  • field overview cameras 10a, 10b and 10c may be mounted at the center of the soccer pitch 20 and oriented in appropriate directions so as to record different subareas of the soccer pitch 20, with or without overlap. If the cameras are mounted close to each other with overlapping fields of view, the captured frames can be stitched together and used as if captured by a single camera with a wide field of view.
  • Detailed view cameras 10d and 10e may be mounted, for instance, near the two goals on either side of the soccer pitch 20, thereby serving to provide close-up video views of soccer action near the goals.
  • the system 100 comprises cloud-based back-end functionality 40 acting as a service bridge between the network video camera(s) 10a-10e and the users 60a-60c via the data network 30.
  • the back-end functionality 40 comprises a number of units (or modules or devices) which shall be seen as functional elements rather than structural/physical; they may be structurally implemented, for instance, by one or more server computers with suitable operating system, software and networking access capabilities.
  • the back-end functionality 40 comprises a back-end camera calibration unit 42 which is adapted to register calibration data for the or each of the network video camera(s) 10a-10e.
  • the calibration of each network video camera involves determining a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field.
  • the calibration of each network video camera 10a-10e involves determining both extrinsic and intrinsic parameters.
  • the extrinsic parameters describe the position and orientation of the respective network video camera 10a-10e with respect to the world space for the target field 20.
  • the intrinsic parameters describe the internal properties of the camera 10a-10e, such as focal length and lens distortion.
  • the camera calibration can be done locally at each network camera by an installer using an appropriate tool, such as a laptop computer, a mobile terminal or a tablet computer. Alternatively or additionally, the camera calibration can be done centrally, e.g. at the back-end functionality 40. In such a case, the back-end camera calibration unit 42 may automatically detect, or a back-end operator may manually mark, known features of the target field 20 in images from the cameras, such as for instance the border lines, penalty areas, center line and center circle of the soccer pitch 20. Using the known geometry of the sports field, the relative position of the cameras can be accurately estimated without any large overlap of the cameras' fields of view, using methods known from the literature.
  • a pixel coordinate from a captured image can be converted to a direction, or a ray, in the three dimensional world space. This calculation can be done offline for a chosen set of grid points (mesh) in the source frames, facilitating fast real-time calculations by interpolation of the grid points.
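The grid-based offline calculation described above can be sketched as a small bilinear interpolation routine. The following Python sketch is purely illustrative and not part of the application as filed; the grid layout and function names are assumptions:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def interpolate_ray(grid, step, x, y):
    """Bilinearly interpolate a world-space ray for pixel (x, y).

    `grid[i][j]` holds the precomputed ray for the grid point at pixel
    (j * step, i * step); rays for intermediate pixels are interpolated,
    which avoids a full per-pixel calibration calculation at playback
    time.
    """
    gx, gy = x / step, y / step
    j, i = int(gx), int(gy)
    fx, fy = gx - j, gy - i
    r00, r10 = grid[i][j], grid[i][j + 1]
    r01, r11 = grid[i + 1][j], grid[i + 1][j + 1]
    # Blend the four surrounding grid rays, then renormalize.
    ray = tuple(
        (1 - fy) * ((1 - fx) * a + fx * b) + fy * ((1 - fx) * c + fx * d)
        for a, b, c, d in zip(r00, r10, r01, r11)
    )
    return normalize(ray)
```

A coarse grid (e.g. one point every 100 pixels) keeps the precomputed mesh small while the interpolation stays fast enough for real-time use.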
  • the back-end camera calibration unit 42 registers the calibration results determined in any of the ways referred to above for each calibrated network video camera 10a-10e by obtaining the determined relative position and orientation, and saving them as metadata for the respective network video camera.
  • the metadata may, for instance, be stored in a back-end video storage 41.
  • the relative position and orientation can be seen as extrinsic calibration parameters for the respective network video camera.
  • the calibration may also involve intrinsic calibration parameters, such as focal length and distortion, for the respective network video camera.
  • the back-end functionality 40 also comprises a back-end video processing unit 44, which is connected to the data network 30 and adapted to receive the respective raw video segment(s) as recorded by the network video camera(s) 10a; 10a-e.
  • the video processing unit 44 is also adapted to store or buffer video contents of the respective raw video segment(s) in the back-end video storage 41.
  • the back-end video storage 41 may be any media suitable for storing digital video contents and the aforementioned metadata of each network video camera.
  • the back-end video storage 41 may be implemented as one or more magnetic hard disks, solid state drives or random access memories, included in or connected to (locally or via the data network 30) any of the units of the back-end functionality 40.
  • a back-end video streaming unit 46 of the back-end functionality 40 is adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera(s) 10a; 10a-e onto the data network 30.
  • the streaming may be made according to any commercially available standard, for instance, being based on H.264 video encoding.
  • the video contents originating from the network video cameras 10a-e are distributed from the data network 30 for use either in real-time streaming or in post recording.
  • On the client side, the system 100 also comprises a plurality of client devices 50a-50c for use by the plurality of users 60a-60c.
  • Each client device may, for instance, be implemented as a mobile terminal (e.g. smartphone), a tablet computer, a smart watch, a personal computer, etc.
  • the client device 50a-50c is adapted to receive the streamed video contents and the metadata from the video streaming unit 46 over the data network.
  • each client device 50a-50c has a suitable networking interface, such as IEEE 802.11, UMTS, LTE, etc.
  • Each client device 50a-50c also has a display 52a-52c.
  • each client device 50a-50c has suitable video decoding capabilities, e.g. for H.264 video decoding.
  • each client device 50a-50c has video playback functionality.
  • the video playback functionality can be provided on the application level (i.e., as an app), as a plug-in to another application (e.g. a web browser), or at lower levels in the program stack in the client device.
  • each client device 50a-50c is moreover adapted to determine a chosen virtual pan-tilt-zoom value, as shown in Figure 3, and present the (decoded) video contents by projecting directions, or rays, in the three dimensional world space onto the display 52a-52c of the client device based on the chosen virtual pan-tilt-zoom value and the metadata.
  • the pan-tilt-zoom value defines a projection plane of the virtual camera, and the screen coordinate of a direction, or ray, in space is determined by where it intersects the projection plane. How the directions, or rays, in world space map to pixels in the source images is determined by the camera calibration. Plane projections are not appropriate for very wide fields of view.
  • direction may be interpreted as a direction from the camera capturing the video, making it equivalent to a ray, which has a starting point and a direction.
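As an illustrative sketch (not part of the application), projecting a world-space direction onto the display for a chosen virtual pan-tilt-zoom value can look as follows, with pan and tilt treated as rotation angles and zoom acting as a focal length in pixels; all names and sign conventions are assumptions:

```python
import math

def project_direction(ray, pan, tilt, zoom, width, height):
    """Project a world-space direction onto the virtual camera's display.

    The pan-tilt-zoom value defines the projection plane: pan and tilt
    (in radians) rotate the direction into the virtual camera frame, and
    zoom scales the projection like a focal length in pixels. Returns
    (u, v) screen coordinates, or None if the direction points away from
    the projection plane.
    """
    x, y, z = ray
    # Rotate about the vertical axis by the pan angle.
    cp, sp = math.cos(pan), math.sin(pan)
    x, z = cp * x - sp * z, sp * x + cp * z
    # Rotate about the horizontal axis by the tilt angle.
    ct, st = math.cos(tilt), math.sin(tilt)
    y, z = ct * y - st * z, st * y + ct * z
    if z <= 0:
        return None  # behind the virtual camera; no intersection
    # Intersect with the projection plane at distance `zoom`.
    return (width / 2 + zoom * x / z, height / 2 + zoom * y / z)
```

A direction straight along the virtual optical axis lands at the display center; directions behind the projection plane are rejected, which is one reason plane projections break down for very wide fields of view.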
  • An embodiment of a client device 50a playing a video stream is shown in Figure 3.
  • the display 52a is arranged to display graphical objects 54a-d such as a virtual key, menu item, etc. to control the video recording and playback service.
  • the graphical objects may for example be a stop/play function 54a, forward and backward functions 54b, 54c to step forwards or backwards one frame, a time line 54d showing elapsed time of the video stream, a playback speed function 54e to change the playback speed, and functions 54f, 54g for rewinding or skipping forward a predetermined time period, for example 5 seconds.
  • Further graphical objects relating to different camera views and slow motion functions could also be present.
  • the virtual pan-tilt-zoom value can be chosen by each individual user 60a, 60b or 60c by interaction in a user interface of the respective client device 50a, 50b or 50c.
  • the client device has a touch-sensitive user interface
  • such interaction may involve dragging on the display surface to command a desired direction for the pan and tilt components of the virtual pan-tilt-zoom value, and pinching on the display surface to command a change in the zoom component of the virtual pan-tilt-zoom value.
  • Other ways known per se of interacting in a user interface to command desired changes in virtual pan, tilt and zoom may alternatively be used.
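A minimal sketch of how drag and pinch interactions might map onto the virtual pan-tilt-zoom value, assuming a (pan, tilt, zoom) tuple and an illustrative sensitivity scaling; none of these names or constants are specified in the application:

```python
def apply_gesture(ptz, drag=(0.0, 0.0), pinch=1.0, sensitivity=0.005):
    """Update a (pan, tilt, zoom) tuple from touch input.

    Dragging on the display surface changes the pan and tilt components
    (scaled by a sensitivity factor), while pinching multiplies the zoom
    component. Illustrative only: the scaling and clamping values are
    assumptions, not taken from the application.
    """
    pan, tilt, zoom = ptz
    dx, dy = drag
    pan -= dx * sensitivity / zoom   # zoomed-in views pan more slowly
    tilt -= dy * sensitivity / zoom
    zoom = max(0.5, min(8.0, zoom * pinch))  # clamp to a sane zoom range
    return (pan, tilt, zoom)
```

Dividing the drag sensitivity by the current zoom is a common design choice so that a given finger movement corresponds to a similar on-screen movement at any zoom level.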
  • This embodiment hence allows a first user 60a and a second user 60b to concurrently play back the video stream and apply independently chosen and mutually different virtual pan-tilt-zoom values to the presented video contents on the displays 52a and 52b of the respective client devices 50a and 50b.
  • the virtual pan-tilt-zoom value can be chosen by an entity at the back-end side. For instance, a human operator may watch the video stream and control (e.g. repeatedly or continuously adjust) the virtual pan-tilt-zoom value to match the expected preference of the remote viewers, depending on the momentary developments in the ongoing sports event.
  • the thus chosen virtual pan-tilt-zoom value may be included in the video stream from the video streaming unit 46, or sent in a separate control stream synchronized with the video stream, and retrieved by the playback functionality in each client device 50a, 50b, 50c.
  • all users (remote live stream viewers) 60a-60c can experience the same virtual pan-tilt-zoom in the presented video contents on the displays 52a-52c of the respective client devices 50a-50c, but they can also choose to control the virtual pan-tilt-zoom according to their own preference at any point during playback.
  • multiple users 60a-60c can independently pan, tilt and zoom in the video both in real- time and in post recording.
  • the sequence of virtual pan-tilt-zoom values may be used to let the video processing unit 44 reproject the video and generate a stream that can be viewed on any standard video playback device, such as a TV.
  • Figure 4 shows an embodiment where the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e and where the user 60a-60c, being a remote stream viewer, can choose between different camera views 56, 58a-d.
  • the video streams from the different camera views are shown as a main stream 56 and several additional views 58a-d.
  • the additional views 58a-d are shown as thumbnails (VIEW 1, VIEW 2, VIEW 3 and VIEW 4).
  • the additional views 58a-d show video streams from a plurality of cameras 10a-e and are arranged adjacent to the main video 56 in the user interface of the client device 50a-c.
  • the user 60a-60c can select a video stream to be the main stream 56 by interacting with their respective client device 50a, 50b or 50c. Such an interaction may involve clicking on the desired thumbnail to change that additional view to be the main stream.
  • when a user 60a-60c selects a video stream as its main stream 56, the transition between the views will be visible to the user to facilitate viewing of the stream during the change. Hence, the user 60a-c will see how the pan-tilt-zoom values change when an additional view becomes the main stream view.
  • the virtual pan-tilt-zoom value is chosen by a tracking unit 48 comprised in the back-end functionality 40.
  • the tracking unit is adapted to apply video object tracking techniques known per se by way of real-time analysis of the video stream (or the raw video segments) to automatically detect an estimated current activity center in the ongoing activity (e.g. sport event) on the target field 20, adjust the virtual pan-tilt-zoom value accordingly and either include the adjusted virtual pan-tilt-zoom value in the video stream from the video streaming unit 46, or send it in a separate control stream synchronized with the video stream from the video streaming unit 46.
  • the tracking unit 48 may be trained either automatically, for example by an algorithm, in the system or by manual input. In this embodiment, there is thus no need for manual camera handling, neither for recording the stream nor for managing the pan-tilt-zoom. Hence, no camera man or human operator is needed.
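Purely as an illustration, choosing a virtual pan-tilt value from tracked positions could reduce to aiming the virtual camera at the centroid of the detections; the function below is a hypothetical stand-in for the tracking unit 48, not its actual algorithm:

```python
import math

def follow_activity(detections, camera_pos):
    """Derive a virtual pan-tilt value from tracked object positions.

    `detections` are (x, y, z) world-space positions of tracked objects
    (e.g. players and the ball); the estimated activity center is taken
    to be their centroid, and pan/tilt aim the virtual camera at it from
    `camera_pos`. A simple illustrative stand-in for the tracking unit.
    """
    n = len(detections)
    cx, cy, cz = (sum(p[i] for p in detections) / n for i in range(3))
    dx, dy, dz = cx - camera_pos[0], cy - camera_pos[1], cz - camera_pos[2]
    pan = math.atan2(dx, dz)                    # left/right aiming angle
    tilt = math.atan2(dy, math.hypot(dx, dz))   # up/down aiming angle
    return pan, tilt
```

In practice the resulting values would also be smoothed over time so the virtual camera does not jitter with every detection.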
  • the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e.
  • the back-end video processing unit 44 is adapted to analyze the respective raw video segments as recorded by the field overview cameras 10a-10c and detailed view cameras 10d-10e, so as to identify frames in the respective video segments which were recorded at the same time, and then synchronize these frames so that they appear concurrently in the video contents streamed by the video streaming unit 46.
  • each network video camera includes timestamps of the recorded frames in the raw video segments.
  • each network video camera 10a-10e regularly synchronizes its local real-time clock against a networked absolute time reference, such as an Internet time source.
  • each network video camera 10a-10e includes an absolute start time value in or with the raw video segments transmitted to the back-end functionality 40.
  • the video processing unit 44 accordingly uses these absolute start time values together with the time stamps of the recorded frames in the raw video segments to synchronize the frames of the raw video segments from the different network video cameras 10a-10e.
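This synchronization from absolute segment start times and per-frame timestamps may be sketched as follows; the data layout and matching tolerance are assumptions, not taken from the application:

```python
def synchronize(segments, tolerance=0.02):
    """Group frames recorded at the same time across cameras.

    Each entry of `segments` maps a camera id to a pair
    (absolute_start_time, [frame_timestamps]); the frame timestamps are
    relative to the segment start, as included by each camera. Frames
    whose absolute times agree within `tolerance` seconds are grouped
    together. Illustrative only.
    """
    # Flatten to (absolute_time, camera_id, frame_index) and sort by time.
    frames = sorted(
        (start + ts, cam, i)
        for cam, (start, stamps) in segments.items()
        for i, ts in enumerate(stamps)
    )
    groups, current = [], [frames[0]]
    for f in frames[1:]:
        if f[0] - current[0][0] <= tolerance:
            current.append(f)  # same instant, within tolerance
        else:
            groups.append(current)
            current = [f]
    groups.append(current)
    return groups
```

Each resulting group then contains one frame per camera for a given instant, ready to be composed into a single stream.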
  • the raw video segments from the field overview cameras 10a-10c can be handled in a special way.
  • the back-end video processing unit 44 (or the video streaming unit 46) generates a composed (or stacked) video stream from the synchronized frames of the respective raw video segments from the field overview cameras 10a-10c.
  • the synchronized frames may be stacked (see Figures 5a-c), for instance put one above the other or side by side, in one and the same frame of the composed video stream sent to the client devices 50a-50c.
  • two frames 12b, 12c are generated from the two field overview cameras 10b and 10c.
  • the frames 12b, 12c each have a height h.
  • the two frames 12b, 12c are then stacked by putting one frame 12b above the other frame 12c in one single frame 13 of the composed video stream, resulting in a frame 13 with height 2h.
  • the frames 12b, 12c are scaled in height so that the total height of the single stacked frame equals the height of one of the original frames 12b, 12c, as illustrated in Fig. 5b.
  • the height of each frame being stacked is h/2
  • the total height of the single stacked frame 13 is h.
  • the width of the frames remains the same. Scaling height and width separately, resulting in a frame with standard aspect ratio, makes it possible to get high resolution while enjoying support from a wide range of video decoders.
  • the resolution of each frame 12b, 12c before being stacked is 4K, i.e. the horizontal resolution is in the order of 4,000 pixels and the vertical resolution in the order of 2,000 pixels. After the two frames are stacked into one single frame 13, the total resolution of said stacked frame is 4K.
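The stacking of two height-scaled frames into one frame of the original height can be sketched as below, representing frames as lists of pixel rows; a real encoder would filter when scaling rather than simply drop rows, so this is illustrative only:

```python
def stack_frames(top, bottom):
    """Stack two equal-size frames into one frame of the same height.

    Each frame is a list of pixel rows. Both frames are scaled to half
    height (here by naive row dropping), then placed one above the
    other, so the stacked frame keeps the original height and width and
    thus a standard aspect ratio that ordinary decoders handle well.
    """
    def half_height(frame):
        return frame[::2]  # keep every other row
    return half_height(top) + half_height(bottom)
```

The playback device knows from the metadata which half of the stacked frame belongs to which camera and samples pixels accordingly.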
  • the saved metadata for each field overview camera will allow the client device 50a-50c to project the correct contents from the synchronized frames in the stacked frame, depending on the chosen virtual pan-tilt-zoom value.
  • Letting the playback device receive the frames from the different video sources in a single stacked video has two main advantages for the playback device: it needs only a single video decoder, and the frames from the different video sources are automatically synchronized.
  • Figure 5c shows an embodiment where two field overview cameras 10b and 10c are arranged to cover an entire sports field, such that areas that are not of interest are also covered.
  • the two frames 12b, 12c shown are generated from the two field overview cameras 10b and 10c and are stacked on top of each other.
  • the areas which are not of interest are marked as blacked-out pixels 14 in the figure.
  • the blacked-out pixels 14 can be used to transfer other information.
  • the blacked-out pixels 14 can be used to transfer scaled-down versions of the frames from the other cameras. This is shown in Figure 6a.
  • the scaled-down versions of the frames may be used to show thumbnails of the video streams from all cameras adjacent to the main video, as shown in Figure 6b.
  • the blacked-out pixels 14 could additionally be used when a user switches video streams. When switching which video stream to view, the blacked-out pixels 14 may then be used to show a low-resolution version of the newly selected stream while the full-resolution version is being buffered from the video streaming unit 46.
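A sketch of how the blacked-out pixels 14 might carry a scaled-down version of another camera's frame, as in Figure 6a. The region coordinates, the decimation step and the function name are illustrative assumptions, not values from the described system.

```python
import numpy as np

def embed_thumbnail(composed: np.ndarray, other_frame: np.ndarray,
                    y: int, x: int, step: int = 8) -> np.ndarray:
    """Write a scaled-down version of a frame from another camera into an
    otherwise blacked-out region of the composed frame, starting at (y, x)."""
    thumb = other_frame[::step, ::step]  # naive downscaling stand-in
    out = composed.copy()
    out[y:y + thumb.shape[0], x:x + thumb.shape[1]] = thumb
    return out

# Composed frame with a blacked-out band at the bottom (all zeros).
composed = np.zeros((2160, 3840, 3), dtype=np.uint8)
other = np.full((2160, 3840, 3), 50, dtype=np.uint8)   # another camera's frame
with_thumb = embed_thumbnail(composed, other, y=1890, x=0)
```

The playback device can crop this region out to show thumbnails next to the main video, or display it full-screen as a low-resolution preview while a newly selected full-resolution stream is buffering.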
  • the video contents from the detailed view cameras 10d-10e are sent in separate video streams by the video streaming unit 46, where each frame in each stream is timestamped so that it can be synchronized with the composed video stream.
  • This allows each user 60a-60c to switch between the composed video stream, i.e. to look at the field overview at a chosen virtual pan-tilt-zoom value, and any of the video streams from the detailed view cameras 10d-10e, i.e. to instead have a detailed look, and vice versa, while making sure the same time instant is displayed in all streams.
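The timestamp-based switching described above can be sketched as follows: given the timestamp currently shown in the composed stream, pick the closest frame in the detailed-view stream. A minimal sketch; the 25 fps frame rate and millisecond timestamps are assumptions.

```python
import bisect

def frame_at(timestamps, target_ts):
    """Index of the frame whose timestamp is closest to target_ts, so that
    switching streams keeps the same time instant on the display."""
    i = bisect.bisect_left(timestamps, target_ts)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    before, after = timestamps[i - 1], timestamps[i]
    return i if after - target_ts < target_ts - before else i - 1

# Detailed-view stream 10d at an assumed 25 fps (40 ms per frame).
detailed_ts = [n * 40 for n in range(250)]
```

When the user switches from the composed stream to a detailed view, the player looks up `frame_at(detailed_ts, current_ts)` and resumes playback from that frame.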
  • each client device 50a-50c has a function allowing the user 60a-60c to insert a tag in the video contents presented on the display 52a-52c.
  • the tag has a time parameter (reflecting the temporal position of the tagged frame in the video stream, expressed as, for instance, a hh:mm:ss value, a frame number, a time stamp, or combinations thereof) as well as a spatial parameter (reflecting a real-world object in the tagged frame, and/or a position of a pixel or group of pixels in the tagged frame).
  • the tag is transmitted by the client device in question back to a tagging unit 49 of the back-end functionality 40, thereby allowing the tag to be stored in the back-end storage 41 and also allowing later retrieval when the relevant video contents are streamed a next time. Tagging can thus be performed when the user 60a-60c is watching in real-time during the recording to the video, or when watching the video at a later time.
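A tag combining a temporal and a spatial parameter, as described above, could for instance be represented and serialized like this before being sent back to the tagging unit 49. An illustrative sketch only: the field names and the JSON wire format are assumptions, not part of the described system.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Tag:
    user_id: str
    # Temporal parameter: position of the tagged frame in the stream.
    timestamp: str          # e.g. "00:12:34"
    frame_number: int
    # Spatial parameter: a pixel position (and/or a real-world object id).
    pixel_xy: tuple
    label: str = ""

def serialize(tag: Tag) -> str:
    """Encode a tag for transmission to the back-end tagging unit."""
    return json.dumps(asdict(tag))

def deserialize(payload: str) -> Tag:
    """Restore a stored tag for later retrieval and playback overlay."""
    d = json.loads(payload)
    d["pixel_xy"] = tuple(d["pixel_xy"])
    return Tag(**d)

tag = Tag("60a", "00:12:34", 18850, (640, 360), "defender F2")
```

Storing tags in this form in the back-end storage 41 lets any client retrieve them the next time the relevant video contents are streamed.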
  • coaches and team members may share common information by tagging contents in the video contents during playback.
  • the tagging function allows coaches and others to quickly navigate through extensive amounts of video material.
  • the information on which the tagging is based may also originate from other sources than the cameras 10a-e and may be inserted automatically by the system (i.e. not by the user).
  • the tag comprises information gathered by a heart rate monitor in the form of a watch, chest strap or similar wearable device worn by the athlete.
  • the tag may comprise information relating to the heart rate, GPS position, step counter and similar information relating to the health or physical status of the athlete.
  • the wearable device comprises means for communication in order to transmit the gathered information to the system.
  • the communication means arranged in the wearable device may for example be Bluetooth, ANT or other low-power radio links.
  • the information gathered by the wearable device is only shown once the user inserts a tag on the athlete wearing said device.
  • the information gathered from the wearable device is always shown for all connected athletes irrespective of the used tags.
  • the spatial parameter of the tag represents a drawing made by one of the users 60a-60c in a presented video frame during playback.
  • the drawing may, for instance, be a geometrical object such as a line, circle or rectangle, as is illustrated in Figure 7a-b. This can advantageously be used as follows.
  • the in-frame drawing feature enables the user 60a-60c to draw in the videos captured by one of the network cameras. For example, the user could mark the position of an athlete or draw how he should act in the target field.
  • the in-frame drawing feature may be performed by touching or clicking on the display 52a-52c of the client device 50a-50c.
  • FIG. 7a An example is seen in Figure 7a, where six athletes in the form of soccer players F1-F6 are shown on real -world target field 20 in the form of a soccer pitch.
  • the soccer player F1 is not marked, whereas the positions of the soccer players F2-F4 are all marked, as indicated by the surrounding circles.
  • the dashed line between the soccer players F2-F4 is drawn by the user 60a-60c, for example to create a virtual line of defenders on the pitch and/or to connect the positions of the individual players F2-F4 to each other.
  • the user 60a-60c may also indicate a special area of interest in the target field 20, here marked as a lined box. This area may represent an area to which the user 60a-60c wishes the players F1-F6 to move, or an area which is especially important for other reasons.
  • the position of the soccer player F6 is marked by the surrounding circle, and the desirable direction of movement of the player F6 is drawn by drawing an arrow in the desired direction.
  • Figure 7b shows an embodiment where the user or the system has generated a plurality of virtual areas (AREA 1, AREA 2, AREA 3, AREA 4, AREA 5) on the target field 20.
  • the virtual areas may for example be used to differentiate between different tactical areas of the target field 20, to help visualize different team and player setups and formations.
  • AREA 1 and AREA 5 may for example be the tactical areas where the football goals are located and the defensive team formations are set up,
  • AREA 3 is the center field area, and
  • AREA 2 and AREA 4 are the offensive team formation areas between the goals and the center circle.
  • although the areas in Figure 7b are shown in the form of rectangles, it should be understood that the areas could have other shapes, such as a square, triangle, circle or a grid.
  • the areas may also be in the form of a text string, images, animations etc.
  • the user 60a-60c draws and marks out the areas on the target field 20.
  • the system is provided with pre-saved areas, for example stored in the back-end storage 41 or in the tagging unit 49.
  • the pre-saved areas can be activated by the system itself or by a user 60a-60c.
  • the pre-saved areas may be generated by using parameters automatically gained or generated from the target field 20 and/or by user input regarding parameters of the target field 20.
  • the pre-saved areas may for example be templates used by coaches, athletes or sports journalists.
  • although the areas in Figure 7b are shown in one plane, it should be understood that the tags, such as a drawing or a pre-defined area, can be used in video streams showing different perspectives of the target field 20.
  • the features that were drawn using the first camera will still appear to be in the same position in the real world.
  • the pixel coordinates can be converted to directions, or rays, in the three dimensional world space, using the calibration of the first camera.
  • the directions, or rays can be converted to positions in the world space by finding the intersection between the direction, or ray, and the ground plane of the sport field.
  • the positions on the ground plane can be projected onto frames from the second camera, using the calibration of the second camera.
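The steps above (pixel in the first camera → world-space ray → ground-plane point → pixel in the second camera) can be sketched with a simple pinhole model. A minimal sketch assuming no lens distortion; the intrinsic matrix K, the camera poses and the straight-down orientation are illustrative assumptions, not calibration values from the described system.

```python
import numpy as np

def pixel_to_ray(K, R, pixel):
    """Back-project a pixel to a world-space ray direction, using the
    first camera's intrinsics K and world-to-camera rotation R."""
    u, v = pixel
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return R.T @ d_cam

def ray_ground_intersection(cam_pos, direction):
    """Intersect the ray starting at cam_pos with the ground plane z = 0."""
    t = -cam_pos[2] / direction[2]
    return cam_pos + t * direction

def world_to_pixel(K, R, cam_pos, point):
    """Project a ground-plane point into the second camera's frame."""
    p_cam = K @ (R @ (point - cam_pos))
    return p_cam[:2] / p_cam[2]

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
R_down = np.diag([1.0, -1.0, -1.0])  # both cameras look straight down here
cam1 = np.array([10.0, 20.0, 50.0])
cam2 = np.array([-10.0, 20.0, 40.0])

# A drawing pixel in camera 1 maps to a fixed point on the pitch...
ray = pixel_to_ray(K, R_down, (1060.0, 440.0))
ground_pt = ray_ground_intersection(cam1, ray)
# ...which can then be re-projected into camera 2's frames, so the drawn
# feature appears at the same real-world position in both views.
pixel2 = world_to_pixel(K, R_down, cam2, ground_pt)
```

The same ground point re-projects correctly into any calibrated camera, which is what keeps drawings anchored to the real world when the user switches video streams.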
  • the drawing made by one user 60a will appear on the client devices 50b, 50c of the other users 60b, 60c.
  • the tags can thus be shared online in real time.
  • the tag is transmitted by the client device which belongs to the user 60a making the tag back to the tagging unit 49 of the back-end functionality 40.
  • the tag is stored in the back-end storage 41 and other users 60b-60c can retrieve, via their respective client devices 50b, 50c, the tags generated by the first user 60a. In this way the coach 60a can draw instructions on one device 50a which are seen on the client devices 50b, 50c of the athletes.
  • Figures 8a-b disclose a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users 60a-60c over a data network 30.
  • Figure 8a illustrates a first part of said method.
  • in a first step 110, one or more network video cameras are provided and positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
  • the method also involves obtaining 120 a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field.
  • the obtained relative position and orientation is saved 130 as metadata for the respective network video camera.
  • the system then receives 140, over the data network, the respective raw video segment(s) recorded by the network video camera(s).
  • the respective raw video segment(s) are then stored or buffered 150 in a cloud-based video storage.
  • the stored or buffered video contents and the saved metadata are streamed 160 for the respective network video camera onto the data network.
  • Figure 8b illustrates a second part of said method, performed in respective client devices for use by a respective one of the plurality of users.
  • the client device receives 170 the streamed video contents and the metadata.
  • the client device determines 180 a chosen virtual pan-tilt-zoom value.
  • the video contents are presented 190, on a display of the client device, by projecting directions, or rays, in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
  • one inventive aspect can be seen as a system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
  • the system comprises one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
  • the system also comprises a back-end camera calibration unit which is adapted to:
  • the system also comprises a back-end video processing unit, connected to the data network and adapted to:
  • the system comprises a back-end video streaming unit adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network.
  • the system comprises a plurality of client devices for use by the plurality of users.
  • Each client device is adapted to:
  • Another inventive aspect can be seen as a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
  • the method involves providing one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
  • the method also involves:
  • the method also involves:
  • the method involves streaming the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network.
  • the method involves, in respective client devices for use by respective one of the plurality of users:

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Studio Devices (AREA)

Abstract

A system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users (60a-60c) over a data network (30) is provided. The system (100) comprises one or more network video cameras (10a-10e) positioned to make a video recording of a real-world target field (20), or a respective subarea thereof, and output a respective raw video segment onto the data network (30), a back-end camera calibration unit (42) which is adapted to obtain a relative position and orientation of the respective network video camera (10a-10e) with respect to a three dimensional world space for the target field (20), and to save the obtained relative position and orientation as metadata for the respective network video camera (10a-10e), a back-end video processing unit (44), connected to the data network (30) and adapted to receive the respective raw video segment(s) recorded by the network video camera(s) (10a-10e), and to store or buffer video contents of the respective raw video segment(s) in a back-end video storage (41), a back-end video streaming unit (46) adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera (10a-10e) onto the data network (30), and a plurality of client devices (50a-50c) for use by the plurality of users (60a-60c). Each client device (50a-50c) is adapted to receive the streamed video contents and the metadata, determine a chosen virtual pan-tilt-zoom value and present, on a display (52a-52c) of the client device, the video contents by projecting directions, in the three dimensional world space, onto the display (52a-52c) based on the chosen virtual pan-tilt-zoom value and the metadata.

Description

SYSTEM AND METHOD FOR PROVIDING VIRTUAL PAN-TILT-ZOOM, PTZ, VIDEO FUNCTIONALITY TO A PLURALITY OF USERS OVER A
DATA NETWORK
TECHNICAL FIELD
This patent application describes an inventive system and an associated method of providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
BACKGROUND
Sports clubs, such as football clubs, are becoming increasingly aware of the benefits of using video recordings of athletes and teams during practice and matches. This is done in order to increase team and athlete performance, develop individual athlete skills and improve overall sports results.
To get access to video, one alternative is to use handheld regular cameras at the practice field and hire a camera operator who is present all the time. This is, however, expensive, reduces flexibility, and wastes time on inefficient manual camera work. In addition, the videos are in many cases of poor quality and fail to capture all important actions that happen simultaneously on different parts of the field.
Broadcast PTZ (pan-tilt-zoom) cameras are often used in television production for creating video streams of, for example, sporting events and matches. However, this is also a costly camera arrangement and requires a plurality of expensive PTZ-cameras to generate good video quality from all angles. Installing PTZ-cameras in the training area is thus not an option for most smaller clubs, youth teams etc. due to the high costs.
There is thus a need for a camera system providing video material for sports clubs, coaches and video analysts, that is easy to use, always available, flexible, inexpensive and set up to efficiently capture the entire practice or match field.
SUMMARY
An object of the present invention is therefore to provide a solution to, or at least a mitigation of, one or more of the problems or drawbacks identified in the background section above. A first aspect of the present invention is a system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
The system comprises one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network. The system also comprises a back-end camera calibration unit which is adapted to obtain a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field and save the obtained relative position and orientation as metadata for the respective network video camera. The system also comprises a back-end video processing unit, connected to the data network and adapted to receive the respective raw video segment(s) recorded by the network video camera(s), and store or buffer video contents of the respective raw video segment(s) in a back-end video storage.
Furthermore, the system comprises a back-end video streaming unit adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network. Finally, the system comprises a plurality of client devices for use by the plurality of users. Each client device is adapted to receive the streamed video contents and the metadata, determine a chosen virtual pan-tilt-zoom value and present, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
The invention provides highly automated video recording and playback services which may be conveniently used, for instance, as a tool for coaches and athletes to analyze and discuss athletic performance on a real-world target field such as, for instance, a soccer pitch, hockey rink, handball court, basketball court, floorball court, volleyball court, etc. Hence, a system is provided where one or more cameras are used to capture a real-world target field (e.g. sports field), possibly with a very wide field of view. At playback, the viewer (e.g. a coach or an athlete) can choose where to look himself. This results in a virtual camera that can be reoriented and zoomed, as if the scene had been captured by a real PTZ (pan-tilt-zoom) camera. However, with this system the camera motion can be decided after the video is recorded.
Additionally or alternatively, the inventive video recording and playback services may be enjoyed by remote viewers in the form of live streaming of, for instance, sports events of any of the types referred to above.
A second aspect of the present invention is a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network. The method involves providing one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network. For the purpose of calibrating the or each of the network video camera(s), the method also involves obtaining a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field, and saving the obtained relative position and orientation as metadata for the respective network video camera. The method also involves receiving over the data network the respective raw video segment(s) recorded by the network video camera(s), and storing or buffering video contents of the respective raw video segment(s) in a cloud-based video storage. Furthermore, the method involves streaming the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network. Finally, the method involves, in respective client devices for use by a respective one of the plurality of users, receiving the streamed video contents and the metadata, determining a chosen virtual pan-tilt-zoom value, and presenting, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
Embodiments of the invention are defined by the appended dependent claims and are further explained in the detailed description section as well as on the drawings.
It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps, or components, but does not preclude the presence or addition of one or more other features, integers, steps, components, or groups thereof. All terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the [element, device, component, means, step, etc]" are to be interpreted openly as referring to at least one instance of the element, device, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
Objects, features and advantages of embodiments of the invention will appear from the following detailed description, reference being made to the accompanying drawings.
Fig. 1 is a schematic view of a system for providing virtual pan-tilt-zoom video functionality to a plurality of users over a data network according to one embodiment.
Fig. 2 is a schematic front view schematically illustrating the virtual pan-tilt- zoom function according to an embodiment.
Fig. 3 is a schematic front view schematically illustrating the interface of a client device according to an embodiment.
Fig. 4 is a schematic front view schematically illustrating the interface of a client device according to an embodiment.
Figs. 5a-c are schematic front views showing stacking of frames according to an embodiment,
Figs. 6a-b are schematic front views showing blacked-out pixels according to an embodiment,
Figs. 7a-b are schematic front views schematically illustrating the interface of a client device when using the functions of tagging and drawing according to an embodiment, and
Figs. 8a-b are flowchart diagrams illustrating a method for providing virtual pan-tilt-zoom video functionality according to an embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS
Figure 1 discloses a system 100 for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users 60a-60c over a data network 30. The data network may, for instance, be a wide-area network or aggregation of networks which form part of the Internet (commonly also referred to as the Cloud). The system 100 comprises one or more network video cameras 10a-10e which are positioned to make a video recording of a real-world target field 20, or a respective subarea thereof, and output a respective raw video segment onto the data network 30. Each such raw video segment may be of a given length, such as for instance n seconds, where n is any value suitably chosen in consideration of implementation details, and contain recorded video contents, for instance, in the form of H.264 or H.265 encoded video data. In one embodiment each raw video segment comprises recorded video contents in the form of H.264 encoded video data. In the disclosed embodiment of Figure 1, the real-world target field 20 is a soccer pitch, but the invention has no specific limitations in this regard.
Different arrangements of network video camera(s)
Noticeably, each network video camera 10a-10e may be mounted at a respective stationary position with respect to the real-world target field 20, and it need not have any real (physical) pan, tilt or zoom capabilities. Hence, a very efficient solution is provided in terms of component cost, installation and/or maintenance.
Alternatively, each network video camera 10a-10e is mounted at a respective position which is movable with respect to the real-world target field 20.
In one embodiment, the real-world target field 20 is recorded by a single ultra- wide view network video camera 10a, essentially covering the entire target field 20.
In other embodiments, the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e. For instance, as seen in Figure 1, field overview cameras 10a, 10b and 10c may be mounted at the center of the soccer pitch 20 and oriented in appropriate directions so as to record different subareas of the soccer pitch 20, with or without overlap. If the cameras are mounted close to each other with overlapping fields of view, the captured frames can be stitched together and used as if captured by a single camera with a wide field of view. Detailed view cameras 10d and 10e may be mounted, for instance, near the two goals on either side of the soccer pitch 20, thereby serving to provide close-up video views of soccer action near the goals.
The system 100 comprises cloud-based back-end functionality 40 acting as a service bridge between the network video camera(s) 10a-10e and the users 60a-60c via the data network 30. The back-end functionality 40 comprises a number of units (or modules or devices) which shall be seen as functional elements rather than structural/physical; they may be structurally implemented for instance by one or more server computers with suitable operating system, software and networking access capabilities.
Back-end - Camera calibration
Hence, the back-end functionality 40 comprises a back-end camera calibration unit 42 which is adapted to register calibration data for the or each of the network video camera(s) 10a-10e. In one embodiment, the calibration of each network video camera involves determining a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field.
In one embodiment, the calibration of each network video camera 10a-10e involves determining both extrinsic and intrinsic parameters. The extrinsic parameters describe the position and orientation of the respective network video camera 10a-10e with respect to the world space for the target field 20. The intrinsic parameters describe the internal properties of the camera 10a-10e, such as focal length and lens distortion.
The camera calibration can be done locally at each network camera by an installer person using an appropriate tool, such as a laptop computer, a mobile terminal or a tablet computer. Alternatively or additionally, the camera calibration can be done centrally, e.g. at the back-end functionality 40. In such a case, the back-end camera calibration unit 42 may automatically detect, or a back-end operator may manually mark, known features of the target field 20 in images from the cameras, such as for instance the border lines, penalty areas, center line and center circle of the soccer pitch 20. Using the known geometry of the sport field, the relative position of the cameras can be accurately estimated without any large overlap of the cameras' fields of view using methods known from the literature.
Using the calibration, a pixel coordinate from a captured image can be converted to a direction, or a ray, in the three dimensional world space. This calculation can be done offline for a chosen set of grid points (mesh) in the source frames, facilitating fast real-time calculations by interpolation of the grid points.
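The offline grid precomputation with fast online interpolation mentioned above might look like this. A sketch assuming a pure pinhole model, where bilinear interpolation of the back-projected grid rays is exact; the mesh density and intrinsic matrix are illustrative assumptions.

```python
import numpy as np

# Offline: back-project a coarse mesh of pixel coordinates to rays once.
K_inv = np.linalg.inv(np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]]))
xs = np.linspace(0.0, 1920.0, 9)   # 9 x 5 grid points over a 1920 x 1080 frame
ys = np.linspace(0.0, 1080.0, 5)
grid = np.array([[K_inv @ np.array([x, y, 1.0]) for x in xs] for y in ys])

def interpolate_ray(u, v):
    """Online: bilinear interpolation between the precomputed grid rays,
    avoiding a full back-projection (possibly including distortion
    correction) for every pixel at playback time."""
    i = max(min(int(np.searchsorted(xs, u)) - 1, len(xs) - 2), 0)
    j = max(min(int(np.searchsorted(ys, v)) - 1, len(ys) - 2), 0)
    fx = (u - xs[i]) / (xs[i + 1] - xs[i])
    fy = (v - ys[j]) / (ys[j + 1] - ys[j])
    top = (1 - fx) * grid[j, i] + fx * grid[j, i + 1]
    bot = (1 - fx) * grid[j + 1, i] + fx * grid[j + 1, i + 1]
    return (1 - fy) * top + fy * bot
```

With lens distortion in the model, the grid would be computed through the full (nonlinear) back-projection, and the interpolation then becomes an approximation whose accuracy is controlled by the mesh density.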
Hence, the back-end camera calibration unit 42 registers the calibration results having been determined in any of the ways referred to above for each calibrated network video camera(s) 10a-10e by obtaining the determined relative position and orientation, and saving them as metadata for the respective network video camera. The metadata may, for instance, be stored in a back-end video storage 41.
As previously described, the relative position and orientation can be seen as extrinsic calibration parameters for the respective network video camera. The calibration may also involve intrinsic calibration parameters, such as focal length and distortion, for the respective network video camera.
Back-end - Video processing and storage/buffering
The back-end functionality 40 also comprises a back-end video processing unit 44, which is connected to the data network 30 and adapted to receive the respective raw video segment(s) as recorded by the network video camera(s) 10a; 10a-e. The video processing unit 44 is also adapted to store or buffer video contents of the respective raw video segment(s) in the back-end video storage 41.
The back-end video storage 41 may be any media suitable for storing digital video contents and the aforementioned metadata of each network video camera. For instance, the back-end video storage 41 may be implemented as one or more magnetic hard disks, solid state drives or random access memories, included in or connected to (locally or via the data network 30) any of the units of the back-end functionality 40.
Back-end - Video streaming
A back-end video streaming unit 46 of the back-end functionality 40 is adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera(s) 10a; 10a-e onto the data network 30. The streaming may be made according to any commercially available standard, for instance, being based on H.264 video encoding.
In one embodiment, the video content originating from the network video cameras 10a-e is distributed from the data network 30 to either be used as real-time streaming or in post recording.
The client side
The system 100 also comprises a plurality of client devices 50a-50c for use by the plurality of users 60a-60c. Each client device may, for instance, be implemented as a mobile terminal (e.g. smartphone), a tablet computer, a smart watch, a personal computer, etc. The client device 50a-50c is adapted to receive the streamed video contents and the metadata from the video streaming unit 46 over the data network. To this end, each client device 50a-50c has a suitable networking interface, such as IEEE 802.11, UMTS, LTE, etc. Each client device 50a-50c also has a display 52a-52c.
Moreover, each client device 50a-50c has suitable video decoding capabilities, e.g. for H.264 video decoding. In addition, each client device 50a-50c has video playback functionality. The video playback functionality can be provided on the application level (i.e., as an app), as a plug-in to another application (e.g. a web browser), or at lower levels in the program stack in the client device.
The video playback functionality of each client device 50a-50c is moreover adapted to determine a chosen virtual pan-tilt-zoom value, as shown in Figure 3, and present the (decoded) video contents by projecting directions, or rays, in the three dimensional world space onto the display 52a-52c of the client device based on the chosen virtual pan-tilt-zoom value and the metadata. In the simplest case, the pan-tilt-zoom value defines a projection plane of the virtual camera, and the screen coordinate of a direction, or ray, in space is determined by where it intersects the projection plane. How the directions, or rays, in world space map to pixels in the source images is determined by the camera calibration. Plane projections are not appropriate for very wide fields of view. Hence, artificial distortion can be applied to the virtual camera to better present wide fields of view on the screen of the user device. The term direction may be interpreted as a direction from the camera capturing the video, making it equivalent to a ray, which has a starting point and a direction.
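The plane projection described above can be sketched as follows: a world-space direction is rotated by the virtual camera's pan and tilt and then intersected with the projection plane. A minimal sketch; treating zoom as a virtual focal length in pixels and the rotation order pan-then-tilt are assumptions, not details from the described system.

```python
import numpy as np

def ptz_rotation(pan, tilt):
    """Virtual camera rotation: pan about the vertical axis, then tilt
    about the horizontal axis (angles in radians)."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    r_pan = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_tilt = np.array([[1, 0, 0], [0, ct, st], [0, -st, ct]])
    return r_tilt @ r_pan

def ray_to_screen(direction, pan, tilt, zoom, width=1920, height=1080):
    """Find where a direction in world space intersects the projection
    plane defined by the chosen virtual pan-tilt-zoom value."""
    d = ptz_rotation(pan, tilt) @ np.asarray(direction, dtype=float)
    if d[2] <= 0:
        return None  # direction points away from the virtual camera
    return (zoom * d[0] / d[2] + width / 2,
            zoom * d[1] / d[2] + height / 2)
```

Changing pan or tilt reorients the projection plane, and increasing zoom magnifies the projected content, which is what makes the virtual camera behave as if the scene had been captured by a real PTZ camera.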
An embodiment of a client device 50a playing a video stream is shown in Figure 3. The display 52a is arranged to display graphical objects 54a-g such as a virtual key, menu item, etc. to control the video recording and playback service. The graphical objects may for example be a stop/play function 54a, a forward function 54b and a backward function 54c to go forward/backwards one frame, a time line 54d showing lapsed time of the video stream, a playback speed function 54e to change the playback speed, and functions 54f, 54g for rewinding or skipping forward a predetermined time period, for example 5 seconds. Further graphical objects relating to different camera views and slow motion functions could also be present.
Choosing the virtual pan-tilt-zoom value
In one embodiment, targeted particularly for applications where the inventive automated video recording and playback services are provided as a tool for coaches and athletes to analyze and discuss athletic performance on a sports field, the virtual pan-tilt-zoom value can be chosen by each individual user 60a, 60b or 60c by interaction in a user interface of the respective client device 50a, 50b or 50c. When the client device has a touch-sensitive user interface, such interaction may involve dragging on the display surface to command a desired direction for the pan and tilt components of the virtual pan-tilt-zoom value, and pinching on the display surface to command a change in the zoom component of the virtual pan-tilt-zoom value. Other ways known per se of interacting in a user interface to command desired changes in virtual pan, tilt and zoom may alternatively be used.
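A minimal sketch of how the drag and pinch gestures could be translated into pan-tilt-zoom changes (the helper names, the angle convention and the clamping limits are illustrative assumptions, not taken from the text):

```python
import math

def apply_drag(pan, tilt, dx_px, dy_px, zoom_px):
    """Convert a drag gesture (in pixels) into new pan/tilt angles.

    Dragging moves the scene with the finger, so the view angles change
    by the angle subtended by the drag at the current focal length.
    """
    pan -= math.atan2(dx_px, zoom_px)
    tilt -= math.atan2(dy_px, zoom_px)
    tilt = max(-math.pi / 2, min(math.pi / 2, tilt))  # clamp to avoid flipping over
    return pan, tilt

def apply_pinch(zoom_px, scale, zoom_min=300.0, zoom_max=8000.0):
    """Scale the zoom (focal length in pixels) by the pinch factor, clamped."""
    return max(zoom_min, min(zoom_max, zoom_px * scale))
```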
This embodiment hence allows a first user 60a and a second user 60b to concurrently play back the video stream and use independently chosen and mutually different virtual pan-tilt-zoom in the presented video contents on the displays 52a and 52b of the respective client devices 50a and 50b.
In another embodiment, targeted particularly for applications where the inventive automated video recording and playback services are provided in the form of live streaming of, for instance, sports events to remote viewers, the virtual pan-tilt-zoom value can be chosen by an entity at the back-end side. For instance, a human operator may watch the video stream and control (e.g. repeatedly or continuously adjust) the virtual pan-tilt-zoom value to match the expected preference of the remote viewers, depending on the momentary developments in the ongoing sports event. The thus chosen virtual pan-tilt-zoom value may be included in the video stream from the video streaming unit 46, or sent in a separate control stream synchronized with the video stream, and retrieved by the playback functionality in each client device 50a, 50b, 50c. In this embodiment, all users (remote live stream viewers) 60a-60c can experience the same virtual pan-tilt-zoom in the presented video contents on the displays 52a-52c of the respective client devices 50a-50c, but they can also choose to control the virtual pan-tilt-zoom according to their own preference at any point during playback. Hence, multiple users 60a-60c can independently pan, tilt and zoom in the video both in real time and after recording.
The sequence of virtual pan-tilt-zoom values may be used to let the video processing unit 44 reproject the video and generate a stream that can be viewed on any standard video playback device, such as a TV.
Figure 4 shows an embodiment where the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e and where the user 60a-60c, being a remote stream viewer, can choose between different camera views 56, 58a-d. The video streams from the different camera views are shown as a main stream 56 and several additional views 58a-d. The additional views 58a-d are shown as thumbnails (VIEW 1, VIEW 2, VIEW 3 and VIEW 4). The additional views 58a-d show video streams from a plurality of cameras 10a-10e and are arranged adjacent to the main video 56 in the user interface of the client device 50a-50c. The user 60a-60c can select a video stream to be the main stream 56 by interacting with their respective client device 50a, 50b or 50c. Such an interaction may involve clicking on the desired thumbnail to change that additional view into the main stream. When a user 60a-60c selects a video stream as the main stream 56, the transition between the views is visible to the user, which facilitates viewing of the stream during the change. Hence, the user 60a-60c will see how the pan-tilt-zoom values change when an additional view becomes the main stream view.
Automatically choosing the virtual pan-tilt-zoom value
In a refined version of the latter two embodiments, the virtual pan-tilt-zoom value is chosen by a tracking unit 48 comprised in the back-end functionality 40. The tracking unit is adapted to apply video object tracking techniques known per se by way of real-time analysis of the video stream (or the raw video segments) to automatically detect an estimated current activity center in the ongoing activity (e.g. sport event) on the target field 20, adjust the virtual pan-tilt-zoom value accordingly and either include the adjusted virtual pan-tilt-zoom value in the video stream from the video streaming unit 46, or send it in a separate control stream synchronized with the video stream from the video streaming unit 46. The tracking unit 48 may be trained either automatically in the system, for example by an algorithm, or by manual input. In this embodiment, there is thus no need for manual camera handling, neither for recording the stream nor for managing the pan-tilt-zoom. Hence, no camera man or human operator is needed.
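As a hedged sketch of the kind of computation such a tracking unit might perform (the detection format, the ground-plane coordinates and the zoom heuristic below are assumptions for illustration, not taken from the text):

```python
import math

def choose_ptz(detections, cam_height=10.0, base_zoom=1200.0):
    """Pick a virtual PTZ value from tracked detections on the ground plane.

    `detections` is a list of (x, z, weight) ground-plane positions relative
    to a camera at the origin, `cam_height` metres above the ground. The
    activity center is the weighted centroid; zoom widens as play spreads out.
    """
    total = sum(w for _, _, w in detections)
    cx = sum(x * w for x, _, w in detections) / total
    cz = sum(z * w for _, z, w in detections) / total
    pan = math.atan2(cx, cz)                  # aim at the centroid
    dist = math.hypot(cx, cz)
    tilt = -math.atan2(cam_height, dist)      # look down at the ground point
    spread = max(math.hypot(x - cx, z - cz) for x, z, _ in detections)
    zoom = base_zoom / (1.0 + spread / 10.0)  # zoom out as the activity spreads
    return pan, tilt, zoom
```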
Multi-camera embodiments
As already mentioned, in some embodiments the real-world target field 20 is recorded by a plurality of network video cameras 10a-10e. In such embodiments, the back-end video processing unit 44 is adapted to analyze the respective raw video segments as recorded by the field overview cameras 10a-10c and detailed view cameras 10d-10e, so as to identify frames in the respective video segments which were recorded at the same time, and then synchronize these frames so that they appear concurrently in the video contents streamed by the video streaming unit 46. To this end, each network video camera includes timestamps of the recorded frames in the raw video segments.
However, there may be time delays between the different raw video segments at the time of recording, at the time of transmission to the back-end functionality 40, or both. To this end, each network video camera 10a-10e regularly synchronizes its local real-time clock against a networked absolute time reference, such as an Internet time source. Moreover, each network video camera 10a-10e includes an absolute start time value in or with the raw video segments transmitted to the back-end functionality 40. The video processing unit 44 accordingly uses these absolute start time values together with the time stamps of the recorded frames in the raw video segments to synchronize the frames of the raw video segments from the different network video cameras 10a-10e.
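The synchronization described above can be sketched as follows, assuming constant-rate segments; the function names and the tolerance value are illustrative, not from the text:

```python
def global_time(start_time_s, frame_index, fps):
    """Absolute capture time of a frame, from the segment's absolute start
    time (synchronized against a network time reference) and frame rate."""
    return start_time_s + frame_index / fps

def match_frames(start_a, fps_a, n_a, start_b, fps_b, n_b, tol=0.005):
    """Pair frames from two cameras whose capture times agree within `tol`
    seconds. Returns a list of (index_a, index_b) pairs."""
    pairs = []
    for i in range(n_a):
        ta = global_time(start_a, i, fps_a)
        # index of the closest frame in stream b
        j = round((ta - start_b) * fps_b)
        if 0 <= j < n_b and abs(global_time(start_b, j, fps_b) - ta) <= tol:
            pairs.append((i, j))
    return pairs
```

With camera b starting one frame (20 ms at 50 fps) after camera a, the first frame of a has no partner and the remaining frames pair off with an index shift of one.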
The raw video segments from the field overview cameras 10a-10c can be handled in a special way. The back-end video processing unit 44 (or the video streaming unit 46) generates a composed (or stacked) video stream from the synchronized frames of the respective raw video segments from the field overview cameras 10a-10c. For instance, when there are two field overview cameras 10b and 10c, each essentially covering one half of the real-world target field 20 (e.g. soccer pitch), the synchronized frames may be stacked (see Figures 5a-c), for instance put one above the other or side by side, in one and the same frame of the composed video stream sent to the client devices 50a-50c.
In an embodiment shown in Figure 5a, two frames 12b, 12c are generated from the two field overview cameras 10b and 10c. The frames 12b, 12c each have a height h. The two frames 12b, 12c are then stacked by putting one frame 12b above the other frame 12c in one single frame 13 of the composed video stream, resulting in a frame 13 with height 2h.
In some embodiments, before being stacked, the frames 12b, 12c are scaled in height so that the total height of the single stacked frame equals the height of one of the original frames 12b, 12c, as illustrated in Fig. 5b. Hence, the height of each frame being stacked is h/2, and the total height of the single stacked frame 13 is h. The width of the frames remains the same. Scaling height and width separately, resulting in a frame with a standard aspect ratio, makes it possible to get high resolution while enjoying support from a wide range of video decoders.
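A minimal sketch of the stacking and height-scaling step, using NumPy arrays as stand-ins for decoded frames (nearest-neighbour decimation is used for brevity; a real encoder pipeline would filter before downscaling):

```python
import numpy as np

def stack_frames(frame_top, frame_bottom):
    """Stack two synchronized overview frames into one composed frame.

    Each input of height h is scaled to h/2, so the stacked frame keeps
    the original height h and a standard aspect ratio.
    """
    def halve_height(f):
        return f[::2]          # keep every second row: h -> h/2
    return np.vstack([halve_height(frame_top), halve_height(frame_bottom)])
```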
In one embodiment, the resolution of each frame 12b, 12c before being stacked is 4K, i.e. the horizontal resolution is in the order of 4,000 pixels and the vertical resolution in the order of 2,000 pixels. After the two frames are stacked into one single frame 13, the total resolution of said stacked frame is 4K.
The relative position and orientation of each field overview camera according to the saved metadata will allow the client device 50a-50c to project the correct contents from the synchronized frames in the stacked frame, depending on the chosen virtual pan-tilt-zoom value. Letting the playback device receive the frames from the different video sources in a single stacked video has two main advantages for the playback device: first, it needs only a single video decoder; second, the frames from the different video sources are automatically synchronized.
Figure 5c shows an embodiment where two field overview cameras 10b and 10c are arranged to cover an entire sports field, including areas that are not of interest. The two frames 12b, 12c shown are generated from the two field overview cameras 10b and 10c and are stacked on top of each other. The areas which are not of interest are marked as blacked-out pixels 14 in the figure. The blacked-out pixels 14 can be used to transfer other information. In one embodiment, the blacked-out pixels 14 are used to transfer scaled-down versions of the frames from the other cameras, as shown in Figure 6a. The scaled-down versions of the frames may be used to show thumbnails of the video streams from all cameras adjacent to the main video, as shown in Figure 6b. The blacked-out pixels 14 could additionally be used when a user switches video streams. When switching which video stream to view, the blacked-out pixels 14 may then be used to show a low-resolution version of the newly selected stream while the full-resolution version is being buffered from the video streaming unit 46.
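A sketch of how a thumbnail could be written into the blacked-out region of the composed frame (the function name and the fit checks are illustrative assumptions):

```python
import numpy as np

def embed_thumbnail(stacked_frame, thumb, x, y):
    """Write a scaled-down camera view into an otherwise-black region of the
    composed frame, so that side streams ride along in the same video."""
    h, w = thumb.shape[:2]
    region = stacked_frame[y:y + h, x:x + w]
    assert region.shape[:2] == (h, w), "thumbnail must fit inside the frame"
    assert not region.any(), "target region must be blacked-out pixels"
    stacked_frame[y:y + h, x:x + w] = thumb
    return stacked_frame
```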
When filming a field from the side, different parts of the field will be at different distances from the camera, thus receiving different resolutions in terms of pixels per meter of the field. In order to transfer or store a video with sufficient resolution of the entire field while keeping data size down, it may be of interest to transform the frames to get a more uniform resolution across the field. This can be seen as a context-aware video compression. The transformed frames are then encoded, for instance, using H.264. Using the known transform described in this paragraph in combination with the known camera calibration, the video can be unwarped in the client devices 50a-50c and presented to the users 60a-60c without any distortion.
The video contents from the detailed view cameras 10d-10e are sent in separate video streams by the video streaming unit 46, but each frame in each stream is timestamped so that it can be synchronized with the composed video stream. This allows each user 60a-60c to switch between the composed video stream, i.e. to look at the field overview at a chosen virtual pan-tilt-zoom value, and any of the video streams from the detailed view cameras 10d-10e, i.e. to instead have a detailed look, and vice versa, while making sure the same time instant is displayed in all streams.
Tagging
In one embodiment, the video playback functionality of each client device 50a-50c has a function allowing the user 60a-60c to insert a tag in the video contents presented on the display 52a-52c. The tag has a time parameter (reflecting the temporal position of the tagged frame in the video stream, expressed as, for instance, a hh:mm:ss value, a frame number, a time stamp, or combinations thereof) as well as a spatial parameter (reflecting a real-world object in the tagged frame, and/or a position of a pixel or group of pixels in the tagged frame). The tag is transmitted by the client device in question back to a tagging unit 49 of the back-end functionality 40, thereby allowing the tag to be stored in the back-end storage 41 and retrieved the next time the relevant video contents are streamed. Tagging can thus be performed when the user 60a-60c is watching in real time during the recording of the video, or when watching the video at a later time.
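A tag carrying both parameters could be represented as follows; the field names and the JSON wire format are illustrative assumptions, since the text does not prescribe a serialization:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Tag:
    """A tag as described above: a temporal position in the stream plus a
    spatial parameter (here a pixel position; a real-world object id or a
    drawing could be carried the same way)."""
    video_id: str
    timestamp_s: float     # time parameter
    frame_number: int      # alternative/combined time parameter
    pixel_x: int           # spatial parameter
    pixel_y: int
    author: str

def serialize_tag(tag):
    """Wire format sent back to the tagging unit for storage and later retrieval."""
    return json.dumps(asdict(tag), sort_keys=True)
```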
In this way, coaches and team members may share common information by tagging contents in the video contents during playback. The tagging function allows coaches and others to quickly navigate through extensive amounts of video material.
The information on which the tagging is based may also originate from other sources than the cameras 10a-10e and may be inserted automatically by the system (i.e. not by the user). In one embodiment the tag comprises information gathered by a heart rate monitor in the form of a watch, chest strap or similar wearable device worn by the athlete. The tag may comprise information relating to heart rate, GPS position, step count and similar information relating to the health or physical condition of the athlete. The wearable device comprises means for communication in order to transmit the gathered information to the system. The communication means arranged in the wearable device may for example be Bluetooth, ANT or other low-power radio links.
In one embodiment, the information gathered by the wearable device is only shown once the user inserts a tag on the athlete wearing said device. In other embodiments the information gathered from the wearable device is always shown for all connected athletes irrespective of the used tags.
Drawing
In a more refined embodiment, the spatial parameter of the tag represents a drawing made by one of the users 60a-60c in a presented video frame during playback. The drawing may, for instance, be a geometrical object such as a line, circle or rectangle, as is illustrated in Figure 7a-b. This can advantageously be used as follows.
The in-frame drawing feature enables the user 60a-60c to draw in the videos captured by one of the network cameras. For example, the user could mark the position of an athlete or draw how he should act in the target field. The in-frame drawing may be performed by touching or clicking on the display 52a-52c of the client device 50a-50c.
An example is seen in Figure 7a, where six athletes in the form of soccer players F1-F6 are shown on the real-world target field 20 in the form of a soccer pitch. The soccer player F1 is not marked, whereas the positions of the soccer players F2-F4 are all marked, as indicated by the surrounding circles. The dashed line between the soccer players F2-F4 is drawn by the user 60a-60c, for example to create a virtual line of defenders on the pitch and/or to connect the positions of the individual players F2-F4 to each other. The user 60a-60c may also indicate a special area of interest in the target field 20, here marked as a lined box. This area may represent an area which the user 60a-60c wishes the players F1-F6 to move to, or an area which is especially important for other reasons. The position of the soccer player F6 is marked by the surrounding circle, and the desired direction of movement of the player F6 is indicated by drawing an arrow in that direction.
Figure 7b shows an embodiment where the user or the system has generated a plurality of virtual areas (AREA 1, AREA 2, AREA 3, AREA 4, AREA 5) on the target field 20. The virtual areas may for example be used to differentiate between different tactical areas of the target field 20, to help visualize different team and player setups and formations. AREA 1 and AREA 5 may for example be the tactical areas where the football goals and the defensive team formations are set up, AREA 3 is the center field area, and AREA 2 and AREA 4 are the offensive team formation areas between the goals and the center circle. Although the areas in Figure 7b are shown in the form of a rectangle, it should be understood that the areas could have other shapes, such as a square, triangle, circle or a grid. The areas may also be in the form of a text string, images, animations etc.
In one embodiment the user 60a-60c draws and marks out the areas on the target field 20. In an alternative embodiment the system is provided with pre-saved areas, for example stored in the back-end storage 41 or in the tagging unit 49. The pre-saved areas can be activated by the system itself or by a user 60a-60c. The pre-saved areas may be generated using parameters automatically gained or generated from the target field 20 and/or by user input regarding parameters of the target field 20. The pre-saved areas may for example be templates used by coaches, athletes or sports journalists. Although the areas in Figure 7b are shown in one plane, it should be understood that the tags, such as a drawing or a pre-defined area, can be used in video streams showing different perspectives of the target field 20.
When the same or another user watches the same scene, but using a second one of the network cameras mounted in a different position, the features that were drawn using the first camera will still appear to be in the same position in the real world. When the user draws something in a frame from the first camera, the pixel coordinates can be converted to directions, or rays, in the three dimensional world space, using the calibration of the first camera. The directions, or rays, can be converted to positions in the world space by finding the intersection between the direction, or ray, and the ground plane of the sport field. The positions on the ground plane can be projected onto frames from the second camera, using the calibration of the second camera.
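The pixel-to-ground-to-pixel chain described above can be sketched as follows, assuming a pinhole model with intrinsic matrix K, world-to-camera rotation R and a camera position in world coordinates (in a real deployment these would come from the calibration metadata saved by the back-end; the ground plane is taken as y = 0):

```python
import numpy as np

def pixel_to_ground(K, R, cam_pos, pixel):
    """Convert a pixel in one calibrated camera to a world-space ray and
    intersect that ray with the ground plane y = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam            # ray direction in world space
    t = -cam_pos[1] / ray_world[1]       # parameter where the ray reaches y = 0
    return cam_pos + t * ray_world

def ground_to_pixel(K, R, cam_pos, point):
    """Project a ground-plane point into another calibrated camera."""
    p_cam = R @ (point - cam_pos)
    p_img = K @ p_cam
    return p_img[:2] / p_img[2]
```

Drawing in camera one, converting to the ground plane, and projecting with camera two's calibration makes the drawn feature appear at the same real-world position in both views.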
If several users 60a-60c are watching the same video stream, in some embodiments the drawing made by one user 60a will appear on the client devices 50b, 50c of the other users 60b, 60c. The tags can thus be shared online in real time. In one embodiment, the tag is transmitted by the client device of the user 60a making the tag back to the tagging unit 49 of the back-end functionality 40. The tag is stored in the back-end storage 41 and the other users 60b, 60c can retrieve, via their respective client devices 50b, 50c, the tags generated by the first user 60a. In this way the coach 60a can draw instructions on one device 50a which are seen on the client devices 50b, 50c of the athletes.
The combination of editing tools and the tagging system yields an improved video system. Coaches and other analysts are allowed to visualize events directly in the video and demonstrate tactics on a screen using lines, arrows, hand-drawn objects etc., in order to give more detailed and efficient feedback to the athletes.
Figures 8a-b disclose a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users 60a-60c over a data network 30. Figure 8a illustrates a first part of said method. In a first step 110, one or more network video cameras are provided and positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
For the purpose of calibrating the one or each of the network video camera(s), the method also involves obtaining 120 a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field. The obtained relative position and orientation is saved 130 as metadata for the respective network video camera.
The system then receives 140, over the data network, the respective raw video segment(s) recorded by the network video camera(s). The respective raw video segment(s) are then stored or buffered 150 in a cloud-based video storage. In a next step the stored or buffered video contents and the saved metadata are streamed 160 for the respective network video camera onto the data network.
Figure 8b illustrates a method for use in respective client devices for use by a respective one of the plurality of users. The client device receives 170 the streamed video contents and the metadata. In a next step the client device determines 180 a chosen virtual pan-tilt-zoom value. The video contents are presented 190, on a display of the client device, by projecting directions, or rays, in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.

General inventive concepts
As is clear from the above description, one inventive aspect can be seen as a system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
The system comprises one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
The system also comprises a back-end camera calibration unit which is adapted to:
- obtain a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field; and
- save the obtained relative position and orientation as metadata for the respective network video camera.
The system also comprises a back-end video processing unit, connected to the data network and adapted to:
- receive the respective raw video segment(s) recorded by the network video camera(s); and
- store or buffer video contents of the respective raw video segment(s) in a back-end video storage.
Furthermore, the system comprises a back-end video streaming unit adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network.
Finally, the system comprises a plurality of client devices for use by the plurality of users. Each client device is adapted to:
- receive the streamed video contents and the metadata;
- determine a chosen virtual pan-tilt-zoom value; and
- present, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
Another inventive aspect can be seen as a method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network.
The method involves providing one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network.
For the purpose of calibrating the or each of the network video camera(s), the method also involves:
- obtaining a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field; and
- saving the obtained relative position and orientation as metadata for the respective network video camera.
The method also involves:
- receiving over the data network the respective raw video segment(s) recorded by the network video camera(s); and
- storing or buffering video contents of the respective raw video segment(s) in a cloud-based video storage.
Furthermore, the method involves streaming the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network.
Finally, the method involves, in respective client devices for use by a respective one of the plurality of users:
- receiving the streamed video contents and the metadata;
- determining a chosen virtual pan-tilt-zoom value; and
- presenting, on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.

Claims

1. A system for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users (60a-60c) over a data network (30), the system (100) comprising: one or more network video cameras (10a-10e) positioned to make a video recording of a real-world target field (20), or a respective subarea thereof, and output a respective raw video segment onto the data network (30);
a back-end camera calibration unit (42) which is adapted to obtain a relative position and orientation of the respective network video camera (10a-10e) with respect to a three dimensional world space for the target field (20), and to save the obtained relative position and orientation as metadata for the respective network video camera (10a-10e);
a back-end video processing unit (44), connected to the data network (30) and adapted to receive the respective raw video segment(s) recorded by the network video camera(s) (10a-10e), and to store or buffer video contents of the respective raw video segment(s) in a back-end video storage (41);
a back-end video streaming unit (46) adapted to stream the stored or buffered video contents and the saved metadata for the respective network video camera (10a-10e) onto the data network (30); and
a plurality of client devices (50a-50c) for use by the plurality of users (60a- 60c), wherein each client device (50a-50c) is adapted to:
receive the streamed video contents and the metadata;
determine a chosen virtual pan-tilt-zoom value; and
present, on a display (52a-52c) of the client device, the video contents by projecting directions, in the three dimensional world space, onto the display (52a-52c) based on the chosen virtual pan-tilt-zoom value and the metadata.
2. The system as defined in claim 1, wherein the back-end camera calibration unit (42) is further adapted to obtain intrinsic parameters of the respective network video camera (10a-10e) with respect to the three dimensional world space for the target field (20), and to save the obtained intrinsic parameters as metadata for the respective network video camera (10a-10e).
3. The system as defined in claim 1 or 2, wherein the system comprises two or more network video cameras (10a-10e), and wherein the back-end video processing unit (44) is further adapted to analyze the respective raw video frames as recorded by the two or more network video cameras (10a-10e) and to synchronize these frames.
4. The system as defined in claim 3, wherein the back-end video processing unit (44) or the video streaming unit (46) is further configured to generate a stacked video stream from the synchronized frames of the respective raw video segments from the two or more network video cameras (10a-10e).
5. The system as defined in any preceding claim, wherein each client device (50a-50c) is further adapted to receive a tag inserted by the user (60a-60c) in the video contents presented on the display (52a-52c).
6. The system as defined in claim 5, wherein the tag comprises a spatial parameter which represents a drawing made by the user (60a-60c) in the video contents presented on the display (52a-52c).
7. The system as defined in any of claim 1 to 4, wherein each client device (50a-50c) is further adapted to receive a tag inserted by the system in the video contents presented on the display (52a-52c).
8. The system as defined in any of claim 5 to 7, wherein the tag comprises a time parameter and a spatial parameter.
9. The system as defined in any of claim 5 to 8, wherein the system further comprises a tagging unit (49) adapted to receive the tag and store the tag in the back-end storage (41).
10. The system as defined in claim 9, wherein the tagging unit (49) allows the tag to be shared with a plurality of client devices (50a-50c).
11. The system as defined in any preceding claim, wherein the chosen virtual pan-tilt-zoom value defines a projection plane of the one or more network video cameras (10a-10e).
12. The system as defined in any preceding claim, wherein the system (100) further comprises a tracking unit (48) configured to choose the virtual pan-tilt-zoom value.
13. The system as defined in claim 12, wherein the tracking unit (48) is configured to choose the virtual pan-tilt-zoom value by detecting an estimated current activity center in the ongoing activity in the target field (20).
14. The system as defined in any preceding claim, wherein the chosen virtual pan-tilt-zoom value is determined by the user (60a-60c).
15. A method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network, wherein the method comprises:
providing (110) one or more network video cameras positioned to make a video recording of a real-world target field, or a respective subarea thereof, and output a respective raw video segment onto the data network;
obtaining (120) a relative position and orientation of the respective network video camera with respect to a three dimensional world space for the target field;
saving (130) the obtained relative position and orientation as metadata for the respective network video camera;
receiving (140) over the data network the respective raw video segment(s) recorded by the network video camera(s); storing or buffering (150) video contents of the respective raw video segment(s) in a cloud-based video storage;
streaming (160) the stored or buffered video contents and the saved metadata for the respective network video camera onto the data network,
wherein the method further involves, in respective client devices for use by a respective one of the plurality of users:
- receiving (170) the streamed video contents and the metadata;
- determining (180) a chosen virtual pan-tilt-zoom value; and
- presenting (190), on a display of the client device, the video contents by projecting directions in the three dimensional world space onto the display based on the chosen virtual pan-tilt-zoom value and the metadata.
PCT/SE2017/050364 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, ptz, video functionality to a plurality of users over a data network WO2017180050A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US16/092,467 US10834305B2 (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network
CN201780021360.1A CN108886583B (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to multiple users over a data network
EP17782745.8A EP3443737B1 (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, ptz, video functionality to a plurality of users over a data network
CN202111207059.4A CN114125264B (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom video functionality
US17/092,971 US11283983B2 (en) 2016-04-11 2020-11-09 System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1650488-8 2016-04-11
SE1650488 2016-04-11

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/092,467 A-371-Of-International US10834305B2 (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network
US17/092,971 Continuation US11283983B2 (en) 2016-04-11 2020-11-09 System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network

Publications (1)

Publication Number Publication Date
WO2017180050A1 true WO2017180050A1 (en) 2017-10-19

Family

ID=60041713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2017/050364 WO2017180050A1 (en) 2016-04-11 2017-04-11 System and method for providing virtual pan-tilt-zoom, ptz, video functionality to a plurality of users over a data network

Country Status (4)

Country Link
US (2) US10834305B2 (en)
EP (1) EP3443737B1 (en)
CN (2) CN108886583B (en)
WO (1) WO2017180050A1 (en)


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846633B (en) * 2016-09-18 2023-07-14 腾讯科技(深圳)有限公司 Live broadcast method and system
JP6497530B2 (en) * 2017-02-08 2019-04-10 パナソニックIpマネジメント株式会社 Swimmer status display system and swimmer status display method
GB2560948B (en) * 2017-03-29 2020-12-30 Imagination Tech Ltd Camera Calibration
US10322330B2 (en) 2017-10-03 2019-06-18 Fanmountain Llc Systems, devices, and methods employing the same for enhancing audience engagement in a competition or performance
CN114073096A (en) * 2019-06-14 2022-02-18 瑞典爱立信有限公司 Method for providing video content to a production studio by selecting an input video stream from a plurality of wireless cameras and corresponding stream controller
CA3046609A1 (en) 2019-06-14 2020-12-14 Wrnch Inc. Method and system for extrinsic camera calibration
WO2021089130A1 (en) * 2019-11-05 2021-05-14 Barco Zone-adaptive video generation
KR20220156893A (en) 2020-03-20 2022-11-28 힌지 헬스 인크. Method and system for matching 2D human poses from multiple views
US12047649B2 (en) * 2020-09-16 2024-07-23 Fanmountain Llc Devices, systems, and their methods of use in generating and distributing content
DE102020131576A1 (en) * 2020-11-27 2022-06-02 SPORTTOTAL TECHNOLOGY GmbH Generating a panoramic map to capture a scene with multiple cameras
US11657578B2 (en) 2021-03-11 2023-05-23 Quintar, Inc. Registration for augmented reality system for viewing an event
US11645819B2 (en) 2021-03-11 2023-05-09 Quintar, Inc. Augmented reality system for viewing an event with mode based on crowd sourced images
US12028507B2 (en) * 2021-03-11 2024-07-02 Quintar, Inc. Augmented reality system with remote presentation including 3D graphics extending beyond frame
US12003806B2 (en) * 2021-03-11 2024-06-04 Quintar, Inc. Augmented reality system for viewing an event with multiple coordinate systems and automatically generated model
US11527047B2 (en) 2021-03-11 2022-12-13 Quintar, Inc. Augmented reality system for viewing an event with distributed computing
EP4099704A1 (en) * 2021-06-04 2022-12-07 Spiideo AB System and method for providing a recommended video production


Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
AU2002303082A1 (en) * 2001-01-26 2002-09-12 Zaxel Systems, Inc. Real-time virtual viewpoint in simulated reality environment
US8427538B2 (en) * 2004-04-30 2013-04-23 Oncam Grandeye Multiple view and multiple object processing in wide-angle video camera
FR2889761A3 * 2005-08-09 2007-02-16 Total Immersion System allowing a user to locate a camera in order to be able to insert, quickly and in an adjusted manner, images of virtual elements into video images of real elements captured by the camera
JP2008017097A (en) * 2006-07-05 2008-01-24 Sharp Corp Video display device and label setting method for input terminal
US20100259595A1 (en) * 2009-04-10 2010-10-14 Nokia Corporation Methods and Apparatuses for Efficient Streaming of Free View Point Video
WO2011001180A1 (en) * 2009-07-01 2011-01-06 E-Plate Limited Video acquisition and compilation system and method of assembling and distributing a composite video
US20120294366A1 (en) * 2011-05-17 2012-11-22 Avi Eliyahu Video pre-encoding analyzing method for multiple bit rate encoding system
CN102368810B (en) 2011-09-19 2013-07-17 长安大学 Semi-automatic aligning video fusion system and method thereof
US9298986B2 (en) * 2011-12-09 2016-03-29 Gameonstream Inc. Systems and methods for video processing
US20150222815A1 (en) 2011-12-23 2015-08-06 Nokia Corporation Aligning videos representing different viewpoints
CN105074791B (en) * 2013-02-08 2018-01-09 罗伯特·博世有限公司 Adding user-selected markers to a video stream
CN103442221A (en) 2013-08-30 2013-12-11 程治永 Virtual PTZ system and method based on image zooming and cropping
US9727215B2 (en) 2013-11-11 2017-08-08 Htc Corporation Method for performing multimedia management utilizing tags, and associated apparatus and associated computer program product
CN104394400B 2014-12-09 2015-12-02 山东大学 Virtual simulation system and method for net-separated confrontation sports based on stereoscopic display
CN105262971B * 2015-11-30 2018-11-13 浙江宇视科技有限公司 Playback method and device for fisheye camera video recordings
US10268534B2 (en) * 2016-12-01 2019-04-23 Vmware, Inc. Methods and systems that use volatile event types in log files to narrow a search for potential sources of problems in a distributed computing system
US10818077B2 (en) * 2018-12-14 2020-10-27 Canon Kabushiki Kaisha Method, system and apparatus for controlling a virtual camera

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20120098925A1 (en) * 2010-10-21 2012-04-26 Charles Dasher Panoramic video with virtual panning capability
US20120249831A1 (en) * 2011-03-29 2012-10-04 Sony Corporation Method, apparatus and handset

Non-Patent Citations (9)

Title
A. Mavlankar et al., "Peer-to-peer multicast live video streaming with interactive virtual pan/tilt/zoom functionality", 2008 15th IEEE International Conference on Image Processing, San Diego, CA, 2008, pages 2296-2299, XP031374497 *
Aditya Mavlankar et al., "Pre-fetching based on video analysis for interactive region-of-interest streaming of soccer sequences", 2009 16th IEEE International Conference on Image Processing (ICIP), Piscataway, NJ, USA, 7 November 2009, pages 3061-3064, XP031628865 *
Arash Shafiei et al., "Jiku live: a live zoomable video streaming system", Proceedings of the 20th ACM International Conference on Multimedia (MM '12), New York, NY, USA, 2012, pages 1265-1266, XP055435976, retrieved from the Internet <URL:https://doi.org/10.1145/2393347.2396434> *
Håkon Kvale Stensland et al., "Bagadus: An integrated real-time system for soccer analytics", ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 10, January 2014, XP058035302, retrieved from the Internet <URL:http://dx.doi.org/10.1145/2541011> *
L. Toni et al., "Interactive free viewpoint video streaming using prioritized network coding", 2013 IEEE 15th International Workshop on Multimedia Signal Processing (MMSP), Pula, 2013, pages 446-451, XP032524260 *
See also references of EP3443737A4 *
Vamsidhar Reddy Gaddam et al., "Automatic Real-Time Zooming and Panning on Salient Objects from a Panoramic Video", Proceedings of the 22nd ACM International Conference on Multimedia (MM '14), New York, NY, USA, 2014, pages 725-726, XP058058647, retrieved from the Internet <URL:http://dx.doi.org/10.1145/2647868.2654882> *
Vamsidhar Reddy Gaddam et al., "Interactive Zoom and Panning from Live Panoramic Video", Proceedings of the Network and Operating System Support on Digital Audio and Video Workshop (NOSSDAV '14), New York, NY, USA, 2014, Article 19, 6 pages, XP055371958, retrieved from the Internet <URL:http://dx.doi.org/10.1145/2578260.2578264> *
Yildiz Alparslan et al., "Evaluation and Fair Comparison of Human Tracking Methods with PTZ Cameras", Virtual, Augmented and Mixed Reality: 7th International Conference, VAMR 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, XP047315870, retrieved from the Internet <URL:http://dx.doi.org/10.1007/978-3-319-21067-4_17> *

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2020068373A1 (en) * 2018-09-26 2020-04-02 Qualcomm Incorporated Zoomed in region of interest
US11082620B2 (en) 2018-09-26 2021-08-03 Qualcomm Incorporated Zoomed in region of interest
EP4124014A1 (en) 2021-07-20 2023-01-25 Spiideo AB Devices and methods for wide field of view image capture

Also Published As

Publication number Publication date
CN114125264B (en) 2024-07-05
US11283983B2 (en) 2022-03-22
CN114125264A (en) 2022-03-01
EP3443737A4 (en) 2020-04-01
CN108886583A (en) 2018-11-23
EP3443737B1 (en) 2024-06-12
US10834305B2 (en) 2020-11-10
US20210136278A1 (en) 2021-05-06
EP3443737C0 (en) 2024-06-12
CN108886583B (en) 2021-10-26
EP3443737A1 (en) 2019-02-20
US20190109975A1 (en) 2019-04-11

Similar Documents

Publication Publication Date Title
US11283983B2 (en) System and method for providing virtual pan-tilt-zoom, PTZ, video functionality to a plurality of users over a data network
US20200310630A1 (en) Method and system of selecting a view from a plurality of cameras
US8970666B2 (en) Low scale production system and method
JP7123523B2 (en) Method and system for automatically producing television programs
US9298986B2 (en) Systems and methods for video processing
US20160205341A1 (en) System and method for real-time processing of ultra-high resolution digital video
JP7080208B2 (en) Processing multiple media streams
US11677990B2 (en) Multi-camera live-streaming method and devices
US7868914B2 (en) Video event statistic tracking system
US20150297949A1 (en) Automatic sports broadcasting system
US9202526B2 (en) System and method for viewing videos and statistics of sports events
JP2019159950A (en) Information processing device and information processing method
US20220053245A1 (en) Systems and methods for augmenting video content
JP2024527858A (en) Adding augmented reality to a subview of a high-resolution central video feed
KR101573794B1 (en) System for producing additional image of player of interest
Wang et al. Personalising sports events viewing on mobile devices
Hayes Immerse yourself in the Olympics this summer [Olympic Games-broadcasting]
Hayes Olympic games broadcasting: Immerse yourself in the olympics this summer
KR20090039272A (en) System for providing mini map information of video broadcasting program

Legal Events

Date Code Title Description
NENP Non-entry into the national phase (ref country code: DE)
WWE WIPO information: entry into national phase (ref document number: 2017782745; country of ref document: EP)
ENP Entry into the national phase (ref document number: 2017782745; country of ref document: EP; effective date: 20181112)
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 17782745; country of ref document: EP; kind code of ref document: A1)