US11282545B2 - Editing of camera transition points in multicamera videos using overlapping video segments for scrubbing - Google Patents
- Publication number: US11282545B2 (application US16/898,775)
- Authority: US (United States)
- Prior art keywords: video, camera, camera video, transition points, server
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
- G11B27/11—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
- G11B27/34—Indicating arrangements
- H04N21/21805—Source of audio or video content, e.g. local disk arrays, enabling multiple viewpoints, e.g. using a plurality of cameras
- H04N21/41407—Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance, embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
- H04N21/4223—Cameras
- H04N21/47205—End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain, e.g. in time segments
- H04N21/8547—Content authoring involving timestamps for synchronizing content
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
- H04N5/247
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
- H04N5/265—Mixing
Definitions
- embodiments disclosed herein relate to video and image capturing systems and technology. More specifically, embodiments disclosed herein relate to systems and methods for editing of camera transition points in multi-camera video using overlapping video segments for scrubbing.
- at a sporting event (e.g., a swim meet, ski race, or track event), a spectator or family member of a participant may have a camera capturing video from a position located in designated areas such as the stands, and may be restricted from being near the race course during the event, making it difficult to capture different views of the participant.
- capturing different and varying views at different locations of the participant during the event is difficult, especially if only a single camera is used and limited to a designated area.
- finding transition points in the captured video and editing the video can be difficult.
- a computing system provides a video player.
- the video player plays a multi-camera video and provides transition points for a user to select to render the multi-camera video with the selected transition points.
- the user can be presented with a video player in a browser (e.g. a web browser) or a video player in a standalone application (e.g., a mobile application) to play back multi-camera videos.
- the video player can display transition points indicated as handles in a timeline of a recorded multi-camera video. The user can select or click on the handles and drag or scrub through the video with video preview in which clips of the video are shown.
- the user can release the handles when the user reaches a desired transition point.
- the user can select or press an option or button in the video player which can trigger or signal a backend rendering service, e.g., a cloud-based service, to generate rendered or finished high resolution video with desired camera transition points.
- a computer-implemented method for electronically editing a multi-camera video of a sporting event includes receiving, by a server, a request for editing a multi-camera video including a plurality of video streams from a video player of a user device.
- the server sends the multi-camera video as requested to the video player of the user device for display in the video player.
- the server causes the video player to display a widget including a timeline, a plurality of transition points, and a plurality of video segments.
- the server receives, from the user device, a set of modified transition points including a timestamp associated with each of the transition points.
- the server generates a multi-camera video based on the modified transition points.
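- The server-side flow above (receive modified transition points, then generate the video) can be sketched as a cut-list builder. This is a minimal illustration under assumed data shapes, not the patent's implementation; the `TransitionPoint` type and function name are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TransitionPoint:
    camera_id: int    # camera the rendered video switches to at this point
    timestamp: float  # seconds on the synchronized race timeline

def build_cut_list(transitions: List[TransitionPoint],
                   total_duration: float) -> List[Tuple[int, float, float]]:
    """Turn the modified transition points into (camera_id, start, end)
    cut instructions that a rendering engine could execute in order."""
    ordered = sorted(transitions, key=lambda t: t.timestamp)
    cuts = []
    for i, t in enumerate(ordered):
        # Each camera's cut runs until the next transition (or the end).
        end = ordered[i + 1].timestamp if i + 1 < len(ordered) else total_duration
        cuts.append((t.camera_id, t.timestamp, end))
    return cuts
```

Concatenating the resulting cuts in order yields a single continuous video with one camera active at a time.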
- a system includes a server having a processor and a memory; a communication network; and a database in electronic communication with the server.
- the server is configured to provide a widget that a user can load onto a computing device having a display screen and an input device, the widget permitting the user to electronically edit a multi-camera video.
- the server receives a request for editing a multi-camera video including a plurality of video streams from the widget.
- the server sends the multi-camera video as requested to the user device for display in a video player.
- the server causes the video player to display a timeline, a plurality of transition points, and a plurality of video segments in the widget.
- the server receives, from the user device, a modified set of transition points including a timestamp associated with each of the transition points.
- the server generates a multi-camera video based on the modified transition points.
- FIG. 1A illustrates an example of a system of cameras at an example sporting event, according to an embodiment.
- FIG. 1B illustrates one example of a system of cameras to generate a multi-camera video at another example sporting event, according to an embodiment.
- FIG. 2 illustrates one example of a multi-camera video screenshot of a video player including a video widget and camera transition point handles on a timeline, according to an embodiment.
- FIG. 3 illustrates one example of a multi-camera video screenshot of a video player showing initial transition points, according to an embodiment.
- FIG. 4 illustrates one example of a multi-camera video screenshot of a video player having overlapping video segments, according to an embodiment.
- FIG. 5 illustrates one example of a multi-camera video screenshot of a video player illustrating the overlapping video segments, where all areas of overlap represent scrubbing boundaries, according to an embodiment.
- FIG. 6 illustrates one example of a multi-camera video screenshot of a video player recording a timecode of a transition point, according to an embodiment.
- FIG. 7 illustrates one example of a multi-camera video screenshot of a video player illustrating how the user can finalize the transition point editing, which passes the timecode of each newly selected transition point up to the rendering engine, according to an embodiment.
- FIG. 8 illustrates one example block diagram of a computing or data processing system for implementing editing of camera transition points in multi-camera video using overlapping video segments, according to an embodiment.
- FIG. 9 illustrates one example of a flow diagram of an operation for implementing editing of camera transition points in multi-camera video using overlapping video segments for scrubbing, according to an embodiment.
- a sporting event can have a set path or course for participants.
- a spectator (e.g., a family member or the like) may be restricted from being near the race course during the event to capture different views of the participant.
- capturing different and varying views at different locations of the participant during the event is difficult, especially if only a single camera is used.
- Sporting events generally have timing systems indicating start and finish times (and sometimes different intermediate times often referred to as split times) of a participant racing on or around the course.
- the timing systems are generally not tied or synchronized to camera systems. Thus, when editing video captured by a spectator, it is difficult to know the time during the event that corresponds to a moment or frame in the captured video of the participant.
- Embodiments described herein are directed to systems and methods that enable improved editing of videos of sporting events.
- the captured video is time synchronized with the electronic starting system, the electronic timing system, or a combination thereof, to provide a better connection to the timing of the sporting event in which the video is captured.
- the integration with the electronic starting system, the electronic timing system, or combination thereof can be used to know when a race starts relative to a video frame corresponding to that start time.
- the time synchronization enables capturing of the frame in each of the video feeds corresponding to a time for switching camera angles (e.g., switching to a different camera in the system) so that the resulting multi-camera video smoothly transitions between camera angles with no perceptible skipped or repeated frames.
- Techniques for editing of camera transition points in multi-camera video using overlapping video segments for scrubbing are disclosed. Such techniques generally are not previously possible, as most video of sporting events is captured in a manner that is not tied to the timing of the sporting event.
- the disclosed techniques enable an end user to edit a multi-camera video easily in a browser or application, eliminating the need to use sophisticated video production software.
- a user is presented with a video player in a web browser or standalone application of a computing system to playback multi-camera videos.
- the video player can display transition points indicated as handles in a timeline of a recorded multi-camera video. Each of these handles represents a timecode associated with a camera transition point.
- the user can select or click on the handles and drag or scrub through the video with video preview in which clips of the video are shown. The user can release the handles when the user reaches a desired transition point.
- the user can select or press an option or button in the video player which sends the newly selected camera transition point timecodes to a backend rendering service, e.g., a cloud-based service, and trigger such service to generate rendered or finished high resolution video with desired camera transition points with no perceptible skipped or repeated frames.
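- The hand-off from the video player to the backend rendering service can be sketched as a simple payload builder. The endpoint shape, field names, and helper are assumptions for illustration, not from the patent.

```python
import json

def finalize_request(video_id: str, transition_timecodes: list) -> str:
    """Serialize the message a video player might send to the backend
    rendering service when the user finalizes editing: one timecode per
    newly selected camera transition point."""
    return json.dumps({
        "video_id": video_id,
        # Timecodes in seconds on the synchronized timeline, rounded to
        # millisecond precision for transport.
        "transition_timecodes": [round(t, 3) for t in transition_timecodes],
    })
```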
- An event generally includes a sporting event.
- a sporting event generally includes a set path or course performed for a limited duration, with the goal of achieving the fastest (i.e., shortest) time for completing the set path or course.
- the set path or course can be completed one or multiple times, depending on the sporting event. Suitable examples of sporting events include, but are not limited to, a swim meet, a ski event (e.g., skiing or snowboarding), a track event, bicycle racing, car racing, motorcycle racing, or the like.
- a sporting event includes an electronic timing system.
- the electronic timing system can also include an electronic starting system for signaling the start of the sporting event.
- Cameras can be placed in set locations to capture different views of the participants throughout the course.
- the cameras can be situated such that different views or lanes (of the course) can be captured by the cameras without the cameras being individually manned or controlled.
- the cameras can be time synchronized with the electronic timing system, the electronic starting system, or a combination thereof, of the sporting event and the multi-camera videos can be time synchronized with the results of individual participants.
- Such a video capturing system can tie exact frames in video footage from each camera to start times for the sporting event.
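- Because the camera clocks are synchronized to the logging device, a logged race-start timestamp can be mapped to an exact frame in any camera's footage. A minimal sketch, assuming a constant frame rate and a known recording start time for each camera (the function name and parameters are illustrative):

```python
def frame_at_event_time(race_start_ts: float, event_offset_s: float,
                        camera_start_ts: float, fps: float) -> int:
    """Map a moment in the race (seconds after the logged start signal)
    to a frame index in one camera's recording. All wall-clock
    timestamps come from clocks synchronized to the logging device."""
    wall_time = race_start_ts + event_offset_s
    # Frames elapsed since this camera began recording.
    return round((wall_time - camera_start_ts) * fps)
```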
- a browser or web browser, as used herein, includes a software application for accessing information on a network such as the Internet.
- a widget includes an application, or a component of a user interface, that enables a user to perform a function or to access a service.
- FIG. 1A illustrates an example of a system 85 of cameras 101 - 105 at an example sporting event 86 , according to an embodiment.
- the sporting event 86 in the illustrated embodiment is intended to be exemplary and can be representative of any event in which a participant 87 follows a set course 88 (e.g., once) from start 89 to finish 90 . It is to be appreciated that the course 88 can be repeated (i.e., participant must return from the finish 90 to the start 89 , in which case the start 89 may also represent the end of the race).
- the illustrated embodiment shows cameras 101 - 105 oriented toward the course 88 .
- the cameras 101 - 105 are oriented so that the participant 87 can be captured on video from the start 89 to the finish 90 .
- There are areas of overlap between the views of the different cameras 101 - 105 but the cameras 101 - 105 are generally oriented to capture unique views of the participant 87 .
- the cameras 101 - 105 are placed such that a pattern of transitions between the cameras 101 - 105 is known. For example, there would be no reason to begin capturing video at camera 105 and then transition to camera 101 during a race going from start 89 to finish 90 . Rather, a progression of camera 101 , camera 102 , camera 103 , camera 104 , camera 105 would provide a view of the participant 87 throughout the entire race from start 89 to finish 90 .
- the cameras 101 - 105 may be movable to change an orientation of the view, provided the pattern (e.g., 101 - 102 - 103 - 104 - 105 ) of transitions between the cameras is maintained.
- the cameras 101 - 105 can include a sensor such as, but not limited to, a motion sensor so that the cameras 101 - 105 can automatically follow a participant along the course for a portion of the course 88 .
- FIG. 1B illustrates one example of a system 100 of cameras 101 to 106 to generate multi-camera event videos, according to an embodiment.
- the system 100 can be used to capture multi-camera videos of a sporting event.
- the sporting event is a swimming event at swimming pool 150 . It will be appreciated that this is an example of a sporting event and that the actual venue and type of sporting event can vary according to the principles described herein.
- swimming pool 150 includes eight lanes 161 - 168 . Any number of lanes (e.g., fewer than eight or more than eight) can be used for a swim meet for swimming pool 150 .
- the number of lanes 161 - 168 having competitors can be less than the number of lanes in the pool. That is, in an embodiment, the swimming pool 150 can include eight lanes, with competitors present in some, but not all, of the lanes.
- any number and type of cameras 101 - 106 can be used.
- the system 100 can include one or more cameras 101 - 106 above the water, one or more cameras 101 - 106 below the water, as well as combinations thereof.
- the system 100 includes at least two cameras 101 - 106 .
- the cameras 101 - 106 may have a changeable view provided the cameras 101 - 106 still capture a unique view relative to each other.
- the cameras 101 - 106 may also have some overlap in their views so that the participant can be captured throughout the course without interruption. Accordingly, overlapping views which still provide different views are considered to be unique for purposes of this description.
- six cameras 101 - 106 are used. It is to be appreciated that any number of cameras greater than one can be implemented for system 100 .
- the cameras 101 - 106 can include any camera capable of recording video. Suitable examples of the cameras 101 - 106 include, but are not limited to, compact or point-and-shoot cameras, zoom compact cameras, advanced compact cameras, adventure cameras, digital single lens reflex (DSLR) cameras, compact mirrorless cameras and medium format camera types, remote cameras connected to a common video recording device (similar to a multi-camera video surveillance system), or camera modules connected to a computing or data processing system as shown, e.g., in FIG. 8 , or connected to a microprocessor board such as, e.g., a Raspberry Pi® board.
- Each camera 101 - 106 or video recording device contains an accurate clock that can be synchronized to the time of a logging device 107 .
- Each of cameras 101 - 106 can have one or more memories to store video data and internal clocks.
- the cameras 101 - 106 can include limited memory onboard the camera 101 - 106 for storing video data and can instead be electronically connected to a memory separate from the cameras 101 - 106 for storing the video data.
- system 100 can implement multiple cameras 101 - 106 for other types of sporting events such as, but not limited to, a track event or a racing event in which participants race on or around a set path having start and finish times to place the participants.
- Cameras 101 - 106 can be installed in fixed locations around the swimming pool 150 to capture different views/angles of participants swimming in swimming pool 150 .
- cameras 101 and 102 can capture videos of views on the left and right side of the starting blocks 171 - 179 of participants swimming in lanes 161 - 168 of swimming pool 150 .
- Camera 101 can capture video of swimmers in lanes 161 - 164 and camera 102 can capture video of swimmers in lanes 165 - 168 at the starting blocks 171 - 179 of swimming pool 150 .
- Cameras 103 and 104 can capture videos of swimmers under water from the bottom left and right side of turn end 180 of swimming pool 150 of participants swimming in lanes 161 - 168 .
- Cameras 105 and 106 can capture videos from above swimming pool 150 , looking down at the swimmers from the left side and right side of the turn end 180 toward lanes 161 - 168 . These camera placements are examples, and actual camera placement along with what is being recorded can vary according to the principles described herein.
- a pattern for transitioning between the cameras can be determined. For example, for a swimmer in lane 161 , the pattern would be camera 101 , camera 105 , camera 103 , camera 105 , camera 101 in a race in which the start and finish are both at starting block 171 . If additional laps of the pool 150 are to be completed, the pattern repeats. This pattern is established based on the course or path of the sporting event. For example, it is known that a view starting with camera 105 would not be desirable, as the participant may not even be in view of the camera 105 at the beginning of the race.
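- The per-lane pattern above, repeated over additional laps, can be sketched as follows. A minimal illustration with the camera identifiers from this example; the helper name is hypothetical.

```python
def camera_pattern(per_lap_pattern: list, laps: int) -> list:
    """Expand a single-lap camera progression (e.g., the lane 161 pattern
    above) into a full-race pattern. Consecutive duplicates at lap
    boundaries are collapsed, since the video is already on that camera."""
    full = []
    for _ in range(laps):
        for cam in per_lap_pattern:
            if not full or full[-1] != cam:
                full.append(cam)
    return full
```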
- the logging device 107 is electronically connected to an electronic starting system 110 and an electronic timing system 111 of a sporting event (e.g., swim meet at swimming pool 150 ) in which the start times of each race can be logged 113 and therefore tied to exact frames in video footage captured from cameras 101 - 106 .
- logging device 107 upon receiving a start signal from the electronic starting system 110 for the sporting event, can log time/timecode data at the instant of the start of every race/event in the log 113 .
- the log 113 can be stored in a database within logging device 107 .
- the log 113 can be stored in a cloud storage electronically connected to logging device 107 .
- the log 113 can be stored in a database having a portion within logging device 107 and a portion stored in a cloud storage.
- the logging device 107 can be configured to automatically upload the information on a periodic basis (e.g., every hour, few hours, or the like) without operator interaction.
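- The logging behavior described above can be sketched as follows. This is an illustrative sketch only; the class name, entry fields, and method names are assumptions, not from the patent.

```python
import json
import time

class StartLogger:
    """Sketch of the logging device 107: record a timecode for each start
    signal together with race metadata, and export the log for upload."""

    def __init__(self):
        self.log = []

    def on_start_signal(self, race_meta: dict, now: float = None) -> dict:
        # Called when the electronic starting system signals a start.
        # `race_meta` would hold data queried from the timing system or
        # scoreboard (event number, heat number, ...).
        entry = {"timecode": time.time() if now is None else now, **race_meta}
        self.log.append(entry)
        return entry

    def export(self) -> str:
        # Serialized for periodic upload to cloud storage.
        return json.dumps(self.log)
```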
- the logging device 107 is electronically connected to the electronic starting system 110 and electronically connected to an electronic timing system 111 and a scoreboard 112 of a sporting event in which exact start times of each race can be logged along with race data 112 identifying each race such as, e.g., event number, heat number, or the like.
- race data 112 can include specific metadata (e.g., event number, heat number, race name, etc.). Race data 112 can be collected and obtained by logging device 107 electronically connected to scoreboard 112 and electronic timing system 111 . For example, logging device 107 can directly request or query the electronic timing system 111 or scoreboard 112 in real time, and the returned race data can be stored in log 113 along with time/timecode data.
- logging device 107 can log time/timecode data at the instant of the start of every race/event in a log 113 .
- log 113 can be stored in a database within logging device 107 or can be stored in a cloud storage connected to logging device 107 .
- Logging device 107 can also log race data 112 including specific metadata (e.g., event number, heat number, race name, etc.).
- the logging device 107 can be electronically connected to the electronic timing system 111 and scoreboard 112 of a sporting event in which race start times can be logged and race data 112 collected, with a user or operator 114 collecting comprehensive event results 116 , race data 113 , and video files 115 .
- the operator 114 can collect video files 115 including audio data, race data 113 including start logs, and results file 116 , and save the data on logging device 107 or upload the collected data to a cloud-based service or cloud storage or to be accessed by a browser or a standalone application and played by a video player as described herein.
- the cloud-based service includes a server 190 and a database 191 for storing the video files 115 and race data 113 received via a network 192 (e.g., the Internet or the like).
- the server 190 and information stored in database 191 can be made available to a user device 193 via the network 192 .
- the server 190 can generally be representative of a computing device (e.g., see FIG. 8 for additional details).
- the user device 193 can generally be representative of a computing device (e.g., see FIG. 8 for additional details).
- the user device 193 can generally include a web browser having a media player embedded therein.
- the user device 193 can include a media player within an application stored on the user device 193 .
- the user device 193 and media player can be used by the user to view and edit multi-camera videos in accordance with the principles described in FIGS. 2-7 and 9 below.
- FIG. 2 illustrates one example of a multi-camera video graphical user interface (GUI) 200 , according to an embodiment.
- the illustrated multi-camera video GUI 200 is representative of an in-browser (i.e., a web browser) video player 201 having an in-browser video widget 202 and camera transition point handles 203 on a timeline 204 .
- the GUI 200 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the video player 201 can be a standard video player embedded in a browser customized with widget 202 and transition point handles 203 to play back multi-camera video using the system 100 as described in FIG. 1B .
- the video player can be an HTML5 video player embedded in a web browser.
- in-browser video player 201 can be configured to provide additional functionality to implement the techniques disclosed herein.
- the video files 115 and race data 113 can be sent from the server 190 to the web browser of the user device 193 for display via the network 192 (e.g., the Internet or the like).
- the video files may be stored in database 191 ( FIG. 1B ) in separate files (e.g., one file per camera 101 - 106 ; it is also possible to have multiple files per camera 101 - 106 , each corresponding to a part of the sporting event).
- When the video files are loaded into the video player 201 , the user interacts with them as if the separate video files were integrated in a single file.
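- Presenting separate per-camera files as one integrated timeline requires mapping a global playback position back to a particular file and a local offset within it. A minimal sketch, assuming a hypothetical ordered file layout:

```python
def locate(global_t: float, files: list):
    """Map a position on the integrated timeline the user sees to a
    (file_id, local_offset) pair in the separate per-camera files.
    `files` is an ordered list of (file_id, duration_seconds)."""
    elapsed = 0.0
    for file_id, duration in files:
        if global_t < elapsed + duration:
            return file_id, global_t - elapsed
        elapsed += duration
    raise ValueError("position is beyond the end of the video")
```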
- FIG. 3 illustrates one example of a multi-camera video GUI 300 , according to an embodiment.
- the GUI 300 includes a screenshot of an in-browser video player 301 showing initial transition points 302 on a timeline 304 .
- the GUI 300 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the initial transition points 302 can be determined by the server 190 based on data collected from sensors embedded in the race course, determined based on computer vision techniques to identify when an athlete is at a specific point in the race course, or calculated based on default rules, e.g., assuming constant pace.
- the default rules can be stored, for example, in the database 191 .
- using sensors embedded in the race course can provide the most accurate transition points 302 . Because the transition points 302 are easily modifiable, assuming a constant pace can provide transition points 302 having enough accuracy. This can reduce a processing load on the server 190 when generating the transition points 302 .
- the initial transition points 302 can be calculated by the server 190 based on a length of the sporting event that was recorded.
- the sporting event can be a swim race having a distance of 100 yards (e.g., four lengths of a swimming pool). It is to be appreciated that the distance and number of laps is an example and that the actual race distance and course can vary beyond the stated values within the scope of the disclosure herein.
- a participant can have a final race time, e.g., of 55 seconds, with an average time per length of the pool of 13.75 seconds.
- a first transition point 302 can be 2 seconds after the start (0:02) and a second transition point 302 can be 2.5 seconds prior to the first turn at, e.g., (0:11.25)
- a third transition point 302 can be 2.5 seconds prior to the second turn (e.g., 0:25)
- a fourth transition point 302 can be 2.5 seconds prior to the third turn (e.g., 0:38.75)
- a fifth transition point 302 can be 2 seconds after the finish (e.g., 0:57). It is to be appreciated that the timing of these transition points 302 is an example, and can vary beyond the stated values (e.g., more or less than 2.5 seconds from the turn split, etc.).
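The constant-pace default rule described above can be sketched as a small calculation. The function name and the lead/lag parameters below are illustrative, not taken from the patent:

```python
def initial_transition_points(final_time_s, num_lengths,
                              start_lead_s=2.0, turn_lead_s=2.5, finish_lag_s=2.0):
    """Estimate initial camera transition points for a pool race,
    assuming the participant swims at a constant pace.

    Returns a list of timecodes in seconds from the race start: one
    transition after the start, one before each turn, and one after
    the finish.
    """
    length_time = final_time_s / num_lengths  # average time per pool length
    points = [start_lead_s]  # first transition shortly after the start
    # one transition shortly before each turn (num_lengths - 1 turns)
    for turn in range(1, num_lengths):
        points.append(turn * length_time - turn_lead_s)
    points.append(final_time_s + finish_lag_s)  # last transition after the finish
    return points

# The 55-second, four-length example from the text:
print(initial_transition_points(55.0, 4))
# → [2.0, 11.25, 25.0, 38.75, 57.0]
```

This reproduces the worked example: a 55-second race over four lengths averages 13.75 seconds per length, giving transitions at 0:02, 0:11.25, 0:25, 0:38.75, and 0:57.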
- FIG. 4 illustrates one example of a multi-camera video GUI 400 , according to an embodiment.
- the GUI is a screenshot of an in-browser video player 401 having overlapping video segments 402 . It is to be appreciated that the GUI 400 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the overlapping segments 402 can be loaded into the video player widget.
- there can be any number of segments 402 , e.g., segments 402 A through 402 I and so on.
- the extent of overlap of segments 402 can be as short as a few frames or as long as the entire video.
- Each segment 402 can be representative of video from a different camera (e.g., cameras 101 - 106 ) associated with a transition point 403 within timeline 404 .
- FIG. 5 illustrates one example of a multi-camera video GUI 500 , according to an embodiment.
- the GUI 500 is an in-browser video player 501 illustrating overlapping video segments. It is to be appreciated that the GUI 500 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the areas of overlap have two separate camera views for the same time period and represent “scrubbing boundaries” 502 .
- the scrubbing boundaries 502 are representative of the extent to which a camera transition point 503 can be moved by a user and still have a continuous video at the end. That is, moving a transition point outside of the scrubbing boundaries 502 can result in a discontinuous video (i.e., video with gaps in which no video of the sporting event is shown). It is to be appreciated that the scrubbing boundaries 502 can be extended all the way to the beginning and the end of the sporting event as well.
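A minimal sketch of enforcing the scrubbing boundaries, assuming each boundary is expressed as a pair of timecodes in seconds bracketing the overlap region (the function name is hypothetical):

```python
def clamp_transition(requested_s, boundary_start_s, boundary_end_s):
    """Keep a user-dragged transition point inside its scrubbing boundary.

    The boundary is the overlap region in which both camera views cover
    the same time period; a transition point outside it would leave a gap
    in the final video where no camera covers the sporting event.
    """
    return max(boundary_start_s, min(requested_s, boundary_end_s))
```

For example, with a boundary of 9.0–13.0 seconds, a drag to 14.0 seconds would be held at 13.0, while any position inside the overlap is accepted unchanged.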
- while scrubbing, the video screen will show the video from one of the two camera views fast-forwarding or rewinding, giving the user a clear idea of where in the video they would like the camera transition point 503 to occur. In this way, the transition points 503 can be moved to a customized position according to the user's selections.
- scrubbing refers to an interaction in which a user drags a cursor or playhead (e.g., a graphic line in the timeline 504 that represents the position, or frame, of the video being accessed) across a segment of a video to quickly locate specific points in the video. Scrubbing is a convenient way to quickly navigate a video file and is a common feature of modern video editing software.
- this common user interaction technique is implemented in a browser-based video player 501 to provide a convenient and intuitive way for the user to quickly select the points in time in the video the user wants a camera transition to occur.
- Initial camera transition points 503 can be indicated in video player timeline 504 with overlapping video segments extending to either side (i.e., earlier, later) of each transition point 503 .
- These overlapping segments represent the extent to which transition points 503 can be moved in each direction for the scrubbing boundaries 502 and can extend from several seconds before the race start to several seconds after the race conclusion.
- FIG. 6 illustrates one example of a multi-camera video GUI 600 , according to an embodiment.
- the GUI 600 is representative of an in-browser video player 601 recording a timecode of a transition point 602 on a timeline 604 . It is to be appreciated that the GUI 600 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the video timecode of the transition point 602 associated with that moment can be recorded by the video rendering service to use that specific point for a final video edit. That is, the timecode can be sent from the browser of the user device 193 to the server 190 via the network 192 for storage in the database 191 . In this way, control of transition points 602 for final recording can be obtained.
- FIG. 7 illustrates one example of a multi-camera video GUI 700 , according to an embodiment.
- the illustrated GUI 700 is an in-browser video player 701 passing timecodes of transition points 702 from the browser of the user device 193 to the server 190 via the network 192 for storage in the database 191 .
- the GUI 700 can be a standalone application (i.e., not integrated into an in-browser video player), according to other embodiments.
- the server 190 can use the transition points as received to select the consecutive video and audio segments to assemble into the final multi-camera video from the database 191 .
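Assuming the transition points arrive at the server as sorted timecodes, its selection of consecutive video segments might be sketched as follows (the function and camera names are illustrative):

```python
def build_cut_list(camera_ids, transition_points, race_start_s, race_end_s):
    """Turn ordered transition timecodes into a per-camera cut list.

    camera_ids: cameras in the order they should appear in the final video.
    transition_points: sorted timecodes (seconds) at which to switch cameras;
    there should be len(camera_ids) - 1 of them.
    Returns (camera_id, start_s, end_s) tuples that cover the race
    continuously, with each segment ending exactly where the next begins.
    """
    edges = [race_start_s] + sorted(transition_points) + [race_end_s]
    return [(cam, edges[i], edges[i + 1]) for i, cam in enumerate(camera_ids)]
```

Because adjacent segments share a boundary timecode, assembling them in order yields a continuous final multi-camera video with no gaps, which is the property the scrubbing boundaries are designed to preserve.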
- FIG. 8 illustrates one example block diagram of a computing or data processing system 800 , according to an embodiment.
- the computing or data processing system 800 can represent logging device 107 , a computer or microprocessor board connected to a camera, or a cloud-based system providing a cloud-based service described herein.
- FIG. 8 illustrates various components of a computing or data processing system 800 ; however, these components are not intended to represent any specific architecture or manner of interconnecting the components, as such details are not germane to the disclosed examples or embodiments.
- Network computers, other data processing systems, and other consumer electronic devices, which may have fewer or more components, may also be used with the disclosed examples and embodiments.
- computing system 800 includes a bus 801 , which is coupled to processor(s) 802 coupled to cache 804 , display controller 814 coupled to a display 815 , network interface 817 , non-volatile storage 806 , memory controller coupled to memory 810 , I/O controller 818 coupled to I/O devices 820 , and database(s) 812 .
- Processor(s) 802 can include one or more central processing units (CPUs), graphical processing units (GPUs), a specialized processor or any combination thereof.
- Processor(s) 802 can be single-threaded or multi-threaded.
- Processor(s) 802 can retrieve instructions from any of the memories including non-volatile storage 806 , memory 810 , or database 812 , and execute the instructions to perform operations described in the disclosed examples and embodiments.
- I/O devices 820 include mice, keyboards, printers, cameras and other like devices controlled by I/O controller 818 .
- the I/O device can be a combined input and output device.
- the I/O device 820 can be a display having an integrated touchscreen capable of receiving inputs from the user.
- Network interface 817 can include modems, wired and wireless transceivers, and combinations thereof, and can communicate using any type of networking protocol including wired or wireless wide area network (WAN) and local area network (LAN) protocols including LTE and Bluetooth® standards.
- Memory 810 can be any type of memory including random access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), Flash, or combinations thereof, which require power continually to refresh or maintain the data in the memory (i.e., volatile memory).
- the memory 810 can be either a volatile memory or a non-volatile memory.
- at least a portion of the memory can be virtual memory.
- Non-volatile storage 806 can be a mass storage device including a magnetic hard drive, a magneto-optical drive, an optical drive, a digital video disc (DVD) RAM, a flash memory, other types of memory systems, or combinations thereof, which maintain data (e.g., large amounts of data) even after power is removed from the system.
- the non-volatile storage 806 can include network attached storage (NAS) or connections to a storage area network (SAN) device, or the like.
- the non-volatile storage 806 can include storage that is external to the system 800 , such as in the cloud.
- memory devices 810 or database 812 can store data related to log 109 , electronic timing system 110 , scoreboard 111 and race data including video files from cameras 101 - 106 .
- memory devices 810 or database 812 can store videos and assembled clips of a sporting event.
- processor(s) 802 can be coupled to any number of external memory devices or databases locally or remotely by way of network interface 817 , e.g., database 812 can be secured storage in a cloud environment.
- Examples and embodiments disclosed herein can be embodied in a data processing system architecture, data processing system or computing system, or a computer-readable medium or computer program product. Aspects, features, and details of the disclosed examples and embodiments can take the form of hardware or software, or a combination of both, which can be referred to as a system or engine. The disclosed examples and embodiments can also be embodied in the form of a computer program product including one or more computer readable mediums having computer readable code which can be executed by one or more processors (e.g., processor(s) 802 ) to implement the techniques and operations disclosed herein.
- the computer readable medium can include a computer readable signal medium, a computer readable storage medium, or a combination thereof.
- a computer readable storage medium can include any tangible medium capable of storing a computer program for use by a programmable processor to perform functions described herein by operating on input data and generating an output.
- a computer program is a set of instructions that can be used, directly or indirectly, in a computer system to perform a certain function or determine a certain result.
- Examples of computer readable storage media include, but are not limited to, a floppy disk; a hard disk; a random access memory (RAM); a read-only memory (ROM); a semiconductor memory device such as, but not limited to, an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), Flash memory, or the like; a portable compact disk read-only memory (CD-ROM); an optical storage device; a magnetic storage device; other similar device; or suitable combinations of the foregoing.
- a computer readable signal medium can include a propagated data signal having computer readable instructions. Examples of propagated signals include, but are not limited to, an optical propagated signal, an electro-magnetic propagated signal, or the like.
- a computer readable signal medium can include any computer readable medium that is not a computer readable storage medium that can propagate a computer program for use by a programmable processor to perform functions described herein by operating on input data and generating an output.
- FIG. 9 illustrates one example of a flowchart of a method 900 for implementing in-browser editing of camera transition points in multi-camera video using overlapping video segments for scrubbing, according to an embodiment.
- the method 900 includes blocks 902 through 910 .
- initial multi-camera transition points are calculated by a computing system (e.g., the server 190 in FIG. 1B ).
- the computing system can calculate transition points based on default rules, e.g., assuming constant pace of the race.
- overlapping segments of multi-camera videos are loaded from a database (e.g., the database 191 ) into a video player video widget on a user device (e.g., the user device 193 ).
- the video player is an in-browser video player.
- the video player is in a standalone application.
- segments loaded in the video player can be lower resolution than the final multi-camera video to facilitate more responsive scrubbing and minimize bandwidth consumption of the network (e.g., the network 192 ).
- initial camera transition points are indicated in the video player timeline with overlapping video segments extending to either side (i.e., earlier, later) of each transition point.
- the overlapping segments can represent the extent to which transition points can be moved in each direction and can extend from several seconds before the race start to several seconds after the race conclusion.
- the selected transition point timecodes are passed to the backend rendering engine or service (e.g., the server 190 ), which uses the transition points received to select the consecutive video and audio segments to assemble into a final multi-camera video and generates the final multi-camera video.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Description
Claims (17)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/898,775 US11282545B2 (en) | 2019-06-12 | 2020-06-11 | Editing of camera transition points in multicamera videos using overlapping video segments for scrubbing |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962860745P | 2019-06-12 | 2019-06-12 | |
| US16/898,775 US11282545B2 (en) | 2019-06-12 | 2020-06-11 | Editing of camera transition points in multicamera videos using overlapping video segments for scrubbing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200395048A1 US20200395048A1 (en) | 2020-12-17 |
| US11282545B2 true US11282545B2 (en) | 2022-03-22 |
Family
ID=73744661
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/898,775 Active US11282545B2 (en) | 2019-06-12 | 2020-06-11 | Editing of camera transition points in multicamera videos using overlapping video segments for scrubbing |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US11282545B2 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12322178B2 (en) * | 2020-06-04 | 2025-06-03 | Hole-In-One Media, Inc. | Autonomous activity monitoring system and method |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9652459B2 (en) | 2011-11-14 | 2017-05-16 | Reel Coaches, Inc. | Independent content tagging of media files |
| US20190012383A1 (en) * | 2014-08-27 | 2019-01-10 | International Business Machines Corporation | Consolidating video search for an event |
- 2020-06-11 US US16/898,775 patent/US11282545B2/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9652459B2 (en) | 2011-11-14 | 2017-05-16 | Reel Coaches, Inc. | Independent content tagging of media files |
| US20190012383A1 (en) * | 2014-08-27 | 2019-01-10 | International Business Machines Corporation | Consolidating video search for an event |
Also Published As
| Publication number | Publication date |
|---|---|
| US20200395048A1 (en) | 2020-12-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7228695B2 (en) | Courseware recording method and device, courseware playback method and device, intelligent interactive tablet, and storage medium | |
| US11330195B2 (en) | Time synchronized cameras for multi-camera event videos | |
| CN108369816B (en) | Apparatus and method for creating video clips from omnidirectional video | |
| US11727958B2 (en) | Real time video special effects system and method | |
| JP2019160318A (en) | Information processing device, information processing method, and program | |
| US9564172B2 (en) | Video replay systems and methods | |
| US9621768B1 (en) | Multi-view media display | |
| JP6231804B2 (en) | Electronic device and control method thereof | |
| US10225598B2 (en) | System and method for visual editing | |
| TWI664855B (en) | Method and apparatus for playing back recorded video | |
| US12262158B2 (en) | Methods, systems, and media for generating a summarized video using frame rate modification | |
| CN106576190A (en) | 360 degree space image reproduction method and system therefor | |
| JP2010045765A (en) | Reproducing apparatus | |
| US11282545B2 (en) | Editing of camera transition points in multicamera videos using overlapping video segments for scrubbing | |
| JP6218480B2 (en) | Electronic device and control method thereof | |
| JP2020067716A (en) | Information processing apparatus, control method and program | |
| US9807350B2 (en) | Automated personalized imaging system | |
| JP2023177060A (en) | Playback device and method, system, program, and storage medium | |
| KR20190122053A (en) | object image tracking streaming system and method using the same | |
| CN121418593A (en) | Video processing method, video processing device, electronic equipment and computer readable storage medium | |
| EP3224798A1 (en) | System and method for visual editing | |
| CN120711161A (en) | Video processing method, video processing device and computer-readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |