WO2015192130A1 - Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline - Google Patents


Info

Publication number
WO2015192130A1
WO2015192130A1 (PCT/US2015/035830)
Authority
WO
WIPO (PCT)
Prior art keywords
audiovisual
audio
clips
baseline
content
Application number
PCT/US2015/035830
Other languages
French (fr)
Inventor
Mark T. GODFREY
Turner Evan Kirk
Ian S. SIMON
Nicholas M. KRUGE
Original Assignee
Godfrey Mark T
Turner Evan Kirk
Simon Ian S
Kruge Nicholas M
Application filed by Godfrey Mark T, Turner Evan Kirk, Simon Ian S, Kruge Nicholas M
Publication of WO2015192130A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/101 Collaborative creation, e.g. joint development of products or services
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/438 Presentation of query results
    • G06F16/4387 Presentation of query results by the use of playlists
    • G06F16/4393 Multimedia presentations, e.g. slide shows, multimedia albums
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • G06Q50/40

Abstract

A generally diverse set of audiovisual clips is sourced from one or more repositories for use in preparing a coordinated audiovisual work. In some cases, audiovisual clips are retrieved using tags such as user-assigned hashtags or metadata. Pre-existing associations of such tags can be used as hints that certain audiovisual clips are likely to share correspondence with an audio signal encoding of a particular song or other audio baseline. Clips are evaluated for computationally determined correspondence with an audio baseline track. In general, comparisons of audio power spectra, of rhythmic features, tempo, pitch sequences and other extracted audio features may be used to establish correspondence. For clips exhibiting a desired level of correspondence, computationally determined temporal alignments of individual clips with the baseline audio track are used to prepare a coordinated audiovisual work that mixes the selected audiovisual clips with the audio track.

Description

COORDINATED AUDIOVISUAL MONTAGE FROM SELECTED CROWD-SOURCED CONTENT WITH ALIGNMENT TO AUDIO BASELINE
TECHNICAL FIELD
The present invention relates generally to computational techniques including digital signal processing for audiovisual content and, in particular, to
techniques whereby a system or device may be programmed to produce a coordinated audiovisual work from individual clips.
BACKGROUND ART
Social media has, over the past decade, become an animating force for internet users and businesses alike. During that time, advanced mobile devices and applications have placed audiovisual capture in the hands of literally billions of users worldwide. At least in part as a result, the volume of audiovisual content amassed by users and, in some cases, posted to social networking sites and video sharing platforms has exploded. Audiovisual content repositories associated with video sharing services such as YouTube, Instagram, Vine, Flickr, Pinterest, etc. now contain huge collections of audiovisual content.
DISCLOSURE OF INVENTION(S)
Computational system techniques have been developed that provide new ways of connecting users through audiovisual content, particularly audiovisual content that includes music. For example, techniques have been developed that seek to connect people in one of the most authentic ways possible, capturing moments at which these people are experiencing or expressing themselves relative to a particular song or music and combining these moments together to form a coordinated audiovisual work. In some cases, captured moments take the form of video snippets posted to social media content sites. In some cases, expression takes the form of audiovisual content captured in a karaoke-style vocal capture session. In some cases, captured moments or expressions include extreme action or point of view (POV) video captured as part of a sporting contest or activity and set to music. Often (or even typically), the originators of these video snippets have never met and simply share an affinity for a particular song or music as a "backing track" to their lives.
In general, candidate audiovisual clips may be sourced from any of a variety of repositories, whether local or network-accessible. Candidate clips may be retrieved using tags such as user-assigned hashtags or metadata. In this way, pre-existing associations of such tags can be used as hints that certain audiovisual clips are likely to have correspondence with a particular song or other audio baseline. In some cases, tags may be embodied as timeline markers used to identify particular clips or frames within a larger audiovisual signal encoding. Whatever the technique for identifying candidate clips, a subset of such clips is identified for further processing based on computationally determined correspondence with an audio baseline track. Typically, correspondence is determined by comparing computationally defined features of the audio baseline track with those computed for an audio track encoded in, or in association with, the candidate clip. Comparisons of audio power spectra, of rhythmic features, tempo, and/or pitch sequences and of other extracted audio features may be used to establish correspondence.
For clips exhibiting a desired level of correspondence with the audio baseline track, computationally determined temporal offsets of individual clips into the baseline audio track are used to prepare a new and coordinated audiovisual work that includes selected audiovisual clips temporally aligned with the audio track. In some cases, extracted audio features may be used in connection with computational techniques such as cross-correlation to establish the desired alignments. In some cases or embodiments, temporally localizable features in video content may also be used for alignment. The resulting composite audiovisual mix includes video content from selected ones of the audiovisual clips synchronized with the baseline audio track based on the determined alignments. In some cases, audio tracks of the selected audiovisual clips may be included in the composite audiovisual mix. In some embodiments in accordance with the present invention(s), a method includes (i) retrieving computer readable encodings of plural audiovisual clips, the retrieved audiovisual clips having pre-existing associations with one or more tags; (ii) computationally evaluating correspondence of audio content of individual ones of the retrieved audiovisual clips with an audio baseline, the correspondence evaluation identifying a subset of the retrieved audiovisual clips for which the audio content thereof matches at least a portion of the audio baseline; (iii) for the retrieved audiovisual clips of the identified subset, computationally determining a temporal alignment with the audio baseline and, based on the determined temporal alignments, assigning individual ones of the retrieved audiovisual clips to positions along a timeline of the audio baseline; and (iv) rendering video content of the temporally-aligned
audiovisual clips together with the audio baseline to produce a coordinated audiovisual work. In some cases or embodiments, the method further includes presenting the one or more tags to one or more network-accessible audiovisual content repositories, wherein the retrieved audiovisual clips are selected from the one or more network-accessible audiovisual content repositories based on the presented one or more tags. In some cases or embodiments, at least some of the tags provide markers for particular content in an audiovisual content repository, and the retrieved audiovisual clips are selected based on the markers from amongst the content represented in the audiovisual content repository.
In some cases or embodiments, the method further includes storing, transmitting or posting a computer readable encoding of the coordinated audiovisual work. The computational evaluation of correspondence of audio content of individual ones of the retrieved audiovisual clips with the audio baseline may, in some cases or embodiments, include (i) computing a first power spectrum for audio content of individual ones of the retrieved
audiovisual clips; (ii) computing a second power spectrum for at least a portion of the audio baseline; and (iii) correlating the first and second power spectra. The computational determination of temporal alignment may, in some cases or embodiments, include cross-correlating audio content of individual ones of the retrieved audiovisual clips with at least a portion of the audio baseline. In some cases or embodiments, the audio baseline includes an audio encoding of a song.
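By way of illustration only, a power-spectrum correspondence test of the kind recited above might be sketched as follows in Python. The disclosure does not mandate any particular library or threshold; numpy/scipy, the function names, and the 0.5 cutoff are assumptions of this sketch, not part of the described method.

    import numpy as np
    from scipy import signal

    def power_spectrum(audio, n_fft=4096):
        # Welch-averaged power spectral density as a simple spectral feature.
        _freqs, psd = signal.welch(audio, nperseg=n_fft)
        return psd

    def spectral_correspondence(clip_audio, baseline_audio):
        # Normalized correlation of the two power spectra; higher values
        # suggest the clip's audio contains (part of) the baseline song.
        a, b = power_spectrum(clip_audio), power_spectrum(baseline_audio)
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    # Illustrative selection of the matching subset (threshold is a guess):
    # matching = [c for c in clips if spectral_correspondence(c.audio, baseline) > 0.5]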
In some cases or embodiments, the method further includes selection or indication, by a user at a user interface that is operably interactive with a remote service platform, of the tag and of the audio baseline; and responsive to the user selection or indication, performing one or more of the
correspondence evaluation, the determination of temporal alignment, and the rendering to produce a coordinated audiovisual work at the remote service platform. In some cases or embodiments, the method further includes selection or indication of the tag and of the audio baseline by a user at a user interface provided on a portable computing device; and audiovisually rendering the coordinated audiovisual work to a display of the portable computing device.
In some cases or embodiments, the portable computing device is selected from the group of: a compute pad, a game controller, a personal digital assistant or book reader, and a mobile phone or media player. In some cases or embodiments, the tag includes an alphanumeric hashtag and the audio baseline includes a computer readable encoding of digital audio. In some cases or embodiments, either or both of the alphanumeric hashtag and the computer readable encoding of digital audio are supplied or selected by a user. In some cases or embodiments, the retrieving of computer readable encodings of the plural audiovisual clips is based on correspondence of the presented tag with metadata associated, at a respective network-accessible repository, with respective ones of the audiovisual clips. In some cases or embodiments, at least one of the one or more network-accessible repositories includes an API-accessible audiovisual clip service platform. In some cases or embodiments, at least one of the one or more network-accessible repositories serves short, looping audiovisual clips of about six (6) seconds or less. In some cases or embodiments, at least one of the one or more network-accessible repositories serves at least some audiovisual content of more than about six (6) seconds, and the method further includes segmenting at least some of the retrieved audiovisual content.
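For audiovisual content longer than a service's nominal clip length, segmentation of the decoded audio samples could look like the minimal sketch below. The six-second segment length is a design choice per the text above, and the function name and the half-full remainder rule are illustrative assumptions.

    def segment_samples(samples, sample_rate, seg_seconds=6.0):
        # Split a long clip's audio into roughly six-second segments;
        # keep the final, shorter remainder only if it is at least half full.
        hop = int(seg_seconds * sample_rate)
        segments = [samples[i:i + hop] for i in range(0, len(samples), hop)]
        return [s for s in segments if len(s) >= hop // 2]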
In some embodiments in accordance with the present invention(s), one or more computer program products are encoded in one or more media. The computer program products together include instructions executable on one or more computational systems to cause the computational systems to collectively perform the steps of any one or more of the above-described methods. In some embodiments in accordance with the present invention(s), one or more computational systems have instructions executable on respective elements thereof to cause the computational systems to collectively perform the steps of any one or more of the above-described methods.
In some embodiments in accordance with the present invention(s), an audiovisual compositing system includes a retrieval interface to computer readable encodings of plural audiovisual clips, a digital signal processor coupled to the retrieval interface and an audiovisual rendering pipeline. The retrieval interface allows selection of particular audiovisual clips from one or more content repositories based on pre-existing associations with one or more tags. The digital signal processor is configured to computationally evaluate correspondence of audio content of individual ones of the selected audiovisual clips with an audio baseline, the correspondence evaluation identifying a subset of the audiovisual clips for which audio content thereof matches at least a portion of the audio baseline. In addition, the digital signal processor is further configured to, for respective ones of the audiovisual clips of the identified subset, computationally determine a temporal alignment with the audio baseline and, based on the determined temporal alignments, assign individual ones of the audiovisual clips to positions along a timeline of the audio baseline. The audiovisual rendering pipeline is configured to produce a coordinated audiovisual work including a mix of at least (i) video content of the identified audiovisual clips and (ii) the audio baseline, wherein the mix is based on the computationally determined temporal alignments and assigned positions along the timeline of the audio baseline.
In some embodiments, the audiovisual compositing system further includes a user interface whereby a user selects the audio baseline and specifies the one or more tags for retrieval of particular audiovisual clips from the one or more content repositories. In some cases or embodiments, the tags include either or both of user-specified hashtags and markers for identification of user-selected ones of the audiovisual clips within an audiovisual signal encoding.
These and other embodiments, together with numerous variations thereon, will be appreciated by persons of ordinary skill in the art based on the description and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention(s) is (are) illustrated by way of example and not limitation with reference to the accompanying figures, in which like references generally indicate similar elements or features.
FIG. 1 depicts process flows in accordance with some embodiments of the present invention(s).
FIG. 2 is an illustrative user interface in accordance with some embodiments of the present invention(s) by which a user may specify a hashtag for retrieval of audiovisual clips and identify, using a drag-and-drop selection, an audio baseline against which audiovisual clips corresponding to the hashtag are to be aligned to produce a coordinated audiovisual work.
FIG. 3 illustrates a processing sequence by which a coordinated audiovisual work is prepared from an audio baseline track and crowd-sourced video content in accordance with some embodiments of the present invention(s).
FIG. 4 is a network diagram that illustrates cooperation of exemplary devices in accordance with some embodiments of the present invention.
Skilled artisans will appreciate that elements or features in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions or prominence of some of the illustrated elements or features may be exaggerated relative to other elements or features in an effort to help to improve understanding of embodiments of the present invention.
MODE(S) FOR CARRYING OUT THE INVENTION(S)
FIG. 1 depicts an exemplary process by which audiovisual clips 192 are retrieved (110) from an audiovisual content repository 121, evaluated (130) for correspondence with an audio baseline 193 such as an audio signal encoding for a song against which at least some of the audiovisual clips were recorded, and aligned (140) and mixed (151) with the audio baseline 193 to produce a coordinated audiovisual work 195. One or more tags 191 such as a hashtag, metadata or timeline markers are used to select candidate clips from available audiovisual content in repository 121. In general, a repository (or repositories) such as repository 121 includes audiovisual content sourced from any of a variety of sources including purpose-built video cameras 105, smartphones (101), tablets, webcams and audiovisual content (video 103 and vocals 104) captured as part of a karaoke-style vocal capture session.
Tags 191 and a selection of audio baseline 193 may be specified (102) by a user. In some embodiments, repository 121 implements a hashtag-based retrieval interface and includes social media content such as audiovisual content associated with a short, looping video clip service platform. For example, exemplary computational system techniques and systems in accordance with the present invention(s) are illustrated and described using audiovisual content, repositories and formats typical of the Vine video-sharing application and service platform available from Twitter, Inc. Nonetheless, it will be understood that such illustrations and description are merely exemplary. Techniques of the present invention(s) may also be exploited in connection with other applications or service platforms. Techniques of the present invention(s) may also be integrated with existing video sharing applications or service platforms, as well as those hereafter developed. Audio content of a candidate clip 192 is evaluated (130) for correspondence with the selected audio baseline 193. Correspondence is typically determined by comparing computationally defined features of the audio baseline 193 with those computed for an audio track encoded in, or in association with, a particular candidate clip 192. Suitable features for comparison include audio power spectra, rhythmic features, tempo, and pitch sequences. For embodiments that operate on audiovisual content from a short, looping video clip service platform such as Vine, retrieved clips 192 may already be of a suitable length for use in preparation of a video montage. However, for audiovisual content of longer duration or to introduce some desirable degree of variation in clip length, optional segmentation may be applied. Segment lengths are, in general, matters of design or user choice.
For video content 194 from audiovisual clips 192 for which evaluation 130 has indicated audio correspondence, alignment (140) is performed, typically by calculating, for each such clip, a lag that maximizes a correlation between the audio baseline 193 and an audio signal of the given clip. Temporally aligned replicas of video 194 (with or without audio) are then mixed (151) with audio track 193A to produce coordinated audiovisual work 195.
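One way such a lag might be computed is sketched below, assuming mono audio arrays at a common sample rate; scipy and the function name are assumptions of this sketch rather than anything specified in the disclosure.

    import numpy as np
    from scipy import signal

    def best_lag_seconds(clip_audio, baseline_audio, sample_rate):
        # Cross-correlate the clip against the (longer) baseline; in
        # "valid" mode the argmax is the offset, in samples, into the
        # baseline at which the clip's audio lines up best.
        xcorr = signal.correlate(baseline_audio, clip_audio, mode="valid")
        return int(np.argmax(xcorr)) / sample_rate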
Referring now to FIG. 2, an application and backend service, developed by Smule Inc. as SMUUSH, provides a front-end user interface 210 and back-end processing for use in conjunction with video sharing services such as Vine (commonly accessed by users at https://vine.co/ or using applications for iOS and Android devices), using an application programmer interface (API) to access a repository of short, six-second audiovisual clips that are easily searchable by hashtag. A user of the SMUUSH application identifies (212) an audio baseline, e.g., an mp3 encoding of a popular song (such as the song "Classic" by MKTO) and provides (211) a hashtag, e.g., a Vine hashtag (such as "#Classic") that users may have associated with video clips that are likely to relate to the audio baseline. Based on computational processing of the retrieved audiovisual clips and of the audio baseline (such as that described with reference to FIGs. 1 and 3), the SMUUSH application produces or provides a coordinated audiovisual work that, based on typically available crowd-sourced audiovisual clip content, includes people lip syncing, dancing to the beat, and otherwise expressing themselves in correspondence with the song or music of the audio baseline.
FIG. 3 illustrates further (as a graphical flow) how an exemplary implementation of the technology works. Specifically, the SMUUSH application (itself and/or together with cooperative hosted service(s)) performs the following:
1) Search Vine for audiovisual clips associated with tag 191 (e.g., with the hashtag, #billiejean) and download (320) (or otherwise retrieve) computer readable encodings of such clips (e.g., clips 392A, 392B, 392C and 392D) from audiovisual content repository 121. In some cases, the application preferentially retrieves a most recently posted subset of audiovisual clips associated with the hashtag. However, in some embodiments, the application may retrieve as many audiovisual clips as possible (or a large, but capped number of audiovisual clips) and apply further selections.
2) Compute a spectral analysis (331) of an audio file, such as billie_jean.m4a, that constitutes the audio baseline 193. In some cases, it may be necessary to retrieve a suitable digital audio encoding of the audio baseline, such as from audio store 122; however, in some cases, suitable media content may exist locally. A variety of spectral analysis techniques will be appreciated by persons of skill in the art of audio signal processing. One example technique is to compute a first power spectrum for the audio baseline (and/or for segmentable portions thereof).
3) Compute a spectral analysis of the audiovisual clips 392A, 392B, 392C and 392D retrieved (or selected). A variety of spectral analysis techniques will be appreciated by persons of skill in the art of audio signal processing. One example technique is to compute, for individual ones of the retrieved audiovisual clips, a second plurality of power spectra.
4) Computationally correlate the first and second power spectra to identify which of the audiovisual clips actually contain portions of the song or music represented by the audio baseline 193.
5) Determine (330) proper (or candidate) temporal alignments of the individual audiovisual clips with the audio baseline. In some cases, such as with audiovisual clips that correspond to a chorus or refrain, multiple temporal alignments may be possible. A variety of temporal alignment techniques will be appreciated by persons of skill in the art of audio signal processing. One example technique is to compute a cross-correlation of the audio signals or of extracted audio features, using delays that correspond to sufficiently distinct maxima in the cross-correlation function to compute aligning temporal offsets.
6) Place (340) the individual audiovisual clips along a timeline (341) of the audio baseline using the determined temporal alignments so that clips are presented in correspondence with the portion of the song or music against which they were recorded (or mixed).
7) Stitch the audiovisual content together by rendering a coordinated audiovisual work 195 (billie_jean.mp4) that synchronizes video content (394A, 394B, 394C and 394D) of the audiovisual clips with the audio baseline 193 and encodes the coordinated audiovisual work in computer readable digital form such as MPEG-4 format digital video or the like (an illustrative rendering sketch follows this list).
The resultant computer readable audiovisual encoding may be played, stored, transmitted and/or posted, including via network connected systems and devices such as illustrated in FIG. 4.
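As one concrete and purely illustrative rendering pipeline for steps 6 and 7, the placement and stitching could be expressed with the moviepy library (1.x API). The library choice, the clip file names, and the offsets below are placeholders assumed for this sketch, not anything named in the disclosure.

    from moviepy.editor import AudioFileClip, CompositeVideoClip, VideoFileClip

    # (clip file, offset in seconds along the baseline timeline) pairs
    # produced by the alignment step; values here are placeholders.
    placements = [("392A.mp4", 3.2), ("392B.mp4", 11.7),
                  ("392C.mp4", 42.0), ("392D.mp4", 58.4)]

    # Start each clip's video (audio muted) at its determined offset.
    layers = [VideoFileClip(path).without_audio().set_start(offset)
              for path, offset in placements]

    baseline_audio = AudioFileClip("billie_jean.m4a")
    montage = (CompositeVideoClip(layers)
               .set_duration(baseline_audio.duration)
               .set_audio(baseline_audio))
    montage.write_videofile("billie_jean.mp4", fps=30)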
FIG. 4 is a network diagram that illustrates cooperation of exemplary devices in accordance with some embodiments of the present invention. In general, any of a variety of repositories of audiovisual content are contemplated, be they social media service platforms such as the Vine video sharing service exposed to client applications in network(s) 470, private servers (460) or cloud hosted service platforms that expose clips residing in an audiovisual content repository 407, or libraries of audiovisual content stored on, or available from, a computer 409, a portable handheld device such as smartphone 101, a tablet 410, etc. Such content may be curated, posted with user applied tags, indexed, or simply raw with capture and/or originator metadata. In general, content in such repositories may be sourced from any of a variety of video capture platforms including portable computing devices (e.g., a smartphone 101, tablet 410 or webcam enabled laptop 409) that host native video capture applications and/or karaoke-style audio capture with performance synchronized video such as the Sing!™ application popularized by Smule, Inc. for iOS and Android devices. In some cases or embodiments, video content may be sourced from a high-definition digital camcorder 105, such as those popularized under the GoPro™ brand for extreme-action video photography, etc. Functional flows and other implementation details depicted in FIGs. 1-3 and described elsewhere herein will be understood in the context of networks, device configurations and platforms and information interchange pathways such as those illustrated in FIG. 4. In addition, persons of skill in the art having benefit of the present application will appreciate suitably scaled, modified, alternative and/or extended infrastructure variations on the illustrative depictions of FIG. 4.
Other Embodiments
While the invention(s) is (are) described with reference to various
embodiments, it will be understood that these embodiments are illustrative and that the scope of the invention(s) is not limited to them. Many variations, modifications, additions, and improvements are possible.
For example, while certain illustrative embodiments have been described in which each of the audiovisual clips is sourced from existing network-accessible repositories, persons of skill in the art will appreciate that capture and, indeed, transformation, filtering and/or other processing of such audiovisual clips may also be provided. Likewise, illustrative embodiments have, for simplicity of exposition, described temporal alignment techniques in terms of relatively simple audio signal processing operations. However, based on the description herein, persons of skill in the art will appreciate that more sophisticated feature extraction and correlation techniques may be used for the identification and temporal alignment of audio and video.
For example, features computationally extracted from the video may be used to align or at least contribute an alignment with audio. Examples include temporal alignment based on visual movement computationally discernible in moving images (e.g., people dancing in rhythm) to align with a known or computationally determined beat of a reference backing track or other audio baseline. In some embodiments, temporally localizable features in the video content, such as a rapid change in magnitude or direction of optical flow, a rapid change in chromatic distribution and/or a rapid change in overall or spatial distribution of brightness, may contribute to (or be used in place of certain audio features) for temporal alignment with an audio baseline and/or segmentation of audiovisual content.
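A minimal sketch of one such video-side feature, assuming OpenCV is available, appears below; the function name and the Farneback parameters are illustrative choices of this sketch, not values taken from the disclosure.

    import cv2
    import numpy as np

    def visual_onset_strength(video_path):
        # Frame-to-frame change in mean optical-flow magnitude; spikes mark
        # temporally localizable visual events usable for alignment.
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return np.array([])
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        strengths, prev_mag = [], 0.0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = float(np.linalg.norm(flow, axis=2).mean())
            strengths.append(abs(mag - prev_mag))  # rapid changes spike here
            prev_mag, prev_gray = mag, gray
        cap.release()
        return np.array(strengths)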
More generally, while certain illustrative signal processing techniques have been described in the context of certain illustrative applications, persons of ordinary skill in the art will recognize that it is straightforward to modify the described techniques to accommodate other suitable signal processing techniques and effects. Some embodiments in accordance with the present invention(s) may take the form of, and/or be provided as, a computer program product encoded in a machine-readable medium as instruction sequences and other functional constructs of software tangibly embodied in non-transient media, which may in turn be executed in computational systems (such as network servers, virtualized and/or cloud computing facilities, iOS or Android or other portable computing devices, and/or combinations of the foregoing) to perform methods described herein. In general, a machine readable medium can include tangible articles that encode information in a form (e.g., as applications, source or object code, functionally descriptive information, etc.) readable by a machine (e.g., a computer, computational facilities of a mobile device or portable computing device, etc.) as well as tangible, non-transient storage incident to transmission of the information. A machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., disks and/or tape storage); optical storage medium (e.g., CD-ROM, DVD, etc.); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions, operation sequences, functionally descriptive information encodings, etc.
In general, plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the invention(s).

Claims

WHAT IS CLAIMED IS:
1. A method comprising:
retrieving computer readable encodings of plural audiovisual clips, the retrieved audiovisual clips having pre-existing associations with one or more tags;
computationally evaluating correspondence of audio content of
individual ones of the retrieved audiovisual clips with an audio baseline, the correspondence evaluation identifying a subset of the retrieved audiovisual clips for which the audio content thereof matches at least a portion of the audio baseline;
for the retrieved audiovisual clips of the identified subset,
computationally determining a temporal alignment with the audio baseline and, based on the determined temporal alignments, assigning individual ones of the retrieved audiovisual clips to positions along a timeline of the audio baseline; and rendering video content of the temporally-aligned audiovisual clips
together with the audio baseline to produce a coordinated audiovisual work.
2. The method of claim 1, wherein the computational evaluation of correspondence of audio content of individual ones of the retrieved
audiovisual clips with the audio baseline includes:
computing a first power spectrum for audio content of individual ones of the retrieved audiovisual clips;
computing a second power spectrum for at least a portion of the audio baseline; and
correlating the first and second power spectra.
3. The method of claim 1, wherein the computational determination of temporal alignment includes:
cross-correlating audio content of individual ones of the retrieved
audiovisual clips with at least a portion of the audio baseline.
4. The method of claim 1, further comprising:
presenting the one or more tags to one or more network-accessible audiovisual content repositories,
wherein the retrieved audiovisual clips are selected from the one or more network-accessible audiovisual content repositories based on the presented one or more tags.
5. The method of claim 1,
wherein at least some of the tags provide markers for particular content in an audiovisual content repository, and
wherein the retrieved audiovisual clips are selected based on the
markers from amongst the content represented in the audiovisual content repository.
6. The method of any of claims 1-5, further comprising:
storing, transmitting or posting a computer readable encoding of the coordinated audiovisual work.
7. The method of any of claims 1-5,
wherein the audio baseline includes an audio encoding of a song.
8. The method of any of claims 1-5, further comprising:
selection or indication, by a user at a user interface that is operably interactive with a remote service platform, of the tag and of the audio baseline; and
responsive to the user selection or indication, performing one or more of the correspondence evaluation, the determination of temporal alignment, and the rendering to produce a coordinated audiovisual work at the remote service platform.
9. The method of any of claims 1-5, further comprising:
selection or indication of the tag and of the audio baseline by a user at a user interface provided on a portable computing device; and audiovisually rendering the coordinated audiovisual work to a display of the portable computing device.
10. The method of claim 9, wherein the portable computing device is selected from the group of:
a compute pad;
a game controller;
a personal digital assistant or book reader; and
a mobile phone or media player.
11. The method of any of claims 1-5,
wherein the tag includes an alphanumeric hashtag; and
wherein the audio baseline includes a computer readable encoding of digital audio.
12. The method of claim 11,
wherein either or both of the alphanumeric hashtag and the computer readable encoding of digital audio are supplied or selected by a user.
13. The method of any of claims 1-5,
wherein the retrieving of computer readable encodings of the plural audiovisual clips is based on correspondence of the presented tag with metadata associated, at a respective repository, with respective ones of the audiovisual clips.
14. The method of claim 4,
wherein at least one of the one or more network-accessible repositories includes an API-accessible audiovisual clip service platform.
15. The method of claim 14,
wherein at least one of the one or more network-accessible repositories serves short, looping audiovisual clips of about six (6) seconds or less.
16. The method of claim 14,
wherein at least one of the one or more network-accessible repositories serves at least some audiovisual content of more than about six (6) seconds, and
wherein the method further includes segmenting at least some of the retrieved audiovisual content.
17. One or more computer program products encoded in one or more media, the computer program products together including instructions executable on one or more computational systems to cause the computational systems to collectively perform the steps recited in any of claims 1-5.
18. One or more computational systems with instructions executable on respective elements thereof to cause the computational systems to collectively perform the steps recited in any of claims 1-5.
19. An audiovisual compositing system comprising:
a retrieval interface to computer readable encodings of plural audiovisual clips, wherein the retrieval interface allows selection of particular audiovisual clips from one or more content repositories based on pre-existing associations with one or more tags;
a digital signal processor coupled to the retrieval interface and configured to computationally evaluate correspondence of audio content of individual ones of the selected audiovisual clips with an audio baseline, the correspondence evaluation identifying a subset of the audiovisual clips for which audio content thereof matches at least a portion of the audio baseline;
the digital signal processor further configured to, for respective ones of the audiovisual clips of the identified subset, computationally determine a temporal alignment with the audio baseline and, based on the determined temporal alignments, assign individual ones of the audiovisual clips to positions along a timeline of the audio baseline; and
an audiovisual rendering pipeline configured to produce a coordinated audiovisual work including a mix of at least (i) video content of the identified audiovisual clips and (ii) the audio baseline, wherein the mix is based on the computationally determined temporal alignments and assigned positions along the timeline of the audio baseline.
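To illustrate the alignment-driven assignment performed by the digital signal processor of claim 19, a minimal, non-normative sketch follows; the TimelineEntry structure and build_timeline helper are assumptions of this example, with offsets such as those estimated by the cross-correlation sketch above:

```python
from dataclasses import dataclass

@dataclass
class TimelineEntry:
    clip_id: str
    start_sample: int  # assigned position along the baseline timeline
    length: int        # clip length, in samples

def build_timeline(offsets: dict[str, int],
                   lengths: dict[str, int]) -> list[TimelineEntry]:
    """Assign each matched clip a position along the baseline timeline,
    ordered by its computationally determined alignment offset."""
    entries = [TimelineEntry(cid, off, lengths[cid])
               for cid, off in offsets.items()]
    return sorted(entries, key=lambda entry: entry.start_sample)
```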
20. The audiovisual compositing system of claim 19, further comprising:
a user interface whereby a user selects the audio baseline and specifies the one or more tags for retrieval of particular audiovisual clips from the one or more content repositories.
21. The audiovisual compositing system of claim 19 or claim 20, wherein the tags include either or both of user-specified hashtags and markers for identification of user-selected ones of the audiovisual clips within an audiovisual signal encoding.
PCT/US2015/035830 2014-06-13 2015-06-15 Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline WO2015192130A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462012197P 2014-06-13 2014-06-13
US62/012,197 2014-06-13

Publications (1)

Publication Number Publication Date
WO2015192130A1 (en) 2015-12-17

Family

ID=54834490

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/035830 WO2015192130A1 (en) 2014-06-13 2015-06-15 Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline

Country Status (1)

Country Link
WO (1) WO2015192130A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020082731A1 (en) * 2000-11-03 2002-06-27 International Business Machines Corporation System for monitoring audio content in a video broadcast
US20110154197A1 (en) * 2009-12-18 2011-06-23 Louis Hawthorne System and method for algorithmic movie generation based on audio/video synchronization
US20130006625A1 (en) * 2011-06-28 2013-01-03 Sony Corporation Extended videolens media engine for audio recognition
US20130132836A1 (en) * 2011-11-21 2013-05-23 Verizon Patent And Licensing Inc. Methods and Systems for Presenting Media Content Generated by Attendees of a Live Event
US20130254231A1 (en) * 2012-03-20 2013-09-26 Kawf.Com, Inc. Dba Tagboard.Com Gathering and contributing content across diverse sources

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11481181B2 (en) 2018-12-03 2022-10-25 At&T Intellectual Property I, L.P. Service for targeted crowd sourced audio for virtual interaction

Similar Documents

Publication Publication Date Title
US10971191B2 (en) Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline
US11477156B2 (en) Watermarking and signal recognition for managing and sharing captured content, metadata discovery and related arrangements
US10650861B2 (en) Video summarization and collaboration systems and methods
KR102110057B1 (en) Song confirmation method and device, storage medium
US10467287B2 (en) Systems and methods for automatically suggesting media accompaniments based on identified media content
KR101680507B1 (en) Digital platform for user-generated video synchronized editing
KR20190139751A (en) Method and apparatus for processing video
US10645468B1 (en) Systems and methods for providing video segments
US20190349641A1 (en) Content providing server, content providing terminal and content providing method
US11710318B2 (en) Systems and methods for creating video summaries
CN102915320A (en) Extended videolens media engine for audio recognition
US11496806B2 (en) Content providing server, content providing terminal, and content providing method
CN103823870B (en) Information processing method and electronic equipment
WO2017157135A1 (en) Media information processing method, media information processing device and storage medium
KR102107678B1 (en) Server for providing media information, apparatus, method and computer readable recording medium for searching media information related to media contents
KR102550528B1 (en) System for selecting segmentation video using high definition camera and the method thereof
WO2015192130A1 (en) Coordinated audiovisual montage from selected crowd-sourced content with alignment to audio baseline
US11128927B2 (en) Content providing server, content providing terminal, and content providing method
CN113747233B (en) Music replacement method and device, electronic equipment and storage medium
US20190206445A1 (en) Systems and methods for generating highlights for a video
KR20150046407A (en) Method and server for providing contents
CN113709561B (en) Video editing method, device, equipment and storage medium
KR102131751B1 (en) Method for processing interval division information based on recognition meta information and service device supporting the same
CN113709561A (en) Video editing method, device, equipment and storage medium
Mate Automatic Mobile Video Remixing and Collaborative Watching Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15805808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/04/17)

122 Ep: pct application non-entry in european phase

Ref document number: 15805808

Country of ref document: EP

Kind code of ref document: A1