WO2019040492A1 - Audiovisual effects system for augmentation of a captured performance based on its content - Google Patents

Audiovisual effects system for augmentation of a captured performance based on its content

Info

Publication number
WO2019040492A1
WO2019040492A1 (PCT/US2018/047325)
Authority
WO
WIPO (PCT)
Prior art keywords
performance
audiovisual
visual effect
vocal
encoding
Prior art date
Application number
PCT/US2018/047325
Other languages
English (en)
Inventor
David Steinwedel
Perry R. Cook
Paul T. Chi
Wei Zhou
Jon MOLDOVER
Anton Holmberg
Jingxi LI
Original Assignee
Smule, Inc.
Application filed by Smule, Inc.
Priority claimed: DE112018004717.2T (published as DE112018004717T5)
Priority claimed: CN201880054029.4A (published as CN111345044B)
Publication of WO2019040492A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440236 Processing of video elementary streams involving reformatting operations of video signals by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 7/152 Multipoint control units therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N 2007/145 Handheld terminals

Definitions

  • the invention relates generally to capture and/or processing of vocal audio performances and, in particular, to techniques suitable for use in applying selected visual effects to performance synchronized video in a manner consistent with musical structure of, or underlying, the performance.
  • Applications such as Smule's Sing! Karaoke™ (available from Smule, Inc.) have shown that advanced digital acoustic techniques may be delivered in ways that provide a compelling user experience.
  • As digital acoustic researchers seek to transition their innovations to commercial applications deployable on modern handheld devices such as the iPhone® handheld and other platforms, operable within the real-world constraints imposed by processor, memory and other limited computational resources thereof and/or within communications bandwidth and transmission latency constraints typical of wireless networks, significant practical challenges present themselves. Improved techniques and functional capabilities are desired, particularly relative to video.
  • Audiovisual performances including vocal music are captured: the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices or using set-top box-type equipment in the context of a karaoke-style presentation of lyrics in correspondence with audible renderings of a backing track.
  • pitch cues may be presented to vocalists in connection with the karaoke-style presentation of lyrics and, optionally, continuous automatic pitch correction (or pitch shifting into harmony) may be provided.
  • Vocal audio of a user together with performance synchronized video is, in some cases or embodiments, captured and coordinated with audiovisual contributions of other users to form composite duet-style or glee club-style or window-paned music video-style audiovisual performances.
  • the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track.
  • Contributions of multiple vocalists can be coordinated and mixed in a manner that selects for presentation, at any given time along a given performance timeline, performance synchronized video of one or more of the contributors.
  • Selections provide a sequence of visual layouts in correspondence with other coded aspects of a performance score such as pitch tracks, backing audio, lyrics, sections and/or vocal parts.
  • Visual effects schedules are applied to audiovisual performances with differing visual effects applied in correspondence with differing elements of musical structure.
  • Musical structure may be determined via segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks).
  • applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression or may be determined from an audiovisual performance using machine learning techniques.
  • a method includes accessing a machine readable encoding of a first audiovisual performance and applying a first visual effect schedule to at least a portion of the first audiovisual performance encoding.
  • the first audiovisual performance is captured as vocal audio with performance synchronized video and has an associated musical structure encoding that includes at least musical section boundaries coded for temporal alignment with the first audiovisual performance encoding.
  • the applied visual effect schedule encodes differing visual effects for differing musical structure elements of the first audiovisual performance encoding and provides visual effect transitions in temporal alignment with at least some of the coded musical section boundaries.
  • the method further includes segmenting at least an audio track of the first audiovisual performance encoding to provide the associated musical structure encoding.
  • the associated musical structure encoding includes group part or musical section metadata.
  • the differing visual effects differ in either degree or type or both degree and type.
  • the method further includes selecting the first visual effect schedule from amongst a plurality of mood-denominated visual effect schedules. In some cases or embodiments, the selecting is based on a computationally-determined mood for at least the captured vocal audio. In some cases or embodiments, the selecting is based on a user interface selection by the vocal audio performer prior to, or coincident with, capture of the vocal audio. In some embodiments,
  • the method further includes selecting a second visual effect schedule from amongst the plurality of mood- denominated visual effect schedules, the second visual effect schedule differing from the first visual effect schedule; and applying the second visual effect schedule to at least a portion of the first audiovisual performance encoding.
  • the method further includes streaming, to an audience at one or more remote client devices, the first audiovisual performance.
  • the streamed first audiovisual performance is mixed with an encoding of a backing track against which the vocal audio was captured.
  • the streamed first audiovisual performance is streamed with the first visual effect schedule applied.
  • the method further includes supplying an identification of the applied visual effect schedule for video effect rendering at one or more of the remote client devices.
  • the method further includes transferring (to, from, or via a content server or service platform) the first audiovisual performance together with at least an identifier for the one or more applied visual effect schedules.
  • the selecting is based on a user interface selection during, or prior to, audiovisual rendering of the first audiovisual performance.
  • mood values are parameterized as a two-dimensional quantity, wherein a first dimension of the mood parameterization codes an emotion and a second dimension codes an intensity.
  • the method further includes determining an intensity dimension of the mood parameterization based on one or more of: (i) a time-varying audio signal strength or vocal energy density measure computationally determined from the vocal audio and (ii) beats, tempo, signal strength or energy density of a backing audio track.
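A minimal sketch of the intensity dimension described above, assuming a frame-wise RMS energy measure over captured vocal samples (the function name, frame size and normalization are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def vocal_intensity(samples: np.ndarray, sample_rate: int,
                    frame_ms: float = 50.0) -> np.ndarray:
    """Time-varying vocal intensity: frame-wise RMS energy, normalized to [0, 1]."""
    frame_len = max(1, int(sample_rate * frame_ms / 1000.0))
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    peak = rms.max()
    return rms / peak if peak > 0 else rms

# Example: a synthetic 220 Hz "vocal" with a crescendo yields rising intensity.
sr = 16_000
t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
signal = np.sin(2 * np.pi * 220.0 * t) * np.linspace(0.1, 1.0, t.size)
print(vocal_intensity(signal, sr).round(2))
```

An aggregate (e.g., the mean) of such a measure could then populate the intensity dimension of an (emotion, intensity) mood pair.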
  • the method further includes segmenting the first audiovisual performance encoding to identify the differing musical structure elements.
  • the segmenting is based at least in part on a computational determination of vocal intensity with at least some segmentation boundaries constrained to temporally align with beats or tempo computationally extracted from a corresponding audio backing track.
  • the segmenting is based at least in part on a similarity analysis computationally performed on a temporally-aligned lyrics track to classify particular portions of the first audiovisual performance encoding as verse or chorus.
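One plausible form of such a similarity analysis, sketched under the assumption that near-duplicate lyric lines indicate a chorus (the threshold and use of the Python standard library matcher are illustrative choices, not the disclosed method):

```python
from difflib import SequenceMatcher

def classify_verse_chorus(lyric_lines: list[str], threshold: float = 0.8) -> list[str]:
    """Label a lyric line 'chorus' if it closely repeats another line, else 'verse'."""
    labels = ["verse"] * len(lyric_lines)
    for i, a in enumerate(lyric_lines):
        for j, b in enumerate(lyric_lines):
            if i != j and SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
                labels[i] = "chorus"
                break
    return labels

lines = [
    "walking down the road alone",
    "singing to the setting sun",
    "oh we rise, oh we rise together",
    "thinking of the days gone by",
    "oh we rise, oh we rise together",
]
print(classify_verse_chorus(lines))
# ['verse', 'verse', 'chorus', 'verse', 'chorus']
```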
  • the differing visual effects encoded by the applied visual effect schedule include for a given element thereof, one or more of: (i) a particle-based effect or lens flare, (ii) transitions between distinct source videos, (iii) animations or motion of a frame within a source video, (iv) vector graphics or images of patterns or textures; and (v) color, saturation or contrast.
  • the associated musical structure encodes musical sections of differing types and the applied visual effect schedule defines differing visual effects for different ones of the encoded musical sections.
  • the associated musical structure encodes events or transitions and the applied visual effect schedule defines differing visual effects for different ones of the encoded events or transitions.
  • the machine readable encoding further encodes at least part of a second audiovisual performance captured as second vocal audio with performance synchronized video, the first and second audiovisual performances constituting a group performance.
  • the associated musical structure encodes group parts, and the applied visual effect schedule is temporally selective for particular performance synchronized video in correspondence with the encoded musical structure.
  • the applied visual effect schedule codes, for at least some musical structure elements, color matching of performance synchronized video for respective performers in the group performance.
  • the applied visual effect schedule codes, for at least some musical structure elements, a visual blur or blend at an interface between performance synchronized video for respective performers in the group performance.
  • the first and second audiovisual performances are captured against a common backing track.
  • the method further includes capturing the first audiovisual performance at a network-connected vocal capture device communicatively coupled to a content server or service platform from which the musical structure encoding is supplied.
  • the audiovisual performance capture is performed at the network-connected vocal capture device in accordance with a Karaoke-style operational mechanic in which lyrics are visually presented in correspondence with audible rendering of a backing track.
  • the method is performed, at least in part, on a content server or service platform to which geographically-distributed, network- connected, vocal capture devices are communicatively coupled. In some embodiments, the method is performed, at least in part, on a network- connected, vocal capture device communicatively coupled to a content server or service platform. In some embodiments, the method is embodied, at least in part, as a computer program product encoding of instructions executable on a content server or service platform to which a plurality of geographically- distributed, network-connected, vocal capture devices are communicatively coupled.
  • In some embodiments in accordance with the present invention(s), a system includes a geographically distributed set of network-connected devices configured to capture audiovisual performances including vocal audio with performance synchronized video, and a service platform.
  • the service platform is configured to (i) receive encodings of the captured audiovisual performances and (ii) apply one or more visual effect schedules thereto.
  • the applied visual effect schedules encode differing visual effects for differing musical structure elements of the audiovisual performance encodings and provide visual effect transitions in temporal alignment with at least some of the coded musical section boundaries.
  • the service platform is configured to select the applied visual effect schedules from amongst a plurality of mood-denominated visual effect schedules.
  • In some embodiments in accordance with the present invention(s), a system includes at least a guest and host pairing of network-connected devices configured to capture at least vocal audio.
  • the host device is configured to (i) receive from the guest device an encoding of at least vocal audio and, in correspondence with an associated musical structure encoding that includes at least musical section boundaries coded for temporal alignment with an audiovisual performance encoding, (ii) apply a selected visual effect schedule to the audiovisual performance encoding.
  • the applied visual effect schedules encode differing visual effects for differing musical structure elements of the audiovisual performance encoding and provide visual effect transitions in temporal alignment with at least some of the coded musical section boundaries.
  • the host and guest devices are coupled as local and remote peers via a communication network with non-negligible peer-to-peer latency for transmissions of audiovisual content; the host device is communicatively coupled as the local peer to receive a media encoding of a mixed audio performance constituting vocal audio captured at the guest device, and the guest device is communicatively coupled as the remote peer to supply the media encoding captured from a first one of the performers and mixed with a backing audio track.
  • the associated musical structure encoding is computationally determined at the host device based on segmenting at least an audio track received from the guest device.
  • the host device is configured to render the audiovisual performance coding as a mixed audiovisual performance, including vocal audio and performance synchronized video from the first and a second one of the performers, and transmit the audiovisual performance coding as an apparently live broadcast with the selected visual effect schedule applied.
  • FIG. 1 depicts information flows amongst illustrative mobile phone-type portable computing devices, television-type displays, set-top box-type media application platforms, and an exemplary content server in accordance with some embodiments of the present invention(s) in which a visual effects schedule is applied to an audiovisual performance.
  • FIGs. 2A, 2B and 2C are successive snapshots of vocal performance synchronized video along a coordinated audiovisual performance timeline wherein, in accordance with some embodiments of the present invention, video for one, the other or both of two contributing vocalists has video effects applied based on a mood and based on a computationally-defined audio feature such as vocal intensity computed over the captured vocals.
  • FIGs. 3A, 3B and 3C illustrate an exemplary implementation of a segmentation and visual effects (VFX) engine.
  • FIG. 3A depicts information flows involving an exemplary coding of musical structure
  • FIG. 3B depicts an alternative view that focuses on an exemplary VFX rendering pipeline.
  • FIG. 3C graphically depicts an exemplary mapping of vocal parts and segments to visual layouts, transitions, post-processed video effects and particle-based effects.
  • FIG. 4 depicts information flows amongst illustrative mobile phone-type portable computing devices in a host and guest configuration in accordance with some embodiments of the present invention(s) in which a visual effects schedule is applied to a live-stream, duet-type group audiovisual performance.
  • FIG. 5 is a flow diagram illustrating information transfers that contribute to or involve a composited audiovisual performance segmented to provide musical structure for video effects mapping in accordance with some embodiments of the present invention(s).
  • FIG. 6 is a functional block diagram of hardware and software components executable at an illustrative mobile phone-type portable computing device to facilitate processing of a captured audiovisual performance in accordance with some embodiments of the present invention(s).
  • FIG. 7 illustrates process steps and results of processing, in accordance with some embodiments of the present invention(s), to apply color correction and mood-denominated video effects to video for respective performers of a group performance separately captured using cameras of respective capture devices.
  • FIGs. 8A and 8B illustrate visuals for a group performance with and without use of a visual blur technique applied in accordance with some embodiments of the present invention(s).
  • FIG. 9 illustrates features of a mobile device that may serve as a platform for execution of software implementations, including audiovisual capture, in accordance with some embodiments of the present invention(s).
  • FIG. 10 is a network diagram that illustrates cooperation of exemplary devices in accordance with some embodiments of the present invention(s). Skilled artisans will appreciate that elements or features in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions or prominence of some of the illustrated elements or features may be exaggerated relative to other elements or features in an effort to help to improve understanding of embodiments of the present invention.
  • Vocal audio together with performance synchronized video may be captured and coordinated with audiovisual contributions of other users to form duet- style or glee club-style or window-paned music video-style audiovisual performances.
  • the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke- style presentations of lyrics in correspondence with audible renderings of a backing track.
  • pitch cues may be presented to vocalists in connection with the karaoke-style presentation of lyrics and, optionally, continuous automatic pitch correction (or pitch shifting into harmony) may be provided.
  • techniques of the present invention(s) may be applied even to single performer audiovisual content.
  • selections are in accord with a segmentation of certain audio tracks to determine musical structure of the audiovisual performance. Based on the musical structure, particle-based effects, transitions between video sources, animations or motion of frames, vector graphics or images of patterns/textures, color/saturation/contrast and/or other visual effects coded in a video effects schedule are applied to respective portions of the audiovisual performance.
  • visual effects are applied in correspondence with coded aspects of a performance or features such as vocal tracks, backing audio, lyrics, sections and/or vocal parts.
  • the particular visual effects applied vary throughout the course of a given audiovisual performance based on segmentation performed and/or based on vocal intensity computationally determined for one or more vocal tracks.
  • aspects of the song's musical structure are selective for the particular visual effects applied from a mood-denominated visual effect schedule, and intensity measures (typically vocal intensity, but in some cases, power density of non-vocal audio) are used to modulate or otherwise control the magnitude or prominence of the applied visual effects.
  • song form, such as {verse, chorus, verse, chorus, bridge ...}
  • vocal part sequencing (e.g., you sing a line, I sing a line, you sing two words, I sing three, we sing together ...)
  • building intensity of a song (e.g., as measured by acoustic power, tempo or some other measure)
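The selection-plus-modulation idea above might look like the following sketch, in which section type selects an effect from a hypothetical mood-denominated schedule and a vocal intensity measure scales its prominence (all names and constants are illustrative assumptions, not the disclosed implementation):

```python
# Hypothetical mood-denominated schedule: section type -> (effect name, base strength)
SAD_SCHEDULE = {
    "verse":  ("desaturate", 0.4),
    "chorus": ("rain_particles", 0.7),
    "bridge": ("slow_crossfade", 0.5),
}

def effect_for(section_type: str, vocal_intensity: float,
               schedule=SAD_SCHEDULE) -> tuple[str, float]:
    """Select an effect by musical section; modulate its magnitude by vocal intensity."""
    name, base = schedule.get(section_type, ("none", 0.0))
    strength = min(1.0, base * (0.5 + vocal_intensity))  # intensity scales prominence
    return name, strength

for section, intensity in [("verse", 0.2), ("chorus", 0.9)]:
    print(section, effect_for(section, intensity))
```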
  • vocal audio can be pitch- corrected in real-time at the vocal capture device (e.g., at a portable computing device such as a mobile phone, personal digital assistant, laptop computer, notebook computer, pad-type computer or netbook) in accord with pitch correction settings.
  • pitch correction settings code a particular key or scale for the vocal performance or for portions thereof.
  • pitch correction settings include a score-coded melody and/or harmony sequence supplied with, or for association with, the lyrics and backing tracks. Harmony notes or chords may be coded as explicit targets or relative to the score-coded melody or even actual pitches sounded by a vocalist, if desired.
  • Machine usable musical instrument digital interface-style (MIDI-style) codings may be employed for lyrics, backing tracks, note targets, vocal parts (e.g., vocal part 1, vocal part 2, ... together), musical section information (e.g., intro/outro, verse, pre-chorus, chorus, bridge, transition and/or other section codings), etc.
  • conventional MIDI-style codings may be extended to also encode a score-aligned progression of visual effects to be applied.
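Such an extended coding could, for instance, take a JSON form along the following lines (a hypothetical container for illustration; field names are assumptions and not the actual score format described):

```python
import json

# Hypothetical JSON score container in the spirit of the MIDI/JSON codings above.
score = {
    "song": "your_man",
    "sections": [
        {"type": "verse",  "start_ms": 0,     "end_ms": 22000},
        {"type": "chorus", "start_ms": 22000, "end_ms": 41000},
    ],
    "vocal_parts": [
        {"part": 1, "start_ms": 0,     "end_ms": 11000},
        {"part": 2, "start_ms": 11000, "end_ms": 22000},
    ],
    "vfx": [  # score-aligned progression of visual effects
        {"at_ms": 0,     "effect": "vignette",   "strength": 0.3},
        {"at_ms": 22000, "effect": "lens_flare", "strength": 0.8},
    ],
}
print(json.dumps(score, indent=2))
```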
  • user/vocalists may overcome an otherwise natural shyness or angst associated with sharing their vocal performances.
  • a content server (or service) can mediate such coordinated performances by manipulating and mixing the uploaded audiovisual content of multiple contributing vocalists.
  • uploads may include pitch- corrected vocal performances (with or without harmonies), dry (i.e., uncorrected) vocals, and/or control tracks of user key and/or pitch correction selections, etc.
  • Social music can be mediated in any of a variety of ways. For example, in some implementations, a first user's vocal performance, captured against a backing track at a portable computing device and typically pitch-corrected in accord with score-coded melody and/or harmony cues, is supplied, as a seed performance, to other potential vocal performers. Performance synchronized video is also captured and may be supplied with the pitch-corrected, captured vocals. The supplied vocals are typically mixed with backing instrumentals/vocals to form the backing track against which subsequent contributors' vocals are captured.
  • the successive vocal contributors are geographically separated and may be unknown (at least a priori) to each other, yet the intimacy of the vocals together with the collaborative experience itself tends to minimize this separation.
  • the backing track against which respective vocals are captured may evolve to include previously captured vocals of other contributors.
  • For real-time vocal interactions (e.g., a duet or dialog) in which vocals (and typically synchronized video) are captured live, non-negligible network communication latencies will exist between at least some of the collaborating contributors, particularly where those contributors are geographically separated.
  • a captured audiovisual performance of a guest performer on a "live show" internet broadcast of a host performer could include a guest + host duet sung in apparent real-time synchrony.
  • the guest could be a performer who has popularized a particular musical performance.
  • the guest could be an amateur vocalist given the opportunity to sing "live" (though remote) with the popular artist or group "in studio” as (or with) the show's host.
  • the host performs in apparent synchrony with (though temporally lagged from, in an absolute sense) the guest and the apparently synchronously performed vocals are captured and mixed with the guest's contribution for broadcast or dissemination.
  • the result is an apparently live interactive performance (at least from the perspective of the host and the recipients, listeners and/or viewers of the disseminated or broadcast performance).
  • While the non-negligible network communication latency from guest-to-host is masked, it will be understood that latency exists and is tolerated in the host-to-guest direction.
  • host-to-guest latency, while discernible (and perhaps quite noticeable) to the guest, need not be apparent in the apparently live broadcast or other dissemination. It has been discovered that lagged audible rendering of host vocals (or more generally, of the host's captured audiovisual performance) need not psychoacoustically interfere with the guest's own performance.
  • Performance synchronized video may be captured and included in a combined audiovisual performance.
  • visuals may be based, at least in part, on time-varying, computationally-defined audio features extracted from (or computed over) captured vocal audio.
  • these computationally-defined audio features are selective, over the course of a coordinated audiovisual mix, for particular synchronized video of one or more of the contributing vocalists (or prominence thereof).
  • captivating visual animations and/or facilities for listener comment and ranking, as well as duet, glee club or choral group formation or accretion logic are provided in association with an audible rendering of a vocal performance (e.g., that captured and pitch-corrected at another similarly configured mobile device) mixed with backing instrumentals and/or vocals.
  • Synthesized harmonies and/or additional vocals (e.g., vocals captured from another vocalist at still other locations and optionally pitch-shifted to harmonize with other vocals) may also be included in the mix.
  • Geocoding of captured vocal performances (or individual contributions to a combined performance) and/or listener feedback may facilitate animations or display artifacts in ways that are suggestive of a performance or endorsement emanating from a particular geographic locale on a user-manipulable globe. In this way, implementations of the described functionality can transform otherwise mundane mobile devices into social instruments that foster a sense of global connectivity, collaboration and community.
  • Although embodiments of the present invention(s) are not limited thereto, pitch-corrected, karaoke-style vocal capture using mobile phone-type and/or television-type audiovisual equipment provides a useful descriptive context.
  • Likewise, although embodiments of the present invention(s) are not limited to multi-performer content, coordinated multi-performer audiovisual content, including multi-vocal content captured or prepared asynchronously or captured and live-streamed with latency management techniques described herein, provides a useful descriptive context.
  • an iPhone® handheld available from Apple Inc. hosts software that executes in coordination with a content server 110 to provide vocal capture and continuous real-time, score-coded pitch correction and harmonization of the captured vocals.
  • Performance synchronized video may be captured using a camera provided by, or in connection with, a television or other audiovisual media device 101 A or connected set-top box equipment (101 B) such as an Apple TVTM device. Performance synchronized video may also be captured using an on-board camera provided by handheld 101.
  • lyrics may be displayed (102, 102A) in correspondence with the audible rendering (104, 104A) so as to facilitate a karaoke-style vocal performance by a user.
  • lyrics, timing information, pitch and harmony cues (105), backing tracks (e.g., instrumentals/vocals), performance coordinated video, schedules of video effects (107), etc. may all be sourced from a network-connected content server 110.
  • backing audio and/or video may be rendered from a media store such as an iTunesTM library or other audiovisual content store resident or accessible from the handheld, a set-top box, media streaming device, etc.
  • a wireless local area network 180 may be assumed to provide communications between handheld 101 , any audiovisual and/or set-top box equipment and a wide-area network gateway to hosted service platforms such as content server 110.
  • FIG. 10 depicts an exemplary network configuration.
  • any of a variety of data communications facilities, including 802.11 Wi-Fi, Bluetooth™, 4G-LTE wireless, wired data networks, and wired or wireless audiovisual interconnects such as in accord with HDMI, AVI, or Wi-Di standards or facilities, may be employed, individually or in combination, to facilitate communications and/or audiovisual rendering described herein. Referring again to the example of FIG. 1:
  • user vocals 103 are captured at handheld 101, optionally pitch-corrected continuously and in real-time either at the handheld or using computational facilities of audiovisual display and/or set-top box equipment (101B), and audibly rendered (see 104, 104A) mixed with the backing track to provide the user with an improved tonal quality rendition of his/her own vocal performance.
  • vocal capture and audible rendering should be understood broadly and without limitation to a particular audio transducer configuration.
  • Pitch correction, when provided, is typically based on score-coded note sets or cues (e.g., pitch and harmony cues 105), which provide continuous pitch-correction algorithms with performance synchronized sequences of target notes in a current key or scale.
  • In addition, score-coded harmony note sequences or sets provide additional targets (typically coded as offsets relative to a lead melody note track and typically scored only for selected portions thereof).
  • pitch correction settings may be characteristic of a particular artist, such as the artist that performed vocals associated with the particular backing track.
  • lyrics, melody and harmony track note sets and related timing and control information may be encapsulated as a score coded in an appropriate container or object (e.g., in a Musical Instrument Digital Interface, MIDI, or Java Script Object Notation, json, type format) for supply together with the backing track(s).
  • handheld 101, audiovisual display 101A and/or set-top box equipment may display lyrics and even visual cues related to target notes, harmonies and currently detected vocal pitch in correspondence with an audible performance of the backing track(s) so as to facilitate a karaoke-style vocal performance by a user.
  • your_man.json and your_man.m4a may be downloaded from content server 110 (if not already available or cached based on prior download) and, in turn, used to provide background music, synchronized lyrics and, in some situations or embodiments, score-coded note tracks for continuous, real-time pitch-correction while the user sings.
  • harmony note tracks may be score coded for harmony shifts to captured vocals.
  • a captured pitch-corrected (possibly harmonized) vocal performance together with performance synchronized video is saved locally, on the handheld device or set-top box, as one or more audiovisual files and is subsequently compressed and encoded for upload (106) to content server 110 as an MPEG-4 container file.
  • MPEG-4 is an international standard for the coded representation and transmission of digital multimedia content for the Internet, mobile networks and advanced broadcast applications.
  • Other suitable codecs, compression techniques, coding formats and/or containers may be employed if desired.
  • encodings of dry vocals and/or pitch- corrected vocals may be uploaded (106) to content server 110.
  • vocals (encoded, e.g., in an MPEG-4 container or otherwise), whether pitch-corrected at the capture device or at content server 110, can then be mixed (111), e.g., with backing audio and other captured (and possibly pitch-shifted) vocal performances, to produce files or streams of quality or coding characteristics selected in accord with capabilities or limitations of a particular target device or network (e.g., handheld 120, audiovisual display and/or set-top box equipment, a social media platform, etc.).
  • performances of multiple vocalists may be accreted and combined, such as to present as a duet-style performance, glee club, window-paned music video- style composition or vocal jam session.
  • For example, in the illustration of FIG. 1, performance synchronized video 122 includes a performance captured at handheld 101 or using audiovisual and/or set-top box equipment.
  • Video effects applied thereto are based at least in part on application of a video effects (VFX) schedule selected (113) based either on user selection or a computationally-determined mood.
  • one or more VFX schedules may be a mood-denominated set of recipes and/or filters that may be applied to present a particular mood.
  • Segmentation and VFX Engine 112 determines musical structure and applies particular visual effects in accordance with the selected video effects schedule. In general, the particular visual effects applied are based on segmentation of vocal and/or backing track audio, determined or coded musical structure, a selected or detected mood or style, and computationally-determined vocal or audio intensity.
  • VFX schedule selection may be by a user at handheld 101 or using audiovisual display and/or set-top box equipment.
  • a user may select a mood-denominated VFX schedule that includes video effects selected to provide a palette of "sad" or "somber" video processing effects. One such palette may provide and apply, in connection with determined or coded musical structure, filters providing colors, saturations and contrast that tend to evoke a "sad" or "somber" mood, provide transitions between source videos with little visual energy, and/or include particle-based effects that present rain, fog, or other effects consistent with the selected mood.
  • other palettes may provide and apply, again in connection with determined or coded musical structure, filters providing colors, saturations and contrast that tend to evoke a "peppy" or "energetic" mood, provide transitions between source videos with significant visual energy or movement, or include lens flares or particle-based effects that augment a visual scene with bubbles, balloons, fireworks or other visual features consistent with the selected mood.
  • recipes and/or filters of a given VFX schedule may be parameterized, e.g., based on computational features, such as average vocal energy, extracted from audio performances or based on tempo, beat, or audio energy of backing tracks.
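A sketch of such parameterization, in which extracted features perturb a hypothetical mood palette's filter settings (palette contents, feature ranges and scaling constants are all assumptions for illustration, not the disclosed recipes):

```python
def parameterize_filter(mood: str, avg_vocal_energy: float, tempo_bpm: float) -> dict:
    """Adjust a mood palette's color-filter parameters using extracted audio features."""
    palettes = {
        "sad":   {"saturation": 0.5, "contrast": 0.9, "particles": "rain"},
        "peppy": {"saturation": 1.3, "contrast": 1.1, "particles": "bubbles"},
    }
    params = dict(palettes[mood])
    # More energetic vocals push saturation further from neutral;
    # faster songs get a touch more contrast.
    params["saturation"] *= 1.0 + 0.2 * (avg_vocal_energy - 0.5)
    params["contrast"]   *= 1.0 + 0.1 * (tempo_bpm - 100.0) / 100.0
    return params

print(parameterize_filter("peppy", avg_vocal_energy=0.8, tempo_bpm=128.0))
```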
  • lyrics or musical selection metadata may be employed for VFX schedule selection.
  • visual effects schedules may, in some cases or embodiments, be iteratively selected and applied to a given performance or partial performance, e.g., as a user or a contributing vocalist or a post-process video editor seeks to create a particular mood, be it "sad,” “pensive,” “peppy” or “romantic.”
  • FIG. 1 depicts the supply of other captured AV performances #2, #3 ... #N for audio mix and visual arrangement 111 at content server 110 to produce performance synchronized video 122.
  • applied visual effects may be varied throughout the mixed audiovisual performance rendering 123 in accord with a particular visual effects schedule and segmentation of one or more of the constituent AV performances.
  • segmentation may be based on signal processing of vocal audio and/or based on precoded musical structure, including vocal part or section notations, phrase or repetitive structure of lyrics, etc.
  • FIGs. 2A, 2B and 2C are successive snapshots 191, 192 and 193 of vocal performance synchronized video along a coordinated audiovisual performance timeline 151 wherein, in accordance with some embodiments of the present invention, video 123 for one, the other or both of two contributing vocalists has video effects applied based on a mood and based on a computationally-defined audio feature such as vocal intensity computed over the captured vocals.
  • In a first portion of the performance timeline (represented by snapshot 191), VFX are applied to performance synchronized video for a single performer based on a selected or detected mood for that performer and a current vocal intensity. In a second portion (represented by snapshot 192), VFX are applied to performance synchronized video of both performers based on a joint or composited mood (whether detected or selected) for the performers and a current measure of joint vocal intensity.
  • performance timeline 151 carries the performance synchronized audio and video.
  • different portions of the performance timeline will be expected to apply, based on musical structure of the audio, different aspects of a particular VFX schedule (e.g., different VFX recipes and VFX filters thereof), as reflected in snapshots 191, 192 and 193.
  • FIGs. 3A, 3B and 3C illustrate an exemplary implementation of a segmentation and visual effects (VFX) engine.
  • FIG. 3A depicts information flows involving an exemplary coding of musical structure 115 in which audio features of performance synchronized vocal tracks (e.g., vocal #1 and vocal #2) and a backing track are extracted to provide segmentation and annotation for musical structure coding 115.
  • Feature extraction and segmentation 117 provides the annotations and transition markings of musical structure coding 115 to apply recipes and filters from a selected visual effects schedule prior to video rendering 119.
  • feature extraction and segmentation operates on:
  • backing tracks: tempo, instantaneous loudness, beat detection.
  • a vocal track is treated as consisting of singing and silence segments.
  • Feature extraction seeks to classify portions of a solo vocal track into silence and singing segments. For duet vocal tracks with parts 1 and 2, feature extraction seeks to classify portions into silence, part 1 singing, part 2 singing, and singing-together segments.
  • Once segments are identified, segment typing is performed. For example, in some implementations, a global average vocal intensity and average vocal intensities per segment are computed to determine the "musical intensity" of each segment with respect to a particular singer's performance of a song. Stated differently, segmentation algorithms seek to determine whether a given section is a "louder" section or a "quieter" section. The start time and end time of every lyric line are also retrieved from the lyric metadata in some implementations to facilitate segment typing.
  • Valid segment types and classification criteria include:
  • Feature extraction and segmentation 117 may also include further audio signal processing to extract the timing of beats and down beats in the backing track, and to align the determined segments to down beats.
  • a Beats Per Minute (BPM) measure is calculated to determine the tempo of the song, and moments such as climax, hold and crescendo are identified using vocal intensities and pitch information.
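A sketch of the beat-extraction and boundary-alignment step, assuming the librosa library (or any beat tracker with comparable output) is available; snapping to the nearest detected beat is an illustrative simplification of aligning segments to down beats:

```python
import numpy as np
import librosa  # assumed available; any comparable beat tracker would do

def align_boundaries_to_beats(audio_path: str, boundaries_sec: list[float]):
    """Estimate tempo and beat times from a backing track, then snap
    candidate segment boundaries to the nearest detected beat."""
    y, sr = librosa.load(audio_path, mono=True)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    snapped = [float(beat_times[np.argmin(np.abs(beat_times - b))])
               for b in boundaries_sec]
    return float(np.atleast_1d(tempo)[0]), snapped

# Usage (the path is illustrative):
# bpm, snapped = align_boundaries_to_beats("backing_track.m4a", [12.3, 47.9])
```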
  • moment types and classification criteria may include:
  • Climax: a segment is marked as a climax segment if it has the highest vocal intensity.
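Segment typing by relative vocal intensity, including the climax rule above, reduces to a comparison against the global average; a minimal sketch (names and labels are illustrative):

```python
def type_segments(segment_intensities: list[float]) -> list[str]:
    """Classify segments as 'louder' or 'quieter' relative to the global average
    vocal intensity; the most intense segment is marked 'climax'."""
    global_avg = sum(segment_intensities) / len(segment_intensities)
    labels = ["louder" if s > global_avg else "quieter" for s in segment_intensities]
    labels[max(range(len(segment_intensities)),
               key=segment_intensities.__getitem__)] = "climax"
    return labels

print(type_segments([0.35, 0.6, 0.4, 0.9, 0.5]))
# ['quieter', 'louder', 'quieter', 'climax', 'quieter']
```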
  • FIG. 3B depicts additional detail for an embodiment that decomposes its visual effect schedules into video style-denominated recipes (116B) used for VFX planning and particular video filters (116A) used in an exemplary VFX rendering pipeline.
  • Video style may be user selected or, in some embodiments, may be selected based on a computationally-determined mood.
  • multiple recipes are defined and specialized for particular song tempos, recording type (SOLO, duet, or partner artist), etc.
  • a recipe typically defines the visual effects such as layouts, transitions, post-processing, color filter, watermarks, and logos for each segment type or moment.
  • Based on the determined tempo and recording type of a song, VFX planner 118 selects an appropriate recipe from the set (116B) thereof.
  • VFX planner 118 maps the extracted features (segments and moments that were annotated or marked in musical structure coding 115, as described above) to particular visual effects based on the selected video style recipe (116B).
  • VFX planner 118 generates a video rendering job containing a series of visual effect configurations. For each visual effect configuration, one set of configuration parameters is generated, including the name of a prebuilt video effect, input video, start and end times, backing track and vocal intensities during the effect, beat timing information during the effect, and specific control parameters of the video effect.
  • Video effects specified in the configuration can be pre-built and coded for direct use by the VFX renderer 119 to render the coded video effect.
  • Vocal intensities and backing track intensities are used to drive the visual effects.
  • Beats timing information is used to align applied video effects with audio.
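A rendering job of the kind described might be assembled from configuration records like the following (a hypothetical sketch; the dataclass fields mirror the parameters enumerated above but are not the actual implementation's schema):

```python
from dataclasses import dataclass, field

@dataclass
class VisualEffectConfig:
    effect_name: str           # name of a prebuilt video effect
    input_video: str
    start_sec: float
    end_sec: float
    vocal_intensity: float     # drives effect magnitude
    beat_times: list = field(default_factory=list)  # for audio alignment
    controls: dict = field(default_factory=dict)    # effect-specific parameters

def plan_render_job(segments: list[dict], recipe: dict) -> list[VisualEffectConfig]:
    """Map typed segments to effect configurations using a style recipe."""
    return [VisualEffectConfig(effect_name=recipe[s["type"]],
                               input_video=s["video"],
                               start_sec=s["start"], end_sec=s["end"],
                               vocal_intensity=s["intensity"])
            for s in segments if s["type"] in recipe]

recipe = {"chorus": "lens_flare", "climax": "fireworks"}
segments = [{"type": "chorus", "video": "vocal1.mp4",
             "start": 22.0, "end": 41.0, "intensity": 0.8}]
print(plan_render_job(segments, recipe))
```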
  • FIG. 3C graphically depicts an exemplary mapping of vocal parts and segments to visual layouts, transitions, post-processed video effects and particle-based effects, such as may be represented as musical structure coding 115 (recall FIG. 3A) or, in some embodiments, by video style-denominated recipes (116B) used for VFX planning and particular video filters (116A) for VFX rendering.
  • FIG. 4 depicts a variation on previously-described information flows.
  • FIG. 4 depicts flows amongst illustrative mobile phone-type portable computing devices in a host and guest configuration in accordance with some embodiments of the present invention(s) in which a visual effects schedule is applied to a live-stream, duet-type group audiovisual performance.
  • a current host user of current host device 101 B at least partially controls the content of a live stream 122 that is buffered for, and streamed to, an audience on devices 120A, 120B ... 120N.
  • a current guest user of current guest device 101A contributes to the group audiovisual performance mix 111 that is supplied (eventually via content server 110) by current host device 101 B as live stream 122.
  • Although devices 120A, 120B ... 120N and, indeed, current guest and host devices 101A, 101B are, for simplicity, illustrated as handheld devices such as mobile phones, persons of skill in the art having benefit of the present disclosure will appreciate that any given member of the audience may receive live-stream 122 on any suitable computer, smart television, tablet, or via a set-top box or other streaming-media-capable client.
  • Content that is mixed to form group audiovisual performance mix 111 is captured, in the illustrated configuration, in the context of karaoke-style performance capture wherein lyrics 102, optional pitch cues 105 and, typically, a backing track 107 are supplied from content server 110 to either or both of current guest device 101 A and current host device 101 B.
  • a current host typically exercises ultimate control over the live stream, e.g., by selecting a particular user (or users) from the audience to act as the current guest(s), by selecting a particular song from a request queue (and/or vocal parts thereof for particular users), and/or by starting, stopping or pausing the group AV performance.
  • the guest user may (in some embodiments) start/stop/pause the roll of backing track 107A for local audible rendering and otherwise control the content of guest mix 106 (backing track roll mixed with captured guest audiovisual content) supplied to current host device 101 B.
  • Roll of lyrics 102A and optional pitch cues 105A at current guest device 101A is in temporal correspondence with the backing track 107A.
  • backing audio and/or video may be rendered from a media store such as an iTunesTM library resident or accessible from a handheld, set-top box, etc.
  • Segmentation and VFX engine functionality such as previously described (recall FIG. 1, segmentation and VFX engine 112) may, in the guest-host, live-stream configuration of FIG. 4, be distributed to host 101B, guest 101A and/or content server 110. Descriptions of segmentation and VFX engine 112 relative to FIGs. 3A, 3B and 3C will thus be understood to analogously describe implementations of similar functionality 112A, 112B and/or 112C relative to devices or components of FIG. 4.
  • song requests 132 are audience-sourced and conveyed by signaling paths to content selection and guest queue control logic 112 of content server 110.
  • Host controls 131 and guest controls 133 are illustrated as bi-directional signaling paths.
  • Other queuing and control logic configurations consistent with the operations described, including host or guest controlled queuing and/or song selection, will be appreciated based on the present disclosure.
  • current host device 101B receives and audibly renders guest mix 106 as a backing track against which the current host's audiovisual performance is captured at current host device 101B.
  • Roll of lyrics 102B and optional pitch cues 105B at current host device 101 B is in temporal correspondence with the backing track, here guest mix 106.
  • To facilitate synchronization to guest mix 106 in view of temporal lag in the peer-to-peer communications channel between current guest device 101A and current host device 101B, as well as for guest-side start/stop/pause control, marker beacons may be encoded in the guest mix to provide the appropriate phase control of lyrics 102B and optional pitch cues 105B on screen.
  • Alternatively or additionally, phase analysis of any backing track 107A included in guest mix 106 may be used to provide the appropriate phase control of lyrics 102B and optional pitch cues 105B.
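A minimal sketch of beacon-driven phase control (the beacon record, the one-second period, and the extrapolation policy are assumptions for illustration, not the disclosed protocol):

```python
class LyricPhaseController:
    """Keep lyric/pitch-cue roll in phase with the incoming guest mix by
    extrapolating from the most recently received marker beacon."""

    def __init__(self, beacon_period_sec: float = 1.0):
        self.beacon_period = beacon_period_sec
        self.last_beacon_media_time = 0.0   # position in the guest's media timeline
        self.last_beacon_local_time = 0.0   # host clock when that beacon arrived

    def on_beacon(self, media_time_sec: float, local_time_sec: float) -> None:
        self.last_beacon_media_time = media_time_sec
        self.last_beacon_local_time = local_time_sec

    def current_media_time(self, local_time_sec: float) -> float:
        """Extrapolate the guest-mix position so on-screen lyrics roll in phase."""
        return self.last_beacon_media_time + (local_time_sec - self.last_beacon_local_time)

ctl = LyricPhaseController()
ctl.on_beacon(media_time_sec=30.0, local_time_sec=1000.25)
print(ctl.current_media_time(1000.75))  # -> 30.5: position lyric roll mid-line
```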
  • any of a variety of communications channels may be used to convey audiovisual signals and controls between current guest device 101 A and current host device 101 B, as well as between the guest and host devices 101A, 101 B and content server 110 and between audience devices 120A, 120B ... 120N and content server 110.
  • respective telecommunications carrier wireless facilities and/or wireless local area networks and respective wide-area network gateways may provide communications to and from devices 101A, 101 B, 120A, 120B ... 120N.
  • any of a variety of data communications facilities, including 802.11 Wi-Fi, Bluetooth™, 4G-LTE wireless, wired data networks, and wired or wireless audiovisual interconnects such as in accord with HDMI, AVI, or Wi-Di standards or facilities, may be employed, individually or in combination, to facilitate communications and/or audiovisual rendering described herein.
  • User vocals 103A and 103B are captured at respective handhelds 101 A, 101 B, and may be optionally pitch-corrected continuously and in real-time and audibly rendered mixed with the locally-appropriate backing track (e.g., backing track 107A at current guest device 101 A and guest mix 106 at current host device 101 B) to provide the user with an improved tonal quality rendition of his/her own vocal performance.
  • Pitch correction is typically based on score-coded note sets or cues (e.g., the pitch and harmony cues 105A, 105B visually displayed at current guest device 101 A and at current host
  • pitch correction settings may be characteristic of a particular artist such as the artist that performed vocals associated with the particular backing track.
  • lyrics, melody and harmony track note sets and related timing and control information may be encapsulated in an appropriate container or object (e.g., in a Musical Instrument Digital Interface, MIDI, or Java Script Object Notation, json, type format) for supply together with the backing track(s).
  • devices 101 A and 101 B (as well as associated audiovisual displays and/or set-top box equipment, not specifically shown) may display lyrics and even visual cues related to target notes, harmonies and currently detected vocal pitch in correspondence with an audible performance of the backing track(s) so as to facilitate a karaoke-style vocal performance by a user.
  • your_man.json and your_man.m4a may be downloaded from the content server (if not already available or cached based on prior download) and, in turn, used to provide background music, synchronized lyrics and, in some situations or embodiments, score-coded note tracks for continuous, real-time pitch-correction while the user sings.
  • harmony note tracks may be score coded for harmony shifts to captured vocals.
  • a captured pitch-corrected (possibly harmonized) vocal performance together with performance synchronized video is saved locally, on the handheld device or set-top box, as one or more audiovisual files and is subsequently compressed and encoded for communication (e.g., as guest mix 106 or group audiovisual performance mix 111 or constituent encodings thereof) to content server 110 as an MPEG-4 container file.
  • MPEG-4 is one suitable standard for the coded representation and transmission of digital multimedia content; other suitable codecs, compression techniques, coding formats and/or containers may be employed if desired.
  • performances of multiple vocalists may be accreted and combined, such as to form a duet- style performance, glee club, or vocal jam session.
  • social network constructs may at least partially supplant or inform host control of the pairings of geographically-distributed vocalists and/or formation of geographically-distributed virtual glee clubs.
  • individual vocalists may perform as current host and guest users in a manner captured (with vocal audio and performance synchronized video) and eventually streamed as a live stream 122 to an audience.
  • Such captured audiovisual content may, in turn, be distributed to social media contacts of the vocalist, members of the audience etc., via an open call mediated by the content server.
  • the vocalists themselves, members of the audience (and/or the content server or service platform on their behalf) may invite others to join in a coordinated audiovisual performance.
  • FIG. 5 is a flow diagram illustrating information transfers that contribute to or involve a composited audiovisual performance 211 segmented to provide musical structure for video effects mapping in accordance with some embodiments of the present invention(s).
  • Intensity of video effects applied from video effects schedule 210 is determined based on an intensity measure from the captured audiovisual performance (typically vocal intensity), although energy density of one or more audio tracks, including a backing track, may be included in some cases or embodiments.
  • a user/vocalist sings along with a backing track karaoke style.
  • Vocals captured from a microphone input 201 are continuously pitch-corrected (252) and harmonized (255) in real-time for mix (253) with the backing track which is audibly rendered at one or more acoustic transducers 202.
  • Both pitch correction and added harmonies are chosen to correspond to pitch tracks 207 of a musical score, which in the illustrated configuration, is wirelessly communicated (261 ) to the device(s) (e.g., from content server 110 to handheld 101 or set-top box equipment, recall FIG. 1 ) on which vocal capture and pitch-correction is to be performed, together with lyrics 208 and an audio encoding of the backing track 209.
  • pitch corrected or shifted vocals may be combined (254) or aggregated for mix (253) with an audibly-rendered backing track and/or communicated (262) to content server 110 or a remote device (e.g., handheld 120 or 520, television and/or set-top box equipment, or some other media-capable, computational system 511 ).
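The target-selection step of such score-coded pitch correction can be sketched as choosing, per analysis frame, the nearest score-coded pitch and the corrective shift a resynthesis stage (e.g., PSOLA, not shown) would apply; the detected pitch and target set below are illustrative values, not part of the disclosed implementation:

```python
import numpy as np

def correction_cents(detected_hz: float, target_track_hz: list[float]) -> tuple[float, float]:
    """Pick the nearest score-coded target (in log-frequency space) and report
    the shift, in cents, needed to pull the detected pitch onto it."""
    targets = np.asarray(target_track_hz, dtype=float)
    nearest = float(targets[np.argmin(np.abs(np.log2(targets / detected_hz)))])
    cents = 1200.0 * np.log2(nearest / detected_hz)
    return nearest, cents

# Score-coded targets around A3-A4; a slightly flat A4 gets pulled to 440 Hz.
scale = [220.0, 246.9, 261.6, 293.7, 329.6, 349.2, 392.0, 440.0]
target, cents = correction_cents(433.0, scale)
print(f"shift to {target:.1f} Hz ({cents:+.1f} cents)")
```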
  • pitch correction or shifting of vocals and/or segmentation of audiovisual performances may be performed at content server 110.
  • Segmentation and VFX engine functionality such as previously described (recall FIG. 1, segmentation and VFX engine 112) may, in other embodiments, be deployed at a handheld 101, audiovisual and/or set-top box equipment, or other user device. Accordingly, descriptions of segmentation and VFX engine 112 relative to FIGs. 3A, 3B and 3C will be understood to analogously describe implementations of similar functionality 112D relative to signal processing pipelines of FIG. 5.
  • FIG. 6 is a functional block diagram of hardware and software components executable at an illustrative mobile phone-type portable computing device to facilitate processing of a captured audiovisual performance in accordance with some embodiments of the present invention(s).
  • capture of vocal audio and performance synchronized video may be performed using facilities of television-type display and/or set-top box equipment or using a handheld device (e.g., handheld device 101).
  • FIG. 6 illustrates basic signal processing flows in accord with certain implementations suitable for mobile phone-type handheld device 101 to capture vocal audio and performance synchronized video, to generate pitch- corrected and optionally harmonized vocals for audible rendering (locally and/or at a remote target device), and to communicate with a content server or service platform 110 that includes segmentation and visual effects engine 112, whereby captured audiovisual performances are segmented to reveal musical structure and, based on the revealed musical structure, particular visual effects are applied from a video effects schedule.
  • vocal intensity is measured and utilized (in some embodiments) to vary or modulate intensity of mood-denominated visual effects.
  • FIG. 7 illustrates process steps and results of processing, in accordance with some embodiments of the present invention(s), to apply color correction and mood-denominated video effects (see 701 B, 702B) to video for respective performers (701 A and 702A) of a group performance separately captured using cameras of respective capture devices.
  • FIGs. 8A and 8B illustrate visuals for a group performance with (802) and without (801 ) use of a visual blur technique applied in accordance with some embodiments of the present invention(s).
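One simple blur that could stand in for the illustrated technique (a sketch only; edge handling wraps around for brevity, and the radius is driven by the intensity envelope sketched earlier):

    import numpy as np

    def box_blur(frame, intensity, max_radius=8):
        """Separable box blur of an HxWx3 uint8 frame; intensity in [0, 1]
        scales the blur radius."""
        r = int(round(intensity * max_radius))
        if r == 0:
            return frame
        out = frame.astype(np.float64)
        for axis in (0, 1):  # blur rows, then columns
            acc = np.zeros_like(out)
            for d in range(-r, r + 1):
                acc += np.roll(out, d, axis=axis)
            out = acc / (2 * r + 1)
        return np.clip(out, 0, 255).astype(np.uint8)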
  • FIG. 9 illustrates features of a mobile device that may serve as a platform for execution of software implementations, including audiovisual capture, in accordance with some embodiments of the present invention(s).
  • FIG. 9 is a block diagram of a mobile device 900 that is generally consistent with commercially-available versions of an iPhone™ mobile digital device.
  • the iPhone device platform, together with its rich complement of sensors, multimedia facilities, application programmer interfaces and wireless application delivery model, provides a highly capable platform on which to deploy certain implementations. Based on the description herein, persons of ordinary skill in the art will appreciate a wide range of additional mobile device platforms that may be suitable (now or hereafter) for a given implementation or deployment of the inventive techniques described herein.
  • mobile device 900 includes a display 902 that can be sensitive to haptic and/or tactile contact with a user.
  • Touch-sensitive display 902 can support multi-touch features, processing multiple simultaneous touch points, including processing data related to the pressure, degree and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers and other interactions.
  • other touch-sensitive display technologies can also be used, e.g., a display in which contact is made using a stylus or other pointing device.
  • mobile device 900 presents a graphical user interface on the touch-sensitive display 902, providing the user access to various system objects and conveying information to the user.
  • the graphical user interface can include one or more display objects 904, 906.
  • the display objects 904, 906 are graphic representations of system objects. Examples of system objects include device functions, applications, windows, files, alerts, events, or other identifiable system objects.
  • applications, when executed, provide at least some of the digital acoustic functionality described herein.
  • the mobile device 900 supports network connectivity including, for example, both mobile radio and wireless internetworking functionality to enable the user to travel with the mobile device 900 and its associated network-enabled functions.
  • the mobile device 900 can interact with other devices in the vicinity (e.g., via Wi-Fi, Bluetooth, etc.).
  • mobile device 900 can be configured to interact with peers or a base station for one or more devices.
  • mobile device 900 may grant or deny network access to other wireless devices.
  • Mobile device 900 includes a variety of input/output (I/O) devices, sensors and transducers.
  • a speaker 960 and a microphone 962 are typically included to facilitate audio, such as the capture of vocal performances.
  • speaker 960 and microphone 962 may provide appropriate transducers for techniques described herein.
  • An external speaker port 964 can be included to facilitate hands-free voice functionalities, such as speaker phone functions.
  • An audio jack 966 can also be included for use of headphones and/or a microphone.
  • an external speaker and/or microphone may be used as a transducer for the techniques described herein.
  • a proximity sensor 968 can be included to facilitate the detection of user positioning of mobile device 900.
  • an ambient light sensor 970 can be utilized to facilitate adjusting brightness of the touch-sensitive display 902.
  • An accelerometer 972 can be utilized to detect movement of mobile device 900, as indicated by the directional arrow 974. Accordingly, display objects and/or media can be presented according to a detected orientation, e.g., portrait or landscape.
  • mobile device 900 may include circuitry and sensors for supporting a location determining capability, such as that provided by the global positioning system (GPS) or other positioning systems (e.g., systems using Wi-Fi access points, television signals, cellular grids, Uniform Resource Locators (URLs)) to facilitate geocodings described herein.
  • Mobile device 900 also includes a camera lens and imaging sensor 980.
  • instances of a camera lens and sensor 980 are located on front and back surfaces of the mobile device 900.
  • the cameras allow capture of still images and/or video for association with captured pitch-corrected vocals.
  • Mobile device 900 can also include one or more wireless communication subsystems, such as an 802.11b/g/n/ac communication device, and/or a Bluetooth™ communication device 988.
  • Other communication protocols can also be supported, including other 802.x communication protocols (e.g., WiMax, Wi-Fi, 3G), fourth generation protocols and modulations (4G-LTE) and beyond (e.g., 5G), code division multiple access (CDMA), global system for mobile communications (GSM), Enhanced Data GSM Environment (EDGE), etc.
  • a port device 990, e.g., a Universal Serial Bus (USB) port, or a docking port, or some other wired port connection, can be included and used to establish a wired connection to other computing devices, such as other communication devices 900, network access devices, a personal computer, a printer, or other processing devices capable of receiving and/or transmitting data.
  • Port device 990 may also allow mobile device 900 to synchronize with a host device using one or more protocols, such as, for example, TCP/IP, HTTP, UDP and any other known protocol.
  • FIG. 10 is a network diagram that illustrates cooperation of exemplary devices in accordance with some embodiments of the present invention(s). In particular, FIG. 10 illustrates respective instances of handheld devices or portable computing devices such as mobile device 1001 employed in audiovisual capture and programmed with vocal audio and video capture code, user interface code, pitch correction code, an audio rendering pipeline and playback code in accord with the functional descriptions herein.
  • a first device instance is depicted as, for example, employed in vocal audio and performance synchronized video capture, while device instance 1020A operates in a presentation or playback mode for a mixed audiovisual performance with dynamic visual prominence for performance synchronized video.
  • equipment 1020B is likewise depicted operating in a presentation or playback mode, although as described elsewhere herein, such equipment may also operate as part of a vocal audio and performance synchronized video capture facility.
  • Each of the aforementioned devices communicates via wireless data transport and/or intervening networks 1004 with a server 1012 or service platform that hosts storage and/or functionality explained herein with regard to content server 110 (recall FIGs. 1, 4, 5 and 6).
  • Captured, pitch-corrected vocal performances, mixed with performance synchronized video to present a mixed AV performance rendering with applied visual effects as described herein, may (optionally) be streamed and audiovisually rendered at laptop computer 1011.
  • Embodiments in accordance with the present invention may take the form of, and/or be provided as, a computer program product encoded in a machine-readable medium as instruction sequences and other functional constructs of software, which may in turn be executed in a computational system (such as an iPhone handheld, mobile or portable computing device, or content server platform) to perform methods described herein.
  • a machine-readable medium can include tangible articles that encode information in a form (e.g., as applications, source or object code, functionally descriptive information, etc.) readable by a machine (e.g., a computer, computational facilities of a mobile device or portable computing device, etc.) as well as tangible storage incident to transmission of the information.
  • a machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., disks and/or tape storage); optical storage medium (e.g., CD-ROM, DVD, etc.); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions, operation sequences, functionally descriptive information encodings, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Auxiliary Devices For Music (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)

Abstract

Visual effects schedules are applied to audiovisual performances, with different visual effects applied in correspondence with different elements of a musical structure. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, the applied visual effects schedules are mood-denominated and may be selected by a performer as a component of his or her visual expression, or determined from an audiovisual performance using machine learning techniques.
PCT/US2018/047325 2017-08-21 2018-08-21 Système d'effets audiovisuels permettant une augmentation d'une interprétation capturée sur la base de son contenu WO2019040492A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112018004717.2T DE112018004717T5 (de) 2017-08-21 2018-08-21 System für audiovisuelle Effekte zur Erweiterung einer aufgenommenen Darbietung basierend auf deren Inhalt
CN201880054029.4A CN111345044B (zh) 2017-08-21 2018-08-21 基于所捕获的表演的内容来增强该表演的视听效果系统

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762548122P 2017-08-21 2017-08-21
US62/548,122 2017-08-21

Publications (1)

Publication Number Publication Date
WO2019040492A1 true WO2019040492A1 (fr) 2019-02-28

Family

ID=65439230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/047325 WO2019040492A1 (fr) 2017-08-21 2018-08-21 Système d'effets audiovisuels permettant une augmentation d'une interprétation capturée sur la base de son contenu

Country Status (3)

Country Link
CN (1) CN111345044B (fr)
DE (1) DE112018004717T5 (fr)
WO (1) WO2019040492A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006047754A (ja) * 2004-08-05 2006-02-16 Namco Ltd カラオケ情報配信システム、プログラム、情報記憶媒体およびカラオケ情報配信方法
JP2010060627A (ja) * 2008-09-01 2010-03-18 Bmb Corp カラオケシステム
KR20140023665A (ko) * 2012-08-17 2014-02-27 주식회사 디자인피버 합성 영상을 이용한 노래방 시스템
KR20150033757A (ko) * 2013-09-23 2015-04-02 조경환 휴대폰어플이용 노래방 텔레비젼 시스템
US20160057316A1 (en) * 2011-04-12 2016-02-25 Smule, Inc. Coordinating and mixing audiovisual content captured from geographically distributed performers

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8370747B2 (en) * 2006-07-31 2013-02-05 Sony Mobile Communications Ab Method and system for adapting a visual user interface of a mobile radio terminal in coordination with music
US20110126103A1 (en) * 2009-11-24 2011-05-26 Tunewiki Ltd. Method and system for a "karaoke collage"
US9058797B2 (en) * 2009-12-15 2015-06-16 Smule, Inc. Continuous pitch-corrected vocal capture device cooperative with content server for backing track mix
CN104580838A (zh) * 2015-01-27 2015-04-29 苏州乐聚一堂电子科技有限公司 演唱视觉特效系统及演唱视觉特效处理方法


Also Published As

Publication number Publication date
DE112018004717T5 (de) 2020-06-10
CN111345044A (zh) 2020-06-26
CN111345044B (zh) 2023-03-21

Similar Documents

Publication Publication Date Title
US20230335094A1 (en) Audio-visual effects system for augmentation of captured performance based on content thereof
US11756518B2 (en) Automated generation of coordinated audiovisual work based on content captured from geographically distributed performers
US11394855B2 (en) Coordinating and mixing audiovisual content captured from geographically distributed performers
US11553235B2 (en) Audiovisual collaboration method with latency management for wide-area broadcast
US11683536B2 (en) Audiovisual collaboration system and method with latency management for wide-area broadcast and social media-type user interface mechanics
US11972748B2 (en) Audiovisual collaboration system and method with seed/join mechanic
US10943574B2 (en) Non-linear media segment capture and edit platform
US20220051448A1 (en) Augmented reality filters for captured audiovisual performances
US20220122573A1 (en) Augmented Reality Filters for Captured Audiovisual Performances
EP3808096A1 (fr) Système et procédé de diffusion audiovisuelle en continu en direct avec gestion de latence et mécanique d'interface utilisateur de type médias sociaux
WO2016070080A1 (fr) Coordination et mixage de contenus audiovisuels capturés à partir d'artistes répartis géographiquement
WO2020006556A1 (fr) Système et procédé de collaboration audiovisuelle avec mécanique d'amorce/suivi
CN111345044B (zh) 基于所捕获的表演的内容来增强该表演的视听效果系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18847768; Country of ref document: EP; Kind code of ref document: A1)
122 Ep: pct application non-entry in european phase (Ref document number: 18847768; Country of ref document: EP; Kind code of ref document: A1)