US20130308051A1 - Method, system, and non-transitory machine-readable medium for controlling a display in a first medium by analysis of contemporaneously accessible content sources
- Publication number
- US20130308051A1 (U.S. application Ser. No. 13/895,274)
- Authority
- US
- United States
- Prior art keywords
- medium
- domain
- signal
- source
- control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L21/10—Transforming into visible information
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/611—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/20—Services signaling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel
- H04W4/21—Services signaling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel for social networking applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/06—Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
Definitions
- the present subject matter relates to a presentation of a display in a first medium, having parameters controlled, as by applying a transform, by analysis of a contemporaneously available source in a second medium, such as a soundtrack, live performance, or audio input.
- An example of a first medium played concurrently with a second medium is a video display accompanied by music.
- video displays have been synchronized to music tracks.
- a product called “PhotoCinema” marketed by a Japanese company called Digital Stage allows for a fairly sophisticated slide show to be created and viewed on a computer screen of a personal computer.
- Digital images stored on a personal computer can be presented in a variety of sequences, and individual images in a sequence can be zoomed.
- a chain of multiple images can be made to move from left to right across the computer screen.
- a chain of multiple images can be made to move from top to bottom across the computer screen.
- Music can be selected to accompany the slide show.
- the video program produced is simply based on arbitrary selections by a user. While the music accompanies a slideshow, the synchronization between the audio and visual components of the presentation is prepared in advance of performance. Parameters of the synchronization are not determined dynamically.
- VJ (video jockey)
- a video jockey is generally a person who mixes a variety of video sources together to create a unique video image for display at large club events or other venues. Automated content manipulation could be provided in the alternative or in addition.
- a typical mix of images would be some pre-mixed DVDs of video images from previous events, abstract images such as proprietary visualizations, and live images from a video camera directed at the VJ or dancers in the audience, together with overlaying of text, for example, to display the name of the event, the VJ's name, or messages input by the VJ.
- the images from the respective sources are mixed by the VJ using video mixer/switcher hardware, which controls the overlay of the separate sources on a single display depending on the selected input source and fading transitions between the sources, much like audio mixers. While overlay of images can be provided, there is not a mathematical relationship between audio sources and construction of the video display.
- a product called “Avenue 4” by Resolume is a fully-featured professional VJ software tool, allowing elaborate mixing and manipulation of video sources.
- the complexity and steep learning curve of such a program make it unrealistic as a consumer tool.
- Much practice is needed before sufficient mastery can be achieved to be able to work quickly.
- Streamlining and simplifying the complexity of operation would facilitate making this tool accessible to VJs of varying skill levels.
- U.S. Pat. No. 8,402,356 discloses systems, methods, and apparatus for collecting data and presenting media to a user.
- the systems generally include a data gathering module associated with an electronic device.
- the data gathering module communicates gathered data to a management module, which manages at least one user profile based on the gathered data.
- the management module may select media for presentation to a user based on the user profile, and the selected media may be displayed to the user via a media output device co-located with the user, such as a display of the user's mobile electronic device or a television, computer, billboard, or other display.
- United States Published Patent Application No. 20110283865 discloses a system and method for visual representation of sound, wherein the system and method obtain sound information from a multimedia content.
- the system generates an icon and a directional indicator, each based on the sound information obtained.
- the sound information typically includes various attributes that can be mapped to various properties of the display elements such as the icon and directional indicator in order to provide details relating to the sound via a visual display.
- This system effectively “illustrates” sounds so that particular video cues may be given a one-to-one correspondence with particular sounds. However, no video program is generated based on the sound analysis.
- United States Published Patent Application No. 20070292832 discloses a system for creating sound using visual images.
- Various controls and features are provided for the selection, editing, and arrangement of the visual images and tones used to create a sound presentation.
- Visual image characteristics such as shape, speed of movement, direction of movement, quantity, location, etc. can be set by a user.
- playback of a first medium is not synchronized to analysis of characteristics of a second medium.
- U.S. Pat. No. 7,796,162 discloses a system in which one set of cameras generates multiple synchronized camera views for broadcast from a live venue activity to remote viewers. A user chooses which view to follow. However, there is no plan for varying the sets of images sent to users. There is no uploading capability for users.
- the present subject matter relates to a method, system, and non-transitory programmed medium for presentation of a display in a first medium such as a video display having parameters controlled by analysis of a contemporaneously available source in a second medium, such as a soundtrack, live performance, or audio input.
- a display in a first medium is controlled in response to commands generated by analyzing a second, contemporaneously available medium.
- a first medium is played for distribution by a communications link to a plurality of users.
- a contemporaneously available medium e.g., a sound track, having components in first and second domains is analyzed.
- a transform is applied to signals in the first domain to generate signals in the second domain, such as generating signals in the frequency domain, based on an audio signal amplitude waveform.
- the second domain signals are ordered according to a rule. The ordering of the second domain signals is used to produce a command signal or signals to produce time varying commands to vary at least one parameter of the video signal. Parameters may include pixelation, color saturation, contrast, or others.
- the second medium is derived from a composite of social interactions in which a plurality of users communicates with a central server. Communications are monitored and responded to in order to construct a signal comprising the second medium.
- FIG. 1 is an illustration of a venue employing the method and apparatus of the present subject matter;
- FIG. 2 is a block diagram illustrating one embodiment of hardware for implementing the system illustrated in FIG. 1;
- FIG. 3 is an illustration of a signal transformed from a time domain signal to a frequency domain signal and of data derived from the frequency domain signal;
- FIG. 4 is a block diagram illustrating a rule-based circuit for generating control signals based on data derived from the signal in a second domain, in this embodiment a frequency domain signal;
- FIG. 5 is a block diagram illustrating generation of a video program in the context of a set of collaborative social functions used as a second medium;
- FIG. 6 is a flow diagram illustrating one form of the operation of the system illustrated in FIG. 5;
- FIG. 7 illustrates correlation data for controlling production of control signals; and
- FIG. 8 is a block diagram of a system for mapping audio components to visual display controls.
- the present subject matter may be used to enhance the experience of an audience at an event by providing variations that have not been available before in displays in a first medium, e.g., a video display.
- a first medium may comprise a program presented on a video display screen.
- a second medium may comprise an audio soundtrack or an audio performance. Many characteristics of the video display may be varied in accordance with analysis of the second medium.
- a large screen video display has many different characteristics. Characteristics include hue, intensity, pixelation, color saturation, and RGB values. Control signals may be applied to command values of these and other characteristics.
- a second medium may comprise a soundtrack. In the prior art, the second medium will generally accompany the video display without affecting it. In accordance with the present subject matter, the second medium is analyzed to measure components and to generate command signals from the components based on a rule. The command signals are applied sequentially in time in order to control the video display.
- FIG. 1 is an illustration of a venue 10 comprising a system 2 in accordance with the present subject matter.
- the venue 10 may include a performance stage 12 , audience area 14 , a control room 16 , and a sound system 18 which may interact with the control room 16 in a conventional manner and which may also be coupled to a processing system as further described below.
- a video program 20 shown on a display 22 is provided in conjunction with a sound source 28 .
- the sound source 28 may comprise a prerecorded program coupled to the sound system 18 .
- the sound source 28 comprises a live performance provided by a performer or performers 29 .
- the display 22 is a screen 24 that comprises a backdrop for the performance stage 12 .
- the display 22 could comprise an array 25 of monitors over which an image is distributed.
- the display 22 could alternatively comprise a plurality of identical displays in other locations.
- the video program 20 can include matter which is synchronized with components of a performance.
- Components to which variations in the video program 20 are synchronized are provided from a synchronizing source 30 .
- the synchronizing source 30 may receive an input from the sound system 18 .
- the synchronizing source 30 provides signals from which synchronizing command signals will be generated.
- the synchronizing source 30 is coupled to the sound system 18 .
- a sound system 18 need not necessarily be operating on music.
- Other audio sources could include spoken words.
- Other sounds could also be used.
- the synchronizing source 30 could be responding to sounds of car engines at a race track.
- Sources need not necessarily be audio sources.
- Non-audio sources include phenomena that may be sensed. These sources could include ocean waves or vibration of molecules displaying nuclear magnetic resonance.
- the display 22 is driven by a video interface 42 .
- a video processor 40 provides signals for display.
- the video processor 40 may be coupled to and interact with one or more of the following components.
- a content database 50 may contain a library of video clips, still images, color patterns, and other content selectable for display.
- a portable device 60 could comprise a smartphone, tablet computer, or other portable devices that may come into existence in the future.
- Another video source is a video camera 70 .
- a control circuit 80 may be provided for selecting sources or commanding particular actions. The control circuit 80 may be commanded by an operator which for purposes of the present description is described as a video jockey or VJ 84 .
- FIG. 2 is a block diagrammatic representation which may be embodied in any of a number of monolithic integrated circuits or other components.
- the video processor 40 includes a data bus 104 which carries communications between various modules.
- various subsystems of the video processor are illustrated as discrete components. However, these subsystems could be embodied in one microcircuit chip or distributed over various components within or without the video processor 40 .
- a central processing unit (CPU) 110 is programmed to process data and generate commands to select content for a video program.
- a RAM 112 is used to facilitate CPU 110 operations.
- a transform generator 114 responds to the synchronizing source 30 ( FIG. 1 ) to produce intelligence for invoking commands to apply to video content.
- the transform generator 114 takes intelligence from the synchronizing source 30 and transforms it into data which has a meaningful relationship to parameters which will operate on the video display.
- the data is recognized and processed by an audio analyzer 116 .
- the transform generator 114 produces a Fast Fourier Transform (FFT).
- the audio analyzer 116 measures values indicative of the signal in a first domain.
- the term “audio” analyzer is used because the synchronizing source 30 will often comprise a sound source.
- the operation and outputs of the second domain analyzer 118 described with respect to FIG. 4 are coupled to a signal generator 140 .
- the signal in the first domain is analyzed to provide further control signals.
- FIG. 3 consists of FIGS. 3A, 3B, and 3C.
- FIG. 3A is an illustration of an audio signal 200 having content in the time domain and the frequency domain. Frequency components 202 , 204 , and 206 are illustrated as being components of the audio signal 200 for simplicity in illustration. Common audio signals 200 have a far greater range of frequencies.
- the signal 200 is represented in the time domain by a waveform 210 .
- the waveform 210 represents the composite of the components, and is displayed as amplitude versus time.
- the Fourier Transform provides a representation of the signal in the frequency domain using the signal 200 as an input. The result is seen in waveform 220, which plots component amplitude versus component frequency.
- FIG. 3B illustrates a first domain analysis.
- this comprises a basic RMS amplitude analysis of the audio signal.
- RMS amplitude is a measure of overall perceived “loudness.”
- Amplitude points 230 are derived periodically.
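The first-domain analysis described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and the 440 Hz test tone are not from the patent.

```python
import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of a block of audio samples,
    a common proxy for overall perceived loudness."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A full-scale sine wave has an RMS of amplitude / sqrt(2) ~= 0.707.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(rms_amplitude(sine), 3))  # -> 0.707
```

In a real system the amplitude points 230 would be produced by calling such a function periodically on successive short blocks of the incoming audio.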
- FIG. 3C illustrates data generation in an embodiment in which the basis for video control signals is based on frequency of signals received by second domain analyzer 118 ( FIG. 2 ).
- the second domain analyzer 118 comprises discriminators and filters to resolve the continuous waveform 220 into “bins” 240 , each having a particular frequency width. In effect, a bar graph is generated with each bar comprising one bin 240 .
- the bins 240 of adjacent frequencies are manipulated according to a rule to develop an amplitude vector for the bass, mid-range, and treble portions of the spectrum of music in a performance.
- Each of these broad frequency bins 240 is coupled to the signal generator 140 .
- the rule utilized to produce a control signal for operating on the video signal is to produce a time-varying amplitude value in accordance with outputs corresponding to a current value of a selected bin or bins in the second domain analyzer 118 .
- bins 240 are preselected to develop an amplitude vector for the bass, mid-range, and treble portions of the spectrum of music.
- Each amplitude vector comprises a respective value for each of three control signals for each of three video characteristics.
- audio is analyzed in order to derive values that are applied to various modifying functions on the video signal.
- a windowed-FFT analysis of the audio is performed, and bins are defined by the cumulative amplitude of a range of frequencies. Bins may have overlapping boundaries. For example, all of the amplitude energy from 50 Hz to 400 Hz defines the range for one bin; a single amplitude value is produced which represents the amount of low-frequency energy in each window. Another bin may collect values for the energy from 300 Hz to 700 Hz. Further bins may be similarly defined. This then produces a time-varying set of single-valued controls that can be mapped onto one or many control parameters.
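The windowed analysis and binning can be sketched as follows. This is a minimal Python illustration: a naive DFT stands in for the FFT, the sample rate and window length are arbitrary, and the bin edges echo the example ranges in the text; none of these values are prescribed by the patent.

```python
import cmath
import math

def dft_magnitudes(window):
    """Magnitude spectrum of one analysis window (naive O(N^2) DFT;
    a real implementation would use an FFT)."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def bin_energy(mags, rate, n, lo_hz, hi_hz):
    """Cumulative amplitude of all spectral lines whose frequency falls
    in [lo_hz, hi_hz); ranges for different bins may overlap."""
    return sum(m for k, m in enumerate(mags)
               if lo_hz <= k * rate / n < hi_hz)

# A 100 Hz tone should register in a 50-400 Hz "bass" bin
# and contribute essentially nothing to a 500-700 Hz bin.
rate, n = 8000, 400
window = [math.sin(2 * math.pi * 100 * t / rate) for t in range(n)]
mags = dft_magnitudes(window)
bass = bin_energy(mags, rate, n, 50, 400)
mid = bin_energy(mags, rate, n, 500, 700)
print(bass > 10 * mid)  # -> True
```

Running this per window yields one time-varying amplitude value per bin, i.e., the set of single-valued controls described above.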
- the control parameters embody various functions which are dynamically manipulating the video signal at the same time.
- the degree of visual effects to be applied to the video clips and stills is selected in accordance with the time-varying amplitude of respective control signals.
- the time varying amplitudes are mapped onto the selected parameters.
- the time-varying amount of energy in a bass bin 240 determines the amount of pixelation applied to a video source.
- Visual effects that could be controlled include pixelation, tiling, pan and zoom, sepia tone, and distortion.
- the production of the effects which contribute to the video program 20 needs to be achieved in time to provide the video program in synchronism with the synchronizing source 30 .
- the synchronizing source 30 looks at the audio input ahead of the actual audio playback.
- a nominal, satisfactory lead time is on the order of tenths of a second. Transformation from the first domain into the second domain and generation of control signals must be performed in time to produce the desired effect at the same time that the analyzed music exits the sound system.
- the video processor 40 ( FIG. 1 ) measures basic RMS amplitude.
- the RMS amplitude signal may be coupled to determine when and how often to change from one video source clip or image to the next.
- a minimum and maximum time are selected that must elapse before changing video sources. Once the minimum time has elapsed, a sound level is selected to trigger a change in video source. The trigger may be produced in response to a signal crossing a preselected amplitude threshold. Alternatively, an input circuit may resolve a selected peak in the RMS amplitude value to “trigger” a change in source. If the maximum time elapses without a trigger stimulus, a “timeout” signal triggers the change in the video source.
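The clip-change rule above amounts to a small state machine, sketched here in Python. The class name, the use of a fixed threshold rather than peak detection, and the example times are illustrative assumptions.

```python
class ClipChangeTrigger:
    """Decides when to cut to the next video clip: never before
    min_time, on an RMS threshold crossing once min_time has elapsed,
    and unconditionally ("timeout") once max_time elapses."""

    def __init__(self, min_time, max_time, threshold):
        self.min_time = min_time
        self.max_time = max_time
        self.threshold = threshold
        self.elapsed = 0.0

    def update(self, dt, rms):
        """Advance by dt seconds with the current RMS level; return
        True when a source change should fire."""
        self.elapsed += dt
        if self.elapsed >= self.max_time:
            fired = True   # timeout trigger
        elif self.elapsed >= self.min_time and rms >= self.threshold:
            fired = True   # loudness trigger
        else:
            fired = False
        if fired:
            self.elapsed = 0.0
        return fired

trig = ClipChangeTrigger(min_time=2.0, max_time=8.0, threshold=0.6)
print(trig.update(1.0, 0.9))  # loud, but before min_time -> False
print(trig.update(1.5, 0.9))  # past min_time and loud -> True
print(trig.update(8.5, 0.1))  # quiet, but past max_time -> True
```

An `update` call per analysis window is enough to drive the source-selection logic.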
- the video database of moving and still images may be preprogrammed into the database 50 ( FIG. 1 ).
- the VJ 84 may select material from a given storage location or may access media through a real-time search on a search engine or on tags of other data sources.
- a resulting composite video/audio composition may be saved to a standard QuickTime file.
- Streaming either to an external monitor, or to a streaming host on the Internet, may provide for live sharing.
- FIG. 4 is a block diagram illustrating a rule-based circuit for generating control signals based on data derived from the signal in a second domain, which in the present illustration is the frequency domain.
- the Fourier transform generator 114 provides an input signal in the second domain to the second domain analyzer 118 .
- the second domain analyzer 118 includes an arithmetic unit 250 to produce the signals represented as bins 240 in FIG. 3C .
- the signals are separated within frequency ranges, and a signal indicating an amplitude per frequency range per timeslot is stored in a data memory 260 .
- a clock circuit 265 clocks the data memory 260 to provide one set of amplitudes for a current time into a control signal register 270 .
- control signal register 270 addresses a lookup table 274 , which provides respective outputs to a gain control amplifier 276 .
- the gain control amplifier 276 provides the parameter control signals to the video processor 40.
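In software, the lookup table 274 and gain control amplifier 276 reduce to an indexed table of gain values. The sketch below assumes a square-law curve and a 16-entry table; the patent does not specify either, so both are illustrative.

```python
def make_lookup_table(curve, size=16):
    """Quantized lookup table mapping a normalized bin amplitude to a
    gain value, standing in for lookup table 274. The square-law curve
    passed below is an assumed example, not the patent's."""
    return [curve(i / (size - 1)) for i in range(size)]

def parameter_control(amplitude, table):
    """Clock in one amplitude sample (0.0-1.0) and look up the gain
    to apply as a parameter control signal."""
    idx = min(int(amplitude * len(table)), len(table) - 1)
    return table[idx]

table = make_lookup_table(lambda x: x * x)  # emphasize loud passages
print(round(parameter_control(0.5, table), 3))
```

One such lookup per clocked timeslot reproduces the register 270 / table 274 / amplifier 276 chain of FIG. 4.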
- FIG. 5 is a block diagram illustrating an embodiment in which the second medium comprises a set of collaborative social functions.
- individual devices 300-1 through 300-n, such as smart phones or tablet computers, each store an application, or app.
- Each portable device contains its own music library 302 and a program memory 304 storing an app 306 .
- Each portable device 300 includes a microprocessor 310 .
- the portable devices 300 each interact via a communications link 330 with a video processor 340 .
- the communications link 330 may comprise the Internet, telephone connections, satellite communications, and other forms of communication alone or in combination.
- the video processor 340 includes a data bus 342 and a CPU 344 . The interaction with the video processor 340 is shown for convenience and is not essential. A separate processor could be used to interact with the portable devices 300 .
- the present system uses an indication of the similarity of the contents of one user's music library 302 to that of another user.
- the similarity is a value based on metrics associated with each entry in a music library 302 .
- characterizations are provided to indicate musical compatibility of other users who have lesser degrees of similarity in musical tastes. Characterization may be performed in the microprocessor 310 of a device 300 in accordance with the app 306 . Alternatively, the app 306 may be uploaded to the CPU 344 for performing the characterization.
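The patent does not fix a particular similarity metric for two music libraries. As one plausible choice for illustration, a Jaccard index over the users' track sets yields a value between 0 and 1:

```python
def library_similarity(lib_a, lib_b):
    """Illustrative similarity metric for two users' music libraries:
    the Jaccard index over their sets of tracks. This is an assumed
    metric; the patent only requires some value based on library
    metrics."""
    a, b = set(lib_a), set(lib_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

alice = {"song1", "song2", "song3", "song4"}
bob = {"song3", "song4", "song5", "song6"}
print(library_similarity(alice, bob))  # 2 shared of 6 total -> 0.333...
```

Such a characterization could run either in the microprocessor 310 under the app 306 or in the CPU 344 after upload.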
- the video processor 340 comprises a data analyzer 346 .
- the data analyzer 346 registers similarity metrics data in a manner similar to the audio analyzer 116 ( FIG. 2 ) registering frequency data.
- the similarity metrics data is monitored and used for application to a rule-based signal generation means.
- FIG. 6 is a flow diagram illustrating one form of the operation of the system of FIG. 5 .
- users use the app 306 to connect a user's portable device 300 to the video processor 340 . These connections are generally made at different times. Actions at block 400 are ongoing. They need not occur at any particular time.
- the app 306 uploads the music library metric for the respective portable device 300 to the video processor 340 .
- a user's music metric is uploaded to the video processor 340 . The upload may be initiated by means of a user request or may be programmed into the app 306 for automatic execution.
- the video processor 340 collects music metrics for selected users.
- the video processor 340 selects metrics signals to compare.
- the video processor 340 performs a comparison of the selected metrics and stores values of correlations.
- a time varying signal based on a correlation function is calculated.
- a time varying signal indicative of varying correlation signals versus time is produced.
- a rule-based control signal is generated based on the correlation signals versus time output.
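The binning of correlation factors and the rule-based control derived from them can be sketched as follows. The bin width, the snapshot values, and the "fraction in the top bin" rule are all assumptions for illustration; the patent leaves the grouping and rule open.

```python
def correlation_histogram(similarities, bin_width=0.25):
    """Group pairwise similarity values (0.0-1.0) into fixed-width
    bins and count the comparisons falling in each, per bins 524."""
    n_bins = int(1.0 / bin_width)
    counts = [0] * n_bins
    for s in similarities:
        counts[min(int(s / bin_width), n_bins - 1)] += 1
    return counts

def control_value(counts):
    """One simple rule (assumed, not the patent's): the fraction of
    comparisons landing in the top similarity bin drives a 0.0-1.0
    control signal."""
    total = sum(counts)
    return counts[-1] / total if total else 0.0

# Hypothetical snapshot of pairwise library similarities at one instant.
sims = [0.1, 0.2, 0.4, 0.9, 0.95, 0.8]
counts = correlation_histogram(sims)
print(counts, round(control_value(counts), 2))  # -> [2, 1, 0, 3] 0.5
```

Reading the histogram periodically, as the analyzer 346 output is read, yields the time-varying correlation signal from which control signals are generated.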
- FIG. 7 is a block diagram illustrating the data produced by the circuit of FIG. 5 and the method of FIG. 6 .
- the correlations that are compared can be grouped in any desired preselected manner.
- bins 524 of correlation factors are created, and the number of comparisons producing correlations within the width of each bin is registered.
- the output of the analyzer 346 ( FIG. 5 ) is read periodically to provide current values for the bins 524 .
- FIG. 8 is a block diagram of a system for mapping audio components to visual display controls.
- FIG. 8 is illustrated as being embodied in analog hardware. However, software solutions, as in the programming for the CPU 110 ( FIG. 2 ), may be supplied. In this case, analog signals are converted to digital signals for processing.
- a transform generator 602 provides an amplitude versus frequency waveform 604 for a sound input.
- the waveform 604 is provided to a discriminator circuit 610 , which measures amplitude within a preselected frequency width. In this manner, bins, e.g., the bins 240 in FIG. 4 , are generated.
- the output of the discriminator circuit 610 is provided to a value register 614 .
- the value register 614 may provide signals in parallel to a control signal generator 620 .
- the control signal generator 620 converts signals from the value register 614 into inputs usable by a video processor 640 .
- the video processor 640 may include a program 642 that selects video effects in response to input signals.
- a program 642 is VFX VJ Software made by MixVibes. Visual effects in the VFX VJ Software are commanded through a graphical user interface (GUI) 646 .
- the control signal generator 620 produces data streams which are interfaced to provide input signals corresponding to selection of options in a GUI 646 .
Abstract
A display in a first medium is controlled in response to commands generated by analyzing a second, contemporaneously available medium, such as a video accompanied by a soundtrack having components in first and second domains. A transform is applied to signals in the first domain to generate signals in the second domain. The second domain signals are ordered according to a rule and used to produce a command signal or signals to produce time varying commands to vary at least one parameter of the video signal.
Description
- This patent application claims priority of Provisional Patent Application 61/648,593 filed May 18, 2012, Provisional Patent Application 61/670,754 filed Jul. 12, 2012, Provisional Patent Application 61/705,051 filed Sep. 24, 2012, Provisional Patent Application 61/771,629 filed Mar. 1, 2013, Provisional Patent Application 61/771,646 filed Mar. 1, 2013, Provisional Patent Application 61/771,690 filed Mar. 1, 2013, and Provisional Patent Application 61/771,704 filed Mar. 1, 2013, each of which is incorporated by reference herein in its entirety.
- 1. Field of the Invention
- The present subject matter relates to a presentation of a display in a first medium, having parameters controlled by analysis, as by applying a transform, of a contemporaneously available source in a second medium, such as a soundtrack, live performance, or audio input.
- 2. Related Art
- An example of a first medium played concurrently with a second medium is a video display accompanied by music. In the prior art, video displays have been synchronized to music tracks.
- An early, well-known display is the "visualizer" on media players on personal computers. A preprogrammed visual pattern is modulated by the music, and variations in the video display are synchronized to the music. However, in this application, there is no display in the absence of the music. The music cannot synchronize an independent video program.
- There are other applications which enable a user to manually assemble portions of various video and photographic sources into a composition in synchronism with one or more pieces of music. These typically require considerable training and practice before the user is able to produce results of a high quality.
- A product called “PhotoCinema” marketed by a Japanese company called Digital Stage allows for a fairly sophisticated slide show to be created and viewed on a computer screen of a personal computer. Digital images stored on a personal computer can be presented in a variety of sequences, and individual images in a sequence can be zoomed. A chain of multiple images can be made to move from left to right across the computer screen. A chain of multiple images can be made to move from top to bottom across the computer screen. Music can be selected to accompany the slide show.
- However, the video program produced is simply based on arbitrary selections by a user. While the music accompanies a slideshow, the synchronization between the audio and visual components of the presentation is prepared in advance of performance. Parameters of the synchronization are not determined dynamically.
- A video jockey (VJ) is generally a person who mixes a variety of video sources together to create a unique video image for display at large club events or other venues. Automated content manipulation could be provided in the alternative or in addition. A typical mix of images would be some pre-mixed DVDs of video images from previous events, abstract images such as proprietary visualizations, and live images from a video camera directed at the VJ or dancers in the audience, together with overlaying of text, for example, to display the name of the event, the VJ's name, or messages input by the VJ. The images from the respective sources are mixed by the VJ using video mixer/switcher hardware, which controls the overlay of the separate sources on a single display depending on the selected input source and fading transitions between the sources, much like audio mixers. While overlay of images can be provided, there is not a mathematical relationship between audio sources and construction of the video display.
- A product called “Avenue 4” by Resolume is a fully-featured professional VJ software tool, allowing elaborate mixing and manipulation of video sources. The complexity and steep learning curve of such a program make it unrealistic as a consumer tool. With such a system, while many effects and manipulations are possible, much practice is needed before sufficient mastery can be achieved to be able to work quickly. Streamlining and simplifying the complexity of operation would facilitate making this tool accessible to VJs of varying skill levels.
- U.S. Pat. No. 8,402,356 discloses systems, methods, and apparatus for collecting data and presenting media to a user. The systems generally include a data gathering module associated with an electronic device. The data gathering module communicates gathered data to a management module, which manages at least one user profile based on the gathered data. The management module may select media for presentation to a user based on the user profile, and the selected media may be displayed to the user via a media output device co-located with the user, such as a display of the user's mobile electronic device or a television, computer, billboard, or other display. Related methods are also provided.
- United States Published Patent Application No. 20110283865 discloses a system and method for visual representation of sound, wherein the system and method obtain sound information from a multimedia content. The system generates an icon and a directional indicator, each based on the sound information obtained. The sound information typically includes various attributes that can be mapped to various properties of the display elements such as the icon and directional indicator in order to provide details relating to the sound via a visual display. This system effectively “illustrates” sounds so that particular video cues may be given a one-to-one correspondence with particular sounds. However, no video program is generated based on the sound analysis.
- United States Published Patent Application No. 20070292832 discloses a system for creating sound using visual images. Various controls and features are provided for the selection, editing, and arrangement of the visual images and tones used to create a sound presentation. Visual image characteristics such as shape, speed of movement, direction of movement, quantity, location, etc. can be set by a user. However, playback of a first medium is not synchronized to analysis of characteristics of a second medium.
- Also, these systems do not allow for cooperative control of generating a video program. Another area in which capability has been limited is audience interaction and the provision of displays constructed for particular users.
- The prior art regarding communication with multiple users having devices coupled to receive a display will not enable performance of the present subject matter.
- U.S. Pat. No. 7,796,162 discloses a system in which one set of cameras generates multiple synchronized camera views for broadcast from a live venue activity to remote viewers. A user chooses which view to follow. However, there is no plan for varying the sets of images sent to users. There is no uploading capability for users.
- The present subject matter relates to a method, system, and non-transitory programmed medium for presentation of a display in a first medium such as a video display having parameters controlled by analysis of a contemporaneously available source in a second medium, such as a soundtrack, live performance, or audio input.
- In one form, a display in a first medium is controlled in response to commands generated by analyzing a second, contemporaneously available medium. A first medium is played for distribution by a communications link to a plurality of users. A contemporaneously available medium, e.g., a sound track, having components in first and second domains is analyzed. A transform is applied to signals in the first domain to generate signals in the second domain, such as generating signals in the frequency domain based on an audio signal amplitude waveform. The second domain signals are ordered according to a rule. The ordering of the second domain signals is used to produce time varying command signals to vary at least one parameter of the video signal. Parameters may include pixilation, color saturation, contrast, or others.
- In a further form of the system, the second medium is derived from a composite of social interactions in which a plurality of users communicates with a central server. Communications are monitored and responded to in order to construct a signal comprising the second medium.
- The present subject matter may be further understood by reference to the following description taken in connection with the following drawings:
-
FIG. 1 is an illustration of a venue employing the method and apparatus of the present subject matter; -
FIG. 2 is a block diagram illustrating one embodiment of hardware for implementing the system illustrated in FIG. 1 ; -
FIG. 3 , consisting of FIGS. 3A, 3B, and 3C, is an illustration of a signal transformed from a time domain signal to a frequency domain signal and of data derived from the frequency domain signal; -
FIG. 4 is a block diagram illustrating a rule-based circuit for generating control signals based on data derived from the signal in a second domain, in this embodiment a frequency domain signal; -
FIG. 5 is a block diagram illustrating generation of a video program in the context of a set of collaborative social functions used as a second medium; -
FIG. 6 is a flow diagram illustrating one form of the operation of the system illustrated in FIG. 5 ; -
FIG. 7 illustrates correlation data for controlling production of control signals; -
FIG. 8 is a block diagram of a system for mapping audio components to visual display controls. - The present subject matter may be used to enhance the experience of an audience at an event by providing variations that have not been available before in displays in a first medium, e.g., a video display. A first medium may comprise a program presented on a video display screen. A second medium may comprise an audio soundtrack or an audio performance. Many characteristics of the video display may be varied in accordance with analysis of the second medium.
- A large screen video display has many different characteristics, including hue, intensity, pixilation, color saturation, and RGB values. Control signals may be applied to command values of these and other characteristics. A second medium may comprise a soundtrack. In the prior art, the second medium generally accompanies the video display without affecting it. In accordance with the present subject matter, the second medium is analyzed to measure components and to generate command signals from the components based on a rule. The command signals are applied sequentially in time in order to control the video display.
-
FIG. 1 is an illustration of a venue 10 comprising a system 2 in accordance with the present subject matter. The venue 10 may include a performance stage 12, audience area 14, a control room 16, and a sound system 18 which may interact with the control room 16 in a conventional manner and which may also be coupled to a processing system as further described below. In order to enhance the experience of perceiving music, a video program 20 shown on a display 22 is provided in conjunction with a sound source 28. The sound source 28 may comprise a prerecorded program coupled to the sound system 18. In the present illustration, the sound source 28 comprises a live performance provided by a performer or performers 29. In one preferred form the display 22 is a screen 24 that comprises a backdrop for the performance stage 12. The display 22 could comprise an array 25 of monitors over which an image is distributed. The display 22 could alternatively comprise a plurality of identical displays in other locations. - In order to provide a more complete experience, the
video program 20 can include matter which is synchronized with components of a performance. Components to which variations in the video program 20 are synchronized are provided from a synchronizing source 30. The synchronizing source 30 may receive an input from the sound system 18. The synchronizing source 30 provides signals from which synchronizing command signals will be generated. In a commonly used embodiment, it will be desirable to synchronize the video program to the songs or the music being played. In such an embodiment, the synchronizing source 30 is coupled to the sound system 18. A sound system 18 need not necessarily be operating on music. Other audio sources could include spoken words. Other sounds could also be used. For example, the synchronizing source 30 could be responding to sounds of car engines at a race track. - Sources need not necessarily be audio sources. Non-audio sources include phenomena that may be sensed. These sources could include ocean waves or vibration of molecules displaying nuclear magnetic resonance.
- The
display 22 is driven by a video interface 42. A video processor 40 provides signals for display. The video processor 40 may be coupled to and interact with one or more of the following components. A content database 50 may contain a library of video clips, still images, color patterns, and other content selectable for display. A portable device 60 could comprise a smartphone, tablet computer, or other portable devices that may come into existence in the future. Another video source is a video camera 70. A control circuit 80 may be provided for selecting sources or commanding particular actions. The control circuit 80 may be commanded by an operator, who for purposes of the present description is described as a video jockey or VJ 84. - One preferred embodiment of the
video processor 40 which may be utilized to perform functions of the present subject matter is illustrated in FIG. 2 . FIG. 2 is a block diagrammatic representation which may be embodied in any of a number of monolithic integrated circuits or other components. The video processor 40 includes a data bus 104 which carries communications between various modules. For purposes of the present description, various subsystems of the video processor are illustrated as discrete components. However, these subsystems could be embodied in one microcircuit chip or distributed over various components within or without the video processor 40. - A central processing unit (CPU) 110 is programmed to process data and generate commands to select content for a video program. A
RAM 112 is used to facilitate CPU 110 operations. A transform generator 114 responds to the synchronizing source 30 (FIG. 1 ) to produce intelligence for invoking commands to apply to video content. The transform generator 114 takes intelligence from the synchronizing source 30 and transforms it into data which has a meaningful relationship to parameters which will operate on the video display. The data is recognized and processed by an audio analyzer 116. In one preferred embodiment, the transform generator 114 produces a Fast Fourier Transform (FFT). The audio analyzer 116 measures values indicative of the signal in a first domain. The term "audio" analyzer is used because the synchronizing source 30 will often comprise a sound source. However, other sources than audio may be provided by the synchronizing source 30. The operation and outputs of the second domain analyzer 118 described with respect to FIG. 4 are coupled to a signal generator 140. In a further form, described below, the signal in the first domain is analyzed to provide further control signals. -
FIG. 3 consists of FIGS. 3A , 3B, and 3C. FIG. 3A is an illustration of an audio signal 200 having content in the time domain and the frequency domain. Frequency components of the audio signal 200 are limited in number for simplicity in illustration. Common audio signals 200 have a far greater range of frequencies. The signal 200 is represented in the time domain by a waveform 210. The waveform 210 represents the composite of the components, and is displayed as amplitude versus time. The Fourier Transform provides a representation of the signal in the frequency domain using the signal 200 as an input. The result is seen in waveform 220 which is illustrated in amplitude of components versus the frequency value of components. -
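The time-to-frequency transformation of FIG. 3A can be sketched with a naive discrete Fourier transform in plain Python. The sample rate, component frequencies, and amplitudes below are illustrative assumptions, not values from the disclosure.

```python
import cmath
import math

# Assumed illustration values: a composite of two sine components standing
# in for the simplified two-component signal 200 of FIG. 3A.
fs = 800                       # sample rate in Hz (assumed)
n = 400                        # samples analyzed -> 2 Hz frequency resolution
samples = [0.8 * math.sin(2 * math.pi * 110 * k / fs)
           + 0.3 * math.sin(2 * math.pi * 40 * k / fs) for k in range(n)]

def dft_amplitude(samples, k):
    """Amplitude of frequency bin k (i.e., k * fs / n Hz) of a real signal."""
    n = len(samples)
    s = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
    return 2 * abs(s) / n

# Amplitude-versus-frequency representation, one value per 2 Hz step,
# corresponding to waveform 220 derived from time-domain waveform 210
spectrum = [dft_amplitude(samples, k) for k in range(n // 2)]
```

With these assumed inputs the 110 Hz and 40 Hz components dominate bins 55 and 20 of the resulting spectrum, mirroring how the transform exposes the component frequencies of the composite waveform.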
FIG. 3B illustrates a first domain analysis. In the present embodiment, this comprises a basic RMS amplitude analysis of the audio signal. RMS amplitude is a measure of overall perceived “loudness.” Amplitude points 230 are derived periodically. -
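A minimal sketch of the periodic RMS derivation of FIG. 3B; the window length and sample values are invented for illustration.

```python
import math

def rms_amplitude(window):
    """Root-mean-square amplitude of one analysis window of samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

# Derive amplitude points periodically, one per fixed-length window,
# analogous to the amplitude points 230
samples = [0.0, 1.0, 0.0, -1.0, 0.0, 0.5, 0.0, -0.5]
window_len = 4
points = [rms_amplitude(samples[i:i + window_len])
          for i in range(0, len(samples), window_len)]
# the first window is the "louder" passage, the second the "quieter" one
```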
FIG. 3C illustrates data generation in an embodiment in which the basis for video control signals is based on frequency of signals received by second domain analyzer 118 (FIG. 2 ). The second domain analyzer 118 comprises discriminators and filters to resolve the continuous waveform 220 into "bins" 240 , each having a particular frequency width. In effect, a bar graph is generated with each bar comprising one bin 240. - In one form, the
bins 240 of adjacent frequencies are manipulated according to a rule to develop an amplitude vector for the bass, mid-range, and treble portions of the spectrum of music in a performance. Each of these broad frequency bins 240 is coupled to the signal generator 140. The rule utilized to produce a control signal for operating on the video signal is to produce a time-varying amplitude value in accordance with outputs corresponding to a current value of a selected bin or bins in the second domain analyzer 118. -
Preselected bins 240 are combined to develop an amplitude vector for the bass, mid-range, and treble portions of the spectrum of music. Each amplitude vector comprises a respective value for each of three control signals for each of three video characteristics.
- The degree of visual effects to be applied to the video clips and stills is selected in accordance with the time-varying amplitude of respective control signals. The time varying amplitudes are mapped onto the selected parameters. In one form, the time-varying amount of energy in a
bass bin 240 is applied to determine the amount of pixillation applied to a video source. Visual effects that could be controlled include pixillation, tiling, pan and zoom, sepia tone, and distortion. - The production of the effects which contribute to the video program 20 (
FIG. 1 ) needs to be achieved in time to provide the video program in synchronism with the synchronizingsource 30. In the case of recorded music that is played back to an audience, the synchronizingsource 30 looks at the audio input ahead of the actual audio playback. A nominal, satisfactory lead time is on the order of tenths of a second. Transformation from the first domain into the second domain and generation of control signals must be performed in time to produce the desired effect at the same time that the analyzed music exits the sound system. - The video processor 40 (
FIG. 1 ) measures basic RMS amplitude. The RMS amplitude signal may be coupled to determine when and how often to change from one video source clip or image to the next. - A minimum and maximum time are selected that must elapse before changing video sources. Once the minimum time has elapsed, a sound level is selected to trigger a change in video source. The trigger may be produced in response to a signal crossing a preselected amplitude threshold. Alternatively, an input circuit may resolve a selected peak in the RMS amplitude value to “trigger” a change in source. If the maximum time elapses without a trigger stimulus, a “timeout” signal triggers the change in the video source. Once that minimum time has elapsed, a next significant spike in the RMS amplitude value to is used to “trigger” a change in source. If a maximum time elapses without a spike, the source may be changed anyway.
- The video database of moving and still images may be preprogrammed into the database 50 (
FIG. 1 ). Alternatively, theVJ 84 may select material from a given storage location or may access media through a real-time search on a search engine or on tags of other data sources. - A resulting composite video/audio composition may be saved to a standard QuickTime file. Streaming, either to an external monitor, or to a streaming host on the Internet, may provide for live sharing.
-
FIG. 4 is a block diagram illustrating a rule-based circuit for generating control signals based on data derived from the signal in a second domain, which in the present illustration is the frequency domain. In FIG. 4 one form of generating control signals is illustrated. The Fourier transform generator 114 provides an input signal in the second domain to the second domain analyzer 118. The second domain analyzer 118 includes an arithmetic unit 250 to produce the signals represented as bins 240 in FIG. 3C . The signals are separated within frequency ranges, and a signal indicating an amplitude per frequency range per timeslot is stored in a data memory 260. A clock circuit 265 clocks the data memory 260 to provide one set of amplitudes for a current time into a control signal register 270. There are many techniques for providing a gain control signal. In the present embodiment, the control signal register 270 addresses a lookup table 274, which provides respective outputs to a gain control amplifier 276. The gain control amplifier 276 provides the parameter control signals to the video processor 40. -
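In software form, the register-to-lookup-table-to-gain path of FIG. 4 might look like the following sketch; the table contents are assumptions, since the disclosure does not specify them.

```python
import bisect

# Hypothetical lookup-table 274 contents: quantized bin-amplitude
# boundaries and the parameter control output for each step
thresholds = [0.0, 0.25, 0.5, 0.75]
gains = [0.0, 0.4, 0.7, 1.0]

def control_signal(bin_amplitude):
    """Clocked bin amplitude -> table address -> parameter control value."""
    i = bisect.bisect_right(thresholds, bin_amplitude) - 1
    return gains[max(i, 0)]
```

Each clocked set of bin amplitudes would pass through such a table to yield one control value per video characteristic.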
FIG. 5 is a block diagram illustrating an embodiment in which the second medium comprises a set of collaborative social functions. In this embodiment, individual devices 300-1 through 300-n such as smart phones or tablet computers each store an application, or app. Each portable device contains its own music library 302 and a program memory 304 storing an app 306. - Each
portable device 300 includes amicroprocessor 310. Theportable devices 300 each interact via a communications link 330 with avideo processor 340. The communications link 330 may comprise the Internet, telephone connections, satellite communications, and other forms of communication alone or in combination. Thevideo processor 340 includes adata bus 342 and aCPU 344. The interaction with thevideo processor 340 is shown for convenience and is not essential. A separate processor could be used to interact with theportable devices 300. - The present system uses a similarity indication for the contents of one user's
music library 302 with that of another user. The similarity is a value based on metrics associated with each entry in amusic library 302. Many known functions exist for characterizing the types of music that are stored, generating a profile of musical tastes of one user that can be compared to musical tastes of other users. For example, at http://www.InstantEncore.com, users are informed of other users who have similar musical tastes. Also, characterizations are provided to indicate musical compatibility of other users who have lesser degrees of similarity in musical tastes. Characterization may be performed in themicroprocessor 310 of adevice 300 in accordance with theapp 306. Alternatively, theapp 306 may be uploaded to theCPU 344 for performing the characterization. - The
video processor 340 comprises adata analyzer 346. The data analyzer 346 registers similarity metrics data in a manner similar to the audio analyzer 116 (FIG. 2 ) registering frequency data. The similarity metrics data is monitored and used for application to a rule-based signal generation means. -
FIG. 6 is a flow diagram illustrating one form of the operation of the system of FIG. 5 . At block 400, users use the app 306 to connect a user's portable device 300 to the video processor 340. These connections are generally made at different times. Actions at block 400 are ongoing. They need not occur at any particular time. The app 306 uploads the music library metric for the respective portable device 300 to the video processor 340. At block 402 a user's music metric is uploaded to the video processor 340. The upload may be initiated by means of a user request or may be programmed into the app 306 for automatic execution. The video processor 340 collects music metrics for selected users 60. At block 404, the video processor 340 selects metrics signals to compare. At block 406, the video processor 340 performs a comparison of the selected metrics and stores values of correlations. At block 408, a time varying signal based on a correlation function is calculated. At block 410, a time varying signal indicative of varying correlation signals versus time is produced. At block 420, a rule-based control signal is generated based on the correlation signals versus time output. -
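The comparison of selected metrics at blocks 404-406 could, for example, use a cosine similarity over per-genre counts. This particular metric and the sample libraries are assumptions; the description leaves the exact characterization function open.

```python
import math

def similarity(lib_a, lib_b):
    """Cosine similarity (0.0-1.0) between two library profiles.

    A genre-count profile is one assumed way to reduce a music library
    302 to a comparable metric.
    """
    keys = set(lib_a) | set(lib_b)
    dot = sum(lib_a.get(k, 0) * lib_b.get(k, 0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in lib_a.values()))
    norm_b = math.sqrt(sum(v * v for v in lib_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical uploaded profiles for two users
a = {"rock": 120, "jazz": 30, "electronic": 50}
b = {"rock": 80, "electronic": 90, "classical": 10}
```

Pairwise values produced this way could then feed the correlation signals of blocks 408-420.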
FIG. 7 is a block diagram illustrating the data produced by the circuit of FIG. 5 and the method of FIG. 6 . The correlations that are compared can be grouped in any desired preselected manner. In FIG. 7 , bins 524 of correlation factors are created, and the number of comparisons producing correlations within the width of each bin is registered. The output of the analyzer 346 (FIG. 5 ) is read periodically to provide current values for the bins 524. -
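The registration of correlation values into bins 524 amounts to a simple histogram; the bin count, range, and sample correlation values below are illustrative assumptions.

```python
def register_correlations(values, n_bins=5, lo=0.0, hi=1.0):
    """Count correlation values into n_bins equal-width bins over [lo, hi]."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in values:
        i = min(int((v - lo) / width), n_bins - 1)  # hi itself joins the top bin
        counts[i] += 1
    return counts

# One periodic read of pairwise correlation values (invented numbers)
counts = register_correlations([0.05, 0.12, 0.35, 0.41, 0.44, 0.77, 0.95, 1.0])
# counts per 0.2-wide bin: [2, 1, 2, 1, 2]
```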
FIG. 8 is a block diagram of a system for mapping audio components to visual display controls. FIG. 8 is illustrated as being embodied in analog hardware. However, software solutions, as in the programming for the CPU 110 (FIG. 2 ), may be supplied. In this case, analog signals are converted to digital signals for processing. A transform generator 602 provides an amplitude versus frequency waveform 604 for a sound input. The waveform 604 is provided to a discriminator circuit 610 , which measures amplitude within a preselected frequency width. In this manner, bins, e.g., the bins 240 in FIG. 4 , are generated. The output of the discriminator circuit 610 is provided to a value register 614. The value register 614 may provide signals in parallel to a control signal generator 620. The control signal generator 620 converts signals from the value register 614 into inputs usable by a video processor 640. The video processor 640 may include a program 642 that selects video effects in response to input signals. One example of a program 642 is VFX VJ Software made by MixVibes. Visual effects in the VFX VJ Software are commanded through a graphical user interface (GUI) 646. In accordance with the present subject matter, the control signal generator 620 produces data streams which are interfaced to provide input signals corresponding to selection of options in a GUI 646. - The previous description is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the invention. For example, one or more elements can be rearranged and/or combined, or additional elements may be added. A wide range of systems may be provided consistent with the principles and novel features disclosed herein.
Claims (11)
1. A method for controlling a display in a first medium by analysis of a selected contemporaneously available content source comprising:
providing a presentation in the first medium having at least one characteristic subject to control;
providing a source in a second medium, the source having components in at least first and second domains;
applying a transform to a source in the second medium to transform a signal in the first domain into a signal in the second domain;
defining bins each collecting measured values of signals in a selected range of the signal in the second domain;
producing a bin value corresponding to a function of the integrated value of the measured signals in each selected range over a selected time period; and
applying the bin value to control at least one characteristic of the presentation in the first medium.
2. A method according to claim 1 wherein the step of applying the bin value to control at least one characteristic of the presentation in the first medium comprises providing a control signal generator and generating with said control signal generator a characteristic control signal in correspondence to each respective bin value.
3. A method according to claim 2 wherein defining bins comprises selecting ranges that may have overlapping boundaries.
4. A method according to claim 3 further comprising reading said second source signal at a preselected number of clock periods in advance of a program point and calculating the control signals in advance of the program point.
5. A method according to claim 4 wherein said first medium comprises a video program, and said second medium comprises audio.
6. A method according to claim 5 wherein the first domain comprises amplitude and the second domain comprises frequency.
7. A non-transitory machine-readable medium for execution on a digital processor, which when executed causes the processor to perform the steps of:
monitoring a source in a second medium, the source having components in at least first and second domains;
applying a transform to a source in the second medium to transform a signal in the first domain into a signal in the second domain;
measuring selected values of selected ranges of the signal in the second domain;
producing a bin value corresponding to each selected range; and
applying the bin value to control at least one characteristic of the presentation in the first medium.
8. A non-transitory machine-readable medium according to claim 7 that causes the processor to perform the further steps of generating a plurality of bin values, applying each bin value to a control circuit, generating a control signal having a value corresponding to an amplitude of each bin value, and controlling each of a plurality of respective characteristics in said first media program.
9. A system for controlling a display in a first medium by analysis of contemporaneously available content sources comprising:
a synchronizing source receiving a program from a source in a second medium, the source having components in at least first and second domains;
a transform generator applying a transform to a source in the second medium to transform a signal in the first domain into a signal in the second domain;
a signal generator resolving the second domain signal into a plurality of bin values each indicative of amplitude in a preselected range of the second domain signal;
a control signal generator receiving each bin value and comprising means for producing a control signal in correspondence with each bin value; and
a control circuit coupled to vary the value of characteristics of a program signal in the first medium in correspondence with a respective control signal.
10. A system according to claim 9 wherein the system is coupled to receive a video program in a first medium and contemporaneously available audio content in a second medium.
11. A system according to claim 9 wherein the system is coupled to receive inputs from a second medium comprising portable interactive devices in an audience, wherein the first domain is a parameter based on one parameter of information describing a user generated by an app and wherein the second domain is an amplitude generated in correspondence with the number of occurrences of the parameter within each of a preselected number of ranges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/895,274 US20130308051A1 (en) | 2012-05-18 | 2013-05-15 | Method, system, and non-transitory machine-readable medium for controlling a display in a first medium by analysis of contemporaneously accessible content sources |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261648593P | 2012-05-18 | 2012-05-18 | |
US201261670754P | 2012-07-12 | 2012-07-12 | |
US201261705051P | 2012-09-24 | 2012-09-24 | |
US201361771690P | 2013-03-01 | 2013-03-01 | |
US201361771629P | 2013-03-01 | 2013-03-01 | |
US201361771704P | 2013-03-01 | 2013-03-01 | |
US201361771646P | 2013-03-01 | 2013-03-01 | |
US13/895,274 US20130308051A1 (en) | 2012-05-18 | 2013-05-15 | Method, system, and non-transitory machine-readable medium for controlling a display in a first medium by analysis of contemporaneously accessible content sources |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130308051A1 true US20130308051A1 (en) | 2013-11-21 |
Family ID=49581043
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/895,282 Abandoned US20130311581A1 (en) | 2012-05-18 | 2013-05-15 | Transmission of command execution messages for providing a shared experience to both internal, at-venue participants, and external, networked participants |
US13/895,313 Expired - Fee Related US9143564B2 (en) | 2012-05-18 | 2013-05-15 | Concert server incorporating front-end and back-end functions to cooperate with an app to provide synchronized messaging to multiple clients |
US13/895,290 Expired - Fee Related US9071628B2 (en) | 2012-05-18 | 2013-05-15 | Method and apparatus for managing bandwidth by managing selected internet access by devices in a Wi-Fi linked audience |
US13/895,307 Abandoned US20130311566A1 (en) | 2012-05-18 | 2013-05-15 | Method and apparatus for creating rule-based interaction of portable client devices at a live event |
US13/895,274 Abandoned US20130308051A1 (en) | 2012-05-18 | 2013-05-15 | Method, system, and non-transitory machine-readable medium for controlling a display in a first medium by analysis of contemporaneously accessible content sources |
US13/895,253 Expired - Fee Related US9357005B2 (en) | 2012-05-18 | 2013-05-15 | Method and system for synchronized distributed display over multiple client devices |
US13/895,300 Expired - Fee Related US9246999B2 (en) | 2012-05-18 | 2013-05-15 | Directed wi-fi network in a venue integrating communications of a central concert controller with portable interactive devices |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/895,282 Abandoned US20130311581A1 (en) | 2012-05-18 | 2013-05-15 | Transmission of command execution messages for providing a shared experience to both internal, at-venue participants, and external, networked participants |
US13/895,313 Expired - Fee Related US9143564B2 (en) | 2012-05-18 | 2013-05-15 | Concert server incorporating front-end and back-end functions to cooperate with an app to provide synchronized messaging to multiple clients |
US13/895,290 Expired - Fee Related US9071628B2 (en) | 2012-05-18 | 2013-05-15 | Method and apparatus for managing bandwidth by managing selected internet access by devices in a Wi-Fi linked audience |
US13/895,307 Abandoned US20130311566A1 (en) | 2012-05-18 | 2013-05-15 | Method and apparatus for creating rule-based interaction of portable client devices at a live event |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/895,253 Expired - Fee Related US9357005B2 (en) | 2012-05-18 | 2013-05-15 | Method and system for synchronized distributed display over multiple client devices |
US13/895,300 Expired - Fee Related US9246999B2 (en) | 2012-05-18 | 2013-05-15 | Directed wi-fi network in a venue integrating communications of a central concert controller with portable interactive devices |
Country Status (1)
Country | Link |
---|---|
US (7) | US20130311581A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104935753A (en) * | 2015-07-03 | 2015-09-23 | 金陵科技学院 | Local-storage and synchronized method for mobile phone APP data |
CN108600274A (en) * | 2018-05-17 | 2018-09-28 | 淄博职业学院 | Safe communication system and its application method between a kind of realization computer inner-external network |
CN109068146A (en) * | 2018-08-27 | 2018-12-21 | 佛山龙眼传媒科技有限公司 | A kind of live broadcasting method of large-scale activity |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9510148B2 (en) | 2009-03-03 | 2016-11-29 | Mobilitie, Llc | System and method for wireless communication to permit audience participation |
US10057333B2 (en) | 2009-12-10 | 2018-08-21 | Royal Bank Of Canada | Coordinated processing of data by networked computing resources |
US9887965B2 (en) * | 2012-07-20 | 2018-02-06 | Google Llc | Method and system for browser identity |
US9727298B2 (en) * | 2013-05-28 | 2017-08-08 | Sony Corporation | Device and method for allocating data based on an arrangement of elements in an image |
US9489114B2 (en) * | 2013-06-24 | 2016-11-08 | Microsoft Technology Licensing, Llc | Showing interactions as they occur on a whiteboard |
US9800845B2 (en) | 2014-02-07 | 2017-10-24 | Microsoft Technology Licensing, Llc | Projector-based crowd coordination and messaging |
US9479610B2 (en) * | 2014-04-14 | 2016-10-25 | Microsoft Technology Licensing, Llc | Battery efficient synchronization of communications using a token bucket |
CN104158892A (en) * | 2014-08-22 | 2014-11-19 | 苏州乐聚一堂电子科技有限公司 | Raked stage for interaction of concert |
US10230571B2 (en) * | 2014-10-30 | 2019-03-12 | Equinix, Inc. | Microservice-based application development framework |
CN104333598A (en) * | 2014-11-06 | 2015-02-04 | 北京安奇智联科技有限公司 | Two-dimension code and network adaption based mobile terminal and web terminal interconnection method |
WO2016080906A1 (en) * | 2014-11-18 | 2016-05-26 | Razer (Asia-Pacific) Pte. Ltd. | Gaming controller for mobile device and method of operating a gaming controller |
US11045723B1 (en) | 2014-11-18 | 2021-06-29 | Razer (Asia-Pacific) Pte. Ltd. | Gaming controller for mobile device and method of operating a gaming controller |
CN104394208B (en) * | 2014-11-20 | 2018-07-03 | 北京安奇智联科技有限公司 | Document transmission method and server |
CN105634675B (en) | 2016-01-13 | 2020-05-19 | 中磊电子(苏州)有限公司 | Transmission rate control method and wireless local area network device |
US9843943B1 (en) * | 2016-09-14 | 2017-12-12 | T-Mobile Usa, Inc. | Application-level quality of service testing system |
US10785144B2 (en) * | 2016-12-30 | 2020-09-22 | Equinix, Inc. | Latency equalization |
WO2018136965A1 (en) * | 2017-01-23 | 2018-07-26 | EkRally, LLC | Systems and methods for fan interaction, team/player loyalty, and sponsor participation |
WO2019173710A1 (en) * | 2018-03-09 | 2019-09-12 | Muzooka, Inc. | System for obtaining and distributing validated information regarding a live performance |
KR102604570B1 (en) * | 2018-03-23 | 2023-11-22 | 삼성전자주식회사 | Method for supporting user input and electronic device supporting the same |
US10967259B1 (en) | 2018-05-16 | 2021-04-06 | Amazon Technologies, Inc. | Asynchronous event management for hosted sessions |
SG11202111622WA (en) * | 2019-05-10 | 2021-11-29 | Cinewav Pte Ltd | System and method for synchronizing audio content on a mobile device to a separate visual display system |
CN113194528B (en) * | 2021-03-18 | 2023-01-31 | 深圳市汇顶科技股份有限公司 | Synchronization control method, chip, electronic device, and storage medium |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6490359B1 (en) * | 1992-04-27 | 2002-12-03 | David A. Gibson | Method and apparatus for using visual images to mix sound |
US20040264917A1 (en) * | 2003-06-25 | 2004-12-30 | M/X Entertainment, Inc. | Audio waveform cueing for enhanced visualizations during audio playback |
US20070292832A1 (en) * | 2006-05-31 | 2007-12-20 | Eolas Technologies Inc. | System for visual creation of music |
US20090077170A1 (en) * | 2007-09-17 | 2009-03-19 | Andrew Morton Milburn | System, Architecture and Method for Real-Time Collaborative Viewing and Modifying of Multimedia |
US20100183280A1 (en) * | 2008-12-10 | 2010-07-22 | Muvee Technologies Pte Ltd. | Creating a new video production by intercutting between multiple video clips |
US7796162B2 (en) * | 2000-10-26 | 2010-09-14 | Front Row Technologies, Llc | Providing multiple synchronized camera views for broadcast from a live venue activity to remote viewers |
US8026436B2 (en) * | 2009-04-13 | 2011-09-27 | Smartsound Software, Inc. | Method and apparatus for producing audio tracks |
US20110283865A1 (en) * | 2008-12-30 | 2011-11-24 | Karen Collins | Method and system for visual representation of sound |
US8205148B1 (en) * | 2008-01-11 | 2012-06-19 | Bruce Sharpe | Methods and apparatus for temporal alignment of media |
US8244103B1 (en) * | 2011-03-29 | 2012-08-14 | Capshore, Llc | User interface for method for creating a custom track |
US20120213438A1 (en) * | 2011-02-23 | 2012-08-23 | Rovi Technologies Corporation | Method and apparatus for identifying video program material or content via filter banks |
US20120237040A1 (en) * | 2011-03-16 | 2012-09-20 | Apple Inc. | System and Method for Automated Audio Mix Equalization and Mix Visualization |
US8402356B2 (en) * | 2006-11-22 | 2013-03-19 | Yahoo! Inc. | Methods, systems and apparatus for delivery of media |
US8612517B1 (en) * | 2012-01-30 | 2013-12-17 | Google Inc. | Social based aggregation of related media content |
US8621355B2 (en) * | 2011-02-02 | 2013-12-31 | Apple Inc. | Automatic synchronization of media clips |
US20140050334A1 (en) * | 2012-08-15 | 2014-02-20 | Warner Bros. Entertainment Inc. | Transforming audio content for subjective fidelity |
US20140297882A1 (en) * | 2013-04-01 | 2014-10-02 | Microsoft Corporation | Dynamic track switching in media streaming |
US8917355B1 (en) * | 2013-08-29 | 2014-12-23 | Google Inc. | Video stitching system and method |
US8966515B2 (en) * | 2010-11-08 | 2015-02-24 | Sony Corporation | Adaptable videolens media engine |
US9111579B2 (en) * | 2011-11-14 | 2015-08-18 | Apple Inc. | Media editing with multi-camera media clips |
US9143742B1 (en) * | 2012-01-30 | 2015-09-22 | Google Inc. | Automated aggregation of related media content |
US9313359B1 (en) * | 2011-04-26 | 2016-04-12 | Gracenote, Inc. | Media content identification on mobile devices |
Family Cites Families (88)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6421733B1 (en) * | 1997-03-25 | 2002-07-16 | Intel Corporation | System for dynamically transcoding data transmitted between computers |
US7092914B1 (en) * | 1997-11-06 | 2006-08-15 | Intertrust Technologies Corporation | Methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information |
US8266266B2 (en) * | 1998-12-08 | 2012-09-11 | Nomadix, Inc. | Systems and methods for providing dynamic network authorization, authentication and accounting |
US20040083184A1 (en) * | 1999-04-19 | 2004-04-29 | First Data Corporation | Anonymous card transactions |
US7149549B1 (en) * | 2000-10-26 | 2006-12-12 | Ortiz Luis M | Providing multiple perspectives for a venue activity through an electronic hand held device |
US7797005B2 (en) * | 2000-09-06 | 2010-09-14 | Eric Inselberg | Methods, systems and apparatus for interactive audience participation at a live entertainment event |
US7792539B2 (en) * | 2000-09-06 | 2010-09-07 | Eric Inselberg | Method and apparatus for interactive audience participation at a live entertainment event |
US20110238855A1 (en) * | 2000-09-25 | 2011-09-29 | Yevgeny Korsunsky | Processing data flows with a data flow processor |
US8316450B2 (en) * | 2000-10-10 | 2012-11-20 | Addn Click, Inc. | System for inserting/overlaying markers, data packets and objects relative to viewable content and enabling live social networking, N-dimensional virtual environments and/or other value derivable from the content |
US7818435B1 (en) * | 2000-12-14 | 2010-10-19 | Fusionone, Inc. | Reverse proxy mechanism for retrieving electronic content associated with a local network |
US7613834B1 (en) * | 2001-04-04 | 2009-11-03 | Palmsource Inc. | One-to-many device synchronization using downloaded/shared client software |
CA2410118C (en) * | 2001-10-26 | 2007-12-18 | Research In Motion Limited | System and method for controlling configuration settings for mobile communication devices and services |
US7849173B1 (en) * | 2001-12-31 | 2010-12-07 | Christopher Uhlik | System for on-demand access to local area networks |
US7733366B2 (en) * | 2002-07-01 | 2010-06-08 | Microsoft Corporation | Computer network-based, interactive, multimedia learning system and process |
US20040088212A1 (en) * | 2002-10-31 | 2004-05-06 | Hill Clarke R. | Dynamic audience analysis for computer content |
US9032465B2 (en) * | 2002-12-10 | 2015-05-12 | Ol2, Inc. | Method for multicasting views of real-time streaming interactive video |
US20040224703A1 (en) * | 2003-05-09 | 2004-11-11 | Takaki Steven M. | Method and system for enhancing venue participation by venue participants |
US7137099B2 (en) * | 2003-10-24 | 2006-11-14 | Microsoft Corporation | System and method for extending application preferences classes |
US20050091184A1 (en) * | 2003-10-24 | 2005-04-28 | Praveen Seshadri | Personalized folders |
US20050239551A1 (en) * | 2004-04-26 | 2005-10-27 | Scott Griswold | System and method for providing interactive games |
US7984190B2 (en) * | 2004-05-07 | 2011-07-19 | Panasonic Avionics Corporation | System and method for managing content on mobile platforms |
US20070110074A1 (en) * | 2004-06-04 | 2007-05-17 | Bob Bradley | System and Method for Synchronizing Media Presentation at Multiple Recipients |
US20060015450A1 (en) * | 2004-07-13 | 2006-01-19 | Wells Fargo Bank, N.A. | Financial services network and associated processes |
US20060104600A1 (en) * | 2004-11-12 | 2006-05-18 | Sfx Entertainment, Inc. | Live concert/event video system and method |
US8933967B2 (en) * | 2005-07-14 | 2015-01-13 | Charles D. Huston | System and method for creating and sharing an event using a social network |
US7991764B2 (en) * | 2005-07-22 | 2011-08-02 | Yogesh Chunilal Rathod | Method and system for communication, publishing, searching, sharing and dynamically providing a journal feed |
EP2498210A1 (en) * | 2005-07-22 | 2012-09-12 | Kangaroo Media, Inc. | System and methods for enhancing the experience of spectators attending a live sporting event |
WO2007050997A2 (en) * | 2005-10-26 | 2007-05-03 | Cellscient, Inc. | Wireless interactive communication system |
US8347373B2 (en) * | 2007-05-08 | 2013-01-01 | Fortinet, Inc. | Content filtering of remote file-system access protocols |
US8929870B2 (en) * | 2006-02-27 | 2015-01-06 | Qualcomm Incorporated | Methods, apparatus, and system for venue-cast |
US20070236334A1 (en) * | 2006-03-31 | 2007-10-11 | Borovoy Richard D | Enhancing face-to-face communication |
US8639215B2 (en) * | 2006-04-07 | 2014-01-28 | Gregory M. McGregor | SIM-centric mobile commerce system for deployment in a legacy network infrastructure |
US20070282948A1 (en) * | 2006-06-06 | 2007-12-06 | Hudson Intellectual Properties, Inc. | Interactive Presentation Method and System Therefor |
US20080034095A1 (en) * | 2006-08-01 | 2008-02-07 | Motorola, Inc. | Multi-representation media event handoff |
CN101548494B (en) * | 2006-08-22 | 2013-08-21 | 丛林网络公司 | Apparatus and method of controlled delay packet forwarding, and synchronization method |
US7764632B2 (en) * | 2006-08-24 | 2010-07-27 | Interwise Ltd. | Software bridge for multi-point multi-media teleconferencing and telecollaboration |
JP4830787B2 (en) * | 2006-10-25 | 2011-12-07 | 日本電気株式会社 | Mobile communication system, core network device, and MBMS data transmission method used therefor |
US9288276B2 (en) * | 2006-11-03 | 2016-03-15 | At&T Intellectual Property I, L.P. | Application services infrastructure for next generation networks including a notification capability and related methods and computer program products |
US20140165091A1 (en) * | 2006-11-22 | 2014-06-12 | Raj Abhyanker | Television and radio stations broadcasted by users of a neighborhood social network using a radial algorithm |
US8045965B2 (en) * | 2007-02-02 | 2011-10-25 | MLB Advanced Media L.P. | System and method for venue-to-venue messaging |
US8027560B2 (en) * | 2007-02-05 | 2011-09-27 | Thales Avionics, Inc. | System and method for synchronizing playback of audio and video |
US20080209031A1 (en) * | 2007-02-22 | 2008-08-28 | Inventec Corporation | Method of collecting and managing computer device information |
US7881702B2 (en) * | 2007-03-12 | 2011-02-01 | Socializeit, Inc. | Interactive entertainment, social networking, and advertising system |
US20080294502A1 (en) * | 2007-05-25 | 2008-11-27 | Eventmobile, Inc. | System and Method for Providing Event-Based Services |
US8645842B2 (en) * | 2007-11-05 | 2014-02-04 | Verizon Patent And Licensing Inc. | Interactive group content systems and methods |
US20090197551A1 (en) * | 2008-02-05 | 2009-08-06 | Paper Radio Llc | Billboard Receiver and Localized Broadcast System |
US20090215538A1 (en) * | 2008-02-22 | 2009-08-27 | Samuel Jew | Method for dynamically synchronizing computer network latency |
US8918541B2 (en) * | 2008-02-22 | 2014-12-23 | Randy Morrison | Synchronization of audio and video signals from remote sources over the internet |
US20100023968A1 (en) * | 2008-07-23 | 2010-01-28 | Tvworks, Llc, C/O Comcast Cable | Community-Based Enhanced Television |
US20110178854A1 (en) * | 2008-09-04 | 2011-07-21 | Somertech Ltd. | Method and system for enhancing and/or monitoring visual content and method and/or system for adding a dynamic layer to visual content |
US8392530B1 (en) * | 2008-12-18 | 2013-03-05 | Adobe Systems Incorporated | Media streaming in a multi-tier client-server architecture |
US8306013B2 (en) * | 2009-01-23 | 2012-11-06 | Empire Technology Development Llc | Interactions among mobile devices in a wireless network |
US8688517B2 (en) * | 2009-02-13 | 2014-04-01 | Cfph, Llc | Method and apparatus for advertising on a mobile gaming device |
WO2010102296A1 (en) * | 2009-03-06 | 2010-09-10 | Exactarget, Inc. | System and method for controlling access to aspects of an electronic messaging campaign |
US9064282B1 (en) * | 2009-05-21 | 2015-06-23 | Heritage Capital Corp. | Live auctioning system and methods |
US8879440B2 (en) * | 2009-09-29 | 2014-11-04 | Qualcomm Incorporated | Method and apparatus for ad hoc venue-cast service |
US8356316B2 (en) * | 2009-12-17 | 2013-01-15 | At&T Intellectual Property I, Lp | Method, system and computer program product for an emergency alert system for audio announcement |
US8503984B2 (en) * | 2009-12-23 | 2013-08-06 | Amos Winbush, III | Mobile communication device user content synchronization with central web-based records and information sharing system |
WO2011119504A1 (en) * | 2010-03-22 | 2011-09-29 | Mobitv, Inc. | Tile based media content selection |
US9165422B2 (en) * | 2010-04-26 | 2015-10-20 | Wms Gaming, Inc. | Controlling group wagering games |
US20110263342A1 (en) * | 2010-04-27 | 2011-10-27 | Arena Text & Graphics | Real time card stunt method |
US8751305B2 (en) * | 2010-05-24 | 2014-06-10 | 140 Proof, Inc. | Targeting users based on persona data |
US10096161B2 (en) * | 2010-06-15 | 2018-10-09 | Live Nation Entertainment, Inc. | Generating augmented reality images using sensor and location data |
US20120060101A1 (en) * | 2010-08-30 | 2012-03-08 | Net Power And Light, Inc. | Method and system for an interactive event experience |
US20120274775A1 (en) * | 2010-10-20 | 2012-11-01 | Leonard Reiffel | Imager-based code-locating, reading and response methods and apparatus |
WO2012066842A1 (en) * | 2010-11-15 | 2012-05-24 | 日本電気株式会社 | Behavior information gathering device and behavior information transmission device |
US8631122B2 (en) * | 2010-11-29 | 2014-01-14 | Viralheat, Inc. | Determining demographics based on user interaction |
KR20120087253A (en) * | 2010-12-17 | 2012-08-07 | 한국전자통신연구원 | System for providing customized contents and method for providing customized contents |
BR112013019302A2 (en) * | 2011-02-01 | 2018-05-02 | Timeplay Entertainment Corporation | multi-location interaction system and method for providing interactive experience to two or more participants located on one or more interactive nodes |
US20120239526A1 (en) * | 2011-03-18 | 2012-09-20 | Revare Steven L | Interactive music concert method and apparatus |
US8561080B2 (en) * | 2011-04-26 | 2013-10-15 | Sap Ag | High-load business process scalability |
US9294210B2 (en) * | 2011-08-15 | 2016-03-22 | Futuri Media, Llc | System for providing interaction between a broadcast automation system and a system for generating audience interaction with radio programming |
US8935279B2 (en) * | 2011-06-13 | 2015-01-13 | Opus Deli, Inc. | Venue-related multi-media management, streaming, online ticketing, and electronic commerce techniques implemented via computer networks and mobile devices |
US8948567B2 (en) * | 2011-06-20 | 2015-02-03 | Microsoft Technology Licensing, Llc | Companion timeline with timeline events |
US20130212619A1 (en) * | 2011-09-01 | 2013-08-15 | Gface Gmbh | Advertisement booking and media management for digital displays |
GB2511003B (en) * | 2011-09-18 | 2015-03-04 | Touchtunes Music Corp | Digital jukebox device with karaoke and/or photo booth features, and associated methods |
CN104272660A (en) * | 2011-10-11 | 2015-01-07 | 时间游戏公司 | Systems and methods for interactive experiences and controllers therefor |
US8805751B2 (en) * | 2011-10-13 | 2014-08-12 | Verizon Patent And Licensing Inc. | User class based media content recommendation methods and systems |
WO2013067526A1 (en) * | 2011-11-04 | 2013-05-10 | Remote TelePointer, LLC | Method and system for user interface for interactive devices using a mobile device |
US20130170819A1 (en) * | 2011-12-29 | 2013-07-04 | United Video Properties, Inc. | Systems and methods for remotely managing recording settings based on a geographical location of a user |
US9129087B2 (en) * | 2011-12-30 | 2015-09-08 | Rovi Guides, Inc. | Systems and methods for managing digital rights based on a union or intersection of individual rights |
US20130197981A1 (en) * | 2012-01-27 | 2013-08-01 | 2301362 Ontario Limited | System and apparatus for provisioning services in an event venue |
US20130194406A1 (en) * | 2012-01-31 | 2013-08-01 | Kai Liu | Targeted Delivery of Content |
US9330203B2 (en) * | 2012-03-02 | 2016-05-03 | Qualcomm Incorporated | Real-time event feedback |
WO2014008513A1 (en) * | 2012-07-06 | 2014-01-09 | Hanginout, Inc. | Interactive video response platform |
US20150046370A1 (en) * | 2013-08-06 | 2015-02-12 | Evernote Corporation | Providing participants with meeting notes for upcoming meeting |
US10687183B2 (en) * | 2014-02-19 | 2020-06-16 | Red Hat, Inc. | Systems and methods for delaying social media sharing based on a broadcast media transmission |
US9208506B2 (en) * | 2014-03-17 | 2015-12-08 | Bleachr Llc | Geofenced event-based fan networking: methods |
- 2013
- 2013-05-15 US US13/895,282 patent/US20130311581A1/en not_active Abandoned
- 2013-05-15 US US13/895,313 patent/US9143564B2/en not_active Expired - Fee Related
- 2013-05-15 US US13/895,290 patent/US9071628B2/en not_active Expired - Fee Related
- 2013-05-15 US US13/895,307 patent/US20130311566A1/en not_active Abandoned
- 2013-05-15 US US13/895,274 patent/US20130308051A1/en not_active Abandoned
- 2013-05-15 US US13/895,253 patent/US9357005B2/en not_active Expired - Fee Related
- 2013-05-15 US US13/895,300 patent/US9246999B2/en not_active Expired - Fee Related
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6490359B1 (en) * | 1992-04-27 | 2002-12-03 | David A. Gibson | Method and apparatus for using visual images to mix sound |
US7796162B2 (en) * | 2000-10-26 | 2010-09-14 | Front Row Technologies, Llc | Providing multiple synchronized camera views for broadcast from a live venue activity to remote viewers |
US20040264917A1 (en) * | 2003-06-25 | 2004-12-30 | M/X Entertainment, Inc. | Audio waveform cueing for enhanced visualizations during audio playback |
US20070292832A1 (en) * | 2006-05-31 | 2007-12-20 | Eolas Technologies Inc. | System for visual creation of music |
US8402356B2 (en) * | 2006-11-22 | 2013-03-19 | Yahoo! Inc. | Methods, systems and apparatus for delivery of media |
US20090077170A1 (en) * | 2007-09-17 | 2009-03-19 | Andrew Morton Milburn | System, Architecture and Method for Real-Time Collaborative Viewing and Modifying of Multimedia |
US9449647B2 (en) * | 2008-01-11 | 2016-09-20 | Red Giant, Llc | Temporal alignment of video recordings |
US8205148B1 (en) * | 2008-01-11 | 2012-06-19 | Bruce Sharpe | Methods and apparatus for temporal alignment of media |
US20100183280A1 (en) * | 2008-12-10 | 2010-07-22 | Muvee Technologies Pte Ltd. | Creating a new video production by intercutting between multiple video clips |
US20110283865A1 (en) * | 2008-12-30 | 2011-11-24 | Karen Collins | Method and system for visual representation of sound |
US8026436B2 (en) * | 2009-04-13 | 2011-09-27 | Smartsound Software, Inc. | Method and apparatus for producing audio tracks |
US8966515B2 (en) * | 2010-11-08 | 2015-02-24 | Sony Corporation | Adaptable videolens media engine |
US8621355B2 (en) * | 2011-02-02 | 2013-12-31 | Apple Inc. | Automatic synchronization of media clips |
US20120213438A1 (en) * | 2011-02-23 | 2012-08-23 | Rovi Technologies Corporation | Method and apparatus for identifying video program material or content via filter banks |
US20120237040A1 (en) * | 2011-03-16 | 2012-09-20 | Apple Inc. | System and Method for Automated Audio Mix Equalization and Mix Visualization |
US8244103B1 (en) * | 2011-03-29 | 2012-08-14 | Capshore, Llc | User interface for method for creating a custom track |
US9313359B1 (en) * | 2011-04-26 | 2016-04-12 | Gracenote, Inc. | Media content identification on mobile devices |
US9111579B2 (en) * | 2011-11-14 | 2015-08-18 | Apple Inc. | Media editing with multi-camera media clips |
US8612517B1 (en) * | 2012-01-30 | 2013-12-17 | Google Inc. | Social based aggregation of related media content |
US9143742B1 (en) * | 2012-01-30 | 2015-09-22 | Google Inc. | Automated aggregation of related media content |
US20140050334A1 (en) * | 2012-08-15 | 2014-02-20 | Warner Bros. Entertainment Inc. | Transforming audio content for subjective fidelity |
US20140297882A1 (en) * | 2013-04-01 | 2014-10-02 | Microsoft Corporation | Dynamic track switching in media streaming |
US8917355B1 (en) * | 2013-08-29 | 2014-12-23 | Google Inc. | Video stitching system and method |
Also Published As
Publication number | Publication date |
---|---|
US20140019520A1 (en) | 2014-01-16 |
US20130325928A1 (en) | 2013-12-05 |
US9143564B2 (en) | 2015-09-22 |
US20130308621A1 (en) | 2013-11-21 |
US20130310083A1 (en) | 2013-11-21 |
US9246999B2 (en) | 2016-01-26 |
US9357005B2 (en) | 2016-05-31 |
US20130311566A1 (en) | 2013-11-21 |
US9071628B2 (en) | 2015-06-30 |
US20130311581A1 (en) | 2013-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130308051A1 (en) | Method, system, and non-transitory machine-readable medium for controlling a display in a first medium by analysis of contemporaneously accessible content sources | |
JP7418865B2 (en) | Method, digital jukebox system and recording medium | |
US10514885B2 (en) | Apparatus and method for controlling audio mixing in virtual reality environments | |
US10541003B2 (en) | Performance content synchronization based on audio | |
US8917972B2 (en) | Modifying audio in an interactive video using RFID tags | |
CN101803336B (en) | Technique for allowing the modification of the audio characteristics of items appearing in an interactive video using RFID tags | |
US10079993B2 (en) | System for juxtaposition of separately recorded videos | |
EP2830041A2 (en) | Interactive audio content generation, delivery, playback and sharing | |
US20120206566A1 (en) | Methods and systems for relating to the capture of multimedia content of observed persons performing a task for evaluation | |
US9112980B2 (en) | Systems and methods for selectively reviewing a recorded conference | |
US20140281979A1 (en) | Modular audio control surface | |
CN109035930B (en) | Recorded broadcast data recommendation method based on Internet | |
US9305601B1 (en) | System and method for generating a synchronized audiovisual mix | |
JP6179257B2 (en) | Music creation method, apparatus, system and program | |
Otondo et al. | The soundlapse project: Exploring spatiotemporal features of wetland soundscapes | |
KR101489211B1 (en) | Method and apparatus for creating a video with photos | |
KR100714409B1 (en) | Apparutus for making video lecture coupled with lecture scenario and teaching materials and Method thereof | |
Rivas Pagador et al. | Co-creation stage: a web-based tool for collaborative and participatory co-located art performances | |
US20170330544A1 (en) | Method and system for creating an audio composition | |
WO2017126085A1 (en) | Lighting control device, lighting control method and lighting control program | |
CN106375863A (en) | Panoramic information presentation method and presentation device thereof | |
KR20170015447A (en) | Method and apparatus for providing contents complex | |
Haaraoja | Fundamentals of streaming: how to setup a virtual event system | |
David et al. | RoomZ: Spatial panning plugin for dynamic auralisations based on RIR convolution | |
CN116347154A (en) | Video processing method and device based on sub-application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |