EP4374369A1 - Generating audiovisual content based on video clips - Google Patents
Generating audiovisual content based on video clips
- Publication number
- EP4374369A1 (Application EP21762182.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio
- user
- video
- clips
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000001052 transient effect Effects 0.000 claims abstract description 50
- 238000000034 method Methods 0.000 claims abstract description 27
- 238000010801 machine learning Methods 0.000 claims description 77
- 230000006870 function Effects 0.000 claims description 20
- 230000033764 rhythmic process Effects 0.000 claims description 19
- 230000000007 visual effect Effects 0.000 claims description 13
- 230000008859 change Effects 0.000 claims description 12
- 239000000203 mixture Substances 0.000 claims description 9
- 230000001755 vocal effect Effects 0.000 claims description 9
- 238000013500 data storage Methods 0.000 claims description 4
- 230000006855 networking Effects 0.000 claims description 4
- 238000004519 manufacturing process Methods 0.000 claims description 2
- 238000012549 training Methods 0.000 description 33
- 238000004422 calculation algorithm Methods 0.000 description 29
- 238000012545 processing Methods 0.000 description 20
- 238000003860 storage Methods 0.000 description 18
- 238000004891 communication Methods 0.000 description 14
- 238000005516 engineering process Methods 0.000 description 13
- 230000008569 process Effects 0.000 description 9
- 238000004590 computer program Methods 0.000 description 8
- 238000000605 extraction Methods 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 6
- 230000002787 reinforcement Effects 0.000 description 4
- 230000004044 response Effects 0.000 description 4
- 230000003993 interaction Effects 0.000 description 3
- 239000010410 layer Substances 0.000 description 3
- 230000005236 sound signal Effects 0.000 description 3
- 241001465754 Metazoa Species 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 230000001413 cellular effect Effects 0.000 description 2
- 238000013478 data encryption standard Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000013515 script Methods 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 238000012163 sequencing technique Methods 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 238000013526 transfer learning Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000007664 blowing Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 125000004122 cyclic group Chemical group 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 239000002355 dual-layer Substances 0.000 description 1
- 238000005538 encapsulation Methods 0.000 description 1
- 238000007477 logistic regression Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 239000011435 rock Substances 0.000 description 1
- 238000010079 rubber tapping Methods 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 238000004904 shortening Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
- G10H1/368—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems displaying animated or moving pictures synchronized with the music or audio part
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/40—Rhythm
- G10H1/42—Rhythm comprising tone forming circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/311—Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation
Definitions
- Many modern computing devices, such as mobile phones, personal computers, and tablets, include image capture devices, such as still and/or video cameras.
- the image capture devices can capture images, such as images that include people, animals, landscapes, and/or objects.
- the present disclosure relates generally to generating audiovisual content from captured videos.
- aspects of the subject technology relate to creation of audiovisual content from captured video. Relevant portions of audio content can be extracted from a video, along with corresponding portions of the video. A user can then sequence such portions of the audio content to create new audiovisual content.
- in a first example embodiment, a device includes a graphical user interface configured to enable generation of audiovisual content.
- the device also includes one or more processors.
- the device further includes data storage.
- the data storage has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing device to carry out functions.
- the functions include capturing, by a content generation component of the computing device, initial content comprising video, and audio associated with the video.
- the functions further include identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- the functions also include extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- the functions additionally include providing, via the graphical user interface, a control interface to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips.
- the functions further include generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- the functions also include providing, by the control interface, the new audiovisual content.
- a computer-implemented method includes capturing, by a content generation component of a computing device, initial content comprising video, and audio associated with the video.
- the computer-implemented method further includes identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- the computer-implemented method also includes extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- the computer-implemented method additionally includes providing, via a graphical user interface of the computing device, a control interface to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips.
- the computer-implemented method further includes generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- the computer-implemented method also includes providing, by the control interface, the new audiovisual content.
- in a third example embodiment, an article of manufacture is provided, including a non-transitory computer-readable medium having stored thereon program instructions that, upon execution by one or more processors of a computing device, cause the computing device to carry out operations.
- the operations include capturing, by a content generation component of the computing device, initial content comprising video, and audio associated with the video.
- the operations also include identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- the operations also include extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- the operations additionally include providing, via a graphical user interface of the computing device, a control interface to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips.
- the operations further include generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- the operations additionally include providing, by the control interface, the new audiovisual content.
- in a fourth example embodiment, a system includes means for capturing, by a content generation component of a computing device, initial content comprising video, and audio associated with the video; means for identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio; means for extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content; means for providing, via a graphical user interface of the computing device, a control interface to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips; means for generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips; and means for providing, by the control interface, the new audiovisual content.
- Figure 1 illustrates a computing device, in accordance with example embodiments.
- Figure 2 is a schematic illustration of extraction of audio and video clips, in accordance with example embodiments.
- Figure 3 is an example lookup table illustrating audio clips and corresponding video clips, in accordance with example embodiments.
- Figure 4 illustrates example sequences of audio clips and corresponding audiovisual content, in accordance with example embodiments.
- Figure 5 illustrates an example control interface, in accordance with example embodiments.
- Figure 6 illustrates another example control interface, in accordance with example embodiments.
- Figure 7 illustrates another example control interface, in accordance with example embodiments.
- Figure 8 illustrates another example control interface, in accordance with example embodiments.
- Figure 9 illustrates an example network environment for creation of audiovisual content, in accordance with example embodiments.
- Figure 10 is a diagram illustrating a training phase and an inference phase of a machine learning model, in accordance with example embodiments.
- Figure 11 illustrates a flow chart, in accordance with example embodiments.
- Example methods, devices, and systems are described herein. It should be understood that the words “example” and “exemplary” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as being an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein.
- Video and/or sound editing can be an elaborate process that generally requires specialized equipment, a studio environment, and experienced editors to cut, edit, and/or synthesize audiovisual content to generate soundtracks, video content, and so forth.
- Operating systems of computing devices, for example those targeted for mobile devices, have simple built-in audio processing effects and offer limited options to create and/or edit audiovisual content.
- Some operating systems may be provided with audio processing architectures that may be utilized to create new audio content.
- Some mobile applications may offer an ability to merge different sound tracks, and/or create new beats for existing soundtracks.
- audio clips comprising voices, instrumental sounds, and/or other sounds can be extracted from a video.
- percussive sounds can be set to different rhythms, melodic sounds can be re-pitched across a collection of notes, and so forth, to generate audio clips.
- a user may then synthesize these audio clips to create new audio content.
- the audio content can then be played along with portions of video clips corresponding to the extracted audio clips.
- FIG. 1 illustrates computing device 100, in accordance with example embodiments.
- Computing device 100 can be a computer, phone, personal digital assistant (PDA), or any other sort of electronic device.
- Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media.
- Computing device 100 includes a bus 102, a content capture component 110, a content extraction component 120, a content generation component 130, one or more audio components 140, network interface 150, graphical user interface(s) 160, control interface(s) 162, and controller 170, comprising processor(s) 172 and memory 174.
- computing device 100 may take the form of a desktop device, a server device, or a mobile device.
- Computing device 100 may be configured to interact with an environment. For example, computing device 100 may record audio signals from an environment around computing device 100.
- Bus 102 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of computing device 100.
- bus 102 communicatively connects processor(s) 172 with memory 174.
- Bus 102 also connects to input and output device interfaces (not shown).
- the input device interface enables the user to communicate information and select commands to computing device 100.
- Input devices used with the input device interface include, for example, alphanumeric keyboards, pointing devices (also called “cursor control devices”), and sound capturing devices (e.g., microphones).
- the output device interface enables, for example, the playback of sound, the display of images generated by computing device 100, and so forth.
- Output devices used with the output device interface include, for example, printers, display devices (e.g., cathode ray tubes (CRT) or liquid crystal displays (LCD)), and sound playback devices (e.g., speakers). Some implementations include devices, for example, a touchscreen that functions as both input and output devices.
- Bus 102 also couples computing device 100 to a network (not shown) through network interface 150.
- the computer can be a part of a network of computers (for example, a LAN, a WAN, or an Intranet, or a network of networks, for example, the Internet). Any or all components of computing device 100 can be used in conjunction with the subject disclosure.
- computing device 100 can include content capture component 110, such as still and/or video cameras.
- Content capture component 110 can capture images, such as images that include people, animals, landscapes, and/or objects.
- Content can include still images, audio, video, and/or audiovisual content.
- content capture component 110 can capture initial content comprising video, and audio associated with the video.
- computing device 100 can include content extraction component 120.
- content extraction component 120 can identify one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- content extraction component 120 can extract, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- the term “clip” as used herein generally refers to a portion of audio or video content. A clip can be identified based on temporal markings and/or metadata associated with audiovisual content.
- computing device 100 can include content generation component 130.
- content generation component 130 can generate new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips. Each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- computing device 100 can include one or more audio component(s) 140.
- One or more audio component(s) 140 can include audio output components that can be configured to output audio to an environment of computing device 100.
- the audio output components may be a part of computing device 100.
- the audio output components may include a plurality of speakers located on computing device 100.
- the audio output components may be part of a second device communicatively coupled to computing device 100.
- the audio output components may be a network device configured to output audio, one or more speakers, an audio amplifier system, a headphone, a car audio, and so forth.
- one or more audio components 140 can include audio input components.
- Audio input components can be configured to record audio from an environment of computing device 100. For example, as a camera of computing device 100 captures video images, the audio input components can be configured to simultaneously record audio associated with the video images.
- the audio input components may be a part of computing device 100.
- audio input components may include a plurality of microphones located on computing device 100.
- the audio input components may be part of a second device communicatively coupled to computing device 100.
- the audio input components may be a network device configured to record audio, such as a microphone (e.g., in a headphone, a car audio), and so forth.
- audio input components may be a smart device (e.g., a smart watch, a mobile device) configured to capture audio and communicate the audio signal to computing device 100.
- Network interface 150 can include one or more wireless interfaces and/or wireline interfaces that are configurable to communicate via a network.
- Wireless interfaces can include one or more wireless transmitters, receivers, and/or transceivers, such as a Bluetooth™ transceiver, a Zigbee® transceiver, a Wi-Fi™ transceiver, a WiMAX™ transceiver, and/or other similar types of wireless transceivers configurable to communicate via a wireless network.
- Wireline interfaces can include one or more wireline transmitters, receivers, and/or transceivers, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiber- optic link, or a similar physical connection to a wireline network.
- network interface 150 can be configured to provide reliable, secured, and/or authenticated communications. For each communication described herein, information for facilitating reliable communications (e.g., secured audio content delivery) can be provided, perhaps as part of a secure data packet transmission (e.g., packet/message sequencing information, encapsulation headers and/or footers, size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values).
- Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, Data Encryption Standard (DES), Advanced Encryption Standard (AES), a Rivest-Shamir-Adelman (RSA) algorithm, a Diffie-Hellman algorithm, a secure sockets protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), and/or Digital Signature Algorithm (DSA).
- Other cryptographic protocols and/or algorithms can be used as well or in addition to those listed herein to secure (and then decrypt/decode) communications.
- Graphical user interface(s) 160 may be configured to provide output signals to a user by way of one or more screens (including touch screens), cathode ray tubes (CRTs), liquid crystal displays (LCDs), light emitting diodes (LEDs), displays using digital light processing (DLP) technology, and/or other similar technologies. Graphical user interface(s) 160 may also be configured to generate audible outputs, such as with a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices. Graphical user interface(s) 160 may further be configured with one or more haptic components that can generate haptic outputs, such as vibrations and/or other outputs detectable by touch and/or physical contact with computing device 100.
- Graphical user interface(s) 160 may include control interface(s) 162.
- Control interface(s) 162 can enable a user to generate a sequence of audio clips, where each audio clip in the sequence of audio clips is selected from a collection of identified audio clips.
- Control interface(s) 162 can provide the new audiovisual content.
- Control interface(s) 162 can include a plurality of selectable tabs corresponding to a plurality of audio channels. User selection of a tab of the plurality of selectable tabs enables user access to one or more channel interfaces to interact with one or more of an audio clip or a video clip in the audio channel corresponding to the user selected tab.
- the plurality of audio channels can include audio clips corresponding to one or more of a melodic note, a percussive sound, a musical composition, an instrumental sound, a silence, or a vocal phrase.
- each audio channel of the plurality of audio channels can be associated with a given audiovisual content different from the initial content.
- the one or more channel interfaces of control interface(s) 162 can include an interface with one or more icons corresponding to the one or more identified audio clips.
- the user-generated sequence of audio clips can be based on user indication of selecting at least one icon of the one or more icons to generate the sequence.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of user-generated sequences corresponding to the plurality of audio channels, and further including a selectable option enabling a user to chain the one or more sequences to generate a new sequence.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of user-generated sequences corresponding to the plurality of audio channels, and further including a selectable option enabling a user to mix the one or more sequences to generate a new audio track.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a pair of coordinate axes.
- a horizontal axis corresponds to a plurality of pitch adjustments for the user-generated sequence
- a vertical axis corresponds to a plurality of simultaneously adjustable audio filter adjustments for the user-generated sequence.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a plurality of respective volume controls for the plurality of audio channels.
- the plurality of respective volume controls can enable a user to simultaneously control volume settings of each of the plurality of audio channels.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a first tool to adjust a tempo, a second tool to adjust a swing, and a third tool to adjust a root musical note, for an audio clip in the sequence of audio clips.
- the one or more channel interfaces of control interface(s) 162 can include an interface displaying a plurality of video edit icons, and where user selection of a video edit icon of the plurality of video edit icons enables application of a video edit feature to a video clip of the sequence of video clips.
- Controller 170 may include one or more processors 172 and memory 174.
- Processor(s) 172 can include one or more general purpose processors and/or one or more special purpose processors (e.g., display driver integrated circuit (DDIC), digital signal processors (DSPs), tensor processing units (TPUs), graphics processing units (GPUs), application specific integrated circuits (ASICs), etc.).
- Processor(s) 172 can be a single processor or a multi-core processor in different implementations.
- Processor(s) 172 may be configured to execute computer-readable instructions that are contained in memory 174 and/or other instructions as described herein.
- Memory 174 may include one or more non-transitory computer-readable storage media that can be read and/or accessed by processor(s) 172.
- the one or more non-transitory computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of processor(s) 172.
- memory 174 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other examples, memory 174 can be implemented using two or more physical devices.
- a read-only memory (ROM) can store static data and instructions that are needed by processor(s) 172 and other modules of computing device 100.
- memory 174 can also include a permanent storage device that is a read-and-write memory device.
- some implementations may use a removable storage device (for example, a floppy disk, flash drive) as a permanent storage device.
- a system memory may be used that is a read-and-write memory device.
- system memory is a volatile read-and-write memory, such as a random access memory.
- System memory may store some of the instructions and data that processor(s) 172 need at runtime.
- the processes of the subject disclosure are stored in the system memory, permanent storage device, or ROM.
- the various memory units comprising memory 174 include instructions for displaying graphical elements and identifiers associated with respective applications, receiving a predetermined user input to display visual representations of shortcuts associated with respective applications, and displaying the visual representations of shortcuts. From these various memory units, processor(s) 172 retrieves instructions to execute and data to process in order to execute the processes of some implementations.
- processor(s) 172 are configured to execute instructions stored in memory 174 so as to carry out operations.
- the operations may include capturing, by a content generation component of the computing device, initial content comprising video, and audio associated with the video.
- the operations may include identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- the identifying of the one or more audio clips in the initial content can include identifying, in a soundtrack of the initial content, one or more of a melodic note, a percussive sound, a musical composition, an instrumental sound, a change in audio intensity, a silence, or a vocal phrase.
- audio corresponding to a melodic note, a percussive sound, a musical composition, an instrumental sound, a silence, or a vocal phrase may be identified based on transient points indicative of such audio.
- the transient points in the initial content can include one or more of a transient location, a pause, or a cut.
- the identifying of the one or more audio clips in the initial content can be performed by a trained machine learning model.
- a machine learning model can be trained to identify the transient points, and the trained machine learning model can be deployed to identify relevant audio clips.
- the trained machine learning model can identify a classification for an audio clip of the one or more audio clips. For example, the trained machine learning model can identify that an audio clip corresponds to a melodic note, a percussive sound, a musical composition, a vocal phrase, and so forth. Then, based on the classification, a visual label may be generated for the audio clip.
- the visual label may be a schematic representation of a hand.
- the visual label may be a schematic representation of a violin. Additional and/or alternative visual labels may be generated.
- the visual label can be displayed on a selectable icon corresponding to the audio clip via the control interface.
- first key 540 of Figure 5 may correspond to a first audio clip corresponding to a percussive sound produced by a tap of a hand, and the visual label on first key 540 is a schematic representation of a hand.
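- As an illustrative sketch of how such labels might be attached (an assumption for illustration, not the patent's specified implementation), the classification string produced by the trained model can be mapped to an icon asset displayed on the corresponding selectable key:

```python
# Hypothetical mapping from a classifier's output to a visual label shown
# on a key of the control interface (e.g., first key 540 of Figure 5).
# Label strings and asset names are assumptions, not from the patent.
VISUAL_LABELS = {
    "percussive_hand_tap": "hand_icon.svg",
    "instrumental_violin": "violin_icon.svg",
    "vocal_phrase": "mouth_icon.svg",
}

def label_for(classification: str) -> str:
    """Return the icon asset for a classified audio clip, with a fallback."""
    return VISUAL_LABELS.get(classification, "generic_audio_icon.svg")
```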
- the operations may also include extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- the operations may further include providing, via graphical user interface(s) 160, a control interface 162 to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips.
- the one or more identified audio clips can include a plurality of percussive sounds including an initial rhythm
- the operations to provide the control interface(s) 162 can include generating a plurality of modified versions of the plurality of percussive sounds, where the plurality of modified versions is associated with a modified rhythm different from the initial rhythm.
- the operations can further include providing, via control interface(s) 162, the plurality of modified versions of the plurality of percussive sounds, where the user-generated sequence of audio clips is based on the plurality of modified versions of the plurality of percussive sounds.
- an audio clip of the one or more identified audio clips can include a musical note
- the operations to provide the control interface(s) 162 can include generating a plurality of repitched versions of the musical note.
- the one or more channel interfaces of control interface(s) 162 can include an interface with one or more icons corresponding to the plurality of repitched versions of the musical note, and the user-generated sequence of audio clips can be based on user indication of selecting at least one icon of the one or more icons to generate the sequence.
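- A minimal sketch of generating such repitched versions is shown below, assuming the open-source librosa library as the pitch shifter and a ±6-semitone range (both are assumptions for illustration; the patent does not prescribe a specific tool or range):

```python
# Sketch: generate one repitched copy of a musical note per semitone step,
# so each key of the keyboard channel interface can play a different pitch.
import numpy as np
import librosa

def repitched_versions(note: np.ndarray, sample_rate: int,
                       semitones=range(-6, 7)):
    """Return a dict mapping semitone offsets to repitched audio buffers."""
    return {n: librosa.effects.pitch_shift(note, sr=sample_rate, n_steps=n)
            for n in semitones}
```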
- the operations may also include generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- the operations may further include identifying one or more second audio clips in a second initial content based on one or more second transient points in the second initial content.
- the operations may also include enabling, via the control interface, a second user-generated sequence of second audio clips, wherein each second audio clip in the sequence of second audio clips is selected from the one or more identified second audio clips.
- the generating of the new audiovisual content comprises generating a second sequence of video clips to correspond to the user-generated sequence of audio clips and the user-generated sequence of second audio clips.
- the operations may also include providing, by control interface(s) 162, the new audiovisual content.
- the providing of the new audiovisual content can include providing user selectable virtual tabs to enable automatic upload of the new audiovisual content to a social networking site.
- FIG. 2 is a schematic illustration 200 of extraction of audio and video clips, in accordance with example embodiments.
- a media capture component (e.g., content capture component 110) of a computing device (e.g., computing device 100) may be used to capture initial content 205 comprising video, and audio associated with the video.
- initial content 205 may be a previously captured audiovisual content that is stored in a memory (e.g., memory 174) of the computing device (e.g., computing device 100).
- Initial content 205 can generally comprise any video content that is accompanied by audio content.
- initial content 205 can be a video of an object tapping different surfaces and generating a range of percussive sounds.
- initial content 205 can be a video of a kettle blowing steam and generating a whistling sound.
- initial content 205 can be a video of a newsreel where one or more individuals are conveying some news.
- initial content 205 can be a video of an orchestra playing a musical composition, a video of musical recital (e.g., piano, flute, and so forth), a video of a sports broadcast, a video of a musical concert, and so forth.
- initial content 205 can be a video of a bird chirping, an aircraft taking off, a train arriving at a station, and so forth.
- initial content 205 can be a video portion that is accompanied by silence.
- one or more audio clips may be identified in the audio associated with the video of initial content 205, based on one or more transient points in the initial content 205.
- initial content 205 can be associated with an audio track 210.
- audio track 210 is a schematic representation of a waveform corresponding to an audio track.
- audio track 210 may be extracted from initial content 205 for further analysis.
- Audio track 210 may comprise portions of audio that may be of interest to a user.
- audio track 210 can include audio of percussive sounds, melodic notes, human voices, silence, and/or other audio that may be of interest. Each such audio portion has a starting point in audio track 210 indicative of a change in audio characteristics.
- a transient point may generally refer to such a starting point for an audio portion in an audio track.
- a transient point may be detected based on an analysis of characteristics of audio track 210. For example, a change in audio intensity (e.g., a change in volume), an occurrence of speech (e.g., a person speaking), a change in a musical instrument (e.g., from flute to piano), a change in pitch, a change in an amount of background noise (e.g., a cheering crowd), a change from sound to silence, a change in a type of audio content (e.g., from speech to music, background noise, etc.), a change in a genre of music (e.g., classical, rock, blues, jazz, etc.), a point when a second song begins to play after a first song has been played, and so forth, can act as a transient point.
- a transient point may also be indicated by a cut, a pause, and so forth.
- a transient point may be detected by using a machine learning model trained to detect transient points.
- a machine learning model may be trained on labeled data comprising audio tracks with previously known transient points indicative of a starting point of previously classified audio portions. Once trained, the machine learning model can take as input initial content 205 and/or audio track 210, and detect one or more transient points. Further description of such machine learning models are provided herein with reference to Figure 10.
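- For illustration only, a transient detector based on one such characteristic, a jump in short-time energy, might look like the following sketch (function name, frame size, and threshold are assumptions; a trained machine learning model, as described above, could replace this heuristic):

```python
# Minimal sketch of energy-based transient detection. This illustrates only
# the "change in audio intensity" cue; the patent also contemplates pauses,
# cuts, pitch changes, and learned detectors.
import numpy as np

def detect_transients(samples: np.ndarray, sample_rate: int,
                      frame_ms: float = 20.0, ratio: float = 2.5) -> list[float]:
    """Return timestamps (seconds) where short-time energy rises sharply."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)

    transients = []
    for i in range(1, n_frames):
        prev = energy[i - 1] + 1e-12          # guard against silent frames
        if energy[i] / prev > ratio:          # abrupt rise in intensity
            transients.append(i * frame_len / sample_rate)
    return transients
```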
- Transient points are associated with portions of audio that are of interest to a user. Accordingly, one or more audio clips can be identified for each of the one or more identified transient points. As illustrated in Figure 2, transient points T1, T2, T3 may be identified in audio track 215. For each of the transient points T1, T2, T3, a corresponding audio clip A1, A2, A3 may be respectively identified. In some embodiments, the one or more audio clips (e.g., audio clips A1, A2, A3) may be extracted from initial content 205 and/or audio track 210.
- controller 170 may extract, for each audio clip of the one or more identified audio clips (e.g., audio clips A1, A2, A3), a corresponding video clip from initial content 205.
- a video clip V1 corresponding to A1, a video clip V2 corresponding to A2, and a video clip V3 corresponding to A3 may be identified.
- the video clip may be offset slightly backwards (from the start of the audio clip or the transient point) so that the video clip captures images before the audio captured in the audio clip occurs. For example, when a vase is struck with a fork to produce a sound, video images leading up to the actual strike with the fork that produces the sound may be captured.
- a starting point for the video clip may be configured to be prior to the corresponding transient point.
- a starting point T'2 for video clip V2 is offset to be prior to transient point T2 corresponding to audio clip A2.
- a starting point T'3 for video clip V3 is offset to be prior to transient point T3 corresponding to audio clip A3.
- a starting point may be configured to coincide with the corresponding transient point T1 for audio clip A1.
- a fixed offset (e.g., N video frames) may be used between a starting point for a video clip and a transient point for the corresponding audio clip.
- such an offset may be determined dynamically based on a type of audio and/or video. For example, when the audio clip comprises a vocal phrase, there may be no offset for the corresponding video clip. Also, for example, when the audio clip comprises a musical note played by a piano, there may be no offset for the corresponding video clip. However, when the audio clip comprises a sound of a train whistle, the start of the corresponding video clip may be offset to capture images of the train approaching a platform at a train station.
- a machine learning model can also be trained to identify whether a start of a video clip corresponding to an audio clip has to be offset with respect to the transient point for the audio clip.
- a machine learning model can be trained to determine a length of the offset (e.g., a number of video frames).
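- A minimal sketch of the fixed-offset variant is shown below (the frame rate, offset length, and function name are illustrative assumptions; as noted above, the offset could instead be determined dynamically or by a trained model):

```python
# Sketch: pair an audio clip with a video clip whose start is offset a
# fixed number of frames before the transient point, so the video captures
# the action leading up to the sound (e.g., a fork approaching a vase).
def video_clip_bounds(transient_s: float, clip_end_s: float,
                      fps: float = 30.0,
                      offset_frames: int = 5) -> tuple[float, float]:
    """Return (start, end) of the video clip in seconds."""
    start_s = max(0.0, transient_s - offset_frames / fps)  # clamp at video start
    return start_s, clip_end_s
```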
- Figure 3 is an example lookup table 300 illustrating audio clips and corresponding video clips, in accordance with example embodiments.
- controller 170 may store the extracted audio clips and corresponding video clips in a lookup table 300 in memory 174.
- first row 305 of lookup table 300 may store the one or more identified audio clips (e.g., audio clips A1, A2, A3), and second row 310 of lookup table 300 may store the one or more corresponding extracted video clips (e.g., video clips V1, V2, V3).
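- One plausible in-memory representation of lookup table 300 is a mapping from clip identifiers to segment boundaries (names and timestamps below are illustrative assumptions, not values from the patent):

```python
# Sketch of the lookup table of Figure 3: audio clip identifiers mapped to
# their extracted audio and video segment boundaries.
from dataclasses import dataclass

@dataclass
class Clip:
    audio_start_s: float   # transient point T_i for the audio clip
    audio_end_s: float
    video_start_s: float   # possibly offset before T_i (e.g., T'_i)
    video_end_s: float

lookup_table = {
    "A1": Clip(1.20, 2.00, 1.20, 2.00),   # no offset (coincides with T1)
    "A2": Clip(3.50, 4.10, 3.33, 4.10),   # video starts before T2
    "A3": Clip(6.00, 7.40, 5.83, 7.40),   # video starts before T3
}
```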
- a user may then create new audiovisual content by sequencing the one or more identified audio clips. For example, the user may select a particular audio clip and repeat it a certain number of times to generate new audio content. The corresponding video clip can be likewise sequenced to generate new audiovisual content. Generally, the user can use any combination of the one or more identified audio clips. Additionally, controller 170 may make available, via a control interface (e.g., control interface(s) 162), variations of the one or more identified audio clips (e.g., by varying respective audio characteristics such as bass, treble, pitch, rhythm, and so forth). Accordingly, a user may have access to a large repertoire of audio sounds from which to generate new musical creations.
- FIG. 4 illustrates example sequences 400 of audio clips and corresponding audiovisual content, in accordance with example embodiments.
- a first sequence 405 may comprise a repetition of the audio clip A1, such as, for example, A1 A1 A1 A1 A1 A1, and the corresponding video clip V1 may be similarly sequenced to generate first audiovisual content V1 V1 V1 V1 V1 V1.
- a second sequence 410 may comprise a sequence of audio clips such as, for example, A1 A2 A2 A1 A2 A2 A1 A3 A1 A1 A1, and the corresponding video clips may be similarly sequenced to generate second audiovisual content V1 V2 V2 V1 V2 V2 V1 V3 V1 V1 V1.
- a third sequence 415 may comprise a repeated sequence of audio clips such as, for example, A1 A2 A1 A2 A3, to generate a new sequence, A1 A2 A1 A2 A3 A1 A2 A1 A2 A3 A1 A2 A1 A2 A3 A1 A2 A1 A2 A3, and the corresponding video clips may be similarly sequenced to generate third audiovisual content V1 V2 V1 V2 V3 V1 V2 V1 V2 V3 V1 V2 V1 V2 V3 V1 V2 V1 V2 V3.
- a particular audio and/or video clip in a sequence may be edited to be a part of the sequence.
- audio clip Ak and/or video clip Vj may correspond to edited versions of the audio clips and/or video clips, and the same notation is used herein for simplicity.
- first audiovisual content V1 V1 V1 V1 V1 V1 may include one or more edited versions of the video clip V1 (e.g., with a different image texture, tint, contrast, brightness, color, sharpness, resolution, and so forth).
- the audio clips in the sequence A1 A1 A1 A1 A1 A1 may be repitched versions of audio clip A1.
- a plurality of sequences may be generated from a collection of audio clips and corresponding video clips, based on a length of the sequence, types of repetitions, different versions of one or more of the audio clips and/or video clips, and so forth.
- generated sequences may be further modified by changing a rhythm, audio intensity, and so forth.
- two or more generated sequences may be merged, mixed, and/or sequenced to generate additional sequences.
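- A sketch of rendering such a user-generated sequence from the lookup table sketched earlier is shown below; `cut` is a hypothetical helper, assumed here for illustration, that slices a media buffer between two timestamps:

```python
# Sketch: assemble new audiovisual content from a user-generated clip
# sequence (Figure 4) by concatenating the corresponding audio and video
# segments from the lookup table.
def render_sequence(sequence, lookup_table, audio_track, video_track, cut):
    """Return parallel lists of audio and video segments for the sequence."""
    audio_out, video_out = [], []
    for clip_id in sequence:               # e.g., ["A1", "A2", "A2", "A1"]
        clip = lookup_table[clip_id]
        audio_out.append(cut(audio_track, clip.audio_start_s, clip.audio_end_s))
        video_out.append(cut(video_track, clip.video_start_s, clip.video_end_s))
    return audio_out, video_out            # concatenated and muxed downstream
```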
- FIG. 5 illustrates an example control interface 500, in accordance with example embodiments.
- the control interface can include a plurality of selectable tabs corresponding to a plurality of audio channels, where user selection of a tab of the plurality of selectable tabs enables user access to one or more channel interfaces to interact with one or more of an audio clip or a video clip in the audio channel corresponding to the user selected tab.
- Control interface 500 includes selectable tabs for one or more such audio channels.
- a first audio channel CH1 505 may be associated with first initial content (e.g., initial content 205).
- Additional selectable tabs, second audio channel CH2, third audio channel CH3, fourth audio channel CH4, and so forth may be associated with additional initial content.
- a selectable tab for first audio channel CH1 505 is displayed as having been selected.
- control interface 500 displays various tabs, icons and/or features to enable a user-generated sequence of audio clips, as for example, described with reference to Figure 3.
- a record tab “REC” 510 allows a user to record a new audiovisual content.
- a play tab “PLAY” 515 allows a user to play the new audiovisual content.
- a display screen 520 of a top channel interface, video channel interface 525, of control interface 500 can display the new audiovisual content.
- the one or more channel interfaces comprises an interface with one or more icons corresponding to the one or more identified audio clips.
- the user-generated sequence of audio clips is based on user indication of selecting at least one icon of the one or more icons to generate the sequence.
- a bottom channel interface, keyboard channel interface 530, of control interface 500 can be provided to enable user generation of a sequence of audio clips.
- Keyboard channel interface 530 may include an array 535 of selectable keys. For example, a first key 540 may correspond to a first audio clip, a second key may correspond to a second audio clip, and so forth. As a user taps the keys in succession, controller 170 generates a sequence of audio clips corresponding to the sequence of tapped keys.
- the one or more identified audio clips can include a plurality of percussive sounds comprising an initial rhythm
- controller 170 can generate a plurality of modified versions of the plurality of percussive sounds.
- the plurality of modified versions can be associated with a modified rhythm different from the initial rhythm.
- controller 170 can provide the plurality of modified versions of the plurality of percussive sounds via keys of keyboard channel interface 530.
- the user-generated sequence of audio clips can be based on the plurality of modified versions of the plurality of percussive sounds. For example, as a user taps the keys of keyboard channel interface 530 in succession, controller 170 generates a sequence of audio clips corresponding to the sequence of tapped keys.
- An erase tab “ERASE” 545 allows the user to erase one or more taps, thereby erasing the corresponding audio clips from the sequence.
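- As an illustrative sketch (class and method names are assumptions, not the patent's implementation), the tap-and-erase interaction can be modeled as simple edits to a list of clip identifiers:

```python
# Sketch of keyboard channel interface 530 state: tapping a key appends its
# audio clip to the user-generated sequence, and the ERASE tab removes the
# most recent taps.
class SequenceEditor:
    def __init__(self):
        self.sequence: list[str] = []

    def tap(self, key_clip_id: str) -> None:
        self.sequence.append(key_clip_id)  # e.g., "A1" for first key 540

    def erase(self, count: int = 1) -> None:
        del self.sequence[-count:]         # drop the last `count` taps
```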
- Forward arrow tab 550 enables the user to toggle (e.g., by swiping left) between video channel interface 525 and other available channel interfaces of the control interface (e.g., pattern channel interface 635 of Figure 6, mixer channel interface 720 of Figure 7, master channel interface 820 of Figure 8, and so forth).
- Figure 6 illustrates another example control interface 600, in accordance with example embodiments. Control interface 600 includes selectable tabs for one or more audio channels.
- second audio channel CH2 605 may be associated with second initial content (e.g., different from initial content 205).
- Additional selectable tabs, first audio channel CH1, third audio channel CH3, fourth audio channel CH4, and so forth, may be associated with additional initial content.
- a selectable tab for second audio channel CH2 605 is displayed as having been selected.
- control interface 600 displays various tabs, icons and/or features to enable a user-generated sequence of audio clips, as for example, described with reference to Figure 3.
- a record tab “REC” 610 allows a user to record a new audiovisual content.
- a play tab “PLAY” 615 allows a user to play the new audiovisual content.
- the one or more channel interfaces can include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of usergenerated sequences corresponding to the plurality of audio channels, and further comprising a selectable option enabling a user to chain the one or more sequences to generate a new sequence.
- a top channel interface, pattern channel interface 635, of control interface 600 enables a user to generate patterns based on one or more generated sequences.
- An array 620 of selectable numbered icons can enable a user to select a sequence of sequences (e.g., third sequence 415 of Figure 4). For example, selectable icons labeled “1” through “16” of array 620 are displayed for illustrative purposes.
- Each such icon of array 620 may be associated with a generated sequence.
- a user may select one or more sequences, and a schematic representation of the sequence may be displayed, with a corresponding spacing for beats.
- For example, a first schematic representation for a first sequence 625 (e.g., corresponding to first audio channel CH1 505 of Figure 5) and a second schematic representation for a second sequence 630 (e.g., corresponding to second audio channel CH2 605) may be displayed.
- a selectable tab “CHAIN” 640 can enable a user to chain first sequence 625, second sequence 630, and so forth, to form new audio sequences.
- the one or more channel interfaces can include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of user-generated sequences corresponding to the plurality of audio channels, and further including a selectable option enabling a user to mix the one or more sequences to generate a new audio track.
- pattern channel interface 635 can be configured to enable user-generated patterns by enabling mixing of the one or more identified audio clips corresponding to first audio channel CH1 505, second audio channel CH2 605, and so forth.
- a bottom channel interface, sound channel interface 645, of control interface 600 can be provided to enable a user to modify audio characteristics of the audio sequences.
- Sound channel interface 645 may include an erase tab “ERASE” 650 to enable a user to erase one or more edits performed to modify the audio characteristics of the audio sequences.
- the one or more channel interfaces can include an interface displaying a pair of coordinate axes, where a horizontal axis corresponds to a plurality of pitch adjustments for the user-generated sequence, and a vertical axis corresponds to a plurality of simultaneously adjustable audio filter adjustments for the user-generated sequence.
- a display 655 may display the pair of coordinate axes with a user-adjustable icon 660.
- Moving user- adjustable icon 660 along the horizontal axis can cause pitch adjustments 665 to be applied to the user-generated sequence (e.g., the pitch increases as user-adjustable icon 660 moves from left to right along the horizontal axis).
- Moving user-adjustable icon 660 along the vertical axis can cause filter adjustments 670 to be applied to the user-generated sequence (e.g., the filter opens up as user-adjustable icon 660 moves from bottom to top along the vertical axis).
- the term “filter” as used herein generally refers to a circuit or process that amplifies or attenuates an audio signal based on frequency.
- the filter can be a low-pass filter, a high-pass filter, an all-pass filter, a bandpass filter, and so forth.
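As an illustration of the coordinate-axis mapping described above, the sketch below converts a normalized icon position into a pitch adjustment and a low-pass filter cutoff. The parameter ranges and the exponential cutoff curve are assumptions for illustration; the patent does not specify them.

```python
def xy_to_adjustments(x: float, y: float) -> tuple[float, float]:
    """Map normalized icon coordinates (0.0-1.0) to audio parameters.

    x -> pitch shift in semitones (-12 at far left, +12 at far right)
    y -> low-pass cutoff in Hz ("opens up" from 200 Hz to 20 kHz, bottom to top)
    """
    semitones = (x - 0.5) * 24.0                 # pitch rises left to right
    cutoff_hz = 200.0 * (20000.0 / 200.0) ** y   # exponential sweep feels natural
    return semitones, cutoff_hz

print(xy_to_adjustments(0.75, 0.5))  # (+6.0 semitones, ~2 kHz cutoff)
```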
- Backward arrow tab 675 enables the user to toggle (e.g., by swiping right) between sound channel interface 645 and other available channel interfaces of the control interface (e.g., keyboard channel interface 530 of Figure 5).
- forward arrow tab 680 enables the user to toggle (e.g., by swiping left) between sound channel interface 645 and other available channel interfaces of the control interface (e.g., keyboard channel interface 765 of Figure 7, function channel interface 840 of Figure 8, and so forth).
- Backward arrow tab 685 enables the user to toggle (e.g., by swiping right) between pattern channel interface 635 and other available channel interfaces of the control interface (e.g., video channel interface 525 of Figure 5).
- forward arrow tab 680 enables the user to toggle (e.g., by swiping left) between pattern channel interface 635 and other available channel interfaces of the control interface (e.g., mixer channel interface 720 of Figure 7, master channel interface 820 of Figure 8, and so forth).
- FIG. 7 illustrates another example control interface 700, in accordance with example embodiments.
- Control interface 700 includes selectable tabs for one or more audio channels.
- a third audio channel CH3 705 may be associated with another initial content (e.g., different from initial content 205). Additional selectable tabs, first audio channel CH1, second audio channel CH2, fourth audio channel CH4, and so forth, may be associated with additional initial content.
- a selectable tab for third audio channel CH3 705 is displayed as having been selected.
- control interface 700 displays various tabs, icons and/or features to enable a user-generated sequence of audio clips, as for example, described with reference to Figure 3.
- a record tab “REC” 710 allows a user to record a new audiovisual content.
- a play tab “PLAY” 715 allows a user to play the new audiovisual content.
- the one or more channel interfaces can include an interface displaying a plurality of respective volume controls for the plurality of audio channels.
- the plurality of respective volume controls can enable a user to simultaneously control volume settings of each of the plurality of audio channels.
- a top channel interface, mixer channel interface 720, of control interface 700 can be provided to enable a user to simultaneously adjust volume level 725 for audio channels CH1, CH2, CH3, and CH4.
- first volume control 730 can be configured to adjust volume settings for a first generated sequence corresponding to first audio channel CH1
- second volume control 735 can be configured to adjust volume settings for a second generated sequence corresponding to second audio channel CH2
- third volume control 740 can be configured to adjust volume settings for a third generated sequence corresponding to third audio channel CH3
- fourth volume control 745 can be configured to adjust volume settings for a fourth generated sequence corresponding to fourth audio channel CH4.
- Each volume control can be associated with respective muting icons 750 that enable the user to mute 755 the corresponding audio channel. For example, user selection of a muting icon displayed below first volume control 730 can mute the audio for the first generated sequence corresponding to the first audio channel CH1, and so forth.
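A hedged sketch of this mixer stage follows, assuming each channel's generated sequence has already been rendered to an equal-length mono float buffer. The gain values, mute flags, and peak normalization are illustrative choices, not specified by the patent.

```python
import numpy as np

def mix_channels(buffers, gains, mutes):
    """Sum channels after applying per-channel volume and mute settings."""
    out = np.zeros_like(buffers[0])
    for buf, gain, muted in zip(buffers, gains, mutes):
        if not muted:
            out += gain * buf
    # Normalize only if the summed signal would clip outside [-1, 1]
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out

ch1 = np.random.uniform(-1, 1, 44100)  # stand-ins for rendered sequences
ch2 = np.random.uniform(-1, 1, 44100)
mixed = mix_channels([ch1, ch2], gains=[0.8, 0.5], mutes=[False, True])
```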
- a bottom channel interface, keyboard channel interface 765, of control interface 700 can be provided to enable a user to repitch a musical note in an audio clip to another note.
- Keyboard channel interface 765 may include an erase tab “ERASE” 770 to enable a user to erase one or more edits performed to repitch the musical note.
- the one or more channel interfaces can include an interface with one or more icons corresponding to the plurality of repitched versions of a musical note.
- keyboard channel interface 765 enables a user to generate one or more sequences based on repitched versions of the musical note.
- An array 760 of selectable labeled icons can enable a user to select the sequence of repitched versions of the musical note.
- Each such icon of array 760 may be associated with a different note. For example, selectable icons labeled “do,” “re,” “mi,” “fa,” “sol,” “la,” and “ti,” of array 760 are displayed for illustrative purposes. A user may select one or more selectable icons of array 760 in succession, and controller 170 can generate a sequence (e.g., first sequence 405 of Figure 4) based on corresponding repitched versions of the musical note.
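One plausible way to derive the repitched "do"-through-"ti" versions from a single captured note is naive resampling, sketched below. Resampling changes duration along with pitch, so a production system might prefer a time-stretching pitch shifter instead; all names here are illustrative.

```python
import numpy as np

SEMITONE = 2 ** (1 / 12)

def repitch(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Shift pitch by resampling (faster playback = higher pitch)."""
    ratio = SEMITONE ** semitones
    src_idx = np.arange(0, len(samples), ratio)
    return np.interp(src_idx, np.arange(len(samples)), samples)

# Major-scale offsets for do, re, mi, fa, sol, la, ti (semitones from the root)
SCALE = {"do": 0, "re": 2, "mi": 4, "fa": 5, "sol": 7, "la": 9, "ti": 11}

note = np.sin(2 * np.pi * 440 * np.arange(22050) / 44100)  # a stand-in note
repitched = {name: repitch(note, st) for name, st in SCALE.items()}
```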
- Backward arrow tab 775 enables the user to toggle (e.g., by swiping right) between keyboard channel interface 765 and other available channel interfaces of the control interface (e.g., keyboard channel interface 530 of Figure 5, sound channel interface 645 of Figure 6, and so forth).
- forward arrow tab 780 enables the user to toggle (e.g., by swiping left) between keyboard channel interface 765 and other available channel interfaces of the control interface (e.g., function channel interface 840 of Figure 8, and so forth).
- Backward arrow tab 785 enables the user to toggle (e.g., by swiping right) between mixer channel interface 720 and other available channel interfaces of the control interface (e.g., video channel interface 525 of Figure 5, pattern channel interface 635 of Figure 6, and so forth).
- forward arrow tab 790 enables the user to toggle (e.g., by swiping left) between mixer channel interface 720 and other available channel interfaces of the control interface (e.g., master channel interface 820 of Figure 8, and so forth).
- FIG. 8 illustrates another example control interface 800, in accordance with example embodiments.
- Control interface 800 includes selectable tabs for one or more audio channels.
- a fourth audio channel CH4 805 may be associated with another initial content.
- Additional selectable tabs, first audio channel CH1, second audio channel CH2, third audio channel CH3, and so forth, may be associated with additional initial content.
- a selectable tab for fourth audio channel CH4 805 is displayed as having been selected.
- control interface 800 displays various tabs, icons, and/or features to enable a user-generated sequence of audio clips, as for example, described with reference to Figure 3.
- a record tab “REC” 810 allows a user to record a new audiovisual content.
- a play tab “PLAY” 815 allows a user to play the new audiovisual content.
- the one or more channel interfaces can include an interface displaying a first tool to adjust a tempo, a second tool to adjust a swing, and a third tool to adjust a root musical note, for an audio clip in the sequence of audio clips.
- a top channel interface, master channel interface 820, of control interface 800 can be provided.
- Master channel interface 820 can include first tool 825 to adjust a tempo, second tool 830 to adjust a swing, and third tool 835 to adjust a root musical note.
- tempo can be measured as beats per minute (BPM).
- first tool 825 displays “120” indicating that a sequence of percussive sounds is being played at 120 BPM.
- the term “swing” refers to techniques of adjusting rhythm that alternately lengthen and shorten the first and second consecutive notes in two-part pulse divisions of a beat.
- the first note may be twice as long as the second note.
- a swing rhythm may involve swung eighth notes, where the notes are performed as uneven eighth notes in a quasi-triplet rhythm. Additional, and/or alternative swing rhythms may be applied to a generated sequence.
- the root musical note may be a setting for a musical scale, such as, for example, minor, major, and/or flat versions of scales A, B, C, D, E, F, and G.
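The sketch below illustrates how tempo and swing might jointly determine step timing for a generated sequence, assuming a convention where swing = 0 gives straight eighth notes and swing = 1 approaches the 2:1 long-short feel described above; this 0-to-1 convention is an assumption, not taken from the patent.

```python
def step_times(bpm: float, steps: int = 16, swing: float = 0.0):
    """Return onset times (seconds) for a step sequence.

    swing=0.0 gives straight eighths; swing=1.0 approaches a quasi-triplet
    feel (first note twice as long as the second, per the 2:1 description).
    """
    eighth = 60.0 / bpm / 2.0        # duration of one straight eighth note
    shift = swing * eighth / 3.0     # at swing=1.0, each pair splits 2/3 : 1/3
    times, t = [], 0.0
    for i in range(steps):
        times.append(t)
        # even-indexed steps are lengthened, odd-indexed steps shortened
        t += eighth + shift if i % 2 == 0 else eighth - shift
    return times

print(step_times(bpm=120, steps=4, swing=1.0))
# [0.0, 0.333..., 0.5, 0.833...]
```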
- the one or more channel interfaces can include an interface displaying a plurality of video edit icons.
- User selection of a video edit icon of the plurality of video edit icons enables application of a video edit feature to a video clip of the sequence of video clips.
- a bottom channel interface, function channel interface 840, of control interface 800 can be provided to enable a user to apply a video edit feature to a video clip of the sequence of video clips.
- Function channel interface 840 can include selectable icons corresponding to each audio channel CH1, CH2, CH3, and CH4.
- first audio channel CH1 may be associated with selectable icons displayed directly above it.
- a first selectable icon 845 may be configured to adjust a tape, a second selectable icon 850 to adjust a filter, a third selectable icon 855 to adjust a glitch, and a fourth selectable icon 860 to adjust a space.
- Similar adjustable icons may be provided for each audio channel, and may, in one example arrangement, be displayed vertically above the selectable icon for the respective audio channel.
- Function channel interface 840 may include an erase tab “ERASE” 865 to enable a user to erase one or more video edit features applied to a video clip.
- Backward arrow tab 870 enables the user to toggle (e.g., by swiping right) between function channel interface 840 and other available channel interfaces of the control interface (e.g., keyboard channel interface 530 of Figure 5, sound channel interface 645 of Figure 6, keyboard channel interface 765 of Figure 7, and so forth).
- backward arrow tab 875 enables the user to toggle (e.g., by swiping right) between master channel interface 820 and other available channel interfaces of the control interface (e.g., video channel interface 525 of Figure 5, pattern channel interface 635 of Figure 6, mixer channel interface 720 of Figure 7, and so forth).
- the one or more channel interfaces described herein can be available for each of the audio channels (e.g., first audio channel CH1, second audio channel CH2, third audio channel CH3, fourth audio channel CH4, and so forth). Additional, and/or alternative channel interfaces can be configured to provide further editing capabilities to a user.
- the new audiovisual content may be provided via the computing device (e.g., computing device 100).
- the providing of the new audiovisual content can include providing user selectable virtual tabs to enable automatic upload of the new audiovisual content to a social networking site, and/or sharing of the new audiovisual content with other users.
- selectable icons representing one or more media upload sites may be provided, and control interface (e.g., one or more control interface(s) 162) may enable a user to directly upload the new audiovisual content to the one or more media upload sites by selecting a respective selectable icon.
- Figure 9 illustrates example network environment 900 for creation of audiovisual content, in accordance with example embodiments.
- the network environment 900 includes computing devices 902, 904, and 906, server 910, and storage 912.
- the network environment 900 can have more or fewer computing devices (e.g., 902-906) and/or servers (e.g., 910) than those shown in Figure 9.
- Each of the computing devices 902, 904, and 906 can represent various forms of processing devices that have a processor, a memory, and communications capability.
- the computing devices 902, 904, and 906 may communicate with each other, with the server 910, and/or with other systems and devices not shown in Figure 9.
- processing devices can include a desktop computer, a laptop computer, a handheld computer, a personal digital assistant (PDA), a cellular telephone, a network appliance, a camera, a smart phone, an enhanced general packet radio service (EGPRS) mobile phone, a media player, a navigation device, an email device, a game console, a wired/wireless headphone/headset, a wearable device, wireless or wired speaker(s), or a combination of any of these processing devices or other processing devices.
- Each of the computing devices 902, 904, and 906 may be configured with built-in control interfaces and/or audio processing architecture for achieving desirable audio signal processing effects.
- an application comprising one or more control interfaces may be installed on the computing devices 902, 904, and 906 as a client application.
- the computing devices 902, 904, and 906 may be associated with a single user. Captured media content and/or new audiovisual content may be transmitted to and received from server 910 via network 908.
- each of the computing devices 902, 904, and 906 may include one or more microphones, one or more speakers, one or more sensors (e.g., an accelerometer, a gyroscope), a transducer, and so forth.
- the network 908 can be a computer network such as, for example, a local area network (LAN), wide area network (WAN), the Internet, a cellular network, or a combination thereof connecting any number of mobile clients, fixed clients, and servers. Further, the network 908 can include, but is not limited to, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, tree or hierarchical network, and the like.
- communication between each client (e.g., computing devices 902, 904, and 906) and server (e.g., server 910) can occur via a virtual private network (VPN), Secure Shell (SSH) tunnel, Secure Socket Layer (SSL) communication, or other secure network connection.
- network 908 may further include a corporate network (e.g., intranet) and one or more wireless access points.
- Server 910 may represent a single computing device such as a computer server that includes a processor and a memory.
- the processor may execute computer instructions stored in memory.
- the server 910 is configured to communicate with client applications (e.g., applications) on client devices (e.g., the computing devices 902, 904, and 906) via the network 908.
- the server 910 may transmit the new audiovisual content from the computing device 902 to the computing device 906 when the user switches the device from the computing device 902 to the computing device 906.
- the computing device 902, the computing device 904, the computing device 906, or the server 910 may be, or may include all or part of, computing device 100 components that are discussed with respect to Figure 1.
- Figure 10 shows diagram 1000 illustrating a training phase 1002 and an inference phase 1004 of trained machine learning model(s) 1032, in accordance with example embodiments.
- Some machine learning techniques involve training one or more machine learning algorithms on an input set of training data to recognize patterns in the training data and provide output inferences and/or predictions about (patterns in the) training data.
- the resulting trained machine learning algorithm can be termed as a trained machine learning model.
- Figure 10 shows training phase 1002 where one or more machine learning algorithms 1020 are being trained on training data 1010 to become trained machine learning model 1032.
- trained machine learning model 1032 can receive input data 1030 and one or more inference/prediction requests 1040 (perhaps as part of input data 1030) and responsively provide as an output one or more inferences and/or predictions 1050.
- trained machine learning model(s) 1032 can include one or more models of one or more machine learning algorithms 1020.
- Machine learning algorithm(s) 1020 may include, but are not limited to: an artificial neural network (e.g., a convolutional neural network or a recurrent neural network), a Bayesian network, a hidden Markov model, a Markov decision process, a logistic regression function, a support vector machine, a suitable statistical machine learning algorithm, and/or a heuristic machine learning system.
- Machine learning algorithm(s) 1020 may be supervised or unsupervised, and may implement any suitable combination of online and offline learning.
- machine learning algorithm(s) 1020 and/or trained machine learning model(s) 1032 can be accelerated using on-device coprocessors, such as graphic processing units (GPUs), tensor processing units (TPUs), digital signal processors (DSPs), and/or application specific integrated circuits (ASICs).
- on-device coprocessors can be used to speed up machine learning algorithm(s) 1020 and/or trained machine learning model(s) 1032.
- trained machine learning model(s) 1032 can be trained, resident, and executed to provide inferences on a particular computing device, and/or otherwise can make inferences for the particular computing device.
- machine learning algorithm(s) 1020 can be trained by providing at least training data 1010 as training input using unsupervised, supervised, semi-supervised, and/or reinforcement learning techniques.
- Training data 1010 can include a collection of videos comprising audio tracks. The videos may be labeled to identify transient points, audio clips of interest, corresponding video clips (with a starting point possibly offset from the transient point), and so forth.
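For illustration, one labeled training record might look like the following; the field names and values are hypothetical, chosen only to mirror the transient points, audio clips of interest, and offset video clips described above.

```python
# A hedged sketch of a single labeled training record; not a schema from
# the patent.
training_record = {
    "video_uri": "videos/clip_0001.mp4",
    "sample_rate": 44100,
    "transient_points": [0.48, 1.02, 1.61],      # seconds, e.g., T1, T2, T3
    "audio_clips": [                              # e.g., A1, A2
        {"start": 0.48, "end": 1.02, "label": "percussive"},
        {"start": 1.02, "end": 1.61, "label": "vocal_phrase"},
    ],
    "video_clips": [                              # start may be offset from T
        {"start": 0.40, "end": 1.02},
        {"start": 0.95, "end": 1.61},
    ],
}
```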
- Unsupervised learning involves providing a portion (or all) of training data 1010 to machine learning algorithm(s) 1020 and machine learning algorithm(s) 1020 determining one or more output inferences based on the provided portion (or all) of training data 1010.
- Supervised learning involves providing a portion of training data 1010 to machine learning algorithm(s) 1020, with machine learning algorithm(s) 1020 determining one or more output inferences based on the provided portion of training data 1010, and the output inference(s) are either accepted or corrected based on correct results associated with training data 1010.
- supervised learning of machine learning algorithm(s) 1020 can be governed by a set of rules and/or a set of labels for the training input, and the set of rules and/or set of labels may be used to correct inferences of machine learning algorithm(s) 1020.
- Semi-supervised learning involves having correct results for part, but not all, of training data 1010.
- Reinforcement learning involves machine learning algorithm(s) 1020 receiving a reward signal regarding a prior inference, where the reward signal can be a numerical value.
- machine learning algorithm(s) 1020 can output an inference and receive a reward signal in response, where machine learning algorithm(s) 1020 are configured to try to maximize the numerical value of the reward signal.
- reinforcement learning also utilizes a value function that provides a numerical value representing an expected total of the numerical values provided by the reward signal over time.
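As a loose illustration of the reward-maximization idea (and not the patent's training method), the sketch below uses a simple multi-armed-bandit update in which value estimates track the expected reward over time.

```python
import random

values = [0.0, 0.0, 0.0]   # estimated value of three candidate inferences
counts = [0, 0, 0]

def choose(epsilon: float = 0.1) -> int:
    """Pick the highest-valued inference, exploring occasionally."""
    if random.random() < epsilon:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda i: values[i])

def update(choice: int, reward: float) -> None:
    """Incremental average: value estimates converge toward expected reward."""
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]
```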
- machine learning algorithm(s) 1020 and/or trained machine learning model(s) 1032 can be trained using other machine learning techniques, including but not limited to, incremental learning and curriculum learning.
- machine learning algorithm(s) 1020 and/or trained machine learning model(s) 1032 can use transfer learning techniques.
- transfer learning techniques can involve trained machine learning model(s) 1032 being pre-trained on one set of data and additionally trained using training data 1010.
- machine learning algorithm(s) 1020 can be pre-trained on data from one or more computing devices and a resulting trained machine learning model provided to a particular computing device, where the particular computing device is intended to execute the trained machine learning model during inference phase 1004. Then, during training phase 1002, the pre-trained machine learning model can be additionally trained using training data 1010, where training data 1010 can be derived from kernel and non-kernel data of the particular computing device.
- This further training of the machine learning algorithm(s) 1020 and/or the pre-trained machine learning model using training data 1010 of the particular computing device’s data can be performed using either supervised or unsupervised learning.
- training phase 1002 can be completed.
- the resulting trained machine learning model can be utilized as at least one of trained machine learning model(s) 1032.
- trained machine learning model(s) 1032 can be provided to a computing device, if not already on the computing device.
- Inference phase 1004 can begin after trained machine learning model(s) 1032 are provided to the particular computing device.
- trained machine learning model(s) 1032 can receive input data 1030 and generate and output one or more corresponding inferences and/or predictions 1050 about input data 1030.
- input data 1030 can be used as an input to trained machine learning model(s) 1032 for providing corresponding inference(s) and/or prediction(s) 1050 to kernel components and non-kernel components.
- trained machine learning model(s) 1032 can generate inference(s) and/or prediction(s) 1050 in response to one or more inference/prediction requests 1040.
- trained machine learning model(s) 1032 can be executed by a portion of other software.
- trained machine learning model(s) 1032 can be executed by an inference or prediction daemon to be readily available to provide inferences and/or predictions upon request.
- Input data 1030 can include data from the particular computing device executing trained machine learning model(s) 1032 and/or input data from one or more computing devices other than the particular computing device.
- Input data 1030 can include an initial content (e.g., initial content 205) and/or audio track (e.g., audio track 210) corresponding to the initial content (e.g., initial content 205).
- Inference(s) and/or prediction(s) 1050 can include output transient points (e.g., transient points T1, T2, T3 of Figure 2), audio clips (e.g., audio clips A1, A2, A3 of Figure 2), corresponding video clips (e.g., video clips V1, V2, V3 of Figure 2), and/or other output data produced by trained machine learning model(s) 1032 operating on input data 1030 (and training data 1010).
- trained machine learning model(s) 1032 can use output inference(s) and/or prediction(s) 1050 as input feedback 1060.
- Trained machine learning model(s) 1032 can also rely on past inferences as inputs for generating new inferences.
- a single computing device (“CD_SOLO”) can include the trained version of the machine learning model, perhaps after training the machine learning model. Then, computing device CD_SOLO can receive requests to identify transient points, identify one or more audio clips, and/or extract corresponding video clips from input audiovisual content, and use the trained version of the machine learning model to identify the transient points, identify the one or more audio clips, and/or extract the corresponding video clips.
- two or more computing devices, such as a first client device (“CD_CLI”) and a server device (“CD_SRV”), can be used to provide the output; e.g., first computing device CD_CLI can generate and send requests to identify transient points, identify one or more audio clips, and/or extract corresponding video clips from input audiovisual content to a second computing device, CD_SRV. Then, CD_SRV can use the trained version of the machine learning model to identify the transient points, identify the one or more audio clips, and/or extract the corresponding video clips. Then, upon reception of responses to the requests, CD_CLI can provide the requested output via one or more control interfaces (e.g., control interface(s) 162).
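A hedged sketch of the CD_CLI-to-CD_SRV exchange appears below, assuming the server exposes an HTTP endpoint that runs the trained model; the URL, payload shape, and response fields are all hypothetical.

```python
import requests

def request_clip_extraction(video_path: str, server_url: str) -> dict:
    """Upload captured content; receive transient points and clip bounds."""
    with open(video_path, "rb") as f:
        response = requests.post(
            f"{server_url}/v1/identify-clips",   # hypothetical endpoint
            files={"content": f},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()  # e.g., {"transients": [...], "audio_clips": [...]}

# result = request_clip_extraction("capture.mp4", "https://cd-srv.example.com")
```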
- Figure 11 illustrates flow chart 1100 of operations related to generating audiovisual content based on video clips. The operations may be executed by and/or used with any of computing devices 100, 902-906, or other ones of the preceding example embodiments.
- Block 1110 involves capturing, by a content generation component of a computing device, initial content comprising video, and audio associated with the video.
- Block 1120 involves identifying one or more audio clips in the audio associated with the video based on one or more transient points in the audio.
- Block 1130 involves extracting, for each audio clip of the one or more identified audio clips, a corresponding video clip from the video of the initial content.
- Block 1140 involves providing, via a graphical user interface of the computing device, a control interface to enable a user-generated sequence of audio clips, wherein each audio clip in the sequence of audio clips is selected from the one or more identified audio clips.
- Block 1150 involves generating new audiovisual content comprising a sequence of video clips to correspond to the user-generated sequence of audio clips, wherein each video clip in the sequence of video clips is the extracted corresponding video clip for each audio clip in the user-generated sequence of audio clips.
- Block 1160 involves providing, by the control interface, the new audiovisual content.
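Blocks 1120-1150 can be read as a small pipeline. The sketch below stands in a conventional onset detector for whatever transient detector (or trained model) an implementation actually uses, treats each identified audio clip as spanning consecutive transient points, and represents each "video clip" only by its (start, end) time bounds; actual video rendering is out of scope here.

```python
import librosa

def identify_clips(audio_path: str):
    """Blocks 1120-1130: find transient points and derive clip bounds."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    transients = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    bounds = list(transients) + [len(y) / sr]
    # Each identified clip runs from one transient point to the next
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def render_sequence(clips, user_sequence):
    """Blocks 1140-1150: arrange extracted (start, end) clip bounds
    according to a user-generated sequence of clip indices."""
    return [clips[i] for i in user_sequence]

clips = identify_clips("initial_content.wav")   # hypothetical captured audio
new_content = render_sequence(clips, user_sequence=[0, 2, 0, 1])
```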
- the one or more identified audio clips include a plurality of percussive sounds comprising an initial rhythm.
- Such embodiments involve generating a plurality of modified versions of the plurality of percussive sounds, wherein the plurality of modified versions is associated with a modified rhythm different from the initial rhythm.
- Such embodiments also involve providing, via the control interface, the plurality of modified versions of the plurality of percussive sounds.
- the user-generated sequence of audio clips can be based on the plurality of modified versions of the plurality of percussive sounds.
- the control interface includes a plurality of selectable tabs corresponding to a plurality of audio channels.
- User selection of a tab of the plurality of selectable tabs can enable user access to one or more channel interfaces to interact with one or more of an audio clip or a video clip in the audio channel corresponding to the user selected tab.
- the plurality of audio channels include audio clips corresponding to one or more of a melodic note, a percussive sound, a musical composition, an instrumental sound, a silence, or a vocal phrase.
- each audio channel of the plurality of audio channels can be associated with a given audiovisual content different from the initial content.
- the one or more channel interfaces include an interface with one or more icons corresponding to the one or more identified audio clips.
- the user-generated sequence of audio clips can be based on user indication of selecting at least one icon of the one or more icons to generate the sequence.
- an audio clip of the one or more identified audio clips comprises a musical note. Such embodiments involve generating a plurality of repitched versions of the musical note.
- the one or more channel interfaces can include an interface with one or more icons corresponding to the plurality of repitched versions of the musical note.
- the user-generated sequence of audio clips can be based on user indication of selecting at least one icon of the one or more icons to generate the sequence.
- the one or more channel interfaces include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of user-generated sequences corresponding to the plurality of audio channels, and further including a selectable option enabling a user to chain the one or more sequences to generate a new sequence.
- the one or more channel interfaces include an interface displaying a plurality of user-generated sequences, each sequence of the plurality of user-generated sequences corresponding to the plurality of audio channels, and further including a selectable option enabling a user to mix the one or more sequences to generate a new audio track.
- the one or more channel interfaces include an interface displaying a pair of coordinate axes.
- a horizontal axis can correspond to a plurality of pitch adjustments for the user-generated sequence, and a vertical axis can correspond to a plurality of simultaneously adjustable audio filter adjustments for the user-generated sequence.
- the one or more channel interfaces include an interface displaying a plurality of respective volume controls for the plurality of audio channels.
- the plurality of respective volume controls can enable a user to simultaneously control volume settings of each of the plurality of audio channels.
- the one or more channel interfaces include an interface displaying a first tool to adjust a tempo, a second tool to adjust a swing, and a third tool to adjust a root musical note, for an audio clip in the sequence of audio clips.
- the one or more channel interfaces include an interface displaying a plurality of video edit icons. User selection of a video edit icon of the plurality of video edit icons can enable application of a video edit feature to a video clip of the sequence of video clips.
- Some embodiments involve identifying one or more second audio clips in a second initial content based on one or more second transient points in the second initial content. These embodiments also involve enabling, via the control interface, a second user-generated sequence of second audio clips, wherein each second audio clip in the sequence of second audio clips is selected from the one or more identified second audio clips.
- the generating of the new audiovisual content includes generating a second sequence of video clips to correspond to the user-generated sequence of audio clips and the user-generated sequence of second audio clips.
- the computing device can include an image capture device.
- the initial content can be captured by the image capture device.
- the transient points in the initial content include one or more of a transient location, a pause, or a cut.
- the identifying of the one or more audio clips in the initial content involves identifying, in a soundtrack of the initial content, one or more of a melodic note, a percussive sound, a musical composition, an instrumental sound, a change in audio intensity, a silence, or a vocal phrase.
- the identifying of the one or more audio clips in the initial content can be performed by a trained machine learning model. Some of these embodiments involve identifying, by the trained machine learning model, a classification for an audio clip of the one or more audio clips. Such embodiments further involve generating, based on the classification, a visual label associated with the audio clip. These embodiments also involve displaying, via the control interface, the visual label on a selectable icon corresponding to the audio clip.
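The classification-to-label step might look like the following sketch, where `classify_clip` stands in for the trained machine learning model and the label mapping is illustrative.

```python
# Hedged sketch: map a clip classification to the visual label shown on its
# selectable icon. Categories and label text are illustrative choices.
LABELS = {
    "percussive": "Drum",
    "melodic": "Note",
    "instrumental": "Inst",
    "vocal_phrase": "Vocal",
    "silence": "Rest",
}

def classify_clip(clip_samples) -> str:
    """Stand-in for model inference; a real system would run the trained
    classifier on the clip's samples here."""
    return "percussive"

def label_for_icon(clip_samples) -> str:
    category = classify_clip(clip_samples)
    return LABELS.get(category, "Clip")   # fallback text for unknown classes

print(label_for_icon([0.0, 0.5, -0.3]))  # -> "Drum"
```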
- Some embodiments involve providing the new audiovisual content via the computing device.
- the providing of the new audiovisual content can involve providing user selectable virtual tabs to enable automatic upload of the new audiovisual content to a social networking site.
- Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions.
- Examples of computer readable media include, but are not limited to, magnetic media, optical media, electronic media, etc.
- the computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
- the term “software” is meant to include, for example, firmware residing in read-only memory or other form of electronic storage, or applications that may be stored in magnetic storage, optical, solid state, etc., which can be read into memory for processing by a processor.
- multiple software aspects of the subject disclosure can be implemented as sub-parts of a larger program while remaining distinct software aspects of the subject disclosure.
- multiple software aspects can also be implemented as separate programs.
- any combination of separate programs that together implement a software aspect described here is within the scope of the subject disclosure.
- the software programs when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
- a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
- a computer program may, but need not, correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- Some implementations include electronic components, for example, microprocessors, storage, and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media).
- Such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks.
- the computer-readable media can store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations.
- Examples of computer programs or computer code include machine code, for example, as produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
- some implementations are performed by one or more integrated circuits, for example, application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs).
- the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people.
- the terms “display” or “displaying” mean displaying on an electronic device.
- the terms “computer readable medium” and “computer readable media” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
- implementations of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT or LCD monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
- a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user’s client device in response to requests received from the web browser.
- Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
- the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
- the computing system can include clients and servers.
- a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
- Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
- the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (e.g., each item).
- the phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items.
- phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
- phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology.
- a disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations.
- a disclosure relating to such phrase(s) may provide one or more examples.
- a phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Acoustics & Sound (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
- Television Signal Processing For Recording (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Circuit For Audible Band Transducer (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2021/043276 WO2023009104A1 (en) | 2021-07-27 | 2021-07-27 | Generating audiovisual content based on video clips |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4374369A1 true EP4374369A1 (en) | 2024-05-29 |
Family
ID=77519757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP21762182.0A Pending EP4374369A1 (en) | 2021-07-27 | 2021-07-27 | Generating audiovisual content based on video clips |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240282341A1 (zh) |
EP (1) | EP4374369A1 (zh) |
JP (1) | JP2024528894A (zh) |
CN (1) | CN117836854A (zh) |
DE (1) | DE112021008025T5 (zh) |
WO (1) | WO2023009104A1 (zh) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4243862B2 (ja) * | 2004-10-26 | 2009-03-25 | ソニー株式会社 | コンテンツ利用装置およびコンテンツ利用方法 |
US20180295427A1 (en) * | 2017-04-07 | 2018-10-11 | David Leiberman | Systems and methods for creating composite videos |
US20190051272A1 (en) * | 2017-08-08 | 2019-02-14 | CommonEdits, Inc. | Audio editing and publication platform |
2021
- 2021-07-27 CN CN202180101075.7A patent/CN117836854A/zh active Pending
- 2021-07-27 JP JP2024505096A patent/JP2024528894A/ja active Pending
- 2021-07-27 DE DE112021008025.3T patent/DE112021008025T5/de active Pending
- 2021-07-27 US US18/292,223 patent/US20240282341A1/en active Pending
- 2021-07-27 WO PCT/US2021/043276 patent/WO2023009104A1/en active Application Filing
- 2021-07-27 EP EP21762182.0A patent/EP4374369A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
DE112021008025T5 (de) | 2024-05-23 |
CN117836854A (zh) | 2024-04-05 |
WO2023009104A1 (en) | 2023-02-02 |
JP2024528894A (ja) | 2024-08-01 |
US20240282341A1 (en) | 2024-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11727904B2 (en) | Network musical instrument | |
US10790919B1 (en) | Personalized real-time audio generation based on user physiological response | |
US10068556B2 (en) | Procedurally generating background music for sponsored audio | |
US10062367B1 (en) | Vocal effects control system | |
US11043216B2 (en) | Voice feedback for user interface of media playback device | |
US20190026366A1 (en) | Method and device for playing video by each segment of music | |
US11511200B2 (en) | Game playing method and system based on a multimedia file | |
TW201238279A (en) | Semantic audio track mixer | |
US20230396573A1 (en) | Systems and methods for media content communication | |
US20120269344A1 (en) | Methods and apparatus for creating music melodies | |
US11423077B2 (en) | Interactive music feedback system | |
US9286943B2 (en) | Enhancing karaoke systems utilizing audience sentiment feedback and audio watermarking | |
CN105766001A (zh) | 用于使用任意触发的音频处理的系统和方法 | |
US20190051272A1 (en) | Audio editing and publication platform | |
US20240282341A1 (en) | Generating Audiovisual Content Based on Video Clips | |
JP2016201678A (ja) | 認識装置、映像コンテンツ提示システム | |
Hajdu et al. | On the evolution of music notation in network music environments | |
US20140281981A1 (en) | Enabling music listener feedback | |
Davis et al. | eTu {d, b} e: Case studies in playing with musical agents | |
US8912420B2 (en) | Enhancing music | |
WO2023030536A1 (zh) | 和声处理方法、装置、设备及介质 | |
US20240221707A1 (en) | Systems, methods, and computer program products for generating deliberate sequences of moods in musical compositions | |
Roessner | The beat goes static: A tempo analysis of US billboard hot 100# 1 songs from 1955–2015 | |
US20230360620A1 (en) | Converting audio samples to full song arrangements | |
KR102132905B1 (ko) | 단말 장치 및 그의 제어 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20240223 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |