WO2019242235A1 - Mixing method, device, and storage medium (混音方法、装置及存储介质)

Mixing method, device, and storage medium

Info

Publication number
WO2019242235A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
type
beat
chord
adjustment
Application number
PCT/CN2018/117767
Other languages
English (en)
French (fr)
Inventor
万景轩 (Wan Jingxuan)
肖纯智 (Xiao Chunzhi)
Original Assignee
广州酷狗计算机科技有限公司 (Guangzhou Kugou Computer Technology Co., Ltd.)
Application filed by 广州酷狗计算机科技有限公司 (Guangzhou Kugou Computer Technology Co., Ltd.)
Priority to US 16/617,920 (granted as US11315534B2)
Priority to EP 18919406.1 (granted as EP3618055B1)
Publication of WO2019242235A1

Classifications

    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0008 Associated control or indicating means
    • G10H1/0025 Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G10H1/0091 Means for obtaining special acoustic effects
    • G10H1/36 Accompaniment arrangements
    • G10H1/38 Chord
    • G10H1/40 Rhythm
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/076 Musical analysis for extraction of timing, tempo; Beat detection
    • G10H2210/081 Musical analysis for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G10H2210/101 Music composition or musical creation; Tools or processes therefor
    • G10H2210/125 Medley, i.e. linking parts of different musical pieces in one single piece, e.g. sound collage, DJ mix
    • G10H2210/131 Morphing, i.e. transformation of a musical piece into a new different one, e.g. remix
    • G10H2210/375 Tempo or beat alterations; Music timing control
    • G10H2210/571 Chords; Chord sequences
    • G10H2240/325 Synchronizing two or more audio tracks or files according to musical features or musical timings

Definitions

  • the present application relates to the field of multimedia technologies, and in particular, to a mixing method, device, and storage medium.
  • the mixing of songs refers to mixing other musical instrument materials into the original song, so that the songs after mixing can have the audio characteristics of other musical instrument materials.
  • in the related art, when the target song needs to be mixed, the target song is first sliced according to pitch to obtain multiple audio segments, each audio segment having a corresponding pitch.
  • here, pitch refers to the number of sound vibrations per second.
  • the instrument material to be mixed is also a piece of audio.
  • the instrument material is divided into different material fragments according to chords, each material fragment has a corresponding chord, and one chord usually corresponds to multiple pitches.
  • during mixing, for each material clip in the instrument material, an audio clip whose pitch corresponds to the chord of that material clip is looked up among the multiple audio clips, and the found audio clip is then combined with the material clip to obtain a mixed clip.
  • the obtained multiple mixed clips are combined to obtain a mixed song.
  • the musical instrument material refers to a piece of audio including multiple chords.
  • since the target song is mixed according to the chords in the musical instrument material, this is equivalent to reordering the sliced audio segments of the target song in the order of those chords.
  • as a result, there is a large difference between the mixed song and the target song, and the original melody of the target song is lost, which is not conducive to the promotion of the above mixing method.
  • the embodiments of the present application provide a mixing method, device, and storage medium, which can be used to solve the problem of a large difference between a song after mixing and a target song in the related art.
  • the technical solution is as follows:
  • a mixing method comprising:
  • the beat feature refers to the correspondence between the beat used in the target audio and the time point information
  • the performing beat adjustment on the mixing material according to the beat feature of the target audio includes:
  • the beat of each first-type material clip in the plurality of first-type material clips is adjusted to the beat of the corresponding first-type audio clip.
  • performing the mixing processing on the target audio according to the mixing material adjusted after the beat includes:
  • chord-adjusted mixed material is merged with the target audio.
  • the performing chord adjustment on the mixed material after the beat adjustment includes:
  • chord characteristic refers to a correspondence relationship between a chord used in the target audio and time point information
  • performing chord adjustment on the mixed material after the beat adjustment according to the chord characteristics of the target audio includes:
  • the chord of each second-type material clip in the plurality of second-type material clips is adjusted to the chord of the corresponding second-type audio clip.
  • the performing chord adjustment on the mixed material after the beat adjustment includes:
  • the obtaining the mixing material includes:
  • selecting a target musical instrument material from a mixing material library, where the mixing material library includes at least one musical instrument material, and each musical instrument material is audio with a specified beat and a specified duration;
  • a mixing device in a second aspect, includes:
  • a determining module configured to determine a beat feature of a target audio that needs to be mixed, where the beat feature refers to a correspondence between a beat used in the target audio and time point information;
  • An adjustment module configured to perform beat adjustment on the mixing material according to the beat feature of the target audio
  • a processing module configured to perform mixing processing on the target audio according to the mixed material after the beat adjustment.
  • the adjustment module is specifically configured to:
  • the beat of each first-type material clip in the plurality of first-type material clips is adjusted to the beat of the corresponding first-type audio clip.
  • the processing module includes:
  • An adjustment unit configured to perform chord adjustment on the mixed material after the beat adjustment
  • a merging unit is configured to merge the mixed material after the chord adjustment with the target audio.
  • the adjustment unit is specifically configured to:
  • chord characteristic refers to a correspondence relationship between a chord used in the target audio and time point information
  • the adjustment unit is further specifically configured to:
  • the chord of each second-type material clip in the plurality of second-type material clips is adjusted to the chord of the corresponding second-type audio clip.
  • the adjustment unit is specifically configured to:
  • the obtaining module is specifically configured to:
  • selecting a target musical instrument material from a mixing material library, where the mixing material library includes at least one musical instrument material, and each musical instrument material is audio with a specified beat and a specified duration;
  • another mixing device where the device includes:
  • Memory for storing processor-executable instructions
  • the processor is configured to execute the steps of any one of the methods described in the first aspect.
  • a computer-readable storage medium stores instructions, and when the instructions are executed by a processor, the steps of any one of the methods described in the first aspect are implemented.
  • a computer program product containing instructions which when run on a computer, causes the computer to execute the steps of any of the methods described in the first aspect.
  • In the embodiments of the present application, the beat feature of the target audio is determined, the mixing material is adjusted according to the beat feature of the target audio, and the target audio is mixed according to the beat-adjusted mixing material. Since the beat feature refers to the correspondence between the beat used in the target audio and time point information, the beat of the mixing material is adjusted according to that correspondence, rather than the sliced audio clips of the target song being reordered in the order of the chords in the instrument material. In this way, when the target audio is mixed according to the beat-adjusted mixing material, the original melody of the target audio can be retained, which is beneficial to the promotion of the mixing method proposed in this application.
  • FIG. 1 is a flowchart of a mixing method according to an embodiment of the present application
  • FIG. 2 is a block diagram of a mixing device according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 1 is a flowchart of a mixing method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step 101 Obtain mixing materials.
  • step 101 may specifically be: selecting a target musical instrument material from a mixing material library, where the mixing material library includes at least one musical instrument material and each musical instrument material is audio with a specified beat and a specified duration; and performing loop stitching on the target musical instrument material to obtain the mixing material, where the duration of the mixing material is the same as the duration of the target audio.
  • each instrument material in the mixing material library is pre-made.
  • Each instrument material being audio with a specified beat and a specified duration means that there is only one type of beat in each instrument material, i.e., each instrument material is a piece of audio with a repeating melody.
  • the remix material library includes drum material, piano material, bass material, and guitar material. The length of each instrument material is only 2 seconds, and each instrument material includes only one type of beat.
  • the target instrument material is loop stitched first, and the audio after the loop stitching is used as the mixing material.
  • the purpose of the loop stitching is to make the duration of the mixing material consistent with the duration of the target audio.
  • the target instrument material is a drum material with a duration of 2 seconds and the target audio is 3 minutes.
  • the drum material can be looped and stitched to obtain a mixed material with a duration of 3 minutes.
  • the mix material after loop stitching also includes only one type of beat.
  • the mixing material may also be the musical instrument material selected by the user directly without going through the above-mentioned loop stitching process.
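The loop stitching described above can be sketched in a few lines. This is a minimal illustration that treats audio as a plain list of samples; the function name `loop_stitch` and the sample values are hypothetical, not taken from the patent:

```python
def loop_stitch(material, target_len):
    """Repeat a short instrument material until it covers target_len samples,
    then trim the excess so the result matches the target audio's duration."""
    reps = -(-target_len // len(material))  # ceiling division
    return (material * reps)[:target_len]

# Hypothetical short drum loop, stitched out to a 10-sample "target" length.
drum = [0.5, -0.5, 0.3, -0.3]
mix_material = loop_stitch(drum, 10)
```

Because the material is simply repeated, the stitched mixing material still contains only one type of beat, matching the description above.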
  • some types of musical instrument materials only have beats, while other types have chords in addition to beats: for example, drum materials only have beats, whereas guitar materials have both beats and chords.
  • the musical instrument material may include only one type of chord, or may include multiple types of chords, which are not specifically limited in the embodiment of the present application.
  • Step 102 Determine the beat feature of the target audio that needs to be mixed.
  • the beat feature refers to the correspondence between the beat and the time point information used in the target audio.
  • the point-in-time information refers to point-in-time information on a playback time axis of the target audio.
  • the target audio is a song
  • the duration of the song is 3 minutes.
  • determining the beat feature of the target audio means determining, for example, that the song uses 2 beats between 0 and 3 seconds, 4 beats between 3 and 8 seconds, and so on.
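One way to picture the beat feature is as a list of (start, end, beat) spans over the playback time axis. The representation and the function name below are illustrative assumptions, not the patent's own data structures:

```python
# Beat feature of the song example: 2 beats from 0-3 s, 4 beats from 3-8 s, ...
beat_feature = [(0, 3, 2), (3, 8, 4)]

def beat_at(feature, t):
    """Return the beat in use at time t (seconds), or None if t is uncovered."""
    for start, end, beat in feature:
        if start <= t < end:
            return beat
    return None
```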
  • Step 103 Perform beat adjustment on the mixed material according to the beat characteristics of the target audio.
  • step 103 may specifically be: dividing the target audio into multiple first-type audio segments according to the beat feature of the target audio, where each first-type audio segment corresponds to one beat; determining, according to the time point information of each first-type audio segment, multiple first-type material clips in the mixing material, where each first-type material clip corresponds to one first-type audio segment and has the same time point information as the corresponding first-type audio segment; and adjusting the beat of each first-type material clip to the beat of the corresponding first-type audio segment.
  • the duration of the target audio is 30 seconds, and the beat of the mixed material is 3 beats.
  • three first-type audio segments are obtained, which are first-type audio segment 1, first-type audio segment 2, and first-type audio segment 3.
  • the time point information of the first type of audio clip 1 is 0 seconds to 9 seconds, and the corresponding beat is 2 beats.
  • the time point information of the first type of audio clip 2 is 9 seconds to 15 seconds, and the corresponding beat is 4 beats.
  • the time point information of the first type of audio clip 3 is 15 seconds to 30 seconds, and the corresponding beat is 2 beats.
  • based on this time point information, the first-type material clips of the mixing material covering 0 to 9 seconds, 9 to 15 seconds, and 15 to 30 seconds can be determined.
  • the beat of the first-type material clip covering 0 to 9 seconds is adjusted from 3 beats to 2 beats, the beat of the clip covering 9 to 15 seconds is adjusted from 3 beats to 4 beats, and the beat of the clip covering 15 to 30 seconds is adjusted from 3 beats to 2 beats.
  • After adjustment, the beat of any first-type material clip is consistent with the beat of the first-type audio segment having the same time point information. That is, by adjusting the beat of the mixing material, the mixing material is given the same beat feature as the target audio, so that when the target audio is subsequently mixed according to the beat-adjusted mixing material, the mixed audio avoids losing the original rhythm of the target audio.
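The segment-wise bookkeeping of the 30-second example can be sketched as follows. The names and the dict representation are hypothetical, and the actual time-stretching of audio is reduced to recording the new beat per clip:

```python
def split_and_adjust(material_beat, beat_feature):
    """Split the mixing material at the target audio's first-type segment
    boundaries and set each material clip's beat to that segment's beat."""
    return [{"start": s, "end": e, "beat": b, "changed": b != material_beat}
            for (s, e, b) in beat_feature]

# Mixing material uses 3 beats; the target's segments use 2, 4, and 2 beats.
clips = split_and_adjust(3, [(0, 9, 2), (9, 15, 4), (15, 30, 2)])
```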
  • Step 104 Perform mixing processing on the target audio according to the mixing material after the beat adjustment.
  • step 104 may specifically be: after the mixing material has been adjusted according to the beat feature, directly merging the adjusted mixing material with the target audio to realize the mixing of the target audio.
  • step 104 may specifically be: performing chord adjustment on the mixed material after the beat adjustment, and merging the mixed material after the chord adjustment with the target audio.
  • chord adjustment of the mixed material after the beat adjustment has the following two implementation methods:
  • the first implementation manner is to determine the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information, and to perform chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
  • determining the chord feature of the target audio means determining which chords the target audio uses in which time periods.
  • the target audio is a song
  • the duration of the song is 3 minutes.
  • determining the chord feature of the target audio means determining, for example, that the chord used by the song between 0 and 3 seconds is the E chord, the chord used between 3 and 8 seconds is the G chord, and so on.
  • in this case, the chord adjustment of the beat-adjusted mixing material can be implemented as follows: according to the chord feature of the target audio, the target audio is divided into a plurality of second-type audio segments, where each second-type audio segment corresponds to one chord; according to the time point information of each second-type audio segment, a plurality of second-type material clips in the beat-adjusted mixing material are determined, where each second-type material clip corresponds to one second-type audio segment and has the same time point information as the corresponding second-type audio segment; and the chord of each second-type material clip is adjusted to the chord of the corresponding second-type audio segment.
  • for example, the target audio is 30 seconds long, and the mixing material contains only one chord, chord A.
  • three second-type audio segments are obtained, which are the second-type audio segment 1, the second-type audio segment 2, and the second-type audio segment 3.
  • the time point information of the second type of audio clip 1 is 0 seconds to 9 seconds
  • the corresponding chord is Chord C
  • the time point information of the second type of audio clip 2 is 9 seconds to 15 seconds
  • the corresponding chord is Chord A
  • the time point information of the second type of audio clip 3 is 15 seconds to 30 seconds
  • the corresponding chord is the chord H.
  • based on this time point information, the second-type material clips of the beat-adjusted mixing material covering 0 to 9 seconds, 9 to 15 seconds, and 15 to 30 seconds can be determined.
  • the chord of the second-type material clip covering 0 to 9 seconds is adjusted from chord A to chord C; the chord of the clip covering 9 to 15 seconds is already chord A and needs no adjustment; and the chord of the clip covering 15 to 30 seconds is adjusted from chord A to chord H.
  • After adjustment, the chord of any second-type material clip is consistent with the chord of the second-type audio segment having the same time point information. That is, by performing chord adjustment on the beat-adjusted mixing material, the mixing material is given the same beat feature and chord feature as the target audio, which is equivalent to the adjusted mixing material having exactly the same rhythm as the target audio. In this case, when subsequent mixing processing is performed on the target audio according to the mixing material, the mixed audio can be prevented from losing the original rhythm of the target audio.
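The chord adjustment mirrors the beat adjustment. Below is a sketch using the example's values (material chord A; target segments C, A, H); the dict representation and function name are illustrative assumptions:

```python
def adjust_chords(material_chord, chord_feature):
    """Give each second-type material clip the chord of the second-type audio
    segment covering the same time span; matching clips are left untouched."""
    return [{"start": s, "end": e, "chord": c, "changed": c != material_chord}
            for (s, e, c) in chord_feature]

chord_clips = adjust_chords("A", [(0, 9, "C"), (9, 15, "A"), (15, 30, "H")])
```

Note that the middle clip already matches the material's chord A, so only the first and last clips actually change.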
  • the second implementation manner is to determine the tonality used by the target audio, and to adjust the chords of the beat-adjusted mixing material to chords consistent with the determined tonality.
  • The first implementation manner described above performs chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio; all chords included in the target audio must be analyzed before the chord-adjusted mixing material can have the same chord feature as the target audio, which easily makes chord adjustment inefficient. Since chords usually correspond to a tonality, and a song usually has a single tonality, in the embodiments of the present application the chords in the mixing material can be adjusted uniformly according to the tonality of the target audio, without adjusting the chords in the mixing material one by one, which improves the efficiency of chord adjustment. Here, tonality refers to the key in which the main melody of the target audio lies.
  • the tonality of the target audio is determined, and according to the tonality of the target audio, the chord of the mixed material after the beat adjustment is adjusted to a chord consistent with the determined tonality.
  • for example, the tonality of the target audio is C major, and the beat-adjusted mixing material contains only one type of chord, the A chord.
  • the specific process of adjusting the chords of the beat-adjusted mixing material to chords consistent with the determined tonality is: the A chord can be treated as A major, and the mixing material is transposed from A major to C major, which is equivalent to adjusting the A chord in the mixing material to the C chord.
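The A-major-to-C-major adjustment amounts to shifting chord roots by the semitone interval between the two keys. The sketch below assumes equal-tempered pitch classes with sharp spellings; the function name is hypothetical:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_root(root, from_key, to_key):
    """Shift a chord root by the interval between the material's key and
    the target audio's key, e.g. A major -> C major is +3 semitones."""
    shift = (NOTES.index(to_key) - NOTES.index(from_key)) % 12
    return NOTES[(NOTES.index(root) + shift) % 12]
```

With this, `transpose_root("A", "A", "C")` returns `"C"`, matching the example's A-chord-to-C-chord adjustment.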
  • the implementation above first performs beat adjustment on the mixing material and then performs chord adjustment on it.
  • alternatively, chord adjustment may be performed on the mixing material first, followed by beat adjustment, which is not specifically limited in the embodiments of the present application.
  • In other words, in order to preserve the original melody of the target audio after mixing, the mixing material may be adjusted by beat alone, or by both beat and chord; and when the chord is adjusted, it can be adjusted according to the chord feature of the target audio or according to the tonality of the target audio. That is, the embodiments of the present application provide three different adjustment modes.
  • In a specific implementation, an adjustment type can be set for each instrument material in the mixing material library. The adjustment type includes three types: the first is the "beat type", which instructs adjusting the mixing material according to the beat feature of the target audio; the second is the "beat + chord type", which instructs adjusting the mixing material according to the beat feature and chord feature of the target audio; and the third is the "beat + tonality type", which instructs adjusting the mixing material according to the beat feature and tonality of the target audio.
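The three adjustment types can be read as a small dispatch table. The string tags and the function below are illustrative stand-ins for the per-material setting described above, with the actual processing reduced to a list of step names:

```python
def plan_adjustments(adjustment_type):
    """Return the processing steps implied by a material's adjustment type."""
    steps = ["beat"]                      # every type begins with beat adjustment
    if adjustment_type == "beat+chord":
        steps.append("chord")             # follow the target's chord feature
    elif adjustment_type == "beat+tonality":
        steps.append("tonality")          # one uniform key-based transposition
    return steps
```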
  • In summary, the beat feature of the target audio is determined, the mixing material is adjusted according to the beat feature of the target audio, and the target audio is mixed according to the beat-adjusted mixing material. Since the beat feature refers to the correspondence between the beat used in the target audio and time point information, the beat of the mixing material is adjusted according to that correspondence, rather than the sliced audio clips of the target song being reordered in the order of the chords in the instrument material. In this way, when the target audio is mixed according to the beat-adjusted mixing material, the original melody of the target audio can be retained, which is beneficial to the promotion of the mixing method proposed in this application.
  • FIG. 2 is a block diagram of a mixing device according to an embodiment of the present application. As shown in FIG. 2, the device 200 includes:
  • an obtaining module 201, configured to obtain a mixing material;
  • a determining module 202, configured to determine the beat feature of a target audio to be mixed, where the beat feature refers to the correspondence between the beats used in the target audio and time point information;
  • an adjustment module 203, configured to perform beat adjustment on the mixing material according to the beat feature of the target audio;
  • a processing module 204, configured to perform mixing processing on the target audio according to the beat-adjusted mixing material.
  • The adjustment module 203 is specifically configured to:
  • divide the target audio into a plurality of first-type audio clips according to the beat feature of the target audio, where each first-type audio clip corresponds to one beat;
  • determine a plurality of first-type material clips in the mixing material according to the time point information of each of the first-type audio clips, where each first-type material clip corresponds to one first-type audio clip, and the time point information of each first-type material clip is the same as that of the corresponding first-type audio clip;
  • adjust the beat of each of the first-type material clips to the beat of the corresponding first-type audio clip.
  • The processing module 204 includes:
  • an adjusting unit, configured to perform chord adjustment on the beat-adjusted mixing material;
  • a merging unit, configured to merge the chord-adjusted mixing material with the target audio.
  • The adjusting unit is specifically configured to:
  • determine the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information;
  • perform chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
  • The adjusting unit is further specifically configured to:
  • divide the target audio into a plurality of second-type audio clips according to the chord feature of the target audio, where each second-type audio clip corresponds to one chord;
  • determine a plurality of second-type material clips in the beat-adjusted mixing material according to the time point information of each of the second-type audio clips, where each second-type material clip corresponds to one second-type audio clip, and the time point information of each second-type material clip is the same as that of the corresponding second-type audio clip;
  • adjust the chord of each of the second-type material clips to the chord of the corresponding second-type audio clip.
  • Alternatively, the adjusting unit is specifically configured to:
  • determine the tonality of the target audio, where the tonality refers to the temperament in which the tonic of the target audio is located;
  • adjust the chord of the beat-adjusted mixing material to a chord consistent with the determined tonality.
  • The obtaining module 201 is specifically configured to:
  • select a target instrument material from a mixing material library, where the mixing material library includes at least one instrument material, and each instrument material is audio with a specified beat and a specified duration;
  • loop-splice the target instrument material to obtain the mixing material, where the duration of the mixing material is the same as that of the target audio.
  • The mixing device provided in the above embodiment is described using the division of the above functional modules merely as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • The mixing device provided by the foregoing embodiment belongs to the same concept as the mixing method embodiment; for the specific implementation process, refer to the method embodiment, and details are not repeated here.
  • FIG. 3 is a structural block diagram of a terminal 300 according to an embodiment of the present application.
  • The terminal 300 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the terminal 300 may also be called other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
  • the terminal 300 includes a processor 301 and a memory 302.
  • the processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 301 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 301 may also include a main processor and a coprocessor.
  • The main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit).
  • The coprocessor is a low-power processor for processing data in the standby state.
  • the processor 301 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is responsible for rendering and drawing content required to be displayed on the display screen.
  • The processor 301 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 302 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 302 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash storage devices.
  • non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction, and the at least one instruction is used to be executed by the processor 301 to implement the mixing method provided in the embodiment of the present application.
  • the terminal 300 may optionally include a peripheral device interface 303 and at least one peripheral device.
  • the processor 301, the memory 302, and the peripheral device interface 303 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 303 through a bus, a signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 304, a touch display screen 305, a camera 306, an audio circuit 307, a positioning component 308, and a power source 309.
  • the peripheral device interface 303 may be used to connect at least one peripheral device related to I / O (Input / Output) to the processor 301 and the memory 302.
  • In some embodiments, the processor 301, the memory 302, and the peripheral device interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the peripheral device interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 304 is used to receive and transmit an RF (Radio Frequency) signal, also called an electromagnetic signal.
  • the radio frequency circuit 304 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
  • the radio frequency circuit 304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
  • the radio frequency circuit 304 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes, but is not limited to, a metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G, and 5G), a wireless local area network, and / or a WiFi (Wireless Fidelity) network.
  • the radio frequency circuit 304 may further include circuits related to NFC (Near Field Communication), which is not limited in this application.
  • the display screen 305 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, videos, and any combination thereof.
  • the display screen 305 also has the ability to collect touch signals on or above the surface of the display screen 305.
  • the touch signal can be input to the processor 301 as a control signal for processing.
  • the display screen 305 may also be used to provide a virtual button and / or a virtual keyboard, which is also called a soft button and / or a soft keyboard.
  • In some embodiments, there may be one display screen 305, disposed on the front panel of the terminal 300; in other embodiments, there may be at least two display screens 305, respectively disposed on different surfaces of the terminal 300 or adopting a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the terminal 300. The display screen 305 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 305 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera component 306 is used for capturing images or videos.
  • the camera component 306 includes a front camera and a rear camera.
  • the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal.
  • the camera assembly 306 may further include a flash.
  • the flash can be a monochrome temperature flash or a dual color temperature flash.
  • a dual color temperature flash is a combination of a warm light flash and a cold light flash, which can be used for light compensation at different color temperatures.
  • the audio circuit 307 may include a microphone and a speaker.
  • the microphone is used for collecting sound waves of the user and the environment, and converting the sound waves into electrical signals and inputting them to the processor 301 for processing, or inputting to the radio frequency circuit 304 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert electrical signals from the processor 301 or the radio frequency circuit 304 into sound waves.
  • the speaker can be a traditional film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as ranging.
  • the audio circuit 307 may further include a headphone jack.
  • the positioning component 308 is used to locate the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service).
  • The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
  • the power source 309 is used to power various components in the terminal 300.
  • the power source 309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the rechargeable battery may support wired charging or wireless charging.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 300 further includes one or more sensors 310.
  • the one or more sensors 310 include, but are not limited to, an acceleration sensor 311, a gyroscope sensor 312, a pressure sensor 313, a fingerprint sensor 314, an optical sensor 315, and a proximity sensor 316.
  • the acceleration sensor 311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established by the terminal 300.
  • the acceleration sensor 311 may be used to detect components of the acceleration of gravity on three coordinate axes.
  • the processor 301 may control the touch display screen 305 to display the user interface in a horizontal view or a vertical view according to the gravity acceleration signal collected by the acceleration sensor 311.
  • the acceleration sensor 311 may also be used for collecting motion data of a game or a user.
  • the gyro sensor 312 can detect the body direction and rotation angle of the terminal 300, and the gyro sensor 312 can cooperate with the acceleration sensor 311 to collect a 3D motion of the user on the terminal 300. Based on the data collected by the gyro sensor 312, the processor 301 can implement the following functions: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 313 may be disposed on a side frame of the terminal 300 and / or a lower layer of the touch display screen 305.
  • When the pressure sensor 313 is disposed on the side frame of the terminal 300, a user's grip signal on the terminal 300 can be detected, and the processor 301 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 313.
  • When the pressure sensor 313 is disposed on the lower layer of the touch display screen 305, the processor 301 controls the operable controls on the UI according to the user's pressure operations on the touch display screen 305.
  • The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
  • The fingerprint sensor 314 is used to collect the user's fingerprint; the processor 301 identifies the user based on the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 identifies the user based on the collected fingerprint. When the user's identity is identified as trusted, the processor 301 authorizes the user to perform related sensitive operations, such as unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings.
  • The fingerprint sensor 314 may be disposed on the front, back, or side of the terminal 300. When a physical button or a manufacturer's logo is provided on the terminal 300, the fingerprint sensor 314 can be integrated with the physical button or the manufacturer's logo.
  • the optical sensor 315 is used to collect the ambient light intensity.
  • the processor 301 may control the display brightness of the touch display screen 305 according to the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 305 is decreased.
  • the processor 301 may also dynamically adjust the shooting parameters of the camera component 306 according to the ambient light intensity collected by the optical sensor 315.
  • The proximity sensor 316, also called a distance sensor, is usually disposed on the front panel of the terminal 300.
  • the proximity sensor 316 is used to collect the distance between the user and the front side of the terminal 300.
  • When the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from the screen-on state to the screen-off state; when the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually increases, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
  • Those skilled in the art can understand that the structure shown in FIG. 3 does not constitute a limitation on the terminal 300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • An embodiment of the present application further provides a non-transitory computer-readable storage medium, and when an instruction in the storage medium is executed by a processor of a mobile terminal, the mobile terminal can execute the mixing method provided in the foregoing embodiment.
  • the embodiment of the present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the mixing method provided by the foregoing embodiment.
  • Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium.
  • The storage medium mentioned may be a read-only memory, a magnetic disk, or an optical disc.


Abstract

A mixing method, device, and storage medium. The method includes: obtaining a mixing material (101); determining the beat feature of a target audio (102); performing beat adjustment on the mixing material according to the beat feature of the target audio (103); and performing mixing processing on the target audio according to the beat-adjusted mixing material (104). Because the beat feature refers to the correspondence between the beats used in the target audio and time point information, the method can preserve the original melody of the target audio, which facilitates adoption of the mixing method.

Description

Mixing method, device, and storage medium
This application claims priority to Chinese Patent Application No. 201810650947.5, entitled "Mixing method, device, and storage medium" and filed on June 22, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of multimedia technologies, and in particular to a mixing method, device, and storage medium.
Background
At present, to make songs more interesting, a song usually needs to be mixed to add freshness. Mixing a song means mixing other instrument materials into the original song, so that the mixed song has audio features of those instrument materials.
In the related art, when a target song needs to be mixed, the target song is first sliced by pitch to obtain multiple audio clips, each with a corresponding pitch, where the pitch refers to the number of vibrations of the sound per second. The instrument material to be mixed in is also a piece of audio; it is divided into different material clips by chord, each material clip corresponding to one chord, and one chord usually corresponds to multiple pitches. During mixing, for each material clip in the instrument material, an audio clip whose pitch corresponds to the chord of that material clip is looked up among the multiple audio clips, and the found audio clip is merged with the material clip to obtain a mixed clip. When this operation has been performed on all the material clips, the resulting mixed clips are combined to obtain the mixed song.
In the above process of mixing the target song, the instrument material is a piece of audio including multiple chords. Mixing the target song according to the chords in the instrument material is equivalent to reordering the sliced audio clips of the target song in the order of those chords, so the mixed song differs greatly from the target song and loses its original melody, which hinders adoption of the above mixing method.
Summary
Embodiments of this application provide a mixing method, device, and storage medium, which can be used to solve the problem in the related art that the mixed song differs greatly from the target song. The technical solutions are as follows:
In a first aspect, a mixing method is provided, the method including:
obtaining a mixing material;
determining the beat feature of a target audio to be mixed, where the beat feature refers to the correspondence between the beats used in the target audio and time point information;
performing beat adjustment on the mixing material according to the beat feature of the target audio; and
performing mixing processing on the target audio according to the beat-adjusted mixing material.
The performing beat adjustment on the mixing material according to the beat feature of the target audio includes:
dividing the target audio into a plurality of first-type audio clips according to the beat feature of the target audio, where each first-type audio clip corresponds to one beat;
determining a plurality of first-type material clips in the mixing material according to the time point information of each of the first-type audio clips, where each first-type material clip corresponds to one first-type audio clip, and the time point information of each first-type material clip is the same as that of the corresponding first-type audio clip; and
adjusting the beat of each of the first-type material clips to the beat of the corresponding first-type audio clip.
Optionally, the performing mixing processing on the target audio according to the beat-adjusted mixing material includes:
performing chord adjustment on the beat-adjusted mixing material; and
merging the chord-adjusted mixing material with the target audio.
Optionally, the performing chord adjustment on the beat-adjusted mixing material includes:
determining the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information; and
performing chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
Optionally, the performing chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio includes:
dividing the target audio into a plurality of second-type audio clips according to the chord feature of the target audio, where each second-type audio clip corresponds to one chord;
determining a plurality of second-type material clips in the beat-adjusted mixing material according to the time point information of each of the second-type audio clips, where each second-type material clip corresponds to one second-type audio clip, and the time point information of each second-type material clip is the same as that of the corresponding second-type audio clip; and
adjusting the chord of each of the second-type material clips to the chord of the corresponding second-type audio clip.
Optionally, the performing chord adjustment on the beat-adjusted mixing material includes:
determining the tonality of the target audio, where the tonality refers to the temperament in which the tonic of the target audio is located; and
adjusting the chord of the beat-adjusted mixing material to a chord consistent with the determined tonality according to the tonality of the target audio.
Optionally, the obtaining a mixing material includes:
selecting a target instrument material from a mixing material library, where the mixing material library includes at least one instrument material, and each instrument material is audio with a specified beat and a specified duration; and
loop-splicing the target instrument material to obtain the mixing material, where the duration of the mixing material is the same as that of the target audio.
In a second aspect, a mixing device is provided, the device including:
an obtaining module, configured to obtain a mixing material;
a determining module, configured to determine the beat feature of a target audio to be mixed, where the beat feature refers to the correspondence between the beats used in the target audio and time point information;
an adjustment module, configured to perform beat adjustment on the mixing material according to the beat feature of the target audio; and
a processing module, configured to perform mixing processing on the target audio according to the beat-adjusted mixing material.
Optionally, the adjustment module is specifically configured to:
divide the target audio into a plurality of first-type audio clips according to the beat feature of the target audio, where each first-type audio clip corresponds to one beat;
determine a plurality of first-type material clips in the mixing material according to the time point information of each of the first-type audio clips, where each first-type material clip corresponds to one first-type audio clip, and the time point information of each first-type material clip is the same as that of the corresponding first-type audio clip; and
adjust the beat of each of the first-type material clips to the beat of the corresponding first-type audio clip.
Optionally, the processing module includes:
an adjusting unit, configured to perform chord adjustment on the beat-adjusted mixing material; and
a merging unit, configured to merge the chord-adjusted mixing material with the target audio.
Optionally, the adjusting unit is specifically configured to:
determine the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information; and
perform chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
Optionally, the adjusting unit is further specifically configured to:
divide the target audio into a plurality of second-type audio clips according to the chord feature of the target audio, where each second-type audio clip corresponds to one chord;
determine a plurality of second-type material clips in the beat-adjusted mixing material according to the time point information of each of the second-type audio clips, where each second-type material clip corresponds to one second-type audio clip, and the time point information of each second-type material clip is the same as that of the corresponding second-type audio clip; and
adjust the chord of each of the second-type material clips to the chord of the corresponding second-type audio clip.
Optionally, the adjusting unit is specifically configured to:
determine the tonality of the target audio, where the tonality refers to the temperament in which the tonic of the target audio is located; and
adjust the chord of the beat-adjusted mixing material to a chord consistent with the determined tonality according to the tonality of the target audio.
Optionally, the obtaining module is specifically configured to:
select a target instrument material from a mixing material library, where the mixing material library includes at least one instrument material, and each instrument material is audio with a specified beat and a specified duration; and
loop-splice the target instrument material to obtain the mixing material, where the duration of the mixing material is the same as that of the target audio.
In a third aspect, another mixing device is provided, the device including:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to perform the steps of any one of the methods described in the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing instructions that, when executed by a processor, implement the steps of any one of the methods described in the first aspect.
In a fifth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the steps of any one of the methods described in the first aspect.
The beneficial effects brought by the technical solutions provided in the embodiments of this application are as follows:
In the embodiments of this application, after the mixing material is obtained, the beat feature of the target audio is determined, the mixing material is beat-adjusted according to that feature, and the target audio is mixed according to the beat-adjusted material. Because the beat feature refers to the correspondence between the beats used in the target audio and time point information, in this application the mixing material is beat-adjusted according to that correspondence, rather than by reordering the sliced audio clips of the target song in the order of the chords in an instrument material. Consequently, when the target audio is mixed according to the beat-adjusted material, the original melody of the target audio is preserved, which facilitates adoption of the mixing method proposed in this application.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a mixing method according to an embodiment of this application;
FIG. 2 is a block diagram of a mixing device according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the implementations of this application are further described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a mixing method according to an embodiment of this application. As shown in FIG. 1, the method includes the following steps:
Step 101: Obtain a mixing material.
In a possible implementation, step 101 may specifically be: selecting a target instrument material from a mixing material library, where the mixing material library includes at least one instrument material and each instrument material is audio with a specified beat and a specified duration; and loop-splicing the target instrument material to obtain the mixing material, whose duration is the same as that of the target audio.
Each instrument material in the mixing material library is made in advance. That each instrument material is audio with a specified beat and a specified duration means that each instrument material contains only one type of beat; in effect, each instrument material is a piece of audio with a repeating melody. For example, the mixing material library includes instrument materials such as a drum material, a piano material, a bass material, and a guitar material; each instrument material is only 2 seconds long and contains only one type of beat.
Because each instrument material is usually short, in order to mix the target audio with the target instrument material, the mixing material may first be obtained from the target instrument material. That is, the target instrument material is loop-spliced, and the loop-spliced audio serves as the mixing material; the purpose of loop-splicing is to make the duration of the mixing material consistent with that of the target audio. For example, if the target instrument material is a 2-second drum material and the target audio is 3 minutes long, the drum material may be loop-spliced into a 3-minute mixing material. In addition, because the beat of the target instrument material is a specified beat, the loop-spliced mixing material also contains only one type of beat.
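As an illustrative sketch outside the patent text, the loop-splicing step above can be expressed over plain sample arrays; the sample rate and the drum values here are assumptions for the example, not part of the application:

```python
def loop_splice(material, target_len):
    """Repeat a short instrument material until it covers target_len samples,
    then trim the excess so the result matches the target audio's duration."""
    if not material:
        raise ValueError("material must be non-empty")
    reps = -(-target_len // len(material))  # ceiling division
    return (material * reps)[:target_len]

# A 2-second drum loop at an assumed 4 samples/second, spliced to 7 seconds.
drum = [0.9, 0.1, 0.5, 0.1] * 2            # 8 samples = 2 s
mix_material = loop_splice(drum, 7 * 4)    # 28 samples = 7 s
print(len(mix_material))                   # 28
```

The same trimming also covers the case where the target duration is not an exact multiple of the material's duration.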
Optionally, in this embodiment of this application, when the duration of an instrument material is the same as that of the target audio, the mixing material may also be the instrument material selected by the user directly, without the above loop-splicing. In this case, the mixing material may contain only one type of beat or multiple types of beats, which is not specifically limited in this embodiment.
In addition, some kinds of instrument materials may have only beats, while some kinds have chords in addition to beats; for example, a drum material has only beats, but a guitar material has both beats and chords. An instrument material that has both beats and chords may include only one type of chord or multiple types of chords, which is not specifically limited in this embodiment.
Step 102: Determine the beat feature of the target audio to be mixed, where the beat feature refers to the correspondence between the beats used in the target audio and time point information.
The time point information refers to time points on the playback timeline of the target audio. For example, the target audio is a 3-minute song; determining its beat feature means determining that the song uses a 2-beat meter from 0 to 3 seconds, a 4-beat meter from 3 to 8 seconds, and so on.
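A minimal sketch of the beat feature (our own data layout, not one prescribed by the application) is a list of (start, end, beats) triples over the playback timeline, with a helper that looks up the beat at a given time:

```python
# Beat feature: correspondence between time ranges (seconds) and beats.
beat_feature = [(0, 3, 2), (3, 8, 4)]  # 2-beat meter in 0-3 s, 4-beat in 3-8 s

def beat_at(feature, t):
    """Return the beat used at playback time t, or None outside all ranges."""
    for start, end, beats in feature:
        if start <= t < end:
            return beats
    return None

print(beat_at(beat_feature, 1.5))  # 2
print(beat_at(beat_feature, 5))    # 4
```

The chord feature and tonality discussed later can be represented the same way, with a chord name or key in place of the beat count.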
Step 103: Perform beat adjustment on the mixing material according to the beat feature of the target audio.
Because the beat feature refers to the correspondence between the beats used in the target audio and time point information, step 103 may specifically be: dividing the target audio into a plurality of first-type audio clips according to the beat feature, where each first-type audio clip corresponds to one beat; determining a plurality of first-type material clips in the mixing material according to the time point information of each first-type audio clip, where each first-type material clip corresponds to one first-type audio clip and has the same time point information as it; and adjusting the beat of each first-type material clip to the beat of the corresponding first-type audio clip.
For example, the target audio is 30 seconds long and the beat of the mixing material is 3 beats. After the target audio is divided according to its beat feature, three first-type audio clips are obtained: clip 1 spans 0 to 9 seconds with a 2-beat meter, clip 2 spans 9 to 15 seconds with a 4-beat meter, and clip 3 spans 15 to 30 seconds with a 2-beat meter. According to the time point information of these three audio clips, the first-type material clips of the mixing material spanning 0 to 9 seconds, 9 to 15 seconds, and 15 to 30 seconds can be determined.
Then the beat of the 0-9-second material clip is adjusted from 3 beats to 2 beats, the beat of the 9-15-second material clip from 3 beats to 4 beats, and the beat of the 15-30-second material clip from 3 beats to 2 beats. Clearly, after adjustment, every first-type material clip has the same beat as the first-type audio clip with the same time point information. That is, the beat adjustment gives the mixing material the same beat feature as the target audio, so that the subsequent mixing does not cause the mixed audio to lose the original rhythm of the target audio.
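The worked example above can be sketched as follows; this is our illustration with assumed clip metadata, and the actual tempo processing of the audio samples is deliberately left out of the sketch:

```python
def beat_adjust(material_beat, beat_feature):
    """Split the mixing material along the target audio's beat feature and
    relabel each material clip with the matching audio clip's beat.
    beat_feature is a list of (start, end, beats) triples."""
    return [{"start": s, "end": e, "beat": b, "was": material_beat}
            for s, e, b in beat_feature]

# 30-second target audio, 3-beat mixing material, as in the example above.
feature = [(0, 9, 2), (9, 15, 4), (15, 30, 2)]
clips = beat_adjust(3, feature)
print([c["beat"] for c in clips])  # [2, 4, 2]
```

After this step, every material clip carries the beat of the audio clip that shares its time span, which is exactly the invariant the text asserts.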
Step 104: Perform mixing processing on the target audio according to the beat-adjusted mixing material.
In a possible implementation, step 104 may specifically be: after the mixing material has been adjusted according to the beat feature, directly merging the beat-adjusted mixing material with the target audio to mix the target audio.
Because some kinds of instrument materials have only beats, steps 101 to 104 above suffice to mix the target audio. But some kinds of instrument materials have chords in addition to beats. For a material with both, if only beat adjustment is performed after the mixing material is obtained, the chord feature of the mixing material may be inconsistent with that of the target audio, and the material and the target audio cannot be merged smoothly. Therefore, for an instrument material with both beats and chords, after beat adjustment the mixing material also needs chord adjustment, so that the target audio is mixed according to the chord-adjusted material. Thus, in another possible implementation, step 104 may specifically be: performing chord adjustment on the beat-adjusted mixing material, and merging the chord-adjusted material with the target audio.
In this embodiment of this application, chord adjustment of the beat-adjusted mixing material has the following two implementations:
First implementation: determine the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information, and perform chord adjustment on the beat-adjusted mixing material according to that chord feature.
Determining the chord feature of the target audio means determining which chord the target audio uses in which time period. For example, the target audio is a 3-minute song; determining its chord feature means determining that the song uses an E chord from 0 to 3 seconds, a G chord from 3 to 8 seconds, and so on.
The chord adjustment according to the chord feature of the target audio may be implemented as: dividing the target audio into a plurality of second-type audio clips according to the chord feature, each corresponding to one chord; determining a plurality of second-type material clips in the beat-adjusted mixing material according to the time point information of each second-type audio clip, where each second-type material clip corresponds to one second-type audio clip and has the same time point information as it; and adjusting the chord of each second-type material clip to the chord of the corresponding second-type audio clip.
For example, the target audio is 30 seconds long and the mixing material has only one chord, chord A. After the target audio is divided according to its chord feature, three second-type audio clips are obtained: clip 1 spans 0 to 9 seconds with chord C, clip 2 spans 9 to 15 seconds with chord A, and clip 3 spans 15 to 30 seconds with chord H. According to the time point information of these three audio clips, the second-type material clips of the beat-adjusted mixing material spanning 0 to 9 seconds, 9 to 15 seconds, and 15 to 30 seconds can be determined.
Then the chord of the 0-9-second material clip is adjusted from chord A to chord C, the 9-15-second material clip needs no adjustment, and the chord of the 15-30-second material clip is adjusted from chord A to chord H. Clearly, after adjustment, every second-type material clip has the same chord as the second-type audio clip with the same time point information. That is, the chord adjustment on the beat-adjusted mixing material gives the material the same beat feature and chord feature as the target audio, which amounts to the adjusted material having exactly the same rhythm as the target audio. Then, when the target audio is subsequently mixed according to the mixing material, the mixed audio does not lose the original rhythm of the target audio.
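The chord-adjustment example can be sketched in the same metadata style (our illustration; the actual pitch-shifting of the material's audio is outside the sketch, and the clip layout is an assumption):

```python
def chord_adjust(material_chord, chord_feature):
    """chord_feature: (start, end, chord) triples from the target audio.
    Returns material clips with each clip's chord set to the matching audio
    clip's chord; clips that already match are flagged as unchanged."""
    return [{"start": s, "end": e, "chord": c,
             "changed": c != material_chord}
            for s, e, c in chord_feature]

# 30-second target audio, single-chord (A) mixing material, as above.
feature = [(0, 9, "C"), (9, 15, "A"), (15, 30, "H")]
clips = chord_adjust("A", feature)
print([(c["chord"], c["changed"]) for c in clips])
# [('C', True), ('A', False), ('H', True)]
```

Note how the middle clip is left untouched, mirroring the text's observation that the 9-15-second material clip needs no adjustment.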
Second implementation: determine the tonality used by the target audio, and according to it adjust the chord of the beat-adjusted mixing material to a chord consistent with the determined tonality.
The first implementation performs chord adjustment according to the chord feature of the target audio, which requires analyzing all the chords included in the target audio before the chord-adjusted mixing material can have the same chord feature as the target audio; this tends to make chord adjustment inefficient. Because a chord usually corresponds to a tonality, and a song usually has one tonality, in this embodiment the chords in the mixing material can be adjusted uniformly according to the tonality of the target audio, without adjusting them according to every chord in the target audio, which improves the efficiency of chord adjustment. The tonality refers to the temperament in which the tonic of the target audio is located.
Specifically, the tonality of the target audio is determined, and according to it the chord of the beat-adjusted mixing material is adjusted to a chord consistent with the determined tonality. For example, the tonality of the target audio is C major, and the beat-adjusted mixing material contains only one type of chord, an A chord. The adjustment then proceeds as follows: the A chord may be treated as A major, and the mixing material is adjusted in the manner of transposing from A major to C major, which amounts to adjusting the A chord in the material to a C chord.
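Treating the material's chord root as a key and transposing to the target key reduces to a semitone shift in equal temperament; a minimal sketch of that arithmetic (our illustration, not the patent's procedure):

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

def semitone_shift(src_key, dst_key):
    """Number of semitones (0-11, upward) to transpose src_key to dst_key."""
    return (PITCH_CLASSES.index(dst_key) - PITCH_CLASSES.index(src_key)) % 12

def transpose(root, shift):
    """Transpose a chord root upward by `shift` semitones."""
    return PITCH_CLASSES[(PITCH_CLASSES.index(root) + shift) % 12]

shift = semitone_shift("A", "C")     # A major -> C major: 3 semitones up
print(shift, transpose("A", shift))  # 3 C
```

One uniform shift covers the whole material, which is why this variant avoids analyzing every individual chord of the target audio.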
It should be noted that, for an instrument material with both beats and chords, the above implementation first performs beat adjustment on the mixing material and then chord adjustment after the mixing material is obtained. Of course, chord adjustment may also be performed first and then beat adjustment, which is not specifically limited in this embodiment.
In this embodiment of this application, so that the mixed audio can preserve the original melody of the target audio, the mixing material may be beat-adjusted, or beat-adjusted and chord-adjusted; and the chord adjustment may be made according to the chord feature of the target audio or according to its tonality. That is, this embodiment provides three different adjustment manners.
In addition, because the mixing material is determined from a target instrument material in the mixing material library, an adjustment type can be set for each instrument material in the library. In a possible implementation, there are three adjustment types: the first is a "beat type", which instructs adjusting the mixing material according to the beat feature of the target audio; the second is a "beat + chord type", which instructs adjusting the mixing material according to the beat feature and chord feature of the target audio; and the third is a "beat + tonality type", which instructs adjusting the mixing material according to the beat feature and tonality of the target audio.
In this embodiment of this application, after the mixing material is obtained, the beat feature of the target audio is determined, the mixing material is beat-adjusted according to that feature, and the target audio is mixed according to the beat-adjusted material. Because the beat feature refers to the correspondence between the beats used in the target audio and time point information, in this application the mixing material is beat-adjusted according to that correspondence, rather than by reordering the sliced audio clips of the target song in the order of the chords in an instrument material. Consequently, when the target audio is mixed according to the beat-adjusted material, the original melody of the target audio is preserved, which facilitates adoption of the mixing method proposed in this application.
FIG. 2 shows a mixing device according to an embodiment of this application. As shown in FIG. 2, the device 200 includes:
an obtaining module 201, configured to obtain a mixing material;
a determining module 202, configured to determine the beat feature of a target audio to be mixed, where the beat feature refers to the correspondence between the beats used in the target audio and time point information;
an adjustment module 203, configured to perform beat adjustment on the mixing material according to the beat feature of the target audio; and
a processing module 204, configured to perform mixing processing on the target audio according to the beat-adjusted mixing material.
Optionally, the adjustment module 203 is specifically configured to:
divide the target audio into a plurality of first-type audio clips according to the beat feature of the target audio, where each first-type audio clip corresponds to one beat;
determine a plurality of first-type material clips in the mixing material according to the time point information of each of the first-type audio clips, where each first-type material clip corresponds to one first-type audio clip, and the time point information of each first-type material clip is the same as that of the corresponding first-type audio clip; and
adjust the beat of each of the first-type material clips to the beat of the corresponding first-type audio clip.
Optionally, the processing module 204 includes:
an adjusting unit, configured to perform chord adjustment on the beat-adjusted mixing material; and
a merging unit, configured to merge the chord-adjusted mixing material with the target audio.
Optionally, the adjusting unit is specifically configured to:
determine the chord feature of the target audio, where the chord feature refers to the correspondence between the chords used in the target audio and time point information; and
perform chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
Optionally, the adjusting unit is further specifically configured to:
divide the target audio into a plurality of second-type audio clips according to the chord feature of the target audio, where each second-type audio clip corresponds to one chord;
determine a plurality of second-type material clips in the beat-adjusted mixing material according to the time point information of each of the second-type audio clips, where each second-type material clip corresponds to one second-type audio clip, and the time point information of each second-type material clip is the same as that of the corresponding second-type audio clip; and
adjust the chord of each of the second-type material clips to the chord of the corresponding second-type audio clip.
Optionally, the adjusting unit is specifically configured to:
determine the tonality of the target audio, where the tonality refers to the temperament in which the tonic of the target audio is located; and
adjust the chord of the beat-adjusted mixing material to a chord consistent with the determined tonality according to the tonality of the target audio.
Optionally, the obtaining module 201 is specifically configured to:
select a target instrument material from a mixing material library, where the mixing material library includes at least one instrument material, and each instrument material is audio with a specified beat and a specified duration; and
loop-splice the target instrument material to obtain the mixing material, where the duration of the mixing material is the same as that of the target audio.
In this embodiment of this application, after the mixing material is obtained, the beat feature of the target audio is determined, the mixing material is beat-adjusted according to that feature, and the target audio is mixed according to the beat-adjusted material. Because the beat feature refers to the correspondence between the beats used in the target audio and time point information, in this application the mixing material is beat-adjusted according to that correspondence, rather than by reordering the sliced audio clips of the target song in the order of the chords in an instrument material. Consequently, when the target audio is mixed according to the beat-adjusted material, the original melody of the target audio is preserved, which facilitates adoption of the mixing method proposed in this application.
It should be noted that the mixing device provided in the above embodiment is described using the division of the above functional modules merely as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the mixing device provided by the above embodiment belongs to the same concept as the mixing method embodiment; for its specific implementation process, refer to the method embodiment, which is not repeated here.
FIG. 3 is a structural block diagram of a terminal 300 according to an embodiment of this application. The terminal 300 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The terminal 300 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
Generally, the terminal 300 includes a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 301 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 302 may include one or more computer-readable storage media, which may be non-transitory. The memory 302 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction, which is executed by the processor 301 to implement the mixing method provided in the embodiments of this application.
In some embodiments, the terminal 300 optionally further includes a peripheral device interface 303 and at least one peripheral device. The processor 301, the memory 302, and the peripheral device interface 303 may be connected through a bus or signal line. Each peripheral device may be connected to the peripheral device interface 303 through a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 304, a touch display screen 305, a camera 306, an audio circuit 307, a positioning component 308, and a power source 309.
The peripheral device interface 303 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 301 and the memory 302. In some embodiments, the processor 301, the memory 302, and the peripheral device interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302, and the peripheral device interface 303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 304 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 304 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 304 can communicate with other terminals through at least one wireless communication protocol, including but not limited to a metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G, and 5G), a wireless local area network, and/or a WiFi (Wireless Fidelity) network. In some embodiments, the radio frequency circuit 304 may further include NFC (Near Field Communication)-related circuits, which is not limited in this application.
The display screen 305 is used to display a UI (User Interface), which may include graphics, text, icons, videos, and any combination thereof. When the display screen 305 is a touch display screen, it also has the ability to collect touch signals on or above its surface; the touch signal may be input to the processor 301 as a control signal for processing. In this case, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 305, disposed on the front panel of the terminal 300; in other embodiments, there may be at least two display screens 305, respectively disposed on different surfaces of the terminal 300 or adopting a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved or folded surface of the terminal 300. The display screen 305 can even be set as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 305 can be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera component 306 is used to capture images or videos. Optionally, the camera component 306 includes a front camera and a rear camera. Usually, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to fuse the main camera and the depth-of-field camera for a background-blurring function, fuse the main camera and the wide-angle camera for panoramic shooting and VR (Virtual Reality) shooting, or realize other fused shooting functions. In some embodiments, the camera component 306 may further include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 307 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment and convert them into electrical signals input to the processor 301 for processing, or input to the radio frequency circuit 304 for voice communication. For stereo collection or noise reduction, there may be multiple microphones disposed at different parts of the terminal 300. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuit 304 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 307 may further include a headphone jack.
The positioning component 308 is used to locate the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power source 309 is used to supply power to the components in the terminal 300. The power source 309 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power source 309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 300 further includes one or more sensors 310, including but not limited to an acceleration sensor 311, a gyroscope sensor 312, a pressure sensor 313, a fingerprint sensor 314, an optical sensor 315, and a proximity sensor 316.
The acceleration sensor 311 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established by the terminal 300. For example, the acceleration sensor 311 may be used to detect the components of the acceleration of gravity on the three coordinate axes. The processor 301 may control the touch display screen 305 to display the user interface in landscape or portrait view according to the gravity acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used to collect motion data of a game or of the user.
The gyroscope sensor 312 can detect the body direction and rotation angle of the terminal 300 and can cooperate with the acceleration sensor 311 to collect the user's 3D actions on the terminal 300. Based on the data collected by the gyroscope sensor 312, the processor 301 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 313 may be disposed on the side frame of the terminal 300 and/or the lower layer of the touch display screen 305. When the pressure sensor 313 is disposed on the side frame of the terminal 300, the user's grip signal on the terminal 300 can be detected, and the processor 301 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 313. When the pressure sensor 313 is disposed on the lower layer of the touch display screen 305, the processor 301 controls the operable controls on the UI according to the user's pressure operations on the touch display screen 305. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 314 is used to collect the user's fingerprint; the processor 301 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 identifies the user's identity based on the collected fingerprint. When the user's identity is identified as trusted, the processor 301 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 314 may be disposed on the front, back, or side of the terminal 300. When a physical button or a manufacturer's logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with the physical button or the manufacturer's logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the touch display screen 305 according to the ambient light intensity collected by the optical sensor 315: when the ambient light intensity is high, the display brightness of the touch display screen 305 is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera component 306 according to the ambient light intensity collected by the optical sensor 315.
The proximity sensor 316, also called a distance sensor, is usually disposed on the front panel of the terminal 300 and is used to collect the distance between the user and the front of the terminal 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front of the terminal 300 gradually decreases, the processor 301 controls the touch display screen 305 to switch from the screen-on state to the screen-off state; when the proximity sensor 316 detects that this distance gradually increases, the processor 301 controls the touch display screen 305 to switch from the screen-off state to the screen-on state.
Those skilled in the art can understand that the structure shown in FIG. 3 does not constitute a limitation on the terminal 300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
An embodiment of this application further provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal can perform the mixing method provided in the above embodiments.
An embodiment of this application further provides a computer program product containing instructions which, when run on a computer, causes the computer to perform the mixing method provided in the above embodiments.
Those of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (16)

  1. 一种混音方法,所述方法包括:
    获取混音素材;
    确定需要进行混音的目标音频的节拍特征,所述节拍特征是指所述目标音频中采用的节拍和时间点信息之间的对应关系;
    根据所述目标音频的节拍特征,对所述混音素材进行节拍调整;
    根据节拍调整之后的混音素材,对所述目标音频进行混音处理。
  2. 如权利要求1所述的方法,其中,所述根据所述目标音频的节拍特征,对所述混音素材进行节拍调整,包括:
    按照所述目标音频的节拍特征,将所述目标音频划分为多个第一类音频片段,每个第一类音频片段对应一个节拍;
    按照所述多个第一类音频片段中每个第一类音频片段的时间点信息,确定所述混音素材中的多个第一类素材片段,每个第一类素材片段对应一个第一类音频片段,且每个第一类素材片段的时间点信息和对应的第一类音频片段的时间点信息相同;
    将所述多个第一类素材片段中每个第一类素材片段的节拍调整为对应的第一类音频片段的节拍。
  3. 如权利要求1所述的方法,其中,所述根据节拍调整之后的混音素材,对所述目标音频进行混音处理,包括:
    对所述节拍调整之后的混音素材进行和弦调整;
    将和弦调整之后的混音素材与所述目标音频合并。
  4. 如权利要求3所述的方法,其中,所述对所述节拍调整之后的混音素材进行和弦调整,包括:
    确定所述目标音频的和弦特征,所述和弦特征是指所述目标音频中采用的和弦和时间点信息之间的对应关系;
    根据所述目标音频的和弦特征,对所述节拍调整之后的混音素材进行和弦调整。
  5. The method according to claim 4, wherein the performing chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio comprises:
    dividing the target audio into a plurality of second-type audio segments according to the chord feature of the target audio, each second-type audio segment corresponding to one chord;
    determining a plurality of second-type material segments in the beat-adjusted mixing material according to the time point information of each of the plurality of second-type audio segments, each second-type material segment corresponding to one second-type audio segment, and the time point information of each second-type material segment being the same as that of the corresponding second-type audio segment; and
    adjusting the chord of each of the plurality of second-type material segments to the chord of the corresponding second-type audio segment.
  6. The method according to claim 3, wherein the performing chord adjustment on the beat-adjusted mixing material comprises:
    determining a key of the target audio, the key referring to the pitch on which the tonic of the target audio is based; and
    adjusting, according to the key of the target audio, the chords of the beat-adjusted mixing material to chords consistent with the determined key.
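In implementation terms, the key matching recited in this claim reduces to transposing the material's chords by the pitch-class distance between the two keys. A minimal sketch, assuming both keys are already known as pitch-class names (a real system would first estimate the key, e.g. from chroma features); `NOTE_NAMES`, `semitone_shift`, and `transpose_chord` are illustrative names, not from the patent:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def semitone_shift(material_key, target_key):
    """Smallest signed shift in semitones that transposes the
    material's key onto the target audio's key; a tie at a tritone
    (6 semitones) resolves to shifting up."""
    diff = (NOTE_NAMES.index(target_key) - NOTE_NAMES.index(material_key)) % 12
    return diff if diff <= 6 else diff - 12

def transpose_chord(chord, shift):
    """Transpose a chord given as a list of pitch-class names."""
    return [NOTE_NAMES[(NOTE_NAMES.index(n) + shift) % 12] for n in chord]
```

For example, `transpose_chord(["C", "E", "G"], semitone_shift("C", "D"))` moves a C major triad onto D major, giving `["D", "F#", "A"]`.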
  7. The method according to any one of claims 1 to 6, wherein the acquiring mixing material comprises:
    selecting target instrument material from a mixing material library, the mixing material library comprising at least one piece of instrument material, each piece of instrument material being audio with a specified beat and a specified duration; and
    loop-splicing the target instrument material to obtain the mixing material, the duration of the mixing material being the same as that of the target audio.
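The loop splicing recited in this claim — repeating a fixed-duration instrument clip end to end and truncating the final repetition so the material's duration equals the target audio's — comes down to a few lines when audio is handled as sample arrays. A hypothetical sketch (the function name is not from the patent):

```python
def loop_material(instrument_clip, target_len):
    """Concatenate the instrument clip end to end and cut the result
    to exactly `target_len` samples, so the mixing material's
    duration matches the target audio's duration."""
    if not instrument_clip:
        raise ValueError("instrument clip must be non-empty")
    repeats = -(-target_len // len(instrument_clip))  # ceiling division
    return (instrument_clip * repeats)[:target_len]
```

`loop_material([1, 2, 3], 7)` yields `[1, 2, 3, 1, 2, 3, 1]`: two full repetitions plus a truncated third.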
  8. An audio mixing apparatus, the apparatus comprising:
    an acquisition module configured to acquire mixing material;
    a determination module configured to determine a beat feature of target audio to be mixed, the beat feature referring to a correspondence between beats used in the target audio and time point information;
    an adjustment module configured to perform beat adjustment on the mixing material according to the beat feature of the target audio; and
    a processing module configured to perform mixing processing on the target audio according to the beat-adjusted mixing material.
  9. The apparatus according to claim 8, wherein the adjustment module is specifically configured to:
    divide the target audio into a plurality of first-type audio segments according to the beat feature of the target audio, each first-type audio segment corresponding to one beat;
    determine a plurality of first-type material segments in the mixing material according to the time point information of each of the plurality of first-type audio segments, each first-type material segment corresponding to one first-type audio segment, and the time point information of each first-type material segment being the same as that of the corresponding first-type audio segment; and
    adjust the beat of each of the plurality of first-type material segments to the beat of the corresponding first-type audio segment.
  10. The apparatus according to claim 8, wherein the processing module comprises:
    an adjustment unit configured to perform chord adjustment on the beat-adjusted mixing material; and
    a merging unit configured to merge the chord-adjusted mixing material with the target audio.
  11. The apparatus according to claim 10, wherein the adjustment unit is specifically configured to:
    determine a chord feature of the target audio, the chord feature referring to a correspondence between chords used in the target audio and time point information; and
    perform chord adjustment on the beat-adjusted mixing material according to the chord feature of the target audio.
  12. The apparatus according to claim 11, wherein the adjustment unit is further specifically configured to:
    divide the target audio into a plurality of second-type audio segments according to the chord feature of the target audio, each second-type audio segment corresponding to one chord;
    determine a plurality of second-type material segments in the beat-adjusted mixing material according to the time point information of each of the plurality of second-type audio segments, each second-type material segment corresponding to one second-type audio segment, and the time point information of each second-type material segment being the same as that of the corresponding second-type audio segment; and
    adjust the chord of each of the plurality of second-type material segments to the chord of the corresponding second-type audio segment.
  13. The apparatus according to claim 10, wherein the adjustment unit is specifically configured to:
    determine a key of the target audio, the key referring to the pitch on which the tonic of the target audio is based; and
    adjust, according to the key of the target audio, the chords of the beat-adjusted mixing material to chords consistent with the determined key.
  14. The apparatus according to any one of claims 8 to 13, wherein the acquisition module is specifically configured to:
    select target instrument material from a mixing material library, the mixing material library comprising at least one piece of instrument material, each piece of instrument material being audio with a specified beat and a specified duration; and
    loop-splice the target instrument material to obtain the mixing material, the duration of the mixing material being the same as that of the target audio.
  15. An audio mixing apparatus, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to perform the steps of the method according to any one of claims 1 to 7.
  16. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
PCT/CN2018/117767 2018-06-22 2018-11-27 Audio mixing method, apparatus and storage medium WO2019242235A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/617,920 US11315534B2 (en) 2018-06-22 2018-11-27 Method, apparatus, terminal and storage medium for mixing audio
EP18919406.1A EP3618055B1 (en) 2018-06-22 2018-11-27 Audio mixing method and terminal, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810650947.5A CN108831425B (zh) 2018-06-22 2018-06-22 Audio mixing method, apparatus and storage medium
CN201810650947.5 2018-06-22

Publications (1)

Publication Number Publication Date
WO2019242235A1 true WO2019242235A1 (zh) 2019-12-26

Family

ID=64137533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117767 WO2019242235A1 (zh) 2018-06-22 2018-11-27 Audio mixing method, apparatus and storage medium

Country Status (4)

Country Link
US (1) US11315534B2 (zh)
EP (1) EP3618055B1 (zh)
CN (1) CN108831425B (zh)
WO (1) WO2019242235A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108831425B (zh) 2018-06-22 2022-01-04 Guangzhou Kugou Computer Technology Co., Ltd. Audio mixing method, apparatus and storage medium
CN109346044B (zh) * 2018-11-23 2023-06-23 Guangzhou Kugou Computer Technology Co., Ltd. Audio processing method, apparatus and storage medium
CN109545249B (zh) * 2018-11-23 2020-11-03 Guangzhou Kugou Computer Technology Co., Ltd. Method and apparatus for processing music files
US20230267899A1 (en) * 2020-03-11 2023-08-24 Nusic Limited Automatic audio mixing device
CN113674725B (zh) * 2021-08-23 2024-04-16 Guangzhou Kugou Computer Technology Co., Ltd. Audio mixing method, apparatus, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1838229A (zh) * 2004-09-16 2006-09-27 Sony Corporation Playback apparatus and playback method
CN101080763A (zh) * 2004-12-14 2007-11-28 Sony Corporation Music data reconstruction apparatus and method, and music content playback apparatus and method
CN101160615A (zh) * 2005-04-25 2008-04-09 Sony Corporation Music content playback device and music content playback method
CN105659314A (zh) * 2013-09-19 2016-06-08 Microsoft Technology Licensing, LLC Combining audio samples by automatically adjusting sample characteristics
CN106558314A (zh) * 2015-09-29 2017-04-05 Guangzhou Kugou Computer Technology Co., Ltd. Audio mixing processing method, apparatus and device
WO2017058844A1 (en) * 2015-09-29 2017-04-06 Amper Music, Inc. Machines, systems and processes for automated music composition and generation employing linguistic and/or graphical icon based musical experience descriptors
CN108831425A (zh) * 2018-06-22 2018-11-16 Guangzhou Kugou Computer Technology Co., Ltd. Audio mixing method, apparatus and storage medium

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4060993B2 (ja) * 1999-07-26 2008-03-12 Pioneer Corporation Audio information storage control method and apparatus, and audio information output apparatus
EP1162621A1 (en) * 2000-05-11 2001-12-12 Hewlett-Packard Company, A Delaware Corporation Automatic compilation of songs
KR20080074977A (ko) * 2005-12-09 2008-08-13 Sony Corporation Music editing apparatus and music editing method
EP1959427A4 (en) * 2005-12-09 2011-11-30 Sony Corp MUSIC EDITING DEVICE, MUSIC EDITING INFORMATION GENERATION METHOD AND RECORDING MEDIUM ON WHICH MUSIC EDITOR INFORMATION IS RECORDED
US7642444B2 (en) * 2006-11-17 2010-01-05 Yamaha Corporation Music-piece processing apparatus and method
JP5007563B2 (ja) * 2006-12-28 2012-08-22 Sony Corporation Music editing apparatus and method, and program
US7863511B2 (en) * 2007-02-09 2011-01-04 Avid Technology, Inc. System for and method of generating audio sequences of prescribed duration
JP2012103603A (ja) * 2010-11-12 2012-05-31 Sony Corp Information processing apparatus, music section extraction method, and program
JP5974436B2 (ja) * 2011-08-26 2016-08-23 Yamaha Corporation Music generation apparatus
US9098679B2 (en) * 2012-05-15 2015-08-04 Chi Leung KWAN Raw sound data organizer
CN103928037B (zh) * 2013-01-10 2018-04-13 Pioneer Hi-Tech (Shanghai) Co., Ltd. Audio switching method and terminal device
US10331098B2 (en) * 2013-12-03 2019-06-25 Guangzhou Kugou Computer Technology Co., Ltd. Playback control method, player device, and storage medium
TWI624827B (zh) * 2015-05-14 2018-05-21 Compal Electronics, Inc. Beat marking method
CN105023559A (zh) * 2015-05-27 2015-11-04 Tencent Technology (Shenzhen) Co., Ltd. Karaoke processing method and system
EP3306606A4 (en) * 2015-05-27 2019-01-16 Guangzhou Kugou Computer Technology Co., Ltd. METHOD, APPARATUS AND SYSTEM FOR AUDIO PROCESSING
US9804818B2 (en) * 2015-09-30 2017-10-31 Apple Inc. Musical analysis platform
CN106653037B (zh) * 2015-11-03 2020-02-14 Guangzhou Kugou Computer Technology Co., Ltd. Audio data processing method and apparatus
CN106652997B (zh) * 2016-12-29 2020-07-28 Tencent Music Entertainment (Shenzhen) Co., Ltd. Audio synthesis method and terminal
CN107863095A (zh) * 2017-11-21 2018-03-30 Guangzhou Kugou Computer Technology Co., Ltd. Audio signal processing method, apparatus and storage medium
CN107871012A (zh) * 2017-11-22 2018-04-03 Guangzhou Kugou Computer Technology Co., Ltd. Audio processing method, apparatus, storage medium and terminal
CN108156575B (zh) * 2017-12-26 2019-09-27 Guangzhou Kugou Computer Technology Co., Ltd. Audio signal processing method, apparatus and terminal
CN108156561B (zh) * 2017-12-26 2020-08-04 Guangzhou Kugou Computer Technology Co., Ltd. Audio signal processing method, apparatus and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3618055A4 *

Also Published As

Publication number Publication date
CN108831425A (zh) 2018-11-16
US11315534B2 (en) 2022-04-26
CN108831425B (zh) 2022-01-04
EP3618055A1 (en) 2020-03-04
EP3618055A4 (en) 2020-05-20
US20210272542A1 (en) 2021-09-02
EP3618055B1 (en) 2023-12-27

Similar Documents

Publication Publication Date Title
CN110336960B Video synthesis method, apparatus, terminal and storage medium
CN108769561B Video recording method and apparatus
WO2020253096A1 Video synthesis method, apparatus, terminal and storage medium
WO2019242235A1 Audio mixing method, apparatus and storage medium
CN108538302B Method and apparatus for synthesizing audio
WO2021068903A1 Method, apparatus, device and storage medium for determining volume adjustment ratio information
CN109144346B Song sharing method, apparatus and storage medium
CN109635133B Visualized audio playback method, apparatus, electronic device and storage medium
CN111061405B Method, apparatus, device and storage medium for recording song audio
CN108831424B Audio splicing method, apparatus and storage medium
CN110266982B Method and system for providing songs while recording video
WO2021139535A1 Method, apparatus, system, device and storage medium for playing audio
CN109743461B Audio data processing method, apparatus, terminal and storage medium
CN109102811B Audio fingerprint generation method, apparatus and storage medium
WO2020244516A1 Online interaction method and apparatus
CN111081277B Audio evaluation method, apparatus, device and storage medium
CN113596516A Method, system, device and storage medium for joint-microphone chorus
CN109346044B Audio processing method, apparatus and storage medium
CN112086102B Method, apparatus, device and storage medium for extending an audio frequency band
CN112616082A Video preview method, apparatus, terminal and storage medium
WO2021003949A1 Song playing method, apparatus and system
CN109036463B Method, apparatus and storage medium for obtaining difficulty information of a song
WO2022227589A1 Audio processing method and apparatus
CN111063372B Method, apparatus, device and storage medium for determining pitch features
CN111063364B Method, apparatus, computer device and storage medium for generating audio

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018919406

Country of ref document: EP

Effective date: 20191129

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919406

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE