CN110634465A - Music matching method, mobile terminal, data processing method and music matching system - Google Patents
- Publication number
- CN110634465A (application CN201810661357.2A)
- Authority
- CN
- China
- Prior art keywords
- music
- audio
- user
- spontaneous
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/40—Rhythm
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H7/00—Instruments in which the tones are synthesised from a data store, e.g. computer organs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/101—Music Composition or musical creation; Tools or processes therefor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/315—Sound category-dependent sound synthesis processes [Gensound] for musical use; Sound category-specific synthesis-controlling parameters or control means therefor
- G10H2250/455—Gensound singing voices, i.e. generation of human voices for musical applications, vocal singing sounds or intelligible words at a desired pitch or with desired vocal effects, e.g. by phoneme synthesis
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Auxiliary Devices For Music (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
The invention discloses a music matching method, a mobile terminal, a data processing method and a music matching system. The method comprises the following steps: acquiring user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio with the music elements to obtain a musical composition. The invention solves the technical problem that, because music composition software in the prior art is too specialized, non-professionals compose music inefficiently when using it.
Description
Technical Field
The invention relates to the field of mobile terminals, in particular to a music matching method, a mobile terminal, a data processing method and a music matching system.
Background
With the development of electronic technology, music composition no longer depends on traditional musical instruments. Where a composer once captured inspiration by writing against a traditional instrumental accompaniment, computer programs can now simulate the audio of many instruments, and composition can be carried out on the basis of this audio simulation.
In the prior art, music composition is realized at the PC end through computer programs that provide simulators for individual instruments. However, the music composition software to which such programs belong is generally oriented toward professional musicians, i.e., only musicians with professional music theory knowledge can complete a composition with it, which creates an almost insurmountable threshold that keeps the general public away from music composition.
For the problem that non-professionals compose music inefficiently when using music composition software because the software in the prior art is too specialized, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a music matching method, a mobile terminal, a data processing method and a music matching system, which at least solve the technical problem that non-professionals compose music inefficiently when using music composition software because the software in the prior art is too specialized.
According to an aspect of an embodiment of the present invention, there is provided a music matching method, including: acquiring user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio with the music elements to obtain a musical composition.
Optionally, acquiring the user spontaneous audio includes: displaying music matching prompt information, where the prompt information includes any one or a combination of at least two of tempo, beat, or song style; receiving a beat selection response message returned according to the prompt information; when the response message indicates that no metronome is needed, acquiring the user spontaneous audio through an acquisition device and determining it as the audio data to be produced; and when the response message indicates that a metronome is needed, acquiring, through the acquisition device, the spontaneous audio the user generates while following the metronome's beat, and determining it as the audio data to be produced.
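The optional metronome above follows directly from the definition of BPM: one beat lasts 60/BPM seconds. A minimal sketch (the function name is a hypothetical illustration, not part of the claims):

```python
def metronome_intervals(bpm, beats):
    """Return tick times in seconds for `beats` beats at `bpm`.

    Sketch of the optional metronome the beat response message may
    enable; a real terminal would record the user humming along with
    these ticks through the acquisition device.
    """
    period = 60.0 / bpm  # one beat lasts 60/BPM seconds
    return [i * period for i in range(beats)]

# At 120 BPM a beat lasts 0.5 s, so four beats tick at 0, 0.5, 1.0, 1.5 s.
print(metronome_intervals(120, 4))
```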
Optionally, matching the music elements according to the user spontaneous audio includes: matching music elements from a preset database according to the user spontaneous audio, where the music elements include timbre and accompaniment; and, where the preset database includes a timbre library and an accompaniment library, matching a corresponding timbre from the timbre library and a corresponding accompaniment from the accompaniment library according to the user spontaneous audio.
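A toy version of the preset-database lookup just described; the database layout, keys and entries are illustrative assumptions, and `mood`/`tempo_class` stand in for features a model would extract from the user spontaneous audio:

```python
# Hypothetical preset database with a timbre library and an
# accompaniment library, as the claim describes.
PRESET_DB = {
    "timbre": {"gentle": "piano", "bright": "steel-string guitar"},
    "accompaniment": {"slow": "ballad 4/4", "fast": "rock 4/4"},
}

def match_elements(mood, tempo_class):
    """Match a (timbre, accompaniment) pair for the user's audio."""
    return (PRESET_DB["timbre"][mood],
            PRESET_DB["accompaniment"][tempo_class])

print(match_elements("gentle", "slow"))  # -> ('piano', 'ballad 4/4')
```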
Further, optionally, synthesizing the user spontaneous audio with the music elements to obtain the musical composition includes: synthesizing the timbre and the accompaniment with the user spontaneous audio to generate the musical composition.
Optionally, before matching the music elements according to the user spontaneous audio, the method further includes: storing the user spontaneous audio; and performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio.
Further, optionally, performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio includes: converting the user spontaneous audio into initial Musical Instrument Digital Interface (MIDI) audio; and determining the initial MIDI audio as the format-converted user spontaneous audio.
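The core of any audio-to-MIDI conversion is mapping detected pitches to MIDI note numbers; MIDI assigns note 69 to A4 = 440 Hz with 12 notes per octave. A sketch of that one step (pitch detection on the raw humming is out of scope here, and nothing below is the patented conversion):

```python
import math

def freq_to_midi(freq_hz):
    """Map a detected pitch in Hz to the nearest MIDI note number."""
    # 69 is A4 (440 Hz); each doubling of frequency adds 12 notes.
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi(440.0))   # A4 -> 69
print(freq_to_midi(261.63))  # C4 -> 60
```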
Optionally, generating the musical composition includes: performing format conversion on the synthesized audio file to obtain an audio file to be played; and determining the audio file to be played as the musical composition.
Further, optionally, the musical composition includes composition information, where the composition information includes the music elements added during the composition process.
Optionally, after generating the musical composition, the method further includes: when the musical composition is to be re-arranged, matching music elements according to the musical composition, and synthesizing the musical composition with the matched music elements to obtain the re-arranged composition.
Optionally, after generating the musical composition, the method further includes: displaying prompt information, where the prompt information includes download prompt information and/or sharing prompt information; receiving a response message returned according to the prompt information; and executing a corresponding operation according to the response message.
According to an aspect of the embodiments of the present invention, there is also provided another music matching method, including: displaying music matching prompt information; displaying the user spontaneous audio generated by the user according to the prompt information; displaying the music elements obtained by automatic matching according to the user spontaneous audio; and displaying and playing the musical composition, where the composition is generated by synthesizing the user spontaneous audio with the music elements; and where the prompt information includes any one or a combination of at least two of tempo, beat, or song style.
According to an aspect of the embodiments of the present invention, there is also provided another music matching method, including: acquiring audio data, where the audio data includes user spontaneous audio; sending the user spontaneous audio to a music production platform; and receiving the musical composition returned by the music production platform according to the user spontaneous audio.
According to an aspect of the embodiments of the present invention, there is also provided another music matching method, including: receiving user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio with the music elements to obtain a musical composition.
According to another aspect of the embodiments of the present invention, there is also provided a mobile terminal, including a sound collection device, a data processing device and a playing device, where the sound collection device is used to collect the user spontaneous audio generated by the user; the data processing device is used to automatically match music elements according to the user spontaneous audio and to synthesize the user spontaneous audio with the music elements to generate a musical composition; and the playing device is used to play the musical composition.
Optionally, the mobile terminal further includes a rhythm simulator, which generates music matching prompt information according to the user's settings and has the display device display it, so that the sound collection device collects the user spontaneous audio generated according to the prompt information; the prompt information includes any one or a combination of at least two of tempo, beat, or song style.
According to another aspect of the embodiments of the present invention, there is also provided another mobile terminal, including a sound collection device, a data processing device, a communication device and a playing device, where the sound collection device is used to collect the user spontaneous audio generated by the user; the data processing device is used to perform format conversion on the user spontaneous audio to obtain an audio file to be sent, and to send that file through the communication device; after the communication device receives the musical composition returned according to the sent file, the data processing device performs format conversion on the composition to obtain the composition to be played; and the playing device is used to play the composition to be played.
Optionally, this mobile terminal further includes a rhythm simulator, which generates music matching prompt information according to the user's settings and has the display device display it, so that the sound collection device collects the user spontaneous audio generated according to the prompt information; the prompt information includes any one or a combination of at least two of tempo, beat, or song style.
According to another aspect of the embodiments of the present invention, there is also provided a music matching system, including: a client, used to acquire audio data, where the audio data includes user spontaneous audio, and to send the user spontaneous audio to a background data processing end; and the background data processing end, used to receive the user spontaneous audio, match music elements according to it, and synthesize the user spontaneous audio with the music elements to obtain a musical composition.
According to still another aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, where, when the program runs, the device on which the storage medium is located is controlled to perform: acquiring user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio with the music elements to obtain a musical composition.
According to an aspect of another embodiment of the present invention, there is also provided a data processing method, including: displaying an audio input interface, where the audio input interface is used to record or receive first audio; recording or receiving the first audio, where the first audio corresponds to a user account; acquiring music elements matched with the first audio, where the music elements include timbre and accompaniment; and generating second audio according to the first audio and the music elements.
Optionally, acquiring the music elements matched with the first audio includes: acquiring the music elements matched with both the first audio and the user account.
In the embodiments of the invention, an AI-based intelligent music matching mode is adopted: user spontaneous audio is acquired; music elements are matched according to the user spontaneous audio; and the user spontaneous audio is synthesized with the music elements to obtain a musical composition. This lets non-professionals produce professional-sounding compositions, thereby improving the efficiency with which non-professionals compose music using composition software and solving the technical problem that prior-art composition software is too specialized for them.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of the hardware configuration of a computer terminal for a music matching method according to an embodiment of the present invention;
fig. 2 is a flowchart of a music matching method according to a first embodiment of the present invention;
fig. 3 is a timing diagram of a business system in a music matching method according to the first embodiment of the present invention;
fig. 4 is a flowchart of music matching in a music matching method according to the first embodiment of the present invention;
fig. 5 is a schematic diagram of a user's interaction when using a music matching function in a music matching method according to the first embodiment of the present invention;
fig. 6 is a flowchart of a music matching method according to a second embodiment of the present invention;
fig. 7 is a flowchart of a music matching method according to a third embodiment of the present invention;
fig. 8 is a flowchart of a music matching method according to a fourth embodiment of the present invention;
fig. 9 is a flowchart of a data processing method according to a fifth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical terms related to the present application are:
1) a metronome: a mechanical, electrical or electronic device capable of delivering a steady beat at various speeds.
2) BPM: short for Beats Per Minute, the unit of tempo; it is the number of beats sounded within one minute.
3) Timbre, also called tone quality, refers to the characteristic waveform that the frequency of sound produced by the vibration of different objects exhibits; here it refers to the characteristic sound of a musical instrument. A timbre library stores audio samples of various instruments, sampled at 24-bit or higher; for example, an acoustic guitar timbre library includes nylon-string, steel-string, 12-string, bass and other guitar timbres.
4) Accompaniment: stored as melodies, so the accompaniment library is also called the melody library. A melody generally refers to an organized, rhythmic sequence of tones combined from several notes, carried by a single voice part with a certain pitch, duration and volume and an inner logic. A melody is formed by organically combining basic musical elements such as mode, rhythm, meter, dynamics, and timbre or performance technique.
5) AI: Artificial Intelligence; a technical science that studies and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence.
6) MIDI: Musical Instrument Digital Interface.
7) User spontaneous audio: the audio the mobile terminal generates after an external or built-in sound collection device captures the user's humming, for example sound produced spontaneously by the user through wordless humming, humming with words, beatboxing (B-Box), and the like.
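Tying the glossary's BPM definition to the user spontaneous audio: if beat onsets have already been detected in the humming (onset detection itself is assumed here, not shown), the BPM is 60 divided by the mean inter-beat interval. A minimal sketch:

```python
def estimate_bpm(beat_times):
    """Estimate BPM from successive beat timestamps in seconds.

    Hypothetical helper: `beat_times` would come from onset detection
    on the captured humming, which is outside this sketch.
    """
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Beats detected every half second correspond to 120 BPM.
print(estimate_bpm([0.0, 0.5, 1.0, 1.5]))
```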
Example 1
There is also provided, in accordance with an embodiment of the present invention, an embodiment of a music matching method. It is noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from that shown or described here.
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a computer terminal as an example, fig. 1 is a block diagram of the hardware configuration of a computer terminal for a music matching method according to an embodiment of the present invention. As shown in fig. 1, the computer terminal 10 may include one or more processors 102 (only one is shown; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission module 106 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 1 is only illustrative and does not limit the structure of the electronic device. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store the software programs and modules of application software, such as the program instructions/modules corresponding to the music matching method in the embodiments of the present invention; the processor 102 executes various functional applications and data processing, i.e., implements the music matching method of the application program, by running the software programs and modules stored in the memory 104. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network Interface Controller (NIC), which can be connected to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 may be a Radio Frequency (RF) module, which communicates with the internet wirelessly.
In the above operating environment, the present application provides a music matching method as shown in fig. 2, which is a flowchart of a music matching method according to the first embodiment of the present invention.
Step S202, acquiring user spontaneous audio;
In step S202, the manner of acquiring the user spontaneous audio is not particularly limited by the music matching method provided by the present application; for example, the audio may be collected by a sound collection device built into or external to the terminal.
Step S204, matching music elements according to the spontaneous audio of the user;
in the above step S204, based on the user spontaneous audio obtained in step S202, in the process of matching the music elements of the pair, the music matching method provided in the present application includes the following steps:
the first method is as follows: matching and synthesizing by an application program installed on the mobile terminal;
specifically, if the mobile terminal itself stores enough music materials in advance and the computing capability of the mobile terminal itself can meet the requirement of music composition, the mobile terminal where the application program is located matches the music matching elements after the user self-sounding audio is acquired in step S202;
the second method comprises the following steps: and sending the spontaneous audio of the user to a music production platform for matching and synthesizing.
Specifically, if the user experience is improved and the music matching result is fed back quickly, the user spontaneous audio is uploaded through the mobile terminal, or the user spontaneous audio is uploaded to the music composition making platform through the application program installed on the mobile terminal, and the music composition making platform matches the music matching elements according to the user spontaneous audio.
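The choice between the two modes can be pictured as a simple dispatcher: match on the device when it stores enough material and has enough compute, otherwise upload to the production platform. The function name and threshold below are invented for illustration only:

```python
def choose_mode(n_local_materials, compute_ok, min_materials=1000):
    """Pick mode 1 (local) or mode 2 (platform) for music matching.

    Hypothetical decision rule: `n_local_materials` is how many music
    materials the terminal stores, `compute_ok` whether its computing
    capability meets the requirements of composition.
    """
    if n_local_materials >= min_materials and compute_ok:
        return "local"     # mode 1: app on the mobile terminal
    return "platform"      # mode 2: upload to the music production platform

print(choose_mode(5000, True))   # -> local
print(choose_mode(200, True))    # -> platform
```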
Step S206, synthesizing the user spontaneous audio with the music elements to obtain a musical composition.
In step S206, based on the music elements obtained in step S204 and the user spontaneous audio obtained in step S202, the two are synthesized to obtain the musical composition.
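Steps S202 to S206 can be sketched end to end as a three-stage pipeline; every helper below is a hypothetical stand-in for the corresponding step, not the patented implementation:

```python
def acquire_user_audio():
    # S202 stand-in: would record humming via the sound collection device.
    return [0.0, 0.2, 0.5, 0.3]

def match_score_elements(audio):
    # S204 stand-in: would consult the preset database / AI model.
    return {"timbre": "piano", "accompaniment": "ballad 4/4"}

def synthesize(audio, elements):
    # S206 stand-in: would mix the humming with the matched elements.
    return {"melody": audio, **elements}

audio = acquire_user_audio()
music = synthesize(audio, match_score_elements(audio))
print(music["timbre"])  # -> piano
```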
In summary, with reference to steps S202 to S206, the music matching method provided by the present application may be applied to a mobile terminal, carried by an application program (APP) installed on it. Unlike traditional composition software in the prior art, the method lowers the degree of specialization composition requires, so that even users who have never studied music formally can capture inspiration through the mobile terminal and realize their dream of composing music.
Specifically, the music matching method provided by the present application acquires the melody hummed by the user, calls music elements from a preset database through the APP, synthesizes them with the hummed melody, and finally generates a musical composition.
It should be noted that, when calling the music elements from the preset database, intelligent matching can be realized through an AI model, which matches music elements whose timbre, pitch and beat fit the melody hummed by the user. For example, if the hummed melody is slow in tempo and gentle in timbre, the piano in the timbre library of the preset database is selected as the accompaniment timbre; an accompaniment whose syllables match the hummed melody is then chosen from the accompaniment library and automatically tuned to it; and finally the hummed melody, the piano accompaniment timbre and the automatically tuned audio are combined to generate the composition.
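The piano example above can be caricatured as a single rule; the feature names and thresholds below are assumptions for illustration, not the AI model the application describes:

```python
def pick_accompaniment_timbre(bpm, mean_loudness):
    """Rule-of-thumb timbre choice from two hummed-melody features.

    Hypothetical rule: a slow (< 90 BPM), gentle (quiet) melody gets
    a piano; anything else gets a brighter default timbre.
    """
    if bpm < 90 and mean_loudness < 0.4:
        return "piano"
    return "electric guitar"

print(pick_accompaniment_timbre(70, 0.2))    # slow and gentle
print(pick_accompaniment_timbre(140, 0.8))   # fast and loud
```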
In realizing intelligent music matching, the method provided by the present application can both satisfy non-professionals' need for transcription and help professional musicians capture an inspired melody. The implementation can be divided into a non-professional mode and a professional mode: in the non-professional mode, the user can roughly adjust each detail of the composition generated from the hummed melody by following the APP's instructions; in the professional mode, in addition to the rough adjustment available in the non-professional mode, the user can finely adjust the composition to a professional musician's requirements.
In addition, the music matching method provided by the present application is described below with reference to a smart phone as an example of the mobile terminal to which it applies, and the mobile terminal is not specifically limited thereto.
Besides being presented as an independent APP, the music matching method provided by the present application can also be presented as a recording function within a song-listening APP; for example, when the user wishes to adapt a certain song, the song can be re-processed by humming through the music matching function in the APP, where the processing procedure can be implemented by the technical solutions recorded in steps S202 to S206.
In the embodiment of the invention, an AI intelligent music matching mode is adopted: the user's spontaneous audio is acquired; soundtrack elements are matched according to the user's spontaneous audio; and the spontaneous audio and the soundtrack elements are synthesized to obtain a musical composition. This enables professional-grade composing for non-professionals, achieves the technical effect of improving the composing efficiency of non-professionals when using composition software, and thereby solves the technical problem in the prior art that overly specialized composition software makes composing inefficient for non-professionals.
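The acquire, match, and synthesize flow summarized above can be sketched in a few lines of code. The tempo and pitch thresholds, library contents, and function names below are illustrative assumptions for demonstration, not the actual AI model of the present application:

```python
# Illustrative sketch of steps S202-S206: acquire the user's spontaneous
# audio, match soundtrack elements, and synthesize a composition.
# All matching rules and library contents here are assumptions.
from dataclasses import dataclass

@dataclass
class HummedAudio:
    tempo_bpm: float       # estimated humming tempo
    mean_pitch_hz: float   # average fundamental frequency

# Hypothetical preset database: a timbre library and an accompaniment library.
TIMBRE_LIBRARY = {"slow_gentle": "piano", "fast_bright": "electric_guitar"}
ACCOMPANIMENT_LIBRARY = {"slow_gentle": "arpeggio", "fast_bright": "power_chords"}

def match_soundtrack_elements(audio: HummedAudio) -> dict:
    """Step S204: choose timbre and accompaniment by simple tempo/pitch rules."""
    slow_and_gentle = audio.tempo_bpm < 100 and audio.mean_pitch_hz < 330
    key = "slow_gentle" if slow_and_gentle else "fast_bright"
    return {"timbre": TIMBRE_LIBRARY[key],
            "accompaniment": ACCOMPANIMENT_LIBRARY[key]}

def synthesize(audio: HummedAudio, elements: dict) -> str:
    """Step S206: stand-in for track synthesis; returns a description string."""
    return f"{elements['timbre']} + {elements['accompaniment']} over hummed melody"

hummed = HummedAudio(tempo_bpm=72, mean_pitch_hz=220.0)   # slow, gentle humming
piece = synthesize(hummed, match_soundtrack_elements(hummed))
```

Under these assumed rules, a slow and gentle hummed melody matches the piano timbre, mirroring the piano example given in the text.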
Optionally, the step S202 of acquiring the user's spontaneous audio includes:
step S2021, displaying the music matching prompt information;
specifically, in the music matching method provided by the present application, when the user starts humming, the displayed music matching prompt information may include: a preset tempo, so that the user hums following that tempo or at the speed set by a metronome; or a selected song style, for example, in the case that the song style is jazz, ornamentation and single/double-syllable accompaniment are added during humming in the jazz manner; or, in the case that the song style is rock, the first beat is emphasized during humming, or the humming emotion follows the rock style;
in addition, the humming can be performed according to a mode of 'beat + rhythm + song style', that is, if the song style selects jazz, the rhythm selection is slow (slow rhythm includes slow, medium and slow, and extra slow; and similarly, fast rhythm classification), and the beat selects a weak and strong mode, then the user performs humming according to the music matching prompting information of 'beat + rhythm + song style', it should be noted that in the music matching prompting information provided by the application, both the rhythm and the beat can be set, and the song style can be selected according to the user requirements.
Step S2022, receiving a beat selection response message returned according to the music matching prompt information;
specifically, the beat selection response message is received. A first-time user or a non-professional music lover who has not yet mastered the beat need not set a beat for the humming process after the music matching prompt information is displayed. If no metronome is needed, step S2023 is executed; if a metronome is needed, step S2024 is executed.
Step S2023, in the case that the beat selection response message indicates that no metronome is needed, acquiring the user's spontaneous audio through an acquisition device, and determining the user's spontaneous audio as the audio data to be produced;
specifically, when no metronome is needed, the hummed melody is collected directly by the acquisition device and determined as the audio data to be produced, where the acquisition device may include a microphone externally connected to the mobile terminal or a microphone built into the mobile terminal.
Step S2024, in the case that the beat selection response message indicates that a metronome is needed, acquiring, through the acquisition device, the spontaneous audio generated by the user following the beat of the metronome, and determining the user's spontaneous audio as the audio data to be produced.
Specifically, when a metronome is needed and the mobile terminal records the user's humming through its loudspeaker path, the metronome emits audible beat sounds during recording. To prevent these beat sounds from interfering with the composition, the music matching method provided by the present application isolates them by playing them through a connected earphone: the user still records the hummed melody through the mobile terminal's microphone, while only the beat sounds played in the earphone remind the user of the humming tempo;
it should be noted that, although the metronome sound recorded in this way could be removed during later composition processing, doing so in an application that must give real-time feedback consumes too much time and prevents timely feedback. The music matching method provided by the present application therefore offers a preferred example: isolating the metronome sound from the user's recorded humming by connecting an earphone to the mobile terminal, which reduces the data processing pressure during composition and improves composition efficiency.
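The earphone-isolation idea of step S2024 and the preferred example above can be illustrated as follows. The click-scheduling function and routing labels are hypothetical; the sketch only shows why a headphone-only metronome keeps the microphone track clean:

```python
# Sketch of the metronome isolation described above: click sounds are
# scheduled only on the earphone output, so the microphone records the
# hummed melody alone. Device names and routing labels are hypothetical.
def metronome_click_times(bpm: int, duration_s: float) -> list:
    """Timestamps (in seconds) at which a click is played in the earphone."""
    interval = 60.0 / bpm
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 3))
        t += interval
    return times

def route_audio(metronome_on: bool) -> dict:
    """Route clicks to the earphone, never the loudspeaker, so they are
    not picked up again by the recording microphone."""
    return {"mic_input": "voice_only",
            "earphone_output": "click" if metronome_on else "silent"}

clicks = metronome_click_times(120, 2.0)   # 120 BPM over a 2-second take
routing = route_audio(metronome_on=True)
```

Because the click never reaches the loudspeaker, no beat-sound removal is needed during composition, which is the efficiency gain the preferred example claims.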
Optionally, the step S204 of matching the soundtrack elements according to the user's spontaneous audio includes:
step S2041, matching soundtrack elements from a preset database according to the user's spontaneous audio, where the soundtrack elements include timbre and accompaniment; in the case that the preset database includes a timbre library and an accompaniment library, a corresponding timbre is obtained by matching from the timbre library according to the user's spontaneous audio, and a corresponding accompaniment is obtained by matching from the accompaniment library.
Further, optionally, in step S206, synthesizing the user spontaneous audio and the soundtrack element to obtain the music piece includes:
in step S2061, the timbre and the accompaniment are synthesized with the user's spontaneous audio to generate a musical composition.
Optionally, before the matching of the soundtrack elements according to the user's spontaneous audio in step S204, the music matching method provided by the present application further includes:
step S201, storing the user spontaneous audio;
step S203, performing format conversion on the user's spontaneous audio to obtain the format-converted user spontaneous audio.
Further, optionally, performing format conversion on the user spontaneous audio in step S203 to obtain the format-converted user spontaneous audio includes:
step S2031, performing format conversion on the user's spontaneous audio to obtain Musical Instrument Digital Interface (MIDI) initial audio;
step S2032, determining the MIDI initial audio as the format-converted user spontaneous audio.
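The core of the MIDI conversion in steps S2031 and S2032 is mapping the estimated fundamental frequencies of the hummed audio onto MIDI note numbers. The sketch below shows only that standard mapping; a real transcription would also need pitch tracking, onset detection, and quantization, which are omitted here:

```python
# Hedged sketch of audio-to-MIDI format conversion: map estimated pitch
# frequencies (Hz) of the hummed melody to MIDI note numbers.
import math

def hz_to_midi(freq_hz: float) -> int:
    """Standard MIDI pitch mapping: A4 = 440 Hz = note 69, 12 notes per octave."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A hummed melody as (estimated frequency in Hz, duration in beats) pairs.
hummed = [(261.63, 1.0), (293.66, 1.0), (329.63, 2.0)]  # roughly C4, D4, E4
midi_initial_audio = [(hz_to_midi(f), d) for f, d in hummed]
```

Rounding to the nearest note number is what makes slightly off-pitch humming snap to discrete notes the accompaniment libraries can work with.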
Alternatively, the generating of the music in step S2061 includes:
step S20611, carrying out format conversion on the synthesized audio file to obtain an audio file to be played;
in step S20612, the audio file to be played is determined as a music piece.
Further, optionally, the musical composition includes composing information, where the composing information includes the soundtrack elements added during the composing process.
Optionally, after the music is generated in step S2061, the method for dubbing music provided by the present application further includes:
step S2062, if the composition is re-performed, matching soundtrack elements according to the musical composition, and synthesizing the musical composition with the soundtrack elements to obtain a re-composed musical composition.
Specifically, if the composition is re-performed, the above steps S202 to S206 are re-performed.
Optionally, after the music is generated in step S2061, the method for dubbing music provided by the present application further includes:
step S2063, displaying a prompt message, the prompt message including: downloading prompt information and/or sharing prompt information;
step S2064, receiving a response message returned according to the prompt message;
step S2065, executing the corresponding operation according to the response message.
Specifically, with reference to steps S2063 to S2065, after the musical composition is generated, the user may choose, according to the prompt information displayed on the current display interface, to download it or to upload and share it to each social platform, that is, to download and/or share.
In summary, the music matching method provided by the present application is shown in fig. 3, where fig. 3 is a timing chart of the business system in the music matching method according to the first embodiment of the present invention.
Here, in a preferred example, the implementation of the music matching method provided by the present application involves a client and a network side; the implementation process is as shown in fig. 3:
On the client side, the user hums following the rhythm of the metronome in the client, an audio acquisition device records the user's humming, the recorded humming is used to generate the user's spontaneous audio, and the user's spontaneous audio is sent to the network side;
On the network side, the user's spontaneous audio sent by the client is received and processed through a database and an algorithm core: the user's spontaneous audio is copied into the database; audio file conversion, namely MIDI conversion, is then performed on it to obtain MIDI-format initial audio; the MIDI-format initial audio is input into an AI soundtrack model, which calls the timbre library and the accompaniment library in the database to obtain a timbre and an accompaniment matching the MIDI-format initial audio; after the timbre and accompaniment are obtained, track synthesis is performed with the MIDI-format initial audio; finally, audio format conversion is performed on the synthesized soundtrack file to obtain audio in a universal playback format, such as MP3 or WMA. In the process of feeding the composed audio back to the client, data statistics are performed on it; the content covered by the statistics may include the audio generation time, the number of edits, and the soundtrack elements added. The audio is finally fed back to the client, which presents it to the user, and whether to re-match the soundtrack is determined through a composing instruction received by the client; alternatively, by receiving the user's response message to the composed music, the music is downloaded and/or shared to each social platform.
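The network-side flow above can be summarized as a pipeline sketch. Every stage body below is a placeholder (the real MIDI conversion, AI soundtrack model, and track synthesis are not specified at code level in the present application); only the ordering of the stages and the statistics fields follow the text:

```python
# Pipeline sketch of the network side: archive -> MIDI conversion ->
# AI element matching -> track synthesis -> playback-format conversion
# -> data statistics. All stage bodies are stand-ins.
import time

def process_user_audio(raw_audio: bytes) -> dict:
    database = {"copies": [raw_audio]}                 # copy into the database
    midi = f"midi({len(raw_audio)} bytes)"             # stand-in MIDI conversion
    elements = {"timbre": "piano", "accompaniment": "chords"}  # AI model stub
    synthesized = f"{midi} + {elements['timbre']} + {elements['accompaniment']}"
    playable = synthesized + ".mp3"                    # universal playback format
    stats = {"generated_at": time.time(),              # audio generation time
             "edit_count": 0,                          # number of edits so far
             "elements_added": list(elements.values())}
    return {"audio": playable, "stats": stats,
            "archived_copies": len(database["copies"])}

result = process_user_audio(b"hummed-melody-bytes")
```

Keeping the statistics alongside the playable audio is what allows the client to later display the composing parameters and the elements that were added.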
The AI composition is characterized in that, in combination with big-data analysis, the soundtrack elements in the timbre library and the accompaniment library of the database are called according to the sounding characteristics of the user's spontaneous audio; a response is generated promptly in the simplest and optimal manner and fed back to the user, so that a musical composition is generated from the user's spontaneous audio, and even a user who has never learned professional music knowledge can experience the fun of composing;
Meanwhile, the client can display to the user the composing parameters used during composition. For professional musicians, a preliminary composition sample can be formed from a melody hummed in a burst of inspiration, so that every flash of composing inspiration is effectively recorded; the professional musician can then further edit the composition sample, that is, adjust the timbre, rhythm, beat, accompaniment, music style, and so on, and can even fuse other audio files with the composition sample through audio mixing to finally form a mature musical composition.
Specifically, the music matching process is shown in fig. 4, where fig. 4 is a flow chart of music matching in the music matching method according to the first embodiment of the present invention. That is, with reference to fig. 3, in implementing the music matching method provided by the present application, the composing of the user's spontaneous audio is realized through four steps, as shown in fig. 4:
A. audio acquisition: the user collects the humming with a mobile terminal to obtain a hummed audio parent (namely the user's spontaneous audio provided by the present application);
B. audio recognition and analysis; C. audio synthesis; and D. composition generation and secondary processing: soundtrack elements are matched to the hummed audio parent by calling a soundtrack matching model, and the soundtrack elements and the hummed audio parent are synthesized to generate a musical composition.
Fig. 5 shows, taking a smart phone as the example mobile terminal, the process of presenting fig. 4, where fig. 5 is a schematic view of the interaction when a user uses the music matching function in the music matching method according to the first embodiment of the present invention. As shown in fig. 5, from left to right: a start-recording icon on the mobile phone screen is clicked to trigger an audio recording interface, in which the rhythm/beat, a voiceprint image, and a recording control icon are displayed; the recording control icon controls the start and end of recording, and the user can record following the beat prompt tones; after recording is finished, a soundtrack synthesis interface is triggered, and the musical composition is finally obtained after composition synthesis.
Specifically, common composing methods include melody repetition, transposition, blurring, interval or rhythm companding, the vertical and horizontal arrangement and combination of pitches in harmony and counterpoint, timbre combination in orchestration, and forms such as parallel writing, counterpoint, and call-and-response in musical style, all of which can be realized by an algorithm model; this also indicates that music itself is computable. The music matching method provided by the present application mainly enables the user of a mobile phone device to record musical inspiration and generate a soundtrack anytime and anywhere.
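Several of the composing operations just listed are indeed directly computable over a melody represented as MIDI notes. The sketch below implements three of them (repetition, transposition, rhythm companding) in the most literal way, as an illustration of the computability claim rather than as the application's actual algorithm model:

```python
# Three of the composing operations above, applied to a melody given as
# (MIDI note, duration in beats) pairs. Purely illustrative.
def transpose(melody: list, semitones: int) -> list:
    """Transposition: shift every note by a fixed number of semitones."""
    return [(note + semitones, dur) for note, dur in melody]

def repeat(melody: list, times: int) -> list:
    """Melody repetition: restate the phrase back to back."""
    return melody * times

def compand_rhythm(melody: list, factor: float) -> list:
    """Rhythm companding: stretch (factor > 1) or compress (factor < 1)."""
    return [(note, dur * factor) for note, dur in melody]

phrase = [(60, 1.0), (62, 1.0), (64, 2.0)]   # C4, D4, E4
variation = compand_rhythm(transpose(phrase, 5), 0.5)
```

An algorithm model can chain such operations to produce variations of the hummed phrase, which is the sense in which the composing methods above are computable.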
The front end is typically the user's client device (physically a mobile phone with a microphone, a tablet, or the like). The user opens the device and can begin recording; during recording, the user can follow the default simulated electronic metronome, or manually slide to adjust it, to control the rhythm of the humming. If no metronome is needed, it is set to the default value 0 and stays off during recording. The user is advised to use an earphone to follow the metronome, so that the beat sounds are not sampled into the recording. After the audio recording is finished, the user triggers music matching on the current audio; the matching process is automatic, and the soundtrack synthesis is completed after a wait of only a few seconds. The final soundtrack result is automatically returned to the user for listening in an audio music format, and the corresponding musical timbre, beat, accompaniment chords, and so on are returned to the user for reference.
With the music matching method, the user can complete the soundtrack for a hummed melody and generate a corresponding musical work through simple operations. The generation process is simple and fast, and requires essentially zero musical literacy such as music theory from the user, making it a foolproof product that needs no instruction. The matching is accurate, the quality and interest value of the generated soundtrack are high, and more people can come into zero-distance contact with music.
Through the collection and soundtrack synthesis of humming at the mobile phone end, an ordinary user needs no complex and tedious professional operations; the artificial intelligence approach lowers the threshold of contact with music, so that even a user without a professional music background can conveniently and easily play with music through humming.
Example 2
According to an aspect of the embodiment of the present invention, there is provided another method for matching music, and fig. 6 is a flowchart of the method for matching music according to the second embodiment of the present invention, as shown in fig. 6, including:
step S600, displaying the score prompt information;
step S601, displaying user spontaneous audio generated by a user according to the music matching prompt information;
step S602, displaying the soundtrack elements obtained by automatically matching according to the user's spontaneous audio;
and step S603, displaying and playing music, wherein the music is generated by synthesizing the user spontaneous audio and the music elements.
Specifically, with reference to steps S600 to S603, unlike embodiment 1, this embodiment of the present application is described taking application to a mobile terminal, preferably a smart phone, as an example. In implementing the music matching method: first, the smart phone displays the music matching prompt information, and if the user does not need it, humming is recorded directly; second, if the prompt information is needed, the melody hummed by the user according to it is recorded, and the user's spontaneous audio is displayed; third, corresponding soundtrack elements are matched to the user's spontaneous audio through intelligent matching in the smart phone's background; and finally, the musical composition, generated by synthesizing the user's spontaneous audio and the soundtrack elements, is displayed.
In the embodiment of the invention, an AI intelligent music matching mode is adopted: the music matching prompt information is displayed; the user's spontaneous audio generated according to the prompt information is displayed; the soundtrack elements obtained by automatic matching according to the user's spontaneous audio are displayed; and the musical composition, generated by synthesizing the user's spontaneous audio and the soundtrack elements, is displayed and played. This enables professional-grade composing for non-professionals, achieves the technical effect of improving the composing efficiency of non-professionals when using composition software, and solves the technical problem in the prior art that overly specialized composition software makes composing inefficient for non-professionals.
Example 3
According to an aspect of the embodiment of the present invention, there is further provided another method for matching music, and fig. 7 is a flowchart of the method for matching music according to a third embodiment of the present invention, as shown in fig. 7, on a client side, including:
step S700, acquiring audio data, wherein the audio data includes: the user's spontaneous audio;
step S701, sending the spontaneous audio of the user to a music production platform;
step S702, receiving the music returned by the music production platform according to the spontaneous audio of the user.
Specifically, with reference to steps S700 to S702, on the client side, in order to improve the user experience by feeding back in real time the music composed from the user's hummed melody, the user's spontaneous audio is sent to the music production platform after it is acquired, and the composed music is returned after the platform produces it. This improves the composing and response efficiency, enhances the user experience, and also reduces the data processing pressure on the mobile terminal side.
In the embodiment of the invention, an AI intelligent music matching mode is adopted: audio data including the user's spontaneous audio is acquired; the user's spontaneous audio is sent to the music production platform; and the music returned by the music production platform according to the user's spontaneous audio is received. This enables professional-grade composing for non-professionals, achieves the technical effect of improving the composing efficiency of non-professionals when using composition software, and solves the technical problem in the prior art that overly specialized composition software makes composing inefficient for non-professionals.
Example 4
According to an aspect of the embodiment of the present invention, there is further provided a method for dubbing music, and fig. 8 is a flowchart of the method for dubbing music according to a fourth embodiment of the present invention, as shown in fig. 8, on a network side, including:
step S800, receiving user spontaneous audio;
step S801, matching soundtrack elements according to the user's spontaneous audio;
and S802, synthesizing the spontaneous audio of the user and the music elements to obtain music.
Specifically, with reference to steps S800 to S802 and corresponding to the client side in embodiment 3, on the network side, namely the music production platform, after the user's spontaneous audio is received, the corresponding timbre and accompaniment are respectively extracted according to the timbre library and the accompaniment library in the preset database, and the extracted timbre and accompaniment are synthesized with the user's spontaneous audio to obtain the musical composition.
In the embodiment of the invention, an AI intelligent music matching mode is adopted: the user's spontaneous audio is received; soundtrack elements are matched according to the user's spontaneous audio; and the spontaneous audio and the soundtrack elements are synthesized to obtain a musical composition. This enables professional-grade composing for non-professionals, achieves the technical effect of improving the composing efficiency of non-professionals when using composition software, and solves the technical problem in the prior art that overly specialized composition software makes composing inefficient for non-professionals.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the music matching method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware alone, though the former is in many cases the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), including instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
Example 5
According to an aspect of another embodiment of the present invention, there is further provided a data processing method, and fig. 9 is a flowchart of a data processing method according to a fifth embodiment of the present invention, as shown in fig. 9, including:
step S902, displaying an audio input interface, wherein the audio input interface is used for recording or receiving a first audio;
step S904, recording or receiving a first audio, wherein the first audio corresponds to a user account;
step S906, acquiring a score element matched with the first audio, wherein the score element includes: timbre and accompaniment;
wherein obtaining the soundtrack elements that match the first audio comprises: and acquiring the score elements matched with the first audio and the user account.
Step S908 is to generate a second audio according to the first audio and the score element.
Specifically, with reference to steps S902 to S908, the data processing method provided by the present application, when performing AI intelligent music matching, is presented in the form of an independent APP, for example as applied to a mobile terminal that is a smart phone; the specific steps are as follows:
1. The user clicks the composing APP icon in the display interface of the smart phone, and an audio input interface is displayed after the APP is opened. The user can input audio by humming in real time or use a pre-recorded audio file as the input audio; after an input mode is selected on the audio input interface, the real-time recording or the pre-recorded audio serves as the first audio for input;
2. In the case that the data processing capability of the smart phone meets the music matching requirement and a database of soundtrack elements is stored locally on the smart phone, the timbre and accompaniment called from the locally stored database, the habits of historical composing behavior in the user account (namely the soundtrack elements historically called during composing), and the first audio are synthesized to generate the second audio, where the second audio is a playable composition sample after the soundtrack is added.
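Step 2 combines the local element database with the composing habits stored in the user account. One minimal way to sketch that combination is to prefer the element the account has historically chosen most often; this scoring rule is an assumption for illustration, not the application's actual matching logic:

```python
# Sketch of matching that factors in the user account's historical
# composing behavior (soundtrack elements called in past compositions).
from collections import Counter

def match_with_history(candidates: list, history: list) -> str:
    """Prefer the candidate element used most often in the account history;
    fall back to the first candidate when there is no usable history."""
    counts = Counter(h for h in history if h in candidates)
    if counts:
        return counts.most_common(1)[0][0]
    return candidates[0]

# Candidate timbres from the local library, plus a hypothetical history.
choice = match_with_history(["piano", "guitar", "strings"],
                            ["guitar", "piano", "guitar"])
```

A new account with no history simply falls back to the database's default match, so personalization degrades gracefully.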
In the embodiment of the invention, an AI intelligent music matching mode is adopted: an audio input interface for recording or receiving a first audio is displayed; the first audio, which corresponds to a user account, is recorded or received; soundtrack elements matching the first audio, including timbre and accompaniment, are acquired; and a second audio is generated according to the first audio and the soundtrack elements. This enables professional-grade composing for non-professionals, achieves the technical effect of improving the composing efficiency of non-professionals when using composition software, and solves the technical problem in the prior art that overly specialized composition software makes composing inefficient for non-professionals.
Example 6
According to an embodiment of the present invention, there is also provided a mobile terminal for implementing the method for dubbing music, the mobile terminal including: sound collection system, data processing device and play device.
The sound acquisition device is used for acquiring the user's spontaneous audio generated by the user; the data processing device is used for automatically matching soundtrack elements according to the user's spontaneous audio and synthesizing the user's spontaneous audio with the soundtrack elements to generate a musical composition; and the playing device is used for playing the musical composition.
Optionally, the mobile terminal provided in the present application further includes: a rhythm simulator for generating music matching prompt information according to the user's settings, and a display device for displaying the prompt information, so that the sound acquisition device collects the user's spontaneous audio generated according to the music matching prompt information.
Example 7
According to another aspect of the embodiments of the present invention, there is also provided another mobile terminal, including: the system comprises a sound acquisition device, a data processing device, a communication device and a playing device, wherein the sound acquisition device is used for acquiring user spontaneous audio generated by a user; the data processing device is used for carrying out format conversion on the spontaneous audio of the user to obtain an audio file to be sent and sending the audio file to be sent through the communication device; after the communication device receives the music returned according to the audio file to be sent, carrying out format conversion on the music to obtain the music to be played; and the playing device is used for playing the music to be played.
Optionally, the mobile terminal further includes: a rhythm simulator for generating music matching prompt information according to the user's settings, and a display device for displaying the prompt information, so that the sound acquisition device collects the user's spontaneous audio generated according to the music matching prompt information.
Example 8
According to another aspect of the embodiments of the present invention, there is also provided a system for dubbing music, including: the client is used for acquiring audio data, wherein the audio data comprises: a user self-sounding audio; sending the user spontaneous audio to a background data processing end; the background data processing end is used for receiving the user spontaneous audio; matching music elements according to the spontaneous sound frequency of the user; and synthesizing the spontaneous audio of the user and the music elements to obtain music.
Example 9
According to still another aspect of the embodiments of the present invention, there is provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to perform: acquiring a user spontaneous sound frequency; matching music elements according to the spontaneous sound frequency of the user; and synthesizing the spontaneous audio of the user and the music elements to obtain music.
Example 10
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program codes executed by the method for dubbing music provided in the first embodiment.
Optionally, in this embodiment, the storage medium may be located in any computer terminal in a computer terminal group in a computer network, or in any mobile terminal in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: acquiring user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio and the music elements to obtain music.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the acquiring user spontaneous audio includes: displaying music matching prompt information; receiving a beat selection response message returned according to the music matching prompt information; when the beat selection response message indicates that no metronome is required, acquiring the user spontaneous audio through an acquisition device, and determining the user spontaneous audio as the audio data to be produced; and when the beat selection response message indicates that a metronome is required, acquiring, through the acquisition device, the spontaneous audio generated by the user according to the beat of the metronome, and determining the user spontaneous audio as the audio data to be produced.
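The branching acquisition flow above (record freely, or record against a metronome beat) can be sketched roughly as follows; the `record_*` stand-ins and the response-message fields are assumptions, since the patent does not specify a concrete API.

```python
# Assumed sketch of the acquisition flow: read the beat selection response
# returned for the music matching prompt, then record with or without a
# metronome. The record_* functions stand in for a real capture API.

def record_freely():
    return {"source": "mic", "metronome": None}

def record_with_metronome(bpm):
    return {"source": "mic", "metronome": bpm}

def acquire_user_audio(beat_response):
    """beat_response mirrors the beat selection response message."""
    if beat_response.get("use_metronome"):
        # Default to 120 BPM if the response carries no explicit tempo;
        # the default value is an assumption of this sketch.
        audio = record_with_metronome(beat_response.get("bpm", 120))
    else:
        audio = record_freely()
    # The captured recording becomes the audio data to be produced.
    return audio
```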
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the matching music elements according to the user spontaneous audio includes: matching the music elements from a preset database according to the user spontaneous audio, wherein the music elements include: timbre and accompaniment; and, when the preset database includes a tone library and an accompaniment library, obtaining a corresponding timbre from the tone library and a corresponding accompaniment from the accompaniment library by matching according to the user spontaneous audio.
Further, optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the synthesizing the user spontaneous audio and the music elements to obtain music includes: synthesizing the timbre and the accompaniment with the user spontaneous audio to generate the music.
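As an assumed illustration of synthesizing the timbre and accompaniment with the user audio, the sketch below renders the user track at a timbre-dependent gain and mixes it sample-wise with the accompaniment; real synthesis would of course operate on audio buffers rather than plain lists, and the gain parameter is an assumption of the sketch.

```python
# Assumed illustration of synthesis: scale the user track by a
# timbre-dependent gain and mix it sample-wise with the accompaniment.
# Tracks are plain lists of float samples for the sake of the sketch.

def mix(user_track, accompaniment_track, timbre_gain=0.8):
    # Pad the shorter track with silence so both have equal length.
    n = max(len(user_track), len(accompaniment_track))
    user = user_track + [0.0] * (n - len(user_track))
    acc = accompaniment_track + [0.0] * (n - len(accompaniment_track))
    # Sample-wise sum of the gain-scaled user audio and the accompaniment.
    return [timbre_gain * u + a for u, a in zip(user, acc)]
```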
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: before the matching of music elements according to the user spontaneous audio, storing the user spontaneous audio; and performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio.
Further, optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio includes: performing format conversion on the user spontaneous audio to obtain an initial Musical Instrument Digital Interface (MIDI) audio; and determining the initial MIDI audio as the format-converted user spontaneous audio.
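The patent does not detail how the MIDI conversion is performed; one common approach, assumed here purely for illustration, is to detect the fundamental frequency of each audio frame and map it to a MIDI note number with the standard formula note = 69 + 12·log2(f/440).

```python
# Assumed sketch of audio-to-MIDI conversion: map detected fundamental
# frequencies (Hz) to MIDI note numbers. Pitch detection itself is out of
# scope here; the input is taken to be an already-extracted pitch track.
import math

def hz_to_midi(freq_hz):
    # MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12).
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def audio_to_midi(pitch_track_hz):
    """pitch_track_hz: per-frame detected pitches; 0 marks unvoiced frames."""
    return [hz_to_midi(f) for f in pitch_track_hz if f > 0]
```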
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the generating of the music includes: performing format conversion on the synthesized audio file to obtain an audio file to be played; and determining the audio file to be played as the music.
Further, optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: the music includes: composition information, wherein the composition information includes: the music elements added in the composition process.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: after the music is generated, if the music is to be re-composed, matching music elements according to the music, and synthesizing the music and the music elements to obtain the re-composed music.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: after the music is generated, displaying prompt information, the prompt information including: download prompt information and/or sharing prompt information; receiving a response message returned according to the prompt information; and executing a corresponding operation according to the response message.
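The prompt-and-response handling above could be dispatched as in the following sketch; the message fields (`choice`, `music`) and the returned status strings are illustrative assumptions, not defined by the patent.

```python
# Illustrative dispatch for the download/share prompt: map the user's
# response message to the corresponding operation. Field names and the
# returned status strings are assumptions made for this sketch.

def handle_response(response):
    actions = {
        "download": lambda music: f"saved:{music}",
        "share": lambda music: f"shared:{music}",
    }
    action = actions.get(response.get("choice"))
    # Execute the corresponding operation, or ignore unknown choices.
    return action(response["music"]) if action else "ignored"
```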
The above serial numbers of the embodiments of the present invention are merely for description and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (21)
1. A music matching method, comprising:
acquiring user spontaneous audio;
matching music elements according to the user spontaneous audio;
and synthesizing the user spontaneous audio and the music elements to obtain music.
2. The music matching method according to claim 1, wherein the acquiring user spontaneous audio comprises:
displaying music matching prompt information, wherein the music matching prompt information comprises: any one or a combination of at least two of tempo, beat, or song style;
receiving a beat selection response message returned according to the music matching prompt information;
when the beat selection response message indicates that no metronome is required, acquiring user spontaneous audio through an acquisition device, and determining the user spontaneous audio as audio data to be produced;
and when the beat selection response message indicates that a metronome is required, acquiring, through the acquisition device, spontaneous audio generated by the user according to the beat of the metronome, and determining the user spontaneous audio as the audio data to be produced.
3. The music matching method according to claim 1, wherein the matching music elements according to the user spontaneous audio comprises:
matching the music elements from a preset database according to the user spontaneous audio, wherein the music elements comprise: timbre and accompaniment;
and, when the preset database comprises a tone library and an accompaniment library, obtaining a corresponding timbre from the tone library and a corresponding accompaniment from the accompaniment library according to the user spontaneous audio.
4. The music matching method according to claim 3, wherein the synthesizing the user spontaneous audio and the music elements to obtain music comprises:
synthesizing the timbre, the accompaniment and the user spontaneous audio to generate the music.
5. The music matching method according to any one of claims 1 to 4, wherein before the matching music elements according to the user spontaneous audio, the method further comprises:
storing the user spontaneous audio;
and performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio.
6. The music matching method according to claim 5, wherein the performing format conversion on the user spontaneous audio to obtain the format-converted user spontaneous audio comprises:
performing format conversion on the user spontaneous audio to obtain an initial Musical Instrument Digital Interface (MIDI) audio;
and determining the initial MIDI audio as the format-converted user spontaneous audio.
7. The music matching method according to claim 4, wherein the generating the music comprises:
performing format conversion on the synthesized audio file to obtain an audio file to be played;
and determining the audio file to be played as the music.
8. The music matching method according to claim 7, wherein the music comprises: composition information, wherein the composition information comprises: the music elements added in the composition process.
9. The music matching method according to claim 1, wherein after the music is generated, the method further comprises:
if the music is to be re-composed, matching music elements according to the music, and synthesizing the music and the music elements to obtain the re-composed music.
10. The music matching method according to claim 1, wherein after the music is generated, the method further comprises:
displaying prompt information, wherein the prompt information comprises: download prompt information and/or sharing prompt information;
receiving a response message returned according to the prompt information;
and executing a corresponding operation according to the response message.
11. A music matching method, comprising:
displaying music matching prompt information;
displaying user spontaneous audio generated by the user according to the music matching prompt information;
displaying the music elements obtained by automatic matching according to the user spontaneous audio;
displaying and playing music, wherein the music is generated by synthesizing the user spontaneous audio and the music elements;
wherein the music matching prompt information comprises: any one or a combination of at least two of tempo, beat, or song style.
12. A music matching method, comprising:
acquiring audio data, wherein the audio data comprises: user spontaneous audio;
sending the user spontaneous audio to a music production platform;
and receiving the music returned by the music production platform according to the user spontaneous audio.
13. A music matching method, comprising:
receiving user spontaneous audio;
matching music elements according to the user spontaneous audio;
and synthesizing the user spontaneous audio and the music elements to obtain music.
14. A mobile terminal, comprising: a sound acquisition device, a data processing device and a playing device, wherein
the sound acquisition device is used for acquiring user spontaneous audio generated by a user;
the data processing device is used for automatically matching music elements according to the user spontaneous audio, and synthesizing the user spontaneous audio and the music elements to generate music;
and the playing device is used for playing the music.
15. The mobile terminal according to claim 14, wherein the mobile terminal further comprises: a beat simulator and a display device, wherein
the beat simulator is used for generating music matching prompt information according to the setting of a user, and the display device displays the music matching prompt information, so that the sound acquisition device acquires the user spontaneous audio generated by the user according to the music matching prompt information; wherein the music matching prompt information comprises: any one or a combination of at least two of tempo, beat, or song style.
16. A mobile terminal, comprising: a sound acquisition device, a data processing device, a communication device and a playing device, wherein
the sound acquisition device is used for acquiring user spontaneous audio generated by a user;
the data processing device is used for performing format conversion on the user spontaneous audio to obtain an audio file to be sent, sending the audio file to be sent through the communication device, and, after the communication device receives the music returned according to the audio file to be sent, performing format conversion on the music to obtain music to be played;
and the playing device is used for playing the music to be played.
17. The mobile terminal according to claim 16, wherein the mobile terminal further comprises: a beat simulator and a display device, wherein
the beat simulator is used for generating music matching prompt information according to the setting of a user, and the display device displays the music matching prompt information, so that the sound acquisition device acquires the user spontaneous audio generated by the user according to the music matching prompt information; wherein the music matching prompt information comprises: any one or a combination of at least two of tempo, beat, or song style.
18. A music matching system, comprising: a client and a background data processing end, wherein
the client is configured to acquire audio data, the audio data comprising: user spontaneous audio, and to send the user spontaneous audio to the background data processing end;
and the background data processing end is configured to receive the user spontaneous audio, match music elements according to the user spontaneous audio, and synthesize the user spontaneous audio and the music elements to obtain music.
19. A storage medium, characterized in that the storage medium comprises a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to perform: acquiring audio data, wherein the audio data comprises: user spontaneous audio; matching music elements according to the user spontaneous audio; and synthesizing the user spontaneous audio and the music elements to obtain music.
20. A data processing method, comprising:
displaying an audio input interface, wherein the audio input interface is used for recording or receiving first audio;
recording or receiving the first audio, wherein the first audio corresponds to a user account;
obtaining a music element matched with the first audio, wherein the music element comprises: timbre and accompaniment;
and generating second audio according to the first audio and the music element.
21. The data processing method according to claim 20, wherein the obtaining a music element matched with the first audio comprises:
acquiring the music element matched with both the first audio and the user account.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810661357.2A CN110634465A (en) | 2018-06-25 | 2018-06-25 | Music matching method, mobile terminal, data processing method and music matching system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110634465A true CN110634465A (en) | 2019-12-31 |
Family
ID=68967168
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810661357.2A Pending CN110634465A (en) | 2018-06-25 | 2018-06-25 | Music matching method, mobile terminal, data processing method and music matching system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110634465A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101394578A (en) * | 2007-09-18 | 2009-03-25 | 爱唱数码科技(上海)有限公司 | System for establishing ring by interactive voice response and method therefor |
CN101399036A (en) * | 2007-09-30 | 2009-04-01 | 三星电子株式会社 | Device and method for conversing voice to be rap music |
CN104766603A (en) * | 2014-01-06 | 2015-07-08 | 安徽科大讯飞信息科技股份有限公司 | Method and device for building personalized singing style spectrum synthesis model |
CN105070283A (en) * | 2015-08-27 | 2015-11-18 | 百度在线网络技术(北京)有限公司 | Singing voice scoring method and apparatus |
CN105702249A (en) * | 2016-01-29 | 2016-06-22 | 北京精奇互动科技有限公司 | A method and apparatus for automatic selection of accompaniment |
CN105788582A (en) * | 2016-05-06 | 2016-07-20 | 深圳芯智汇科技有限公司 | Portable karaoke sound box and karaoke method thereof |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111276115A (en) * | 2020-01-14 | 2020-06-12 | 孙志鹏 | Cloud beat |
CN112954481A (en) * | 2021-02-07 | 2021-06-11 | 脸萌有限公司 | Special effect processing method and device |
CN112954481B (en) * | 2021-02-07 | 2023-12-12 | 脸萌有限公司 | Special effect processing method and device |
US12040000B2 (en) | 2021-02-07 | 2024-07-16 | Lemon Inc. | Special effect processing method and apparatus |
CN112951184A (en) * | 2021-03-26 | 2021-06-11 | 平安科技(深圳)有限公司 | Song generation method, device, equipment and storage medium |
CN113838444A (en) * | 2021-10-13 | 2021-12-24 | 广州酷狗计算机科技有限公司 | Method, device, equipment, medium and computer program for generating composition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6645956B2 (en) | System and method for portable speech synthesis | |
TW495735B (en) | Audio controller and the portable terminal and system using the same | |
CN110634465A (en) | Music matching method, mobile terminal, data processing method and music matching system | |
CN104040618B | System and method for making a more harmonious musical background and for applying an effect chain to a melody | |
JP7424359B2 (en) | Information processing device, singing voice output method, and program | |
WO2006030612A1 (en) | Content creating device and content creating method | |
US11521585B2 (en) | Method of combining audio signals | |
JP7363954B2 (en) | Singing synthesis system and singing synthesis method | |
CN103959372A (en) | System and method for providing audio for a requested note using a render cache | |
CN108053814B (en) | Speech synthesis system and method for simulating singing voice of user | |
CN107274876A (en) | A kind of audition paints spectrometer | |
JP5598516B2 (en) | Voice synthesis system for karaoke and parameter extraction device | |
CN108922505B (en) | Information processing method and device | |
JPH09247105A (en) | Bgm terminal equipment | |
JP2002006842A (en) | Sound data reproducing method of portable terminal device | |
KR101790107B1 (en) | Method and server of music comprehensive service | |
CN108806732A | Background music processing method and electronic device based on artificial intelligence | |
KR20090023912A (en) | Music data processing system | |
JP2023013684A (en) | Singing voice quality conversion program and singing voice quality conversion device | |
JP4211636B2 (en) | Performance control data generation apparatus, music material data distribution server, and program | |
KR101975193B1 (en) | Automatic composition apparatus and computer-executable automatic composition method | |
JPH10143170A (en) | Musical piece data forming device and karaoke sing-along machine | |
Dai et al. | An Efficient AI Music Generation mobile platform Based on Machine Learning and ANN Network | |
CN115240621A (en) | Audio data processing method, computer device, and storage medium | |
CN114550690A (en) | Song synthesis method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191231 |