CN1205499A - Interactive musical accompaniment method and equipment - Google Patents

Interactive musical accompaniment method and equipment

Info

Publication number
CN1205499A
CN1205499A (application CN 97114542, also published as CN97114542A)
Authority
CN
China
Prior art keywords
beat
musical background
background file
song
musical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 97114542
Other languages
Chinese (zh)
Other versions
CN1068948C (en)
Inventor
苏文钰
张靖敏
简良臣
余德彰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute (consortium juridical person)
MStar Semiconductor Co., Ltd.
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CN97114542A priority Critical patent/CN1068948C/en
Publication of CN1205499A publication Critical patent/CN1205499A/en
Application granted granted Critical
Publication of CN1068948C publication Critical patent/CN1068948C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Electrophonic Musical Instruments (AREA)

Abstract

An interactive musical accompaniment method and device in which an accompaniment machine processes a musical accompaniment file so that the file's beat can be changed to match the user's beat. A voice analyzer recognizes the user's beat, separates the user's singing signal from background noise, and attaches segment-position information representing the user's beat to the singing signal. A MIDI controller changes the beat of the musical accompaniment file.

Description

Interactive musical accompaniment method and apparatus
The present invention relates to musical accompaniment systems, and more particularly to a musical accompaniment system that adjusts musical parameters to suit different singers.
A musical accompaniment device, commonly called a karaoke machine, plays back the score or accompanying music of a song so that a user, or singer, can sing the lyrics along with the appropriate music. Usually the lyrics and the accompaniment are stored on the same medium. For example, Fig. 1 shows a conventional karaoke machine 100 comprising a CD player 102, a video signal generator 104, a video display 106, a music accompaniment signal generator 108, a loudspeaker 110, a microphone 112 and a mixer 114. The karaoke machine 100 begins operating when the user inserts into the CD player 102 a CD 116 that carries a video or lyrics signal (not shown) and an audio or accompaniment signal (not shown). The video signal generator 104 extracts the video signal from the CD 116 and displays it, as the lyrics of the song, on the video display 106. The accompaniment signal generator 108 extracts the audio signal from the CD 116 and sends it to the mixer 114. Essentially at the same time, the singer sings the lyrics shown on the video display 106 into the microphone 112, which converts the singing into an electro-acoustic signal 118 representing the song. The electro-acoustic signal 118 is sent to the mixer 114, which combines it with the accompaniment signal and outputs the combined audio signal 120 to the loudspeaker 110, thereby producing the music.
The karaoke machine 100, however, simply reproduces the stored musical accompaniment faithfully, including its beat. The beat, or tempo, is the regular repetition of the fundamental pulse in the singing or in the musical accompaniment. The user or singer is therefore forced to follow the fixed, pre-stored accompaniment parameters on the CD (or on another storage medium, such as personal computer memory). A singer who cannot keep up with the fixed beat cannot stay synchronized with the accompaniment, and must adjust his own beat to match the fixed beat of the stored music. It is therefore desirable to adjust the stored music parameters to suit the singer's performance style.
Further advantages and objects of the invention will be set forth in part in the description that follows, and in part will become apparent from practice of the invention. The advantages and objects of the invention may be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
To achieve the advantages and objects of the invention, as embodied and broadly described herein, the system of the invention processes a musical accompaniment file according to the beat established by the user. The method of processing a musical accompaniment file according to the invention comprises steps, performed by a processor, of selecting a musical accompaniment file to be processed and converting a sound having a characteristic beat into an electrical signal representing the characteristic beat. The processing changes the beat of the musical accompaniment file so that it matches the characteristic beat represented by the electrical signal, and outputs the electrical signal and the musical accompaniment file.
The apparatus of the invention for processing a musical accompaniment file stored in a memory comprises: a first controller for extracting a selected musical accompaniment file from the memory; a microphone for converting a sound having a characteristic beat into an electrical signal; an analyzer for filtering the electrical signal and recognizing the characteristic beat; and a second controller for matching the beat of the musical accompaniment file to the characteristic beat.
A computer program product according to the invention comprises a computer-usable medium having computer-readable code for processing data in a musical instrument digital interface (MIDI) controller. The computer-usable medium comprises: a selection module for selecting a MIDI-format musical accompaniment file to be processed by a first controller; an analysis module for converting an external sound having a characteristic beat into an electrical signal representing the characteristic beat; and a control processing module for speeding up or slowing down the beat of the musical accompaniment file to match the characteristic beat.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate a preferred embodiment of the invention and, together with the description, explain the objects, advantages and principles of the invention. In the drawings:
Fig. 1 is a schematic diagram of a conventional karaoke machine.
Fig. 2 is a schematic diagram of the musical accompaniment system of the present invention.
Fig. 3 is a flow chart of the method of processing accompaniment music according to the present invention.
Fig. 4 is a schematic diagram of the voice analyzer shown in Fig. 2.
Fig. 5 is a flow chart of the method of removing unwanted loudspeaker noise performed by the noise eliminator shown in Fig. 4.
Fig. 6 is a typical waveform profile of the input to the voice analyzer.
Fig. 7 is a flow chart of one method of the present invention for segmenting the estimated singing signal.
Fig. 8 is a flow chart of another method of the present invention for segmenting the estimated singing signal.
Figs. 9A and 9B are flow charts of the fuzzy logic operations of the present invention for changing the beat of the music accompaniment signal.
Fig. 10 is a waveform diagram of the fuzzy logic membership function used in Fig. 9 to determine whether the accompaniment signal matches the segment position.
Fig. 11 is a waveform diagram of the fuzzy logic membership function used in Fig. 9 to determine the degree of acceleration.
A preferred embodiment of the present invention will now be described in detail with reference to the accompanying drawings. The contents of the specification and the drawings are intended to be illustrative and not restrictive.
The method and apparatus of the present invention change the beat of the musical accompaniment so that it matches the singer's natural beat. The change is made mainly by detecting the time the singer spends on a part of the song (for example, the time spent singing one word) and comparing that time with the pre-programmed standard time for performing that part of the song. Based on the comparison, the accompaniment machine adjusts the beat of the musical accompaniment to match the singer's beat.
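To make the comparison concrete, here is a minimal Python sketch (the function name, the clamping bounds and the simple ratio rule are illustrative assumptions, not the patent's control law — the actual adjustment is the fuzzy-logic procedure of Fig. 9):

```python
def tempo_scale(sung_duration_s: float, reference_duration_s: float,
                min_scale: float = 0.5, max_scale: float = 2.0) -> float:
    """Turn the sung-vs-programmed timing comparison into a tempo factor.

    A singer who takes longer than the programmed duration yields a
    factor below 1, i.e. the accompaniment should slow down.
    """
    ratio = reference_duration_s / sung_duration_s
    return max(min_scale, min(max_scale, ratio))

# A word programmed for 0.50 s that the singer holds for 0.60 s suggests
# slowing the accompaniment to about 0.83x of its stored tempo.
print(tempo_scale(sung_duration_s=0.60, reference_duration_s=0.50))
```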
Fig. 2 shows a musical accompaniment system 200 constructed according to the present invention. The musical accompaniment system 200 comprises a controller 202, a musical accompaniment memory 204, a microphone 206, a voice analyzer 208, a real-time dynamic MIDI controller 210 and a loudspeaker 212.
In the preferred embodiment, the musical accompaniment memory 204 resides in the ROM of a personal computer, in random-access memory (RAM), or in some equivalent storage medium of the personal computer. The controller 202 may be a personal computer, and its form depends to some extent on the medium of the musical accompaniment memory 204. Those skilled in the art could build the musical accompaniment system 200 as a hardware device following the teachings given here; in the preferred embodiment, the system is realized as software modules installed on the host personal computer, i.e. on the controller 202.
Fig. 3 is a flow chart 300 of the operation of the musical accompaniment system 200. First, the singer selects a song (step 302). In response, the controller 202 extracts from the musical accompaniment memory 204 a pre-stored file containing the accompaniment information in MIDI format, and stores the file in a memory device accessible to the MIDI controller 210 (step 304). For example, the controller 202 extracts the selected accompaniment file from among a plurality of accompaniment files stored in the ROM of the host personal computer (the musical accompaniment memory 204) and stores the accompaniment information in the RAM (not shown) of the host personal computer. This RAM may be connected to the controller 202 or to the MIDI controller 210. The singer sings the lyrics of the selected accompaniment into the microphone 206, which converts the singing into an electrical signal and feeds it to the voice analyzer 208 (step 306).
The electrical signal output by the microphone 206 contains unwanted background noise, such as noise from the loudspeaker 212. To remove this unwanted noise, the voice analyzer 208 filters the electrical signal (step 308), as described below. The voice analyzer 208 also segments the electrical signal to identify the beat of the singer's song. The MIDI controller 210 retrieves the accompaniment file from the accessible memory (step 310); step 310 is performed essentially in parallel with steps 306 and 308. The real-time dynamic MIDI controller 210 then uses the identified beat of the singing to change the parameters of the music accompaniment signal so that its beat matches the beat of the singing signal (step 312). Because the accompaniment MIDI file of the selected song is pre-stored, for example in the host personal computer's RAM, the MIDI controller 210 can access it in real time during playback. The beat change therefore does not interrupt the delivery of the melody; in other words, it does not affect the smoothness of the music.
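Read as software, the flow of Fig. 3 is a small real-time loop. A hypothetical skeleton of that loop follows (all class and method names are invented for illustration; only the step numbering comes from the patent):

```python
class KaraokeSession:
    """Hypothetical skeleton of the Fig. 3 flow (steps 302-312)."""

    def __init__(self, accompaniment_store, voice_analyzer, midi_controller):
        self.store = accompaniment_store   # musical accompaniment memory 204
        self.analyzer = voice_analyzer     # voice analyzer 208
        self.midi = midi_controller        # real-time dynamic MIDI controller 210

    def run(self, song_id, microphone_frames):
        midi_file = self.store.fetch(song_id)       # steps 302-304: select and load
        self.midi.load(midi_file)                   # step 310: file ready for real-time access
        for frame in microphone_frames:             # step 306: capture singing
            clean = self.analyzer.denoise(frame)    # step 308: remove loudspeaker noise
            for segment in self.analyzer.segment(clean):
                self.midi.match_beat(segment)       # step 312: adjust accompaniment beat
```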
To match the beat of the music to the singer's beat, the apparatus of the present invention must determine the beat of the singer's song. Fig. 4 is a structural diagram of the voice analyzer 208, which makes this determination. The voice analyzer 208 determines the natural beat at which the singer sings, and comprises a noise eliminator 402, which separates the singer's voice from other unwanted background noise, and a segmenter 404, which determines the time the singer spends singing a part of the song (such as one word).
The noise eliminator 402 filters out unwanted sounds so that only the singer's voice is used to determine the beat. Removing unwanted sounds is necessary because a receiver such as the microphone 206 picks up not only the sound produced by the singer but also noise produced by other sources, such as the left- and right-channel loudspeakers of the musical accompaniment system 200 located near the singer. The resulting noisy singing signal 406 is processed by the noise eliminator 402, which outputs an estimated singing signal 408. The segmenter 404 uses the estimated singing signal 408 to determine the beat of the singer's song, and outputs segment-position information, representing the singer's natural beat, attached to the estimated singing signal 408. The estimated singing signal with the attached segment-position information is labelled in Fig. 4 as the segment-position estimated singing signal 410.
Fig. 5 is a flow chart 500 of the operation of the noise eliminator 402. First, the noisy singing signal 406 is input to the noise eliminator 402 (step 502). The noisy singing signal 406 consists of the actual singing signal, denoted S_A[n], plus the left- and right-channel loudspeaker noise; the total noise received by the microphone 206 is denoted n_0[n]. Here [n] denotes a point on the time axis. The combined sound can be expressed as:
S_0[n] = S_A[n] + n_0[n]    (Formula 1)
In the second step, the noise eliminator 402 removes the unwanted loudspeaker noise (step 504). Suppose the unwanted signal emitted by the left and right loudspeaker channels is denoted n_1[n] (the loudspeaker noise measured at its source, the loudspeaker), while n_0[n] is the loudspeaker noise at the microphone; that is, the noise has travelled the path between the loudspeaker and the microphone and includes the attenuation of the loudspeaker noise along that path. The unwanted component of the noisy singing signal 406 can then be expressed as:
y[n] = Σ_{i=0}^{N−1} h[i]·n_1[n−i]    (Formula 2)
where
H[z] = Z{h[n]}    (Formula 3)
Formula 3 represents the estimated parameters of the noise eliminator 402. The function h[i] represents the transformation the loudspeaker noise undergoes along the path from the noise source (the loudspeaker) to the microphone; that is, h[i] represents the filtering effect of the path. After the noise eliminator 402 removes the unwanted component, it outputs the estimated singing signal 408, denoted S_c[n], where S_c[n] = S_0[n] − y[n]. S_c[n] is the estimate of the singer's voice in the absence of loudspeaker noise. The error between the actual singing and the estimated singing signal 408 is defined as e[n]:
e²[n] = (S_A[n] − S_c[n])²    (Formula 4)
The noise eliminator 402 is designed to minimize the error e[n] between the expected actual singing and the estimated singing signal 408. The parameters of the noise eliminator 402 are obtained by iterative computation:
for i = 0 to N−1, with 0 < η < 2, the iteration is repeated until the error is minimized. η is a system-learning parameter preset by the system designer. The estimated singing signal 408 (S_c[n]) is then output to the segmenter 404 (step 506).
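The update rule itself is lost to an image in the original, but the quoted range 0 < η < 2 is the standard stability condition for a normalized least-mean-squares (NLMS) adaptive filter, so the sketch below assumes that form (variable names are hypothetical):

```python
import numpy as np

def nlms_noise_eliminator(s0, n1, N=32, eta=0.5):
    """Estimate S_c[n] = S_0[n] - y[n] with an adaptive path filter h[i].

    s0  : microphone signal S_0[n] = S_A[n] + n_0[n]   (Formula 1)
    n1  : loudspeaker noise reference n_1[n] at its source
    N   : length of the path filter h[i]               (Formulas 2-3)
    eta : learning parameter, 0 < eta < 2
    """
    s0 = np.asarray(s0, dtype=float)
    n1 = np.asarray(n1, dtype=float)
    h = np.zeros(N)
    sc = np.zeros_like(s0)
    for n in range(len(s0)):
        ref = n1[max(0, n - N + 1):n + 1][::-1]   # n1[n], n1[n-1], ...
        y = h[:len(ref)] @ ref                    # y[n], Formula 2
        sc[n] = s0[n] - y                         # estimated singing sample S_c[n]
        # NLMS step: drives S_c[n] toward S_A[n], shrinking the Formula 4 error
        h[:len(ref)] += eta * sc[n] * ref / (ref @ ref + 1e-12)
    return sc
```

Minimizing the output power of S_c[n] is equivalent to minimizing the Formula 4 error when the singing and the loudspeaker noise are uncorrelated, which is the usual justification for this kind of canceller.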
The segmenter 404 identifies the position, on the time axis, of each sung word. For example, Fig. 6 shows a possible singing waveform profile 600 containing words 602, 604, and so on. The word 604, for instance, begins at a first position 606 corresponding to the end position of the word 602, and ends at a second position 608 corresponding to the start position of the next word (not shown). The segmenter 404 can use several different methods to determine the first and second positions 606 and 608 of each word on the time axis, for example an energy-envelope method or a nonlinear-signal time-vector method.
Fig. 7 is a flow chart 700 of the energy-envelope method used by the segmenter 404. As the waveform profile 600 shows, the words 602, 604, etc. are contiguous. The words are divided into segments by boundary regions, where a boundary region is the immediate neighbourhood of the first and second positions 606 and 608; in such a region there is a distinct dip in the energy level, followed by rising energy. Segment positions can therefore be determined by detecting changes in energy. Suppose the waveform profile 600 is denoted x[n], where x[n] equals S_A[n]; the segment positions are then determined by the procedure outlined in flow chart 700. First, a sliding window of length 2N+1 is defined over the estimated singing signal 408 (step 702), where N is a time value preset by the system designer. [The formula defining the window W[i] appears as an image in the original document.] The energy over time is then computed as:
E[n] = (1/(2N+1)) Σ_{i=−N}^{+N} |W[i]·x[n−i]|    (Formula 7)
Next, a segment's first position 606 is found where the energy signal rises above a predetermined threshold (step 704); in other words, the word 604 begins at the position n where Formula 7 exceeds the threshold. A segment end position is found where T1·E[n+d] ≤ E[n] and E[n+d] ≤ T2·E[n+2d], where T1 and T2 are constants between 0 and 1, and d is an interval preset by the system designer; T1, T2 and d are predetermined for the song. The segment positions are output to the real-time dynamic MIDI controller 210: the time-position information is attached to the estimated singing signal and output from the segmenter 404 as the time-position estimated singing signal 410 (step 708).
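A minimal sketch of the Formula 7 envelope and the threshold-crossing start detection of step 704 (the rectangular window, window length and threshold are assumptions — the patent leaves W[i], T1, T2 and d to the system designer):

```python
import numpy as np

def energy_envelope(x, N=400):
    """E[n] = (1/(2N+1)) * sum_{i=-N..+N} |W[i] * x[n-i]|   (Formula 7).

    A rectangular window W[i] = 1 is assumed here.
    """
    w = np.ones(2 * N + 1) / (2 * N + 1)
    return np.convolve(np.abs(x), w, mode="same")

def word_start_positions(energy, threshold):
    """Step 704: a word starts where E[n] rises above the threshold."""
    above = energy > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Toy usage: two seconds at 8 kHz with 'singing' between 0.5 s and 1.0 s.
fs = 8000
x = np.zeros(2 * fs)
rng = np.random.default_rng(0)
x[fs // 2:fs] = rng.standard_normal(fs // 2)
e = energy_envelope(x)
print(word_start_positions(e, threshold=0.1 * e.max()))  # one start near sample 4000
```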
Flow chart 800, shown in Fig. 8, illustrates the nonlinear-signal time-vector method of determining segment positions. First, using a pre-recorded test singing signal x[n], a vector is defined (step 802):
X[n] = {x[n], x[n−1], …, x[n−N], x[n]·x[n], x[n]·x[n−1], …, x[n−N]·x[n−N]}^T    (Formula 8)
X[n] is the vector formed from the singing signal. The segmentation feature is defined as (step 804):
Z[n] = 1 at a segment position; 0 otherwise    (Formula 9)
Next, an evaluation function is defined (step 806):
e_x[n] = α^T X[n]    (Formula 10)
where e_x[n] is the estimator of the segment position and α is a coefficient vector. The cost function is defined as:
E{(Z[n] − e_x[n])²}    (Formula 11)
where E denotes the expected value of the function over its relevant range. (For more information on expected values of functions, see A. Papoulis, "Probability, Random Variables, and Stochastic Processes", McGraw-Hill, 1984.) Formula 11 is minimized using the Wiener-Hopf equations:
α = R⁻¹·β    (Formula 12)
R = E{X[n]·X^T[n]} and β = E{Z[n]·X[n]}    (Formula 13)
(For more information on the Wiener-Hopf equations, see N. Kalouptsidis et al., "Adaptive System Identification and Signal Processing Algorithms", Prentice-Hall, 1993.) Different songs sung by different singers are recorded as training data in order to obtain α, β and R. The segment positions Z[n] of the recorded signals described above are first identified by a listener; Formulas 12 and 13 are then used to compute α. Once α is obtained, the estimation function e_x[n] is computed using Formula 10, and a segment position is declared as follows:
segment position = yes, if |e_x[n] − 1| ≤ ε; no, otherwise    (Formula 14)
where ε is a confidence coefficient (step 808). The segment positions are attached to the estimated singing signal and output to the real-time dynamic MIDI controller 210 (step 810).
In summary, the nonlinear-signal time-vector method uses a set of pre-recorded test singing signals and Formula 8 to construct the vectors X[n]. A listener first identifies the segment positions of the test signals to obtain the values Z[n]. Formulas 12 and 13 are used to compute α, β and R. Once α, β and R have been computed, Formulas 10 and 14 determine the segment positions of a singing signal. The real-time dynamic MIDI controller 210 uses the segment positions identified by the voice analyzer 208 to speed up or slow down the accompaniment music stored in the memory accessible to the MIDI controller 210.
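A compact sketch of the Formula 8-14 training and detection pipeline. The feature set follows one reading of Formula 8 (only the x[n]·x[n−i] products are built, since the source listing of the cross-terms is ambiguous), and the ridge term and window length are assumptions:

```python
import numpy as np

def feature_vector(x, n, N=8):
    """X[n] of Formula 8: linear terms x[n-i] plus quadratic terms x[n]*x[n-i]."""
    lin = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(N + 1)])
    return np.concatenate([lin, lin[0] * lin])

def train_alpha(x, z, N=8):
    """Wiener-Hopf solution alpha = R^-1 * beta (Formulas 12-13).

    x : pre-recorded test singing signal
    z : listener-supplied labels Z[n], 1 at a segment position (Formula 9)
    """
    X = np.stack([feature_vector(x, n, N) for n in range(len(x))])
    R = X.T @ X / len(x)                    # R = E{X X^T}
    beta = X.T @ np.asarray(z) / len(x)     # beta = E{Z X}
    return np.linalg.solve(R + 1e-8 * np.eye(len(R)), beta)  # small ridge for stability

def is_segment_position(alpha, x, n, eps=0.3, N=8):
    """Formula 14: declare a segment position if |e_x[n] - 1| <= eps."""
    return abs(alpha @ feature_vector(x, n, N) - 1.0) <= eps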
Musical accompaniment information is preferably stored in the musical accompaniment memory 204 in MIDI format. If it is not stored in MIDI format, a MIDI converter (not shown) must convert the music accompaniment signal into a MIDI-compatible format before the information is stored in the memory accessible to the MIDI controller 210.
The real-time dynamic MIDI controller 210 is described more fully in the co-pending application of Alvin Wen-Yu Su et al., "Real-time dynamic MIDI control" (application number _______, filed on the same date as the present application and incorporated herein by reference). In particular, the converted MIDI signal and the music accompaniment signal are input to a software control subroutine, which uses fuzzy logic control principles to speed up or slow down the beat of the music accompaniment signal until it matches the beat of the converted singing signal. Flow chart 900 of Fig. 9 shows how the software control subroutine adjusts the beat. First, the subroutine measures the segment position (step 902); Fig. 10 shows a plot of the segment position P[n]. The subroutine examines the measured position and determines whether P[n] lags too far behind (step 904). If so, the music accompaniment signal receives a very large positive acceleration (step 906). Otherwise the subroutine determines whether P[n] is too far ahead (step 908); if so, the music accompaniment signal receives a very large negative acceleration (step 910). If P[n] is neither too far ahead nor too far behind, Q[n], defined as P[n] − P[n−1], is computed (step 912); Fig. 11 shows a plot of Q[n]. The subroutine then works through the following cases:
  • If P[n] is behind and Q[n] is matching forward quickly (step 914), the initial positive acceleration value is raised substantially (step 916).
  • If P[n] is behind and Q[n] is matching forward slowly (step 918), the initial positive acceleration value is raised (step 920).
  • If P[n] is behind and Q[n] is unchanged (step 922), the initial positive acceleration value is raised slightly (step 924).
  • If P[n] is behind and Q[n] is matching backward slowly (step 926), the positive acceleration value is left unchanged (step 928).
  • If P[n] is behind and Q[n] is matching backward quickly (step 930), the initial positive acceleration value is lowered (step 932).
  • If P[n] is ahead and Q[n] is matching forward slowly (step 934), the initial negative acceleration value is left unchanged (step 936).
  • If P[n] is ahead and Q[n] is unchanged (step 938), the initial negative acceleration value is increased slightly (step 940).
  • If P[n] is ahead and Q[n] is matching backward slowly (step 942), the initial negative acceleration value is increased (step 944).
  • If P[n] is ahead and Q[n] is matching backward quickly (step 946), the initial negative acceleration value is increased considerably (step 948).
  • If P[n] is ahead and Q[n] is matching forward quickly (step 950), the initial negative acceleration value is reduced (step 952).
Once the beat of the music accompaniment signal matches the converted MIDI signal, the accompaniment signal is output to the loudspeaker 212 (step 954).
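The rule cascade above is essentially a decision table; the sketch below renders it as a crisp lookup. The real controller evaluates fuzzy membership functions per Figs. 10 and 11, which are not reproduced here, so the category names and returned actions are placeholders for the fuzzy grades:

```python
def acceleration_action(position, trend):
    """Crisp stand-in for the Fig. 9 rule cascade (steps 902-952).

    position : classification of the segment position P[n] (cf. Fig. 10):
               'far_behind' | 'behind' | 'matched' | 'ahead' | 'far_ahead'
    trend    : classification of Q[n] = P[n] - P[n-1] (cf. Fig. 11):
               'fast_forward' | 'slow_forward' | 'unchanged'
               | 'slow_backward' | 'fast_backward'
    Returns the adjustment applied to the current acceleration value;
    for 'ahead' positions the action applies to the negative acceleration.
    """
    if position == 'far_behind':
        return 'very_large_positive'                         # step 906
    if position == 'far_ahead':
        return 'very_large_negative'                         # step 910
    rules = {
        ('behind', 'fast_forward'):  'raise_substantially',  # step 916
        ('behind', 'slow_forward'):  'raise',                # step 920
        ('behind', 'unchanged'):     'raise_slightly',       # step 924
        ('behind', 'slow_backward'): 'keep',                 # step 928
        ('behind', 'fast_backward'): 'lower',                # step 932
        ('ahead',  'slow_forward'):  'keep',                 # step 936
        ('ahead',  'unchanged'):     'raise_slightly',       # step 940
        ('ahead',  'slow_backward'): 'raise',                # step 944
        ('ahead',  'fast_backward'): 'raise_substantially',  # step 948
        ('ahead',  'fast_forward'):  'lower',                # step 952
    }
    return rules.get((position, trend), 'keep')              # 'matched': no change

print(acceleration_action('behind', 'slow_forward'))         # -> 'raise'
```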
Although the disclosure above changes the musical accompaniment file according to a singer's beat, the invention can also be applied to any external signal, such as a musical instrument, a loudspeaker, or a natural sound. The only requirement is that the external signal have a recognizable beat or recognizable segment positions.
It will be apparent to those skilled in the art that various modifications and variations can be made to the method and to the structure of the preferred embodiment of the present invention without departing from the scope or spirit of the invention. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.

Claims (21)

1. A method of processing a musical accompaniment file, comprising the following steps performed by a processor:
selecting a musical accompaniment file to be processed;
converting a sound having a characteristic beat into an electrical signal representing the characteristic beat;
changing the melody beat of the musical accompaniment file to match the characteristic beat represented by the electrical signal; and
outputting the electrical signal and the musical accompaniment file.
2. An apparatus for processing a musical accompaniment file stored in a memory, comprising:
a first controller for extracting a selected musical accompaniment file from the memory;
a microphone for converting a sound having a characteristic beat into an electrical signal;
an analyzer for filtering the electrical signal and recognizing the characteristic beat; and
a second controller for matching the melody beat of the musical accompaniment file to the characteristic beat.
3. A computer program product comprising:
a computer-usable medium having computer-readable code for processing data in a musical instrument digital interface (MIDI) controller, the computer-usable medium comprising:
a selection module configured to select, by means of a first controller, a MIDI-format musical accompaniment file to be processed;
an analysis module configured to convert an external sound having a characteristic beat into an electrical signal representing the characteristic beat; and
a control processing module configured to accelerate the melody beat of the musical accompaniment file to match the characteristic beat.
4. A method of processing a musical accompaniment file, comprising the following steps performed by a processor:
selecting a musical accompaniment file to be processed;
converting a song sung by a singer into an electrical singing signal representing the song beat;
changing the melody beat of the musical accompaniment file to match the song beat represented by the electrical singing signal; and
outputting the electrical singing signal and the musical accompaniment file as a song.
5. The method of claim 4, wherein the converting step comprises:
filtering the electrical singing signal to remove unwanted background noise; and
segmenting the filtered signal to identify the song beat.
6. The method of claim 5, wherein the filtering step comprises:
estimating the unwanted background noise according to the path the background noise travels between a noise source and a microphone;
filtering the electrical singing signal according to the estimated background noise; and
outputting an estimated singing signal based on the filtered electrical singing signal.
7. The method of claim 6, wherein generating the filter comprises establishing a learning parameter so as to minimize the error between the actual singing component of the electrical singing signal and the estimated singing signal.
8. The method of claim 5, wherein the segmenting step comprises:
measuring the energy of the filtered signal;
identifying a start position where the measured energy rises above a predetermined threshold; and
identifying an end position where the measured energy falls below a predetermined threshold.
9. The method of claim 5, wherein the segmenting step comprises:
pre-storing a test singing signal;
generating a vector quantity from the pre-stored test singing signal;
defining vector segment positions from the test signal;
computing an evaluation function from the vector quantity and the vector segment positions so as to minimize a cost function; and
determining from the evaluation function whether an actual segment position lies within a confidence index.
10. The method of claim 4, wherein the step of changing the melody beat comprises accelerating the beat of the musical accompaniment file.
11. The method of claim 10, wherein the accelerating step comprises:
segmenting the electrical singing signal so as to identify the song beat from segment positions;
measuring the segment positions; and
determining the acceleration value necessary to bring the musical accompaniment file into agreement with the segment positions.
12. The method of claim 11, wherein the determining step comprises determining whether a segment position is one of the following: far ahead of the musical accompaniment file, ahead of the musical accompaniment file, behind the musical accompaniment file, far behind the musical accompaniment file, or matching the musical accompaniment file.
13. The method of claim 12, wherein the segment-position determining step comprises:
measuring the difference between the segment position and the adjacent previous segment position when the segment position is determined to be one of ahead of the musical accompaniment file, behind the musical accompaniment file, and matching the musical accompaniment file.
14. An apparatus for processing a musical accompaniment file stored in a memory, comprising:
a first controller for extracting a user-selected musical accompaniment file from the memory;
a microphone for converting a user's singing into an electrical signal;
a voice analyzer for filtering the electrical signal and identifying the song beat; and
a second controller for matching the melody beat of the musical accompaniment file to the song beat.
15. The apparatus of claim 14, wherein the musical accompaniment file is in MIDI format.
16. The apparatus of claim 14, wherein the voice analyzer comprises:
a noise eliminator for removing unwanted background noise from the electrical signal; and
a segmenter for identifying the song beat.
17. An apparatus for processing a musical accompaniment file stored in a memory, comprising:
means for selecting a musical accompaniment file;
means for extracting the musical accompaniment file from the memory;
means for converting a user's singing into an electrical signal;
means for identifying the song beat of the electrical signal; and
means for changing the melody beat of the musical accompaniment file to match the song beat.
18. The apparatus of claim 17, wherein the means for changing the melody beat of the musical accompaniment file comprises means for accelerating the melody beat.
19. An apparatus for processing a musical accompaniment file stored in a memory according to an electrical signal representing a user's singing, comprising:
a voice analyzer for filtering the electrical signal and identifying the song beat of the user's singing; and
a controller for matching the melody beat of the musical accompaniment file to the song beat.
20. The apparatus of claim 19, wherein the controller comprises means for accelerating the melody beat to match the song beat.
21. A computer program product comprising:
a computer-usable medium having computer-readable code for processing data in a musical instrument digital interface (MIDI) controller, the computer-usable medium comprising:
a selection module configured to select a musical accompaniment file to be processed by the MIDI controller;
an analysis module configured to convert a user's singing into an electrical signal representing the song beat; and
a control processing module configured to accelerate the melody beat of the musical accompaniment file to match the song beat.
CN97114542A 1997-07-11 1997-07-11 Interactive musical accompaniment method and equipment Expired - Lifetime CN1068948C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN97114542A CN1068948C (en) 1997-07-11 1997-07-11 Interactive musical accompaniment method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN97114542A CN1068948C (en) 1997-07-11 1997-07-11 Interactive musical accompaniment method and equipment

Publications (2)

Publication Number Publication Date
CN1205499A true CN1205499A (en) 1999-01-20
CN1068948C CN1068948C (en) 2001-07-25

Family

ID=5173008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN97114542A Expired - Lifetime CN1068948C (en) 1997-07-11 1997-07-11 Interactive musical accompaniment method and equipment

Country Status (1)

Country Link
CN (1) CN1068948C (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5521323A (en) * 1993-05-21 1996-05-28 Coda Music Technologies, Inc. Real-time performance score matching
CN1034890C (en) * 1994-07-07 1997-05-14 株式会社金泳 A rhythm control device of the supplementary rhythm instrument for a computer music player

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100354924C (en) * 2000-12-05 2007-12-12 娱乐技术有限公司 Method for analyzing music using sound information of instruments
CN100437662C (en) * 2001-10-20 2008-11-26 哈尔·C·索尔特 Interactive game providing instruction in musical notation and in learning an instrument
CN101567184A (en) * 2009-03-24 2009-10-28 广州酷狗计算机科技有限公司 Method for producing dynamic karaoke lyrics
CN101567184B (en) * 2009-03-24 2013-07-10 广州酷狗计算机科技有限公司 Method for producing dynamic karaoke lyrics
CN101609667B (en) * 2009-07-22 2012-09-05 福州瑞芯微电子有限公司 Method for realizing karaoke function in PMP player
CN102116672A (en) * 2009-12-31 2011-07-06 陈新伟 Rhythm sensing method, device and system
CN102456352A (en) * 2010-10-26 2012-05-16 深圳Tcl新技术有限公司 Background audio frequency processing device and method
CN103149853A (en) * 2013-03-07 2013-06-12 许倩 Noise converting device for factory and use method thereof
CN104581530A (en) * 2014-12-23 2015-04-29 福建星网视易信息系统有限公司 Circuit, device and system for reducing signal noise
CN104581530B (en) * 2014-12-23 2018-12-28 福建星网视易信息系统有限公司 It is a kind of to reduce the circuit of signal noise, device and system
CN105161081A (en) * 2015-08-06 2015-12-16 蔡雨声 APP humming composition system and method thereof
CN105161081B (en) * 2015-08-06 2019-06-04 蔡雨声 A kind of APP humming compositing system and its method
CN108682438A (en) * 2018-04-13 2018-10-19 广东小天才科技有限公司 A kind of method and electronic equipment based on recording substance adjustment accompaniment
CN110534078A (en) * 2019-07-30 2019-12-03 黑盒子科技(北京)有限公司 A kind of fine granularity music rhythm extracting system and method based on audio frequency characteristics
CN112669798A (en) * 2020-12-15 2021-04-16 深圳芒果未来教育科技有限公司 Accompanying method for actively following music signal and related equipment
CN112669798B (en) * 2020-12-15 2021-08-03 深圳芒果未来教育科技有限公司 Accompanying method for actively following music signal and related equipment

Also Published As

Publication number Publication date
CN1068948C (en) 2001-07-25


Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE; MORNING

Free format text: FORMER OWNER: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE

Effective date: 20081107

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20081107

Address after: Hsinchu County of Taiwan Province

Co-patentee after: MStar Semiconductor Co., Ltd.

Patentee after: Industrial Technology Research Institute (consortium juridical person)

Address before: Hsinchu County of Taiwan Province

Patentee before: Industrial Technology Research Institute

CX01 Expiry of patent term

Granted publication date: 20010725

CX01 Expiry of patent term