WO2005111997A1 - Audio playback device - Google Patents
Audio playback device
- Publication number
- WO2005111997A1 (PCT/JP2005/005149; JP2005005149W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound
- data
- tune
- vocal
- music
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/04—Sound-producing devices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/36—Accompaniment arrangements
- G10H1/361—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
- G10H1/366—Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/46—Volume control
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
Definitions
- The present invention relates to an audio playback device having a karaoke function.
- Patent Document 1 JP-A-11-175077
- Patent Document 2 JP-A-2000-47677
- The karaoke apparatus disclosed in Patent Document 1 prepares accompaniment sound data and model vocal singing data, mixes them, and reproduces the music through a speaker or the like; the user's singing voice input through a microphone is also mixed in and reproduced.
- The pitch of the model vocal singing data is compared with the pitch of the user's singing voice. If the pitch difference is within a predetermined range, the pitches are judged to match, and the volume of the model vocal singing data is lowered.
- If the pitch difference is outside the predetermined range, the pitches are judged not to match, and the volume of the model vocal singing data is raised. This makes the model vocal singing data and the accompaniment sound easier to hear, so that the user can practice singing in accordance with the pitch of the model vocal singing data.
- That is, Patent Document 1 prepares model vocal singing data: when the pitch difference from the model vocal singing data is large, the volume of the model vocal singing data is raised for practice, and when it is small, the volume is lowered. In other words, it supports singing practice by letting the user listen to the model vocal singing data as needed.
- The karaoke apparatus disclosed in Patent Document 2 prepares accompaniment sound data and model vocal sound data, and mixes the accompaniment sound, the vocal sound data, and the user's singing voice input through a microphone for reproduction through a speaker or the like.
- The level of the vocal sound data is compared with the microphone input level. If the user's singing voice is at a lower level than the vocal sound data, that passage is judged to be one the user does not know well, and the volume of the model vocal sound data is raised; if the user's singing voice is at a higher level than the vocal sound data, the user is judged able to sing that passage, and the volume of the user's singing voice is raised so that the user's voice and the accompaniment are heard. This makes singing practice easier.
- In Patent Document 1, the volume of the model vocal singing data relative to the user's singing voice is determined according to the pitch difference between the model vocal singing data and the user's singing voice.
- Because the volume is adjusted based on pitch alone, the apparatus also reacts to surrounding environmental sounds, conversation, and the like, and adjusts the volume automatically regardless of whether the user is actually singing (for example, raising the volume of the model vocal singing data).
- Likewise, in Patent Document 2, the volume of the vocal sound data relative to the user's singing voice is determined according to the levels of the model vocal sound data and the user's singing voice, so the same problem arises.
- The conventional karaoke apparatuses disclosed in Patent Document 1 and Patent Document 2 described above both prepare model vocal singing data and let the user listen to it as necessary.
- In contrast, an ordinary audio playback device reproduces music data from a storage medium such as a compact disc (CD) on which no separate model vocal singing data is recorded; only the vocal contained in the music data itself is available as the model.
- The present invention has been made in view of these conventional problems, and its object is to make it possible to accurately detect the user's singing voice and to enjoy karaoke even on an audio playback device that does not provide model vocal singing data.
- The invention according to claim 1 is an audio playback device provided with mixing means for mixing and outputting a sound pickup signal output from sound pickup means for picking up sound and a music signal output from sound source means.
- The device comprises first tune detection means for detecting the tune of the sound pickup signal output from the sound pickup means, and second tune detection means for detecting the tune of the vocal sound of the music signal output from the sound source means.
- It further comprises comparison means for judging the similarity between the tune characteristics detected by the first and second tune detection means, and vocal volume adjusting means for removing or attenuating the vocal sound of the music signal supplied from the sound source means to the mixing means when the comparison means judges that there is similarity.
- The invention according to claim 3 is an audio reproduction method in an audio playback device comprising mixing means for mixing and outputting a sound pickup signal output from sound pickup means for picking up sound and a music signal output from sound source means.
- The method comprises a first tune detection step of detecting the tune of the picked-up signal output from the sound pickup means, and a second tune detection step of detecting the tune of the vocal sound of the music signal output from the sound source means.
- FIG. 1 is a block diagram showing a configuration of an audio playback device according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating a configuration of an audio playback device according to an embodiment.
- FIG. 3 is a flowchart for explaining the operation of the audio playback device shown in FIG. 2.
- FIG. 1 is a block diagram illustrating a configuration of an audio playback device according to the present embodiment.
- The audio playback device 1 includes a microphone MIC and an input amplifier unit 2, which are provided as sound pickup means for picking up the user's singing voice and the like.
- The input amplifier unit 2 amplifies the sound pickup signal picked up by the microphone MIC and A/D-converts it, outputting sound pickup data Dau consisting of a digital data string.
- The sound source unit 3 outputs music data Dson composed of a digital data string. It is formed by an information playback device that reproduces and outputs music recorded on storage media such as an MD (MiniDisc), a CD (Compact Disc), or a DVD (Digital Versatile Disc), a broadcast receiver that receives and outputs radio or television broadcasts, or receiving means that outputs music distributed via a communication network such as the Internet.
- The tune detection unit 4 performs tune detection at predetermined intervals on the sound pickup data Dau output from the input amplifier unit 2, thereby extracting the tune characteristics of the user's singing voice.
- The tune detection unit 4 extracts six parameters representing the tune: “tonality (key)”, “amount of change of beat (BPM)”, “amount of change of chord (CPM)”, “maximum beat level”, “average intensity of musical tone”, and “maximum intensity of musical tone”. The feature quantity CHx consisting of these six parameters is supplied to the comparison unit 6.
- The tune detection unit 5 performs tune detection at predetermined intervals on the music data Dson output from the sound source unit 3, in synchronization with the tune detection unit 4, thereby extracting the tune characteristics of the singer's vocal sound.
- The tune detection unit 5 likewise extracts “tonality (key)”, “amount of change of beat (BPM)”, “amount of change of chord (CPM)”, “maximum beat level”, “average intensity of musical tone”, and “maximum intensity of musical tone” as parameters representing the tune, and supplies the feature quantity CHy consisting of these six parameters to the comparison unit 6.
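For illustration, the six-parameter extraction can be sketched with drastically simplified stand-ins (zero-crossing rate for tonality, envelope statistics for the beat and chord change amounts). These proxies are assumptions for illustration only; the patent's key, BPM, and CPM detectors are far more elaborate.

```python
def extract_features(frame, hops=32):
    """Toy stand-ins for the six tune parameters of a feature quantity (CHx or CHy).

    frame: list of float samples in [-1, 1] covering one detection period.
    """
    n = len(frame)
    abs_vals = [abs(s) for s in frame]
    avg_intensity = sum(abs_vals) / n                  # "average intensity of musical tone"
    max_intensity = max(abs_vals)                      # "maximum intensity of musical tone"
    # Zero-crossing rate: a crude proxy for tonality ("key")
    key_proxy = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / n
    # Coarse amplitude envelope, computed hop by hop
    hop = max(1, n // hops)
    env = [max(abs_vals[i:i + hop]) for i in range(0, n, hop)]
    beat_max = max(env)                                # "maximum beat level"
    diffs = [abs(a - b) for a, b in zip(env, env[1:])] or [0.0]
    beat_change = sum(diffs) / len(diffs)              # proxy for "change of beat (BPM)"
    chord_change = max(diffs)                          # proxy for "change of chord (CPM)"
    return (key_proxy, beat_change, chord_change, beat_max, avg_intensity, max_intensity)
```

Running the same extractor in synchronized periods over the pickup data and the vocal-band music data yields the CHx/CHy pairs that the comparison unit consumes.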
- The comparison unit 6 compares the feature quantities CHx and CHy, supplied in synchronization at the predetermined period from the tune detection units 4 and 5, parameter by parameter, and calculates a difference value for each parameter. If the difference value of every parameter is within the range of a predetermined reference value, it judges that the user's singing voice input through the microphone and the singer's vocal sound are similar and outputs the control signal CNT; if any difference value is out of range, it judges that they are not similar and does not output the control signal CNT.
- In other words, when the feature quantity CHx relating to the user's singing voice and the feature quantity CHy relating to the singer's vocal sound are supplied from the tune detection units 4 and 5, the comparison unit 6 outputs the control signal CNT if the two feature quantities are similar, and does not output it otherwise.
- When the user is not singing, the tune detection unit 4 performs tune detection on sound pickup data Dau that has no tune characteristics, so the resulting feature quantity CHx lacks tune properties. For this reason, even when this feature quantity CHx and the feature quantity CHy relating to the singer's vocal sound are supplied to the comparison unit 6, the comparison unit 6 does not output the control signal CNT.
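The per-parameter comparison described above can be sketched as follows (the reference values and function names are illustrative assumptions, not the patent's DSP implementation):

```python
def compare_features(chx, chy, reference_values):
    """Comparison-unit sketch: return True (i.e. output control signal CNT)
    only when the difference of every parameter is within its predetermined
    reference value."""
    if not (len(chx) == len(chy) == len(reference_values)):
        raise ValueError("feature quantities must have the same parameter count")
    return all(abs(x - y) <= ref for x, y, ref in zip(chx, chy, reference_values))
```

When the user is silent, CHx carries no tune characteristics, the differences exceed the reference values, and no CNT is emitted.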
- The vocal volume adjustment unit 7 removes the singer's vocal sound data contained in the music data Dson, or attenuates its value, during the period in which the control signal CNT is supplied, and outputs the result.
- That is, while the feature quantity CHx relating to the user's singing voice and the feature quantity CHy relating to the singer's vocal sound, output from the tune detection units 4 and 5, are judged similar by the comparison unit 6, the vocal volume adjustment unit 7 removes or attenuates the singer's vocal sound data only during the output period of the control signal CNT, generating and outputting accompaniment music data Dc; during periods in which the control signal CNT is not output, the music data Dson is passed through unchanged as the music data Dc.
- The mixing unit 8 mixes the sound pickup data Dau from the input amplifier unit 2 with the music data Dc from the vocal volume adjustment unit 7, generating and outputting music playback data Dout for reproduction through a speaker or the like.
- During periods in which the control signal CNT is not output, the mixing unit 8 outputs, as the music playback data Dout, the music data Dc without mixing in the sound pickup data Dau from the input amplifier unit 2 (that is, the music data Dson unchanged).
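The gating behavior of the mixing stage can be sketched as follows (sample-level float mixing with clipping is an assumption; the patent only specifies mixing of digital data strings):

```python
def mix_stage(dau, dc, cnt_active):
    """Mixing-unit sketch: while CNT is active, mix pickup data Dau with the
    accompaniment data Dc; otherwise pass Dc (= Dson unchanged) through."""
    if not cnt_active:
        return list(dc)
    # Sum the two streams and clip to the nominal [-1, 1] sample range
    return [max(-1.0, min(1.0, a + c)) for a, c in zip(dau, dc)]
```

With CNT inactive the user's pickup never reaches the output, so conversation or noise is not reproduced over the music.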
- As described above, the similarity between each parameter of the feature quantity CHx representing the tune of the sound picked up by the microphone MIC (hereinafter simply “feature quantity CHx”) and each parameter of the feature quantity CHy representing the tune of the vocal sound in the music data Dson output from the sound source unit 3 (hereinafter simply “feature quantity CHy”) is compared. When the feature quantities CHx and CHy are similar, the vocal sound is removed or attenuated and the picked-up sound is reproduced; when they are not similar, the picked-up sound is not reproduced and the singer's vocal sound is reproduced as-is. The singing voice uttered by the user can therefore be detected accurately without being affected by conversation or surrounding environmental sounds.
- When the picked-up sound is conversation or environmental noise, the feature quantity CHx and the feature quantity CHy are not similar, so the apparatus judges that the picked-up sound is not the user's singing voice and does not output the control signal CNT. As a result, the singing voice actually uttered by the user can be detected accurately.
- Moreover, when the user sings, the vocal sound is removed or attenuated and the picked-up sound is reproduced, so the user can sing without being obstructed by the singer's vocal. In other words, karaoke can be enjoyed not only with a karaoke device that prepares model vocal singing data but also with an ordinary audio device.
- FIG. 2 is a block diagram showing the configuration of the audio reproducing apparatus according to the present embodiment, and the same or corresponding parts as in FIG. 1 are denoted by the same reference numerals.
- FIG. 3 is a flowchart for explaining the operation of the audio playback device of the present embodiment.
- The audio playback device 1 comprises a microphone MIC, an input amplifier unit 2, a sound source unit 3, a band-pass filter 9 provided on the input amplifier unit 2 side, a band-pass filter 10 provided on the sound source unit 3 side, tune detection units 4 and 5, a comparison unit 6, a vocal volume adjustment unit 7, and a mixing unit 8.
- The tune detection units 4 and 5 are configured as computer programs and are formed by a digital signal processor (DSP) that operates according to those programs.
- The band-pass filter 9 digitally processes the sound pickup data Dau, composed of the digital data string output from the input amplifier unit 2, to extract voice data Dvce corresponding to the frequency band of the human voice, and supplies it to the tune detection unit 4.
- The band-pass filter 10 performs digital arithmetic processing on the music data Dson output from the sound source unit 3 to extract vocal sound data Dvoc corresponding to the frequency band of the human voice, and supplies it to the tune detection unit 5.
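Band-limiting to the human-voice band can be illustrated with two cascaded one-pole filters. The 80 Hz-4 kHz cutoffs and the filter order are assumptions for illustration; the patent only specifies digital arithmetic processing that extracts the voice-band components.

```python
import math

def voice_band(samples, fs=44100, lo=80.0, hi=4000.0):
    """Crude band-pass: one-pole high-pass at `lo`, then one-pole low-pass at `hi`."""
    def lowpass(xs, fc):
        rc, dt = 1.0 / (2 * math.pi * fc), 1.0 / fs
        a = dt / (rc + dt)
        y, out = 0.0, []
        for x in xs:
            y += a * (x - y)        # smooths out fast changes
            out.append(y)
        return out

    def highpass(xs, fc):
        rc, dt = 1.0 / (2 * math.pi * fc), 1.0 / fs
        a = rc / (rc + dt)
        y, prev, out = 0.0, 0.0, []
        for x in xs:
            y = a * (y + x - prev)  # blocks DC, passes fast changes
            prev = x
            out.append(y)
        return out

    return lowpass(highpass(samples, lo), hi)
```

A production implementation would use a proper higher-order design, but the pass/reject behavior is the same in principle.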
- The tune detection unit 4 comprises a key detection unit 4a, a beat change amount detection unit 4b, a chord change amount detection unit 4c, a beat maximum value detection unit 4d, an average intensity detection unit 4e, and a maximum intensity detection unit 4f.
- The key detection unit 4a, beat change amount detection unit 4b, chord change amount detection unit 4c, beat maximum value detection unit 4d, average intensity detection unit 4e, and maximum intensity detection unit 4f perform tune detection on the voice data Dvce over each predetermined period, generating feature data Dx1 representing the “tonality (key)”, feature data Dx2 representing the “amount of change of beat (BPM)”, feature data Dx3 representing the “amount of change of chord (CPM)”, feature data Dx4 representing the “maximum beat level”, feature data Dx5 representing the “average intensity of musical tone”, and feature data Dx6 representing the “maximum intensity of musical tone”. These six types of feature data Dx1-Dx6 constitute the feature quantity CHx, which is supplied to the comparison unit 6.
- The tune detection unit 5, like the tune detection unit 4, comprises a key detection unit 5a, a beat change amount detection unit 5b, a chord change amount detection unit 5c, a beat maximum value detection unit 5d, an average intensity detection unit 5e, and a maximum intensity detection unit 5f.
- The key detection unit 5a, beat change amount detection unit 5b, chord change amount detection unit 5c, beat maximum value detection unit 5d, average intensity detection unit 5e, and maximum intensity detection unit 5f operate in synchronization with the corresponding detection units 4a-4f on the tune detection unit 4 side, performing tune detection on the vocal sound data Dvoc over each predetermined period. They generate feature data Dy1 representing the “tonality (key)”, feature data Dy2 representing the “amount of change of beat (BPM)”, feature data Dy3 representing the “amount of change of chord (CPM)”, feature data Dy4 representing the “maximum beat level”, feature data Dy5 representing the “average intensity of musical tone”, and feature data Dy6 representing the “maximum intensity of musical tone”, and supply these six types of feature data Dy1-Dy6 to the comparison unit 6 as the feature quantity CHy.
- The comparison unit 6 compares the feature quantities CHx and CHy, supplied from the tune detection units 4 and 5 in synchronization with the predetermined cycle, parameter by parameter, and calculates a difference value for each parameter.
- That is, the comparison unit 6 calculates the difference between feature data Dx1 and Dy1, between Dx2 and Dy2, between Dx3 and Dy3, between Dx4 and Dy4, between Dx5 and Dy5, and between Dx6 and Dy6.
- If the difference value of every parameter is within the range of the predetermined reference value, it is judged that the user's singing voice input through the microphone and the singer's vocal sound are similar, and the control signal CNT is output; if any difference value is out of range, they are judged not similar and the control signal CNT is not output.
- In this way, the comparison unit 6 compares the feature quantity CHx relating to the singing voice uttered by the user with the feature quantity CHy relating to the singer's vocal sound, outputs the control signal CNT when they are similar, and otherwise does not output it.
- The vocal volume adjustment unit 7 comprises a band-pass filter 7a, a voice analysis/synthesis unit 7b, a low-pass filter 7c, and a subtractor 7d.
- The band-pass filter 7a performs digital arithmetic processing on the music data Dson output from the sound source unit 3 to extract the vocal sound data Dvoc corresponding to the frequency band of the human voice, and supplies it to the voice analysis/synthesis unit 7b.
- The voice analysis/synthesis unit 7b has an adaptive digital filter for voice analysis that approximates the inverse characteristic of the transfer function of the human vocal tract, and a digital filter for voice synthesis that approximates the transfer function of the human vocal tract.
- The adaptive digital filter for voice analysis analyzes the vocal sound data Dvoc, and the tap coefficients of the digital filter for voice synthesis are adaptively adjusted based on the analysis result, so that impulse response train data hvoc equivalent to a pseudo vocal sound is output from the digital filter for voice synthesis.
- The low-pass filter 7c removes high-frequency noise components from the impulse response train data hvoc and supplies the result to the subtractor 7d.
- During the period in which the control signal CNT is output, the subtractor 7d subtracts the impulse response train data hvoc, corresponding to the pseudo vocal sound supplied via the low-pass filter 7c, from the music data Dson. The data relating to the vocal sound contained in the music data Dson is thereby removed or attenuated, and the music data Dc after the subtraction is supplied to the mixing unit 8.
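The subtraction performed by the subtractor 7d can be sketched as follows. The pseudo-vocal signal would come from the analysis/synthesis filter described above; here it is simply an input, and the attenuation gain parameter is an illustrative assumption.

```python
def subtract_vocal(dson, hvoc, cnt_active, gain=1.0):
    """Subtractor-7d sketch: while CNT is asserted, subtract the pseudo vocal
    hvoc from the music data Dson (gain=1.0 removes it, gain<1.0 attenuates);
    otherwise pass Dson through unchanged as Dc."""
    if not cnt_active:
        return list(dson)
    return [s - gain * v for s, v in zip(dson, hvoc)]
```

If the pseudo vocal tracks the recorded vocal well, what remains is approximately the accompaniment.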
- The mixing unit 8 mixes the sound pickup data Dau from the input amplifier unit 2 with the music data Dc from the subtractor 7d, generating and outputting music playback data Dout for reproduction through a speaker or the like.
- When the singer's vocal sound data is not being removed or attenuated by the vocal volume adjustment unit 7, that is, during periods in which the control signal CNT is not output from the comparison unit 6, the mixing unit 8 outputs, as the music playback data Dout, the music data Dc without mixing in the sound pickup data Dau from the input amplifier unit 2 (that is, the music data Dson unchanged).
- The user loads a storage medium such as an MD, CD, or DVD into the information playback device serving as the sound source unit 3 and starts playback with, for example, the karaoke function turned on; or turns on the broadcast receiver to start reception of a radio broadcast or the like; or receives music distributed via a communication network such as the Internet and starts playback.
- When the karaoke function is turned on, the audio playback device 1 of the present embodiment is activated to start the karaoke operation, and the sound source unit 3 starts playback in step ST1.
- In steps ST2 and ST3, the tune detection units 4 and 5 perform parallel processing in synchronization with each other, detecting the feature quantity CHx from the sound pickup data Dau and the feature quantity CHy from the music data Dson.
- In step ST4, the comparison unit 6 compares the feature quantities CHx and CHy to judge similarity. If it judges that there is similarity, it performs the processing of step ST5 and then proceeds to step ST6; if it judges that there is no similarity (including the case where there is no sound pickup data), it proceeds directly to step ST6 without performing step ST5.
- In step ST5, the vocal volume adjustment unit 7 generates the impulse response train data hvoc corresponding to the pseudo vocal sound based on the vocal sound data Dvoc contained in the music data Dson, and subtracts hvoc from the music data Dson, thereby removing or attenuating the data relating to the vocal sound and generating the accompaniment music data Dc. The mixing unit 8 then mixes the accompaniment music data Dc with the sound pickup data Dau to generate and output the music playback data Dout.
- When the comparison unit 6 judges in step ST4 that there is no similarity, the vocal volume adjustment unit 7 does not perform processing such as subtracting the impulse response train data hvoc from the music data Dson, and the mixing unit 8 does not mix the music data Dson output from the sound source unit 3 with the sound pickup data Dau output from the input amplifier unit 2; the music data Dson is output unchanged as the music playback data Dout.
- In step ST6, the karaoke operation is continued or ended depending on whether the karaoke function of the sound source unit 3 has been turned off. If it has not been turned off, the process returns to steps ST2 and ST3 and repeats; if it has been turned off, the karaoke operation ends.
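The control flow of steps ST1-ST6 can be sketched as a per-frame loop. The single-number "feature" and the threshold below are drastic simplifications of the six-parameter comparison, and all helper names are hypothetical.

```python
def karaoke_loop(frames, threshold=0.1):
    """Flowchart sketch: per detection period, extract features (ST2/ST3),
    compare them (ST4), subtract the vocal and mix on similarity (ST5),
    otherwise pass the music data through; the loop itself is the ST6 repeat.

    frames: iterable of (dau, dson, dvoc) sample lists per period
            (pickup data, music data, estimated vocal in the music).
    """
    def feat(x):  # toy single-parameter feature: average magnitude
        return sum(abs(s) for s in x) / len(x)

    dout = []
    for dau, dson, dvoc in frames:
        if abs(feat(dau) - feat(dvoc)) <= threshold:          # ST4: similar?
            dc = [m - v for m, v in zip(dson, dvoc)]          # ST5: remove vocal
            dout.append([p + c for p, c in zip(dau, dc)])     # mix Dau + Dc
        else:
            dout.append(list(dson))       # no mixing: Dson passes through
    return dout
```

A silent or noisy pickup frame fails the similarity test and the music is reproduced untouched, mirroring the behavior described in the flowchart.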
- As described above, the feature quantity CHx representing the tune of the sound picked up by the microphone MIC is compared with the feature quantity CHy of the vocal sound of the music data from the sound source unit 3, and only when they are similar is the vocal sound of the music data removed or attenuated and the picked-up sound reproduced. The singing voice uttered by the user can therefore be detected with high precision without being affected by conversation or surrounding environmental sounds, and karaoke can be enjoyed not only with a karaoke device that prepares model vocal singing data but also with an ordinary audio device.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Reverberation, Karaoke And Other Acoustics (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006513500A JPWO2005111997A1 (ja) | 2004-05-14 | 2005-03-22 | オーディオ再生装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-145047 | 2004-05-14 | ||
JP2004145047 | 2004-05-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005111997A1 true WO2005111997A1 (ja) | 2005-11-24 |
Family
ID=35394380
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/005149 WO2005111997A1 (ja) | 2004-05-14 | 2005-03-22 | オーディオ再生装置 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2005111997A1 (ja) |
WO (1) | WO2005111997A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007256879A (ja) * | 2006-03-27 | 2007-10-04 | Yamaha Corp | オーディオデータ再生装置および携帯端末装置 |
JP2008145777A (ja) * | 2006-12-11 | 2008-06-26 | Yamaha Corp | 楽音信号発生装置及びカラオケ装置 |
JP2015132724A (ja) * | 2014-01-14 | 2015-07-23 | ヤマハ株式会社 | 録音方法 |
US9953545B2 (en) | 2014-01-10 | 2018-04-24 | Yamaha Corporation | Musical-performance-information transmission method and musical-performance-information transmission system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04238384A (ja) * | 1991-01-22 | 1992-08-26 | Brother Ind Ltd | 練習機能付き電子音楽再生装置 |
JPH06295193A (ja) * | 1993-04-09 | 1994-10-21 | Matsushita Electric Ind Co Ltd | 再生装置 |
JPH0720861A (ja) * | 1993-06-30 | 1995-01-24 | Casio Comput Co Ltd | 自動演奏装置 |
JPH08115091A (ja) * | 1994-10-14 | 1996-05-07 | Sharp Corp | 歌唱評価機能付き音響機器 |
JPH08146979A (ja) * | 1994-11-25 | 1996-06-07 | Matsushita Electric Ind Co Ltd | 音声制御装置 |
JP2004102148A (ja) * | 2002-09-12 | 2004-04-02 | Taito Corp | リズム感採点機能を有するカラオケ採点装置 |
2005
- 2005-03-22 JP JP2006513500A patent/JPWO2005111997A1/ja active Pending
- 2005-03-22 WO PCT/JP2005/005149 patent/WO2005111997A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04238384A (ja) * | 1991-01-22 | 1992-08-26 | Brother Ind Ltd | 練習機能付き電子音楽再生装置 |
JPH06295193A (ja) * | 1993-04-09 | 1994-10-21 | Matsushita Electric Ind Co Ltd | 再生装置 |
JPH0720861A (ja) * | 1993-06-30 | 1995-01-24 | Casio Comput Co Ltd | 自動演奏装置 |
JPH08115091A (ja) * | 1994-10-14 | 1996-05-07 | Sharp Corp | 歌唱評価機能付き音響機器 |
JPH08146979A (ja) * | 1994-11-25 | 1996-06-07 | Matsushita Electric Ind Co Ltd | 音声制御装置 |
JP2004102148A (ja) * | 2002-09-12 | 2004-04-02 | Taito Corp | リズム感採点機能を有するカラオケ採点装置 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007256879A (ja) * | 2006-03-27 | 2007-10-04 | Yamaha Corp | オーディオデータ再生装置および携帯端末装置 |
JP4591391B2 (ja) * | 2006-03-27 | 2010-12-01 | ヤマハ株式会社 | オーディオデータ再生装置および携帯端末装置 |
JP2008145777A (ja) * | 2006-12-11 | 2008-06-26 | Yamaha Corp | 楽音信号発生装置及びカラオケ装置 |
US9953545B2 (en) | 2014-01-10 | 2018-04-24 | Yamaha Corporation | Musical-performance-information transmission method and musical-performance-information transmission system |
JP2015132724A (ja) * | 2014-01-14 | 2015-07-23 | ヤマハ株式会社 | 録音方法 |
US9959853B2 (en) | 2014-01-14 | 2018-05-01 | Yamaha Corporation | Recording method and recording device that uses multiple waveform signal sources to record a musical instrument |
Also Published As
Publication number | Publication date |
---|---|
JPWO2005111997A1 (ja) | 2008-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9224375B1 (en) | Musical modification effects | |
US5889223A (en) | Karaoke apparatus converting gender of singing voice to match octave of song | |
US7974838B1 (en) | System and method for pitch adjusting vocals | |
CN102790932B (zh) | 区分信号信息内容和控制信号处理功能的音频系统和方法 | |
US5811708A (en) | Karaoke apparatus with tuning sub vocal aside main vocal | |
US8796527B2 (en) | Tone reproduction apparatus and method | |
US20120294459A1 (en) | Audio System and Method of Using Adaptive Intelligence to Distinguish Information Content of Audio Signals in Consumer Audio and Control Signal Processing Function | |
TW594602B (en) | Device and method for learning and evaluating accompanied singing | |
US5966687A (en) | Vocal pitch corrector | |
US7851688B2 (en) | Portable sound processing device | |
US9148104B2 (en) | Reproduction apparatus, reproduction method, provision apparatus, and reproduction system | |
CN107195288A (zh) | 一种助唱方法及系统 | |
JP2002215195A (ja) | 音楽信号処理装置 | |
CN101448186A (zh) | 扬声器音效自动调整系统及方法 | |
JP4060993B2 (ja) | オーディオ情報記憶制御方法及び装置並びにオーディオ情報出力装置。 | |
WO2005111997A1 (ja) | オーディオ再生装置 | |
US5684262A (en) | Pitch-modified microphone and audio reproducing apparatus | |
JP2005530213A (ja) | 音声信号処理装置 | |
GB2431282A (en) | Automatic playing and recording apparatus for accoustic/electric guitar | |
US20050100170A1 (en) | Method of recording and playing compact disk quality sound signals for a doorbell system, and a receiver embodying such method | |
US20230057082A1 (en) | Electronic device, method and computer program | |
JP2005037845A (ja) | 音楽再生装置 | |
US7495166B2 (en) | Sound processing apparatus, sound processing method, sound processing program and recording medium which records sound processing program | |
JP3554649B2 (ja) | 音声処理装置とその音量レベル調整方法 | |
JP2005107088A (ja) | 歌唱音声評価装置、カラオケ採点装置及びそのプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006513500 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase |