CN109410971B - Method and device for beautifying sound

Info

Publication number: CN109410971B
Application number: CN201811349639.5A
Authority: CN (China)
Prior art keywords: resonance, frequency, voice, sound, beautifying
Other versions: CN109410971A
Inventors: 单花连, 周峰
Assignee: Wuxi Binghe Computer Technology Development Co., Ltd.
Application filed 2018-11-13 by Wuxi Binghe Computer Technology Development Co., Ltd.
Legal status: Active (granted)

Classifications

    • G10L25/15 — Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being formant information
    • G10L21/003 — Processing of the speech or voice signal to modify its quality or intelligibility; changing voice quality, e.g. pitch or formants
    • G10L25/18 — Speech or voice analysis techniques characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
    • G10L25/45 — Speech or voice analysis techniques characterised by the type of analysis window

Abstract

The invention provides a method for beautifying sound, which comprises the following steps: acquiring a voice signal of a singer, and performing pre-emphasis, framing and windowing on the voice signal to divide it into short time frames; estimating a pitch frequency f0 and a first formant frequency f1 for each frame of the voice signal and forming the ratio f1/f0; calculating the average of the per-frame ratios f1/f0 over the current sentence sung by the singer; setting a threshold for judging the degree of resonance between the singer's voice and the original vocal, and comparing the average with the threshold, an average greater than the threshold indicating poor resonance; and, when poor resonance is detected, attenuating the frequency band at the first formant position and/or enhancing the frequency band at the fundamental frequency position so as to increase resonance and beautify the sound. Singing thus becomes easier: a pleasant singing voice can be produced without professional training, which improves the entertainment value of singing.

Description

Method and device for beautifying sound
Technical Field
The invention relates to the technical field of intelligent speech processing, and in particular to a method and a device for beautifying sound.
Background
Timbre refers to the auditory attribute of a sound that allows a listener to distinguish two sounds that are presented in the same way and have the same pitch and loudness. In singing, vocal timbre therefore refers to the voice characteristics by which listeners identify which singer is performing when different singers sing the same song.
The quality of resonance is an important index of a singer's timbre. A voice that lacks resonance during singing tends to sound thin, weak and colorless, whereas good vocal resonance greatly helps to expand the volume, improve the timbre and project the sound. At present, however, resonance is described mainly in terms of subjective impressions, such as whether the voice is penetrating, whether its color is bright, and whether it sounds crisp. These impressions differ from person to person and are difficult to quantify, so the quality of resonance cannot be judged objectively and accurately, and resonance beautification cannot be applied to a singer's voice to make it more attractive and enjoyable.
Disclosure of Invention
The invention aims to provide a method for beautifying sound which automatically identifies poorly sung passages during singing and applies resonance beautification to them, so that the voice of an ordinary singer sounds more graceful and expressive.
According to a first aspect of the present invention, there is provided a method of beautifying sound, the method comprising the steps of: acquiring a voice signal of a singer, and performing pre-emphasis, framing and windowing on the voice signal to divide it into short time frames; estimating a pitch frequency f0 and a first formant frequency f1 of each frame of the voice signal, and forming the ratio f1/f0; calculating the average of the per-frame ratios f1/f0 over the current sentence sung by the singer; setting a threshold for judging the degree of resonance between the singer's voice and the original vocal, and comparing the average with the threshold, an average greater than the threshold indicating poor resonance; and, when poor resonance is detected, attenuating the frequency band at the first formant position and/or enhancing the frequency band at the fundamental frequency position to increase resonance and beautify the sound.
Preferably, when poor resonance is detected, the frequency band at the first formant position is attenuated and/or the frequency band at the first overtone frequency position is enhanced to increase resonance and beautify the sound.
Preferably, the threshold for judging the degree of resonance between the singer's voice and the original vocal is 1.5-2, preferably 2.
Preferably, the digital filter used in the pre-emphasis process is H(z) = 1 − u·z⁻¹, where u is between 0.94 and 0.97.
Preferably, the window function used in the framing and windowing process is a rectangular window.
Preferably, the pitch frequency f0 is estimated by a short-time average magnitude difference method.
Preferably, the first formant frequency f1 is estimated by LPC interpolation.
According to another aspect of the present invention, there is also provided a device for beautifying sound, comprising: a preprocessing unit for acquiring a voice signal of a singer and performing pre-emphasis, framing and windowing on the voice signal to divide it into short time frames; an estimation unit for estimating a pitch frequency f0 and a first formant frequency f1 of each frame of the voice signal, forming the ratio f1/f0, and calculating the average of the per-frame ratios f1/f0 over the current sentence sung by the singer; a comparison unit for setting a threshold for judging the degree of resonance between the singer's voice and the original vocal and comparing the average with the threshold, an average greater than the threshold indicating poor resonance; and a resonance beautification unit for attenuating the frequency band at the first formant position and/or enhancing the frequency band at the fundamental frequency position to increase resonance and beautify the sound when poor resonance is detected.
Preferably, the resonance beautification unit is configured to attenuate the frequency band at the first formant position and/or enhance the frequency band at the first overtone position to increase resonance and beautify the sound when poor resonance is detected.
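For illustration only, the following is a minimal Python skeleton of how the four units described above might be organized in software. All class and method names are hypothetical and the processing bodies are placeholders; the actual processing follows the steps detailed in the embodiment below.

from dataclasses import dataclass

class PreprocessingUnit:
    def process(self, signal, sr):
        """Pre-emphasis, framing and rectangular windowing -> list of short time frames."""
        raise NotImplementedError

class EstimationUnit:
    def average_ratio(self, frames, sr):
        """Per-frame f0 and f1, ratio f1/f0, averaged over the current sentence."""
        raise NotImplementedError

@dataclass
class ComparisonUnit:
    threshold: float = 2.0          # degree-of-resonance threshold, e.g. 2
    def is_poor(self, average_ratio):
        return average_ratio > self.threshold

class BeautificationUnit:
    def beautify(self, signal, sr, f0, f1):
        """Attenuate the band at f1 and/or boost the band at f0 (or its first overtone)."""
        raise NotImplementedError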
According to another aspect of the present invention, there is also provided a machine-readable storage medium having stored thereon instructions for causing a machine to perform the method of beautifying sound described above.
By using the method and device for beautifying sound and the machine-readable storage medium described above, poorly sung passages can be identified automatically and resonance beautification can be applied to them automatically, so that singing becomes easier, a pleasant singing voice can be produced without professional training, and the entertainment value of singing is improved.
Drawings
The invention is described in more detail below on the basis of embodiments and with reference to the accompanying drawings, in which:
FIG. 1 is a grayscale plot of the formant distribution for good resonance;
FIG. 2 is a grayscale plot of the formant distribution for poor resonance;
FIG. 3 is a flowchart of a method for beautifying sound according to an embodiment of the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
Sound is produced by vibration. A vibrating object rarely vibrates at a single fixed period; when it produces sound it also undergoes many partial vibrations, generating waves of different frequencies (harmonics). Because these waves differ in phase (i.e. are offset from one another by short times), a listener cannot distinguish them individually; they mix together and are perceived as an overall timbre. The frequency with the largest relative amplitude in a sound determines its pitch, while the relative amplitude distribution of the remaining frequencies determines its timbre. What allows people to distinguish different sound sources is therefore the timbre inherent to each source rather than the frequency of the sound, since frequency determines only the pitch.
Resonance here means physical resonance: the condition in which a physical system vibrates with greater amplitude at certain frequencies than at others; those frequencies are called resonant frequencies or resonance peaks. A beautiful, expressive singing voice cannot be produced without resonance, because resonance amplifies the voice and makes it mellow, smooth and bright. Good vocal resonance contributes greatly to volume expansion, timbre improvement and sound projection.
By studying grayscale formant-distribution plots of good and poor resonance produced by several singers with professional vocal training, the inventors found that the quality of resonance can be judged from the intensity and distribution of the formants, and equally from the ratio of the first formant frequency to the fundamental frequency.
Table 1 shows the average ratio of the first formant frequency to the fundamental frequency for good and for poor resonance from 5 singers. The columns of the table correspond to the different singers, and the rows labeled 'good' and 'bad' indicate the resonance quality.
TABLE 1 ratio of first formant frequency to fundamental frequency
(Table 1 is reproduced as an image in the original publication; the numerical entries are not reproduced here.)
FIG. 1 shows a grayscale plot of the formant distribution when a singer's resonance is good. The horizontal axis represents time, the vertical axis represents frequency, and brighter pixels indicate stronger formant peaks. As can be seen, the formants of good resonance are distributed relatively uniformly, with a single strong formant at low frequency and no prominent formants at high frequency. When resonance is good, the first formant frequency lies near the fundamental frequency or the first overtone, i.e. the ratio of the first formant frequency to the fundamental frequency is less than 2.
FIG. 2 shows a grayscale plot of the formant distribution when a singer's resonance is poor. As can be seen, poor resonance exhibits other, more prominent formants in addition to the low-frequency formant. When resonance is poor, the first formant appears near higher overtones, i.e. the ratio of the first formant frequency to the fundamental frequency is greater than 2.
Fig. 3 is a flowchart of a method for beautifying sound according to an embodiment of the present invention, which specifically includes the following steps:
First, a preprocessing step: pre-emphasis is applied with the digital filter H(z) = 1 − u·z⁻¹, which boosts the high-frequency components of the signal and flattens its spectrum, facilitating the subsequent analysis and calculation. In the above formula, u is generally between 0.94 and 0.97. The signal is then framed and windowed, dividing it into short time frames. The window function is a rectangular window:
w(n) = 1 for 0 ≤ n ≤ N−1, and w(n) = 0 otherwise, where N is the frame length in samples.
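By way of illustration, this preprocessing step might be sketched in Python/NumPy as follows; the sampling rate, frame length and hop size are illustrative assumptions, not values prescribed by the patent.

import numpy as np

def pre_emphasis(x, u=0.95):
    """Apply H(z) = 1 - u*z^-1, i.e. y[n] = x[n] - u*x[n-1]."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    y[1:] = x[1:] - u * x[:-1]
    return y

def frame_signal(x, frame_len, hop_len):
    """Split x into overlapping short time frames; a rectangular window
    means no tapering is applied to each frame."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop_len)
    return np.stack([x[i * hop_len:i * hop_len + frame_len]
                     for i in range(n_frames)])

if __name__ == "__main__":
    sr = 16000                                   # assumed sampling rate
    t = np.arange(sr) / sr
    x = 0.5 * np.sin(2 * np.pi * 220 * t)        # toy stand-in for a voice signal
    frames = frame_signal(pre_emphasis(x, u=0.95),
                          frame_len=int(0.03 * sr),   # 30 ms frames
                          hop_len=int(0.01 * sr))     # 10 ms hop
    print(frames.shape)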
Second, an estimation step: the pitch frequency f0 of each frame of the speech signal is estimated by the short-time average magnitude difference method. For a frame of the signal, the short-time average magnitude difference function is defined as:
D(k) = Σ_{n=0}^{N−1−k} |x(n+k) − x(n)|, k = 0, 1, …, N−1,
where N is the number of sampling points in a frame and k is the lag. For a periodic voiced signal, the short-time average magnitude difference function is periodic with the same period as the signal: the spacing of its valleys gives the pitch period, and the reciprocal of that period is the corresponding pitch frequency f0. The first formant frequency f1 is estimated by LPC analysis with interpolation: linear prediction coefficients are computed for each frame of the signal, and the power spectral response of the vocal tract transfer function is obtained from the prediction coefficients. The peaks of this curve correspond to the formant frequencies; a more accurate formant frequency is then obtained by parabolic interpolation, and the first formant frequency is recorded as f1.
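As an illustration of this estimation step, the following Python/NumPy sketch estimates f0 from the deepest valley of the average magnitude difference function and f1 from the first peak of the LPC envelope (autocorrelation method with the Levinson-Durbin recursion) refined by parabolic interpolation. The LPC order, FFT size and pitch search range are illustrative assumptions, not values prescribed by the patent.

import numpy as np

def estimate_f0_amdf(frame, sr, f0_min=70.0, f0_max=500.0):
    """Pitch from the deepest AMDF valley within a plausible lag range."""
    n = len(frame)
    lags = np.arange(int(sr / f0_max), int(sr / f0_min) + 1)
    amdf = np.array([np.abs(frame[k:] - frame[:n - k]).sum() / (n - k)
                     for k in lags])
    return sr / lags[np.argmin(amdf)]

def lpc_coefficients(frame, order):
    """LPC coefficients [1, a1, ..., ap] via the autocorrelation method
    (Levinson-Durbin recursion)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    if err <= 0:                      # silent frame: return trivial predictor
        return a
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]   # a[j] += k * a_old[i-j]
        err *= (1.0 - k * k)
    return a

def estimate_f1_lpc(frame, sr, order=12, nfft=4096):
    """First formant: first local peak of the LPC envelope 1/|A(e^jw)|,
    refined by parabolic interpolation around the peak bin."""
    a = lpc_coefficients(np.asarray(frame, dtype=float), order)
    env = 1.0 / np.abs(np.fft.rfft(a, nfft))
    for i in range(2, len(env) - 1):
        if env[i] > env[i - 1] and env[i] >= env[i + 1]:
            alpha, beta, gamma = env[i - 1], env[i], env[i + 1]
            delta = 0.5 * (alpha - gamma) / (alpha - 2.0 * beta + gamma)
            return (i + delta) * sr / nfft
    return float("nan")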
Third, a comparison step: a threshold is set for judging the degree of resonance between the singer's voice and the original vocal, for example a threshold of 2. The average of the per-frame ratios f1/f0 over the current sentence is calculated and compared with the threshold. If the average is greater than 2, the resonance is judged to be poor; if it is less than 2, the resonance is good, and the closer the average is to 1, the better the resonance.
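A minimal sketch of this comparison step, assuming per-frame estimators for f0 and f1 such as those outlined above (the function names below are placeholders):

from typing import Callable, Sequence

def resonance_is_poor(frames: Sequence, sr: int,
                      estimate_f0: Callable, estimate_f1: Callable,
                      threshold: float = 2.0) -> bool:
    """Average the per-frame ratio f1/f0 over a sentence and compare it
    with the threshold (2 in the example above)."""
    ratios = []
    for frame in frames:
        f0 = estimate_f0(frame, sr)
        f1 = estimate_f1(frame, sr)
        if f0 and f1 and f0 > 0 and f1 > 0:   # skip unvoiced or failed frames
            ratios.append(f1 / f0)
    if not ratios:
        return False                          # nothing voiced to judge
    return sum(ratios) / len(ratios) > threshold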
Fourth, a resonance beautification step: when poor resonance is detected, the frequency band at the first formant position is attenuated and/or the frequency bands at the fundamental frequency and the first overtone positions are enhanced, so that the average ratio is reduced relative to the threshold, thereby increasing resonance and beautifying the sound.
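One possible realization of this beautification step, sketched with standard second-order peaking-EQ biquads (RBJ audio EQ cookbook): a cut at the first formant f1 and boosts at the fundamental f0 and its first overtone 2*f0. The gain and Q values are illustrative assumptions, not values prescribed by the patent.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(fc, fs, gain_db, q=2.0):
    """Second-order peaking EQ biquad (boost if gain_db > 0, cut if < 0)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * fc / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
    a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
    return b / a[0], a / a[0]

def beautify(x, fs, f0, f1, cut_db=-6.0, boost_db=4.0):
    """Weaken the band at the first formant and strengthen the bands at the
    fundamental frequency and its first overtone."""
    for fc, gain in ((f1, cut_db), (f0, boost_db), (2.0 * f0, boost_db)):
        if 0.0 < fc < fs / 2.0:               # skip bands outside the Nyquist range
            b, a = peaking_eq(fc, fs, gain)
            x = lfilter(b, a, x)
    return x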
According to yet another aspect of the present invention, there is also provided a machine-readable storage medium having stored thereon instructions for causing a machine to perform the method of beautifying sound described above.
By using the method and device for beautifying sound and the machine-readable storage medium described above, poorly sung passages can be identified automatically and resonance beautification can be applied to them automatically, so that singing becomes easier, a pleasant singing voice can be produced without professional training, and the entertainment value of singing is improved.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof, and any modification, equivalent replacement, or improvement made within the spirit and principle of the invention should be included in the protection scope of the invention.
Those skilled in the art will understand that all or part of the steps of the methods of the above embodiments may be implemented by a program that is stored in a storage medium and comprises several instructions for causing a single-chip microcomputer, a chip or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In addition, the various implementations of the embodiments of the present invention may be combined arbitrarily, and such combinations shall likewise be regarded as disclosed by the embodiments of the present invention as long as they do not depart from the spirit of the embodiments of the present invention.

Claims (10)

1. A method of beautifying sound, the method comprising the steps of: acquiring a voice signal of a singer, and performing pre-emphasis, framing and windowing on the voice signal to divide it into short time frames; estimating a pitch frequency f0 and a first formant frequency f1 of each frame of the voice signal, and forming the ratio f1/f0; calculating the average of the per-frame ratios f1/f0 over the current sentence sung by the singer; setting a threshold for judging the degree of resonance between the singer's voice and the original vocal, and comparing the average with the threshold, an average greater than the threshold indicating poor resonance; and, when poor resonance is detected, attenuating the frequency band at the first formant position and/or enhancing the frequency band at the fundamental frequency position to increase resonance and beautify the sound.
2. The method of beautifying sound according to claim 1, wherein, when poor resonance is detected, the frequency band at the first formant position is attenuated and/or the frequency band at the first overtone frequency position is enhanced to increase resonance and beautify the sound.
3. The method of beautifying sound according to claim 1, wherein the threshold for judging the degree of resonance between the singer's voice and the original vocal is 1.5-2.
4. The method of beautifying sound according to claim 1, wherein the digital filter used in the pre-emphasis process is H(z) = 1 − u·z⁻¹, where u is between 0.94 and 0.97.
5. The method of beautifying sound according to claim 1, wherein the window function used in the framing and windowing process is a rectangular window.
6. The method of beautifying sound according to claim 1, wherein the pitch frequency f0 is estimated by a short-time average magnitude difference method.
7. The method of beautifying sound according to claim 1, wherein the first formant frequency f1 is estimated by LPC interpolation.
8. An apparatus for beautifying sound, the apparatus comprising: a preprocessing unit for acquiring a voice signal of a singer and performing pre-emphasis, framing and windowing on the voice signal to divide it into short time frames; an estimation unit for estimating a pitch frequency f0 and a first formant frequency f1 of each frame of the voice signal, forming the ratio f1/f0, and calculating the average of the per-frame ratios f1/f0 over the current sentence sung by the singer; a comparison unit for setting a threshold for judging the degree of resonance between the singer's voice and the original vocal and comparing the average with the threshold, an average greater than the threshold indicating poor resonance; and a resonance beautification unit for attenuating the frequency band at the first formant position and/or enhancing the frequency band at the fundamental frequency position to increase resonance and beautify the sound when poor resonance is detected.
9. The apparatus for beautifying sound according to claim 8, wherein the resonance beautification unit is configured to attenuate the frequency band at the first formant position and/or enhance the frequency band at the first overtone position to increase resonance and beautify the sound when poor resonance is detected.
10. A machine-readable storage medium having stored thereon instructions for causing a machine to perform the method of beautifying sound according to any one of claims 1 to 7.
CN201811349639.5A 2018-11-13 2018-11-13 Method and device for beautifying sound Active CN109410971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811349639.5A CN109410971B (en) 2018-11-13 2018-11-13 Method and device for beautifying sound

Publications (2)

Publication Number Publication Date
CN109410971A CN109410971A (en) 2019-03-01
CN109410971B (en) 2021-08-31

Family

ID=65473266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811349639.5A Active CN109410971B (en) 2018-11-13 2018-11-13 Method and device for beautifying sound

Country Status (1)

Country Link
CN (1) CN109410971B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728993A (en) * 2019-10-29 2020-01-24 维沃移动通信有限公司 Voice change identification method and electronic equipment
CN112053697B (en) * 2020-07-24 2023-03-31 厦门快商通科技股份有限公司 Method and equipment for improving voice infection of customer service staff
CN113395577A (en) * 2020-09-10 2021-09-14 腾讯科技(深圳)有限公司 Sound changing playing method and device, storage medium and electronic equipment

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5895449A (en) * 1996-07-24 1999-04-20 Yamaha Corporation Singing sound-synthesizing apparatus and method
JP2005157363A (en) * 2003-11-21 2005-06-16 Samsung Electronics Co Ltd Method of and apparatus for enhancing dialog utilizing formant region
CN101650941A (en) * 2008-12-26 2010-02-17 中国科学院声学研究所 Objective evaluation method for singing tone clearness based on audio frequency spectrum characteristic analysis
CN103366759A (en) * 2012-03-29 2013-10-23 北京中传天籁数字技术有限公司 Speech data evaluation method and speech data evaluation device
CN102779527A (en) * 2012-08-07 2012-11-14 无锡成电科大科技发展有限公司 Speech enhancement method on basis of enhancement of formants of window function
CN104282316A (en) * 2013-07-01 2015-01-14 亿览在线网络技术(北京)有限公司 Karaoke scoring method based on voice matching, and device thereof
CN106024017A (en) * 2015-03-18 2016-10-12 展讯通信(上海)有限公司 Voice detection method and device
CN106024010A (en) * 2016-05-19 2016-10-12 渤海大学 Speech signal dynamic characteristic extraction method based on formant curves
CN105825852A (en) * 2016-05-23 2016-08-03 渤海大学 Oral English reading test scoring method
CN106409309A (en) * 2016-10-21 2017-02-15 深圳市音络科技有限公司 Tone quality enhancement method and microphone
CN107818796A (en) * 2017-11-16 2018-03-20 重庆师范大学 A kind of music exam assessment method and system

Also Published As

Publication number Publication date
CN109410971A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109410971B (en) Method and device for beautifying sound
US8073688B2 (en) Voice processing apparatus and program
JP2009230154A (en) Sound signal processing device and sound signal processing method
WO2001056021A1 (en) System and method for modifying speech signals
US20060130637A1 (en) Method for differentiated digital voice and music processing, noise filtering, creation of special effects and device for carrying out said method
Alku et al. Closed phase covariance analysis based on constrained linear prediction for glottal inverse filtering
CN111583894B (en) Method, device, terminal equipment and computer storage medium for correcting tone color in real time
JP4516157B2 (en) Speech analysis device, speech analysis / synthesis device, correction rule information generation device, speech analysis system, speech analysis method, correction rule information generation method, and program
CN109416911B (en) Speech synthesis device and speech synthesis method
CN105957515A (en) Voice Synthesis Method, Voice Synthesis Device, Medium for Storing Voice Synthesis Program
CN112669797B (en) Audio processing method, device, electronic equipment and storage medium
Jensen The timbre model
CN109360583B (en) Tone evaluation method and device
JP2002268658A (en) Device, method, and program for analyzing and synthesizing voice
CN105895079A (en) Voice data processing method and device
Rigaud et al. Drum extraction from polyphonic music based on a spectro-temporal model of percussive sounds
CN107507610B (en) Chinese tone recognition method based on vowel fundamental frequency information
JP5573529B2 (en) Voice processing apparatus and program
Jensen et al. Hybrid perception
US11495200B2 (en) Real-time speech to singing conversion
JP5262875B2 (en) Follow-up evaluation system, karaoke system and program
WO2020158891A1 (en) Sound signal synthesis method and neural network training method
Jensen Perceptual and physical aspects of musical sounds
JP2018077281A (en) Speech synthesis method
KR20230102973A (en) Methods and Apparatus for calculating song scores

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant