WO2018124590A1 - 공진기를 이용한 화자 인식 방법 및 장치 - Google Patents

공진기를 이용한 화자 인식 방법 및 장치 Download PDF

Info

Publication number
WO2018124590A1
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
difference
band
magnitude
resonance
Prior art date
Application number
PCT/KR2017/015020
Other languages
English (en)
French (fr)
Korean (ko)
Inventor
김재흥
강성찬
박상하
윤용섭
이충호
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자 주식회사 filed Critical 삼성전자 주식회사
Priority to JP2019534648A priority Critical patent/JP7048619B2/ja
Priority to US16/474,465 priority patent/US11341973B2/en
Priority to EP17888519.0A priority patent/EP3598086B1/en
Priority to CN201780080753.XA priority patent/CN110121633B/zh
Priority to KR1020197013600A priority patent/KR102520858B1/ko
Publication of WO2018124590A1 publication Critical patent/WO2018124590A1/ko
Priority to US17/741,087 priority patent/US11887606B2/en

Links

Images

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/06: Decision making techniques; Pattern matching strategies
    • G10L17/14: Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01H: MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H13/00: Measuring resonant frequency
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/02: Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/04: Training, enrolment or model building
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification techniques
    • G10L17/20: Pattern transformations or operations aimed at increasing system robustness, e.g. against channel noise or different working conditions

Definitions

  • the present disclosure relates to a speaker recognition method and apparatus using a resonator.
  • a spectrum analyzer that analyzes the spectrum of sound or vibration can be used in a variety of devices. For example, it may be employed in a computer, a car, a mobile phone, or a home appliance for speech recognition, speaker recognition, and situation recognition related to sound or vibration, or mounted in buildings and various home appliances to analyze vibration information.
  • sensors such as mechanical resonators, or electrical analog or digital filters, may be used to filter signals of a specific frequency band. A Fourier transform or the like can be performed using the signals obtained from these sensors.
  • the present disclosure provides a speaker recognition method using a resonator.
  • the present disclosure provides a device capable of speaker recognition including a resonator.
  • a speaker recognition method includes: receiving electrical signals corresponding to a speaker's voice from at least some of a plurality of resonators having different resonance bands; calculating a magnitude difference of the resonance bands using the electrical signals; and recognizing the speaker using the magnitude difference of the resonance bands.
  • the magnitude difference of the resonance bands may be the difference in magnitude between the electrical signals output from two resonators whose resonance frequencies neighbor each other.
  • recognizing the speaker may include generating a bitmap of a band slope by encoding the magnitude difference of the resonance bands, and recognizing the speaker using the bitmap of the band slope.
  • the encoding may convert the magnitude difference of the resonance bands into any one of an odd number (three or more) of values.
  • the odd number of values may be symmetric about one central value, the remaining values forming pairs of equal absolute value and opposite sign.
  • for example, the values may include a, 0, and -a (where a is a constant).
  • recognizing the speaker may include generating a speaker model using the bitmap of the band slope, and registering the speaker model as an authentication template.
  • recognizing the speaker may include generating a speaker feature value using the bitmap of the band slope, and comparing the speaker feature value with a registered authentication template to determine whether the speaker is a registered speaker.
  • recognizing the speaker may include determining a vowel in the speaker's voice using the magnitude difference of the resonance bands.
  • determining the vowel may include estimating the relative positions of formants using the magnitude difference of the resonance bands, and determining the vowel from the relative positions of the formants.
  • the number of the formants may be three.
  • the magnitude difference of the resonance bands may be determined from the magnitudes of electrical signals received from four resonators of the resonator sensor.
  • recognizing the speaker may include assigning a weight to the determined vowel; generating a bitmap of the band slope using a magnitude difference of the resonance bands different from the one used to determine the vowel; generating a speaker feature value using the bitmap of the band slope; and determining whether the speaker is a registered speaker by comparing the speaker feature value with an authentication template using the weight.
  • the weight of the determined vowel may be set higher than the weights of the other vowels.
  • for example, a weight of 1 may be assigned to the determined vowel and 0 to the other vowels.
  • the number of magnitude differences of the resonance bands used to generate the bitmap of the band slope may be greater than the number used to determine the vowel.
  • a speaker recognition apparatus includes a resonator sensor comprising a plurality of resonators having different resonance bands, at least some of which output electrical signals corresponding to the voice of a speaker; and a processor configured to calculate a magnitude difference of the resonance bands using the electrical signals and to recognize the speaker using the magnitude difference of the resonance bands.
  • the magnitude difference of the resonance bands may be the difference in magnitude between the electrical signals output from two resonators whose resonance frequencies neighbor each other.
  • the processor may generate a bitmap of a band slope by encoding the magnitude difference of the resonance bands, and recognize the speaker using the bitmap of the band slope.
  • the processor may encode the magnitude difference of the resonance bands by converting it into any one of an odd number (three or more) of values.
  • the processor may determine whether the speaker is a registered speaker by comparing a speaker feature value determined using the bitmap of the band slope with a registered authentication template.
  • the processor may determine a vowel in the speaker's voice using the magnitude difference of the resonance bands.
  • the processor may estimate the relative positions of formants using the magnitude difference of the resonance bands, and determine the vowel from the relative positions of the formants.
  • the magnitude difference of the resonance bands may be determined from the magnitudes of the electrical signals received from four resonators of the resonator sensor.
  • the processor may assign a weight to the determined vowel, generate a speaker feature value using a magnitude difference of the resonance bands different from the one used to determine the vowel, and recognize the speaker by comparing the speaker feature value with the authentication template using the weight.
  • the number of magnitude differences of the resonance bands used to generate the bitmap of the band slope may be greater than the number used to determine the vowel.
  • a speaker recognition method includes receiving signals of frequency bands corresponding to a speaker's voice; calculating a magnitude difference of the signals; determining a vowel in the speaker's voice using the magnitude difference; and determining whether the speaker is a registered speaker using the determined vowel.
  • determining the vowel may include estimating the relative positions of formants using the magnitude difference, and determining the vowel from the relative positions of the formants.
  • the signals of the frequency bands may be received from a plurality of resonators having different resonance bands.
  • determining whether the speaker is a registered speaker may include assigning a weight to the determined vowel; generating a feature value of the speaker corresponding to the speaker's voice; and comparing the speaker's feature value with an authentication template using the weight.
  • assigning the weight may include setting the weight of the determined vowel higher than the weights of the other vowels.
  • for example, a weight of 1 may be assigned to the determined vowel and 0 to the other vowels.
  • long speech is not required for speaker recognition, and accurate speaker recognition is possible even with a relatively short input signal.
  • the efficiency of speaker recognition can be improved.
  • the resonator sensor does not require a Fourier transform, preserves the information of each frequency band, and can improve time resolution.
  • the effect of common noise can be eliminated.
  • FIG. 1 is a plan view illustrating a schematic structure of a resonator sensor including a plurality of resonators according to an exemplary embodiment.
  • FIG. 2 is a cross-sectional view of the resonator cut based on L1-L2 according to the exemplary embodiment shown in FIG. 1.
  • FIG. 3 is a block diagram schematically illustrating a speaker recognition apparatus including a resonator according to an exemplary embodiment.
  • FIG. 4 is a diagram illustrating a speaker recognition method using a resonator according to an exemplary embodiment.
  • FIG. 5 is an example of a graph showing voices having different resonance bands.
  • FIG. 6 is a diagram illustrating an example of generating a bitmap of a band slope by using a difference in magnitude of a resonance band according to an exemplary embodiment.
  • FIG. 7 is a graph illustrating an equation for encoding a difference in magnitudes of resonance bands, according to an exemplary embodiment.
  • FIG. 8 is a diagram illustrating a bit map of a two-dimensional band slope over time according to an exemplary embodiment.
  • FIG. 9 is a spectrum showing the resonance bands of the vowel [AH] pronunciation.
  • FIG. 10 is a spectrum showing the resonance bands of the vowel [EE] pronunciation.
  • FIGS. 11 and 12 are graphs illustrating estimating positions of formants using resonators spaced apart from each other in relation to vowel determination according to an exemplary embodiment.
  • FIG. 13 is a reference diagram showing the positions of formants of vowels according to an exemplary embodiment.
  • FIG. 14 is a flowchart illustrating a method of recognizing a speaker using vowels and a bitmap of band slopes.
  • FIG. 15 is a reference diagram for explaining a comparison between a speaker feature value and an authentication template during short speech.
  • FIGS. 16 and 17 are diagrams illustrating examples in which center frequencies of a plurality of resonators of a resonator sensor according to an exemplary embodiment are set at equal intervals.
  • FIGS. 18 and 19 are diagrams illustrating examples in which center frequencies of a plurality of resonators of a resonator sensor are set at equal-difference intervals, according to an exemplary embodiment.
  • FIGS. 20 and 21 are diagrams illustrating examples in which center frequencies of a plurality of resonators of a resonator sensor are set at random intervals according to an exemplary embodiment.
  • FIG. 22 is a plan view illustrating a schematic structure of a resonator sensor including a plurality of resonators according to an exemplary embodiment.
  • FIGS. 23 to 25 are graphs illustrating examples of variously changing bandwidths of a plurality of resonators of a resonator sensor according to an exemplary embodiment.
  • FIG. 26 is a graph illustrating a wider bandwidth of a specific resonator among a plurality of resonators of a resonator sensor according to an exemplary embodiment.
  • FIG. 1 is a plan view illustrating a schematic structure of a resonator sensor including a plurality of resonators according to an exemplary embodiment.
  • the resonator sensor 100 of FIG. 1 can be used as a spectrum analyzer for analyzing the spectrum of sound or vibration.
  • the resonator sensor 100 may include a plurality of resonators having different resonance bands, for example, a first resonator R1, a second resonator R2, and an n-th resonator Rn.
  • the number of unit resonators included in the resonator sensor 100 may be two or more; the number may be chosen as needed and is not particularly limited.
  • the resonators R1, R2... Rn may be formed to have a length of about several mm or less, and may be manufactured by, for example, a MEMS process. Each resonator resonates only for a frequency of a specific band, and the resonant frequency band is called a resonant band.
  • FIG. 2 is a cross-sectional view of the resonator cut based on L1-L2 according to the exemplary embodiment shown in FIG. 1.
  • the first resonator R1 may include a fixing part 11 and a support part 14 protruding in one direction, for example, a y direction from the fixing part 11.
  • the sensor part 12 and the mass part 16 may be formed on the support part 14.
  • the sensor part 12 may be formed at one end of the support part 14, for example, an area adjacent to the fixing part 11.
  • the mass part 16 may be formed at the other end of the support part 14, that is, in a region relatively far from the fixing part 11.
  • the fixing part 11 is an area formed so that the supporting parts 14 of the resonators R1, R2... Rn protrude, and may be formed of a material that is typically used as a substrate of an electronic device.
  • the support 14 may be formed of Si or the like, may have the shape of a beam or a thin plate extending in one direction, and may be referred to as a cantilever.
  • One end of the support part 14 may be fixed by the fixing part 11, and the other end may be freely vibrated in the vertical direction, for example, the z direction, as shown in FIG. 2 without being fixed by another object.
  • the supporting part of the resonator may have a shape in which both sides thereof are fixed to the fixing part so that the central portion of the supporting part vibrates.
  • the sensor unit 12 is an area for sensing a signal caused by the motion of the supports of the resonators R1, R2 ... Rn.
  • the sensor unit 12 may include a lower electrode 12a, a piezoelectric material layer 12b, and an upper electrode 12c sequentially formed on one surface of the support 14.
  • the lower electrode 12a and the upper electrode 12c of the sensor unit 12 may be formed of a conductive material, for example, molybdenum (Mo) or the like.
  • An insulating layer may be further selectively formed between the lower electrode 12a and the support part 14.
  • any piezoelectric material usable in a piezo sensor may be used for the piezoelectric material layer 12b without limitation.
  • the piezoelectric material layer 12b may be formed of, for example, AlN, ZnO, SnO, PZT, ZnSnO3, polyvinylidene fluoride (PVDF), poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)), or PMN-PT.
  • the resonators R1, R2 ... Rn are not limited to the piezoelectric type including a piezo sensor as described above; an electrostatic sensor may also be used.
  • the forming material of the mass portion 16 is not limited, and may be formed of, for example, a metal such as Au.
  • the configuration of the first resonator R1 illustrated in FIG. 2, including the fixing part 11, the support part 14, the sensor part 12, and the mass part 16, may be equally applied to the second through n-th resonators R2 ... Rn of FIG. 1.
  • when a signal is input to a resonator, an inertial force may be generated depending on the behavior of the mass part 16.
  • when the frequency of the input signal matches the resonance frequency of the resonator, a resonance phenomenon may occur and the inertial force may increase.
  • This inertial force generates a bending moment in the sensor unit 12, and the bending moment may cause stress in each layer of the sensor unit 12.
  • accordingly, a charge proportional to the acting stress may be generated in the piezoelectric material layer 12b, and a voltage inversely proportional to the capacitance between the electrodes 12a and 12c is generated.
  • by detecting and interpreting the voltage generated by the sensor unit 12, an external input signal such as voice, vibration, or force applied to the resonators R1, R2 ... Rn can be analyzed.
  • the frequency band of the input signal sensed by the resonators R1, R2 ... Rn may be the audible band of approximately 20 Hz to 20 kHz, but is not limited thereto; ultrasonic signals of 20 kHz or more, or infrasonic signals of 20 Hz or less, may also be received.
  • the present disclosure provides an apparatus and method for recognizing a speaker using an output value detected by the resonator sensor 100, that is, an electrical signal.
  • FIG. 3 is a block diagram schematically illustrating a speaker recognition apparatus including a resonator according to an exemplary embodiment.
  • the speaker recognition apparatus 200 may include a resonator sensor 100 that outputs electrical signals having specific values in response to an external input signal, and a processor 210 that calculates a magnitude difference of the resonance bands from the electrical signals received from the resonator sensor 100 and recognizes the speaker using the magnitude difference of the resonance bands.
  • the resonator sensor 100 may include a plurality of resonators having different resonant frequencies, that is, resonant bands. Each resonator of the resonator sensor 100 may output an electrical signal corresponding to the input signal.
  • among them, a resonator whose resonance band is included in the frequencies of the input signal outputs a large electrical signal (for example, a voltage), while a resonator whose resonance band is not included in the frequencies of the input signal outputs a small electrical signal or none at all.
  • each resonator of the resonator sensor 100 outputs an electrical signal corresponding to the input signal, so that the resonator sensor 100 may output an electrical signal divided by frequency.
  • the resonator sensor 100 may be configured to include at least a part of the processor 210 to be described later.
  • in addition to detecting the speaker's voice, the resonator sensor 100 may perform operations such as correcting the electrical signal for the voice or calculating characteristics of the electrical signal.
  • the resonator sensor 100 may be a functional module having a hardware module and a software module.
  • the processor 210 may drive an operating system and an application program to control a plurality of components connected to the processor 210.
  • the processor 210 may perform speaker recognition using the electrical signal obtained from the resonator sensor 100.
  • the processor 210 may calculate a magnitude difference of the resonance bands using the electrical signals received from the resonator sensor 100, and generate a bitmap of the band slope by encoding the calculated magnitude difference.
  • the difference in magnitude of the resonance band may mean a difference in magnitude of electrical signals output from resonators having different resonance bands.
  • the bit map of the band slope is a map that simplifies the difference in magnitude of the resonance band, which will be described later.
  • the processor 210 may generate a bitmap of the band slope from the registration process voice of the specific speaker, and generate a personalized speaker model using the bitmap of the band slope.
  • the processor 210 may generate feature values of the enrollment speech from the bitmap of the band slope using techniques such as a Fast Fourier Transform (FFT), a 2D Discrete Cosine Transform (DCT), Dynamic Time Warping (DTW), an artificial neural network, Vector Quantization (VQ), or a Gaussian Mixture Model (GMM), and may generate a personalized speaker model from the feature values.
  • alternatively, the processor 210 may generate the personalized speaker model by applying the feature values of the enrollment speech to a universal background model (UBM); a sketch of one feature-extraction option follows.
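  • As a minimal illustration of one of these options, the following sketch reduces a band-slope bitmap to a fixed-length feature vector with a 2D DCT. The use of NumPy/SciPy, the low-frequency 8x8 block, and the function name are assumptions for illustration, not the implementation prescribed by the disclosure.

```python
import numpy as np
from scipy.fft import dctn

def bitmap_features(bitmap: np.ndarray, k: int = 8) -> np.ndarray:
    """Compress a (time x band-gap) slope bitmap into a fixed-length vector."""
    coeffs = dctn(bitmap.astype(float), norm="ortho")  # 2D DCT-II
    return coeffs[:k, :k].ravel()  # keep the low-frequency k x k block
```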
  • the personalized speaker model generated as described above may be stored in a secure area of the memory 220 as an authentication template for use in comparison with a voice of a specific speaker input thereafter.
  • During voice authentication, the processor 210 generates a bitmap of the band slope from the input speaker's voice, generates feature values using the bitmap, and authenticates the speaker by comparing the feature values with the registered authentication template. In this case, the processor 210 may convert the feature values of the unspecified speaker into a form suitable for comparison with the registered authentication template, and determine the similarity between the converted feature values and the template; maximum likelihood estimation may be applied in determining the similarity. The processor 210 may determine that authentication succeeds when the similarity is greater than a first reference value, and that authentication fails when the similarity is less than or equal to the first reference value.
  • the first reference value is a predefined threshold against which the similarity between the feature values of the unspecified speaker and the authentication template is judged.
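  • A minimal sketch of this verification decision, assuming the feature value and template have already been converted into comparable vectors; cosine similarity stands in here for the likelihood-based similarity, and all names are illustrative.

```python
import numpy as np

def authenticate(feature: np.ndarray, template: np.ndarray,
                 first_reference_value: float) -> bool:
    """Succeed only when the similarity exceeds the first reference value."""
    sim = float(np.dot(feature, template) /
                (np.linalg.norm(feature) * np.linalg.norm(template) + 1e-12))
    return sim > first_reference_value
```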
  • meanwhile, the processor 210 may calculate the magnitude difference of the resonance bands using the electrical signals received from the resonator sensor 100, and determine a vowel using the calculated magnitude difference.
  • a vowel may include a plurality of formants, which are frequency bands in which acoustic energy is concentrated. Formants vary from speaker to speaker, but not so much that one vowel cannot be distinguished from another. Accordingly, a pronounced vowel can generally be classified irrespective of the speaker, and the model corresponding to the determined vowel in the authentication template can be used for speaker recognition.
  • the speaker recognition apparatus 200 may include a memory 220 in which an authentication template is stored.
  • the memory 220 may temporarily store information about an unspecified speaker's voice.
  • the speaker recognition apparatus 200 may further include a display 230 for displaying information or the like.
  • the display 230 may display various types of information related to recognition, for example, a user interface for recognition, an indicator of the recognition result, and the like.
  • FIG. 4 is a diagram illustrating a speaker recognition method using a resonator according to an exemplary embodiment.
  • the processor 210 may receive an electrical signal corresponding to the speaker's voice from the resonator sensor 100 (S310). Each resonator of the resonator sensor 100 may output an electrical signal corresponding to the voice, and the processor 210 may receive the electrical signal.
  • the processor 210 may calculate a magnitude difference of the resonance bands using the electrical signals received from the resonator sensor 100 (S320).
  • the magnitude difference of the resonance bands may be a difference in magnitude between electrical signals received from different resonators, for example, from two resonators whose resonance frequencies neighbor each other.
  • the processor 210 may calculate the magnitude differences of the resonance bands using all of the resonators included in the resonator sensor 100.
  • for example, the processor 210 calculates the magnitude difference between the electrical signals received from the first and second resonators as the magnitude difference of the first resonance band, the magnitude difference between the electrical signals received from the second and third resonators as the magnitude difference of the second resonance band, and so on, up to the magnitude difference between the electrical signals received from the (n-1)-th and n-th resonators as the magnitude difference of the (n-1)-th resonance band.
  • alternatively, the processor 210 may calculate the magnitude differences of the resonance bands using only some of the resonators included in the resonator sensor 100. For example, the processor 210 may use the electrical signals received from the first, fourth, k-th, and n-th resonators. If the resonance bands of the first and fourth resonators are adjacent, those of the fourth and k-th resonators are adjacent, and those of the k-th and n-th resonators are adjacent, the processor 210 may calculate the difference between the electrical signals received from the first and fourth resonators as the magnitude difference of the first resonance band, the difference between the signals from the fourth and k-th resonators as the magnitude difference of the second resonance band, and the difference between the signals from the k-th and n-th resonators as the magnitude difference of the third resonance band.
  • the processor 210 may recognize the speaker using the calculated magnitude differences of the resonance bands (S330). For example, the processor 210 generates a bitmap of the band slope by encoding the magnitude differences of the resonance bands, generates a feature value of the speaker's voice using the bitmap, and recognizes the speaker by comparing the generated feature value with the authentication template.
  • the bit map of the band slope is a map that simplifies the difference in magnitude of the resonance band, which will be described later.
  • the processor 210 may also determine a vowel using the magnitude difference of the resonance bands, and the determined vowel may be used to decide whether the speaker who uttered it is a registered speaker. For example, among the personalized speaker models included in the authentication template, weights may be assigned to the models corresponding to the determined vowel, or only those models may be used for speaker recognition.
  • as described above, the speaker recognition apparatus 200 can recognize the speaker using the magnitude difference of the resonance bands. This method can efficiently remove common noise existing across the resonance bands.
  • FIG. 5 is an example of a graph showing voices having different resonance bands.
  • in FIG. 5, the hatched region is a frequency region weakly related to the center frequencies of the resonance bands and may correspond to noise.
  • because the common noise is removed by taking differences, the various algorithms for noise removal can be omitted or simplified, so speaker recognition can be performed more efficiently. In other words, when the magnitude difference of the resonance bands is used, the preprocessing for noise removal may be omitted or simplified, as the sketch below illustrates.
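  • A toy numeric illustration (the magnitudes are assumed values): a noise term common to every band cancels exactly when neighboring-band differences are taken.

```python
import numpy as np

clean = np.array([0.2, 1.5, 0.9, 0.3])  # per-resonator magnitudes
noisy = clean + 0.4                     # the same common noise added to every band

print(np.diff(noisy))  # [ 1.3 -0.6 -0.6]
print(np.diff(clean))  # identical: the common term is removed by differencing
```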
  • each of the resonators R1, R2... Rn of the resonator sensor 100 may output an electrical signal in response to the voice of the speaker.
  • Each resonator R1, R2... Rn may have a resonant frequency as shown in FIG. 6A.
  • a plurality of resonant frequencies are mixed in the speaker's voice, and each resonator may output an electrical signal corresponding to the frequency included in the speaker's voice. For example, when the first frequency H1 is included in the speaker's voice, the first resonator R1 may resonate and output a large electrical signal.
  • the processor 210 may calculate the magnitude differences of the resonance bands as shown in FIG. 6B using the electrical signals received from the resonator sensor 100.
  • specifically, the processor 210 may calculate the magnitude differences of the resonance bands using the electrical signals output from resonators that neighbor each other in resonance frequency.
  • FIG. 6B illustrates a result of calculating the difference in magnitude of the resonance band using the entire resonator included in the resonator sensor 100.
  • for example, the processor 210 may calculate the magnitude differences of the electrical signals of neighboring resonators among the first to n-th resonators as the magnitude differences of the resonance bands.
  • the magnitude difference G1 of the first resonance band is the difference between the magnitudes of the electrical signals received from the first and second resonators, the magnitude difference G2 of the second resonance band is the difference between the electrical signals received from the second and third resonators, and the magnitude difference G3 of the third resonance band is the difference between the electrical signals received from the third and fourth resonators.
  • likewise, the magnitude difference Gn-1 of the (n-1)-th resonance band is the difference between the electrical signals received from the (n-1)-th and n-th resonators.
  • the processor 210 may encode the magnitude differences of the resonance bands as shown in FIG. 6C.
  • the processor 210 may encode the differences using the following equation:

    T_k = a,  if (H_k+1 − H_k) > α
    T_k = 0,  if β ≤ (H_k+1 − H_k) ≤ α
    T_k = −a, if (H_k+1 − H_k) < β

    where H_k is the band characteristic (i.e., the electrical signal) of the k-th resonator, H_k+1 is the band characteristic of the (k+1)-th resonator, T_k is the encoded characteristic between the k-th and (k+1)-th resonators, and a is an arbitrary constant that may be chosen according to the embodiment.
  • α and β are threshold values, and the encoded values for the speaker's voice may vary according to the magnitudes of the thresholds.
  • for example, with a = 1 and symmetric thresholds (β = −α), the processor 210 encodes the difference between the output values of resonators R1, R2 ... Rn having adjacent resonance bands as 1 if it is greater than or equal to α, as −1 if it is less than −α, and as 0 otherwise, so that the magnitude difference of the resonance bands is encoded into three result values (−1, 0, +1). A minimal sketch follows.
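  • A minimal sketch of this ternary encoding, assuming the resonator outputs arrive as a NumPy array ordered by resonance frequency; alpha and beta follow the equation above, and the function name is illustrative.

```python
import numpy as np

def encode_band_slope(h: np.ndarray, alpha: float, beta: float,
                      a: int = 1) -> np.ndarray:
    """Encode neighboring-band differences Gk = H(k+1) - H(k) into {-a, 0, +a}."""
    g = np.diff(h)            # magnitude differences of the resonance bands
    t = np.zeros(g.shape, dtype=int)
    t[g > alpha] = a          # T_k = a when the difference exceeds alpha
    t[g < beta] = -a          # T_k = -a when it falls below beta
    return t                  # stays 0 where beta <= Gk <= alpha
```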
  • FIG. 6 (d) graphs the bit values shown in FIG. 6 (c).
  • in general, the maximum and minimum magnitudes of the electrical signals output from the resonator sensor 100 may differ by a factor of about 100. However, if the signal output from the resonator sensor 100 is converted into bit values of the band slope, it can be simplified to eight levels, as shown in FIG. 6(d).
  • the processor 210 encodes the difference between the resonance bands as -1, 0, and 1, but this is merely exemplary.
  • the processor 210 may encode the size difference of the resonance band in various forms.
  • the processor 210 may encode the magnitude difference of the resonance bands into any one of an odd number (three or more) of values; with respect to the central value, the remaining values form pairs having the same absolute value and opposite signs.
  • for example, the processor 210 may encode the magnitude difference of the resonance bands as -2, -1, 0, 1, or 2.
  • alternatively, the processor 210 may encode the magnitude difference of the resonance bands into any one of an even number of values; corresponding pairs among the even number of values may have the same absolute value and opposite signs.
  • for example, the processor 210 may encode the magnitude difference of the resonance bands as -3, -1, 1, or 3.
  • by arranging the encoded values over time, a bitmap of the two-dimensional band slope may be generated.
  • the bitmap of the two-dimensional band slope varies from speaker to speaker and thus can serve as a feature for speaker recognition.
  • FIG. 8 is a diagram illustrating a bitmap of a two-dimensional band slope over time according to an exemplary embodiment. As shown in FIG. 8, a bitmap of the band slope may be generated for each time frame.
  • the processor 210 may generate a bit map of the band slope according to a frame of a predetermined time unit, but is not limited thereto.
  • the method of generating the bitmap of the 2D band slope may vary depending on the recognition application; a minimal sketch of the stacking step follows.
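  • For example, stacking the per-frame encodings over time yields the 2D bitmap. This sketch reuses encode_band_slope from above; the frame data is random stand-in input, not real resonator output.

```python
import numpy as np

def band_slope_bitmap(frames: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Stack one encoded row per time frame into a (time x band-gap) bitmap."""
    return np.stack([encode_band_slope(f, alpha, beta) for f in frames])

rng = np.random.default_rng(0)
frames = rng.random((5, 8))                  # 5 time frames, 8 resonators
print(band_slope_bitmap(frames, 0.1, -0.1))  # shape (5, 7), values in {-1, 0, 1}
```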
  • the processor 210 may register a speaker's voice by generating a personalized speaker model of a specific speaker using the bitmap of the band slope and storing the personalized speaker model as an authentication template. Subsequently, when the voice of the unspecified speaker is received, it may be determined whether the unspecified speaker is the same as the registered speaker by comparing the similarity with the previously stored authentication template.
  • a specific speaker may speak 'start'.
  • Each resonator or part of the resonator of the resonator sensor 100 may output an electrical signal corresponding to 'start'.
  • the processor 210 generates a bitmap of the band slope by calculating and encoding the magnitude differences of the resonance bands from the electrical signals received from the resonator sensor 100, calculates a personalized feature value corresponding to 'start' using the bitmap, creates a personalized speaker model from the feature value, and registers it as an authentication template.
  • thereafter, when an unspecified speaker speaks 'start', the processor 210 generates the corresponding bitmap of the band slope and calculates feature values corresponding to the unspecified speaker's 'start' using the bitmap.
  • the processor 210 converts the feature values into a form that can be compared with the authentication template, compares the converted feature values with the authentication template, and performs speaker recognition by determining whether the unspecified speaker is the registered speaker.
  • since the bitmap of the band slope is used, this processing can be simpler than voice processing using a Short-Time Fourier Transform (STFT) and Mel-Frequency Cepstrum Coefficients (MFCC).
  • the speaker recognition method may additionally use vowels.
  • a vowel can be distinguished by its formants.
  • a formant is a distribution of frequency intensities of acoustic energy produced by cavity resonance due to the shape and size of the passages of the human vocal organs, that is, a frequency band in which acoustic energy is concentrated.
  • FIGS. 9 and 10 are graphs showing the energy distributions of specific vowels in a speech model. FIG. 9 is a spectrum showing the resonance bands of the vowel [AH], and FIG. 10 is a spectrum showing the resonance bands of the vowel [EE]. Looking at the spectra of the vowels with reference to FIGS. 9 and 10, it can be seen that each vowel has several resonance bands rather than one.
  • the vowel [AH] pronunciation and the vowel [EE] pronunciation may have different spectra.
  • however, the speaker-to-speaker variation in the spectrum is not large enough to confuse the vowel [AH] with the vowel [EE]. The same applies to other vowels. In other words, vowels can generally be distinguished despite each speaker's individual voice characteristics.
  • the resonance bands may be referred to, from the low-frequency side, as the first formant F1, the second formant F2, and the third formant F3; the center frequency of the first formant F1 is the smallest,
  • the center frequency of the third formant F3 is the largest,
  • and the center frequency of the second formant F2 lies between those of the first formant F1 and the third formant F3. By comparing the speaker's voice with the outputs of the resonators R1, R2 ... Rn of the resonator sensor 100 shown in FIG. 1, the center frequencies of the voice can be determined, and the positions of the first formant F1, the second formant F2, and the third formant F3 can be obtained. Once the positions of the three formants are obtained, the vowel in the speaker's voice can be determined.
  • 11 and 12 are graphs illustrating estimating positions of formants using resonators spaced apart from each other in relation to vowel determination according to an exemplary embodiment.
  • Two different resonators among the resonators R1, R2... Rn of the resonator sensor 100 shown in FIG. 1 may output an electrical signal corresponding to an input signal from the speaker.
  • the two spaced apart resonators may be adjacent or non-adjacent resonators.
  • for example, a first resonator having a resonance frequency of ωa and a second resonator having a resonance frequency of ωe may output electrical signals of different magnitudes in response to the speaker's input signal.
  • when the center frequency of the voice is near ωa, the output value H1(ω) of the first resonator may be very large, while the output value H2(ω) of the second resonator may be absent or very small.
  • when the center frequency of the voice lies between the two resonance frequencies, for example near ωc, both the output value H1(ω) of the first resonator and the output value H2(ω) of the second resonator may be very small.
  • when the center frequency of the voice is near ωe, the output value H1(ω) of the first resonator may be absent or very small, while the output value H2(ω) of the second resonator may be very large.
  • in other words, as the center frequency of the voice takes the values ωa, ωb, ωc, ωd, or ωe, the output values of the first and second resonators differ, and the difference between them (H2(ω) − H1(ω)) also varies with the center frequency of the voice, as shown in FIG. 12.
  • therefore, the center frequency of the voice can be determined inversely from the difference between the output values of the two resonators. That is, a formant, which is a center frequency of the voice, can be determined using the magnitude difference of the resonance bands between resonators, and the vowel can be determined from the positions of the center frequencies.
  • the vowel generally includes three formants.
  • the processor 210 may select four resonators of the resonator sensor 100 and determine the formants using the electrical signals output from the selected resonators.
  • FIG. 13 is a reference diagram showing the positions of formants of vowels according to an exemplary embodiment.
  • the horizontal axis represents the type of vowels
  • the vertical axis represents the center frequencies of the first formant F1, the second formant F2, and the third formant F3 according to each vowel.
  • the positions of the first formant F1, the second formant F2, and the third formant F3 for each vowel shown in FIG. 13 may be taken from generally known formant position data for the respective vowels.
  • the positions of the formants of the vowels may be obtained using a vowel information database built from various speakers, which may be referred to as a universal background model (UBM).
  • referring to FIG. 13, each vowel generally includes three formants, and the positions of the formants differ from vowel to vowel.
  • the formant with the lowest center frequency may be referred to as the first formant, the one with the highest center frequency as the third formant, and the one in between as the second formant.
  • the processor 210 may select four resonators having different resonance frequencies from the resonator sensor 100 shown in FIG. 1. In selecting the four resonators, a resonator having a resonance frequency lower than the center frequency of the first formant may be selected as the first resonator, a resonator having a resonance frequency between the center frequencies of the first and second formants as the second resonator, a resonator having a resonance frequency between the center frequencies of the second and third formants as the third resonator, and a resonator having a resonance frequency greater than the center frequency of the third formant as the fourth resonator. For example, the processor 210 may select four resonators having resonance frequencies of about 300 Hz, about 810 Hz, about 2290 Hz, and about 3000 Hz, respectively.
  • the processor 210 may determine the first to third formants using the differences between the output values of resonators with adjacent resonance bands among the four resonators.
  • specifically, the first formant F1 can be determined from the difference between the output values of the first and second resonators (H2(ω) − H1(ω)), the second formant F2 from the difference between the output values of the second and third resonators (H3(ω) − H2(ω)),
  • and the third formant F3 from the difference between the output values of the third and fourth resonators (H4(ω) − H3(ω)).
  • by determining the first to third formants in this way, the pronounced vowel can be determined regardless of who the speaker is. The determined vowel may then be used to decide whether the speaker is a registered speaker; specifically, only the models corresponding to the determined vowel among the personalized speaker models included in the authentication template may be used for speaker recognition. A sketch of this idea follows.
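  • A minimal sketch of this vowel determination, using the four example resonance frequencies above. The two-entry formant table (approximate values) and the nearest-neighbor matching are illustrative stand-ins for the UBM formant data of FIG. 13, and the coarse position estimate is an assumption rather than the disclosed method.

```python
import numpy as np

RES_FREQS = np.array([300.0, 810.0, 2290.0, 3000.0])  # Hz, selected resonators

VOWEL_FORMANTS = {                                     # (F1, F2, F3) in Hz
    "AH": np.array([730.0, 1090.0, 2440.0]),
    "EE": np.array([270.0, 2290.0, 3010.0]),
}

def estimate_formants(h: np.ndarray) -> np.ndarray:
    """Coarse formant positions from the three neighboring-pair differences."""
    diffs = np.diff(h)  # H2-H1, H3-H2, H4-H3
    # a positive difference means the upper resonator of the pair responds
    # more strongly, so place the formant estimate near it; else near the lower
    return np.where(diffs > 0, RES_FREQS[1:], RES_FREQS[:-1])

def determine_vowel(h: np.ndarray) -> str:
    f = estimate_formants(h)
    return min(VOWEL_FORMANTS,
               key=lambda v: float(np.linalg.norm(f - VOWEL_FORMANTS[v])))
```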
  • the processor 210 may receive an electrical signal corresponding to a speaker's voice from the resonator sensor 100 (S1110).
  • for example, the speaker may say 'us'; the resonator sensor 100 outputs electrical signals corresponding to 'us', and the processor 210 receives them.
  • the processor 210 may calculate magnitude differences of the resonance bands using the electrical signals received from some of the resonators (S1120). These resonators may be predefined for determining the formants of vowels. For example, the processor 210 may calculate the magnitude differences of the resonance bands using the electrical signals received from the four predetermined resonators, in order to determine the three formants described above.
  • the processor 210 may determine the vowels using the magnitude differences of the resonance bands of the selected resonators (S1130). For example, the processor 210 may determine the first to third formants using the magnitude differences of the four resonance bands and determine the vowels from the relative positions of the first to third formants. The graph shown in FIG. 13 may be used in determining the vowels. For example, using the relative positions of the first to third formants, the processor 210 may determine the vowels 'ㅜ' and 'ㅣ' in temporal order.
  • the processor 210 may assign a weight to each determined vowel (S1140). For example, the processor 210 may set the weight of a determined vowel higher than the weights of the other vowels.
  • the processor 210 may generate a bitmap of the band slope using the electrical signals received from all the resonators included in the resonator sensor 100 (S1150). Specifically, a bitmap of the band slope may be generated by calculating and encoding the magnitude differences of the resonance bands using the electrical signals received from all the resonators of the resonator sensor 100. Although operation S1150 is described as using the electrical signals received from all the resonators, the bitmap of the band slope may also be generated using the electrical signals received from only some of the resonators.
  • the number of resonators used to generate the bitmap of the band slope may be larger than the number used for vowel determination, because the bitmap must capture the speaker's voice in more detail than is needed for vowel determination.
  • the processor 210 may generate a speaker feature value using the generated bitmap of the band slope (S1160).
  • for example, the processor 210 may generate the speaker feature value from the bitmap of the band slope using techniques such as a Fast Fourier Transform (FFT), a 2D Discrete Cosine Transform (DCT), Dynamic Time Warping (DTW), an artificial neural network, Vector Quantization (VQ), or a Gaussian Mixture Model (GMM).
  • the speaker feature value may be converted into a form comparable to the authentication template;
  • a universal background model (UBM) may also be used in this conversion.
  • the processor 210 may recognize the speaker by comparing the converted speaker feature value with the authentication template using the weights (S1170). A high weight may be applied to the models of the authentication template corresponding to the determined vowel components, and a low weight to the other components. For example, if the determined vowels are 'ㅜ' and 'ㅣ', the processor applies high weights to the models corresponding to the 'ㅜ' and 'ㅣ' components of the authentication template, applies lower weights to the remaining components, and then compares the speaker feature value with the authentication template, as sketched below. When the comparison result is greater than or equal to a reference value, the processor 210 may determine that the speaker is a registered speaker; when it is less than the reference value, the processor 210 may determine that the speaker is not a registered speaker.
  • the assigned weight may be 1 or 0.
  • in that case, the processor 210 may use only the models of the authentication template corresponding to the determined vowels for the comparison.
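  • A minimal sketch of this weighted comparison, assuming the authentication template holds one personalized model vector per vowel; the dict layout, the cosine similarity, and all names are illustrative assumptions.

```python
import numpy as np

def weighted_similarity(feature: np.ndarray,
                        template: dict[str, np.ndarray],
                        weights: dict[str, float]) -> float:
    """Weight per-vowel model similarities; weight 0 excludes a model entirely."""
    num = den = 0.0
    for vowel, model in template.items():
        w = weights.get(vowel, 0.0)  # e.g. 1 for determined vowels, else 0
        if w == 0.0:
            continue
        sim = float(np.dot(feature, model) /
                    (np.linalg.norm(feature) * np.linalg.norm(model) + 1e-12))
        num += w * sim
        den += w
    return num / den if den else 0.0  # compared against the reference value
```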
  • FIG. 15 is a reference diagram for explaining a comparison between a speaker feature value and an authentication template during short speech.
  • the hatched area represents the UBM model
  • the + pattern area represents the personalized speaker model, that is, the registered authentication template
  • the ⁇ represents the speaker feature value.
  • the processor 210 may obtain 'ㅜ' and 'ㅣ' as the uttered vowel components.
  • the uttered vowel components 'ㅜ' and 'ㅣ' may serve as features representing the speaker.
  • since the influence of the uttered vowel components 1210 is emphasized in determining the similarity with the speaker feature value 1230, the accuracy of speaker recognition can be increased.
  • the steps of generating the speaker feature value (S1150 and S1160) and the series of steps for assigning weights (S1120 to S1140) need not be executed sequentially; they may be performed simultaneously, or some of the weight-assignment steps may be performed first, followed by the feature-generation steps S1150 and S1160. For example, in the resonator sensor 100 shown in FIG. 1, the step S1130 of determining the vowel from the speaker's voice may be performed using four resonators having different bands, while a bitmap of the band slope is generated in operation S1150 using the signals output by all the resonators R1, R2 ... Rn.
  • in this way, the personalized models can be distinguished by vowel, and the models corresponding to the determined vowels can be used in the comparison for recognition,
  • with the assigned weights applied when recognizing the speaker.
  • the resonator sensor 100 may include a plurality of mechanical resonators of various types; the sensor may take various forms, and the shape and arrangement of its resonators may be selected as necessary.
  • the center frequency of the resonators included in the resonator sensor 100 may be changed by adjusting the length L of the support 14 shown in FIG. 2. According to the needs of the user, the resonators of the resonator sensor 100 may be formed to have various center frequency intervals.
  • FIGS. 16 and 17 illustrate examples in which the center frequencies of the plurality of resonators of a resonator sensor 100a are set at equal intervals.
  • the center frequency of a resonator Rm may be inversely proportional to the square of the resonator length, that is, the length L of the support 14 shown in FIG. 2. Accordingly, as illustrated in FIG. 17, when adjacent resonators Rm of the resonator sensor 100a differ in length by a constant amount, the proportion of resonators having center frequencies in the low-frequency region can be made large compared to the proportion having center frequencies in the high-frequency region; the numeric sketch below illustrates this.
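  • A numeric sketch of this relation, under the stated proportionality f ∝ 1/L²; the lengths and the proportionality constant are arbitrary assumptions for the demonstration.

```python
import numpy as np

C = 1.0e6                            # Hz * mm^2, illustrative constant
lengths = np.arange(5.0, 0.9, -0.5)  # mm, constant difference between neighbors
freqs = C / lengths**2               # center frequencies under f = C / L^2

print(np.round(freqs))               # frequencies crowd together at the low end
print(np.round(np.diff(freqs)))      # spacing widens toward the short resonators
```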
  • FIGS. 18 and 19 are diagrams illustrating examples in which the center frequencies of the plurality of resonators of the resonator sensor 100b are set at equal-difference intervals, according to an exemplary embodiment.
  • the resonators included in the resonator sensor 100b may be formed so that the difference in length between adjacent resonators becomes smaller from the long resonators toward the short resonators.
  • in this way, the differences between the center frequencies of the resonators may be set to constant, equal-difference intervals.
  • FIGS. 20 and 21 are diagrams illustrating examples in which the center frequencies of the plurality of resonators of the resonator sensor 100c are set at random intervals according to an exemplary embodiment.
  • the lengths of the resonators Ro included in the resonator sensor 100c may be set without any specific regularity.
  • alternatively, the lengths of the resonators in only some sections may be adjusted.
  • as described above, the resonator sensors 100, 100a, 100b, and 100c may include resonators whose resonance frequencies are set at regular (equal or equal-difference) intervals, or resonators formed to have resonance frequencies of arbitrary bands.
  • FIG. 22 is a plan view illustrating a schematic structure of a resonator sensor 100d including a plurality of resonators according to an exemplary embodiment.
  • the resonator sensor 100d may include a support part 30 in which a cavity or a through hole 40 is formed in the center portion, and a plurality of resonators R extending from the support part 30 and surrounding the cavity or through hole 40. FIG. 1 illustrates a structure in which the resonators R1, R2 ... Rn of the resonator sensor 100 extend in parallel in one direction, but as shown in FIG. 22, the resonator sensor 100d according to the present disclosure may be formed to have various structures.
  • 23 to 25 are graphs illustrating examples of variously changing bandwidths of a plurality of resonators of a resonator sensor according to an exemplary embodiment.
  • the bandwidths of the resonators may be adjusted as necessary, for example to change the frequency intervals of the resonator bands or to improve the resolution of a specific band.
  • when the resonator frequency bandwidth of FIG. 23 is taken as the reference bandwidth S11,
  • the resonators may be formed to have a bandwidth S12 narrower than the reference bandwidth S11,
  • or to have a bandwidth S13 wider than the reference bandwidth S11 of FIG. 23.
  • FIG. 26 is a graph illustrating a wider bandwidth of a specific resonator among a plurality of resonators of a resonator sensor according to an exemplary embodiment.
  • by forming the bandwidth S22 of the specific resonators of the resonator sensor 100 that are used to determine the vowel of the input signal of FIG. 3 relatively wider than the bandwidth S21 of the remaining resonators, the process of determining the vowel of the input signal can be performed more efficiently.
  • the speaker recognition method and apparatus described above may be applied to various fields. For example, by accurately recognizing from a voice signal whether a speaker is a registered speaker, a mobile device or a specific device employed or mounted in a home or a vehicle can be operated or unlocked.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Telephone Function (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • User Interface Of Digital Computer (AREA)
PCT/KR2017/015020 2016-12-29 2017-12-19 공진기를 이용한 화자 인식 방법 및 장치 WO2018124590A1 (ko)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2019534648A JP7048619B2 (ja) 2016-12-29 2017-12-19 共振器を利用した話者認識方法及びその装置
US16/474,465 US11341973B2 (en) 2016-12-29 2017-12-19 Method and apparatus for recognizing speaker by using a resonator
EP17888519.0A EP3598086B1 (en) 2016-12-29 2017-12-19 Method and device for recognizing speaker by using resonator
CN201780080753.XA CN110121633B (zh) 2016-12-29 2017-12-19 用于通过使用谐振器来识别说话者的方法及设备
KR1020197013600A KR102520858B1 (ko) 2016-12-29 2017-12-19 공진기를 이용한 화자 인식 방법 및 장치
US17/741,087 US11887606B2 (en) 2016-12-29 2022-05-10 Method and apparatus for recognizing speaker by using a resonator

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20160182792 2016-12-29
KR10-2016-0182792 2016-12-29

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/474,465 A-371-Of-International US11341973B2 (en) 2016-12-29 2017-12-19 Method and apparatus for recognizing speaker by using a resonator
US17/741,087 Continuation US11887606B2 (en) 2016-12-29 2022-05-10 Method and apparatus for recognizing speaker by using a resonator

Publications (1)

Publication Number Publication Date
WO2018124590A1 true WO2018124590A1 (ko) 2018-07-05

Family

ID=62709541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/015020 WO2018124590A1 (ko) 2016-12-29 2017-12-19 공진기를 이용한 화자 인식 방법 및 장치

Country Status (6)

Country Link
US (2) US11341973B2
EP (1) EP3598086B1
JP (1) JP7048619B2
KR (1) KR102520858B1
CN (1) CN110121633B
WO (1) WO2018124590A1

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200024602A (ko) * 2018-08-28 2020-03-09 Samsung Electronics Co., Ltd. Method and apparatus for learning of user terminal
KR102626924B1 (ko) 2019-06-20 2024-01-19 Samsung Electronics Co., Ltd. Directional acoustic sensor, and methods of adjusting directional characteristics and attenuating an acoustic signal in a specific direction using the same
US12067135B2 (en) * 2020-12-14 2024-08-20 Netflix, Inc. Secure video capture platform
KR20220121631A (ko) * 2021-02-25 2022-09-01 Samsung Electronics Co., Ltd. Voice authentication method and device using the same
US20230169981A1 (en) * 2021-11-30 2023-06-01 Samsung Electronics Co., Ltd. Method and apparatus for performing speaker diarization on mixed-bandwidth speech signals
KR20230086877A (ko) 2021-12-08 2023-06-16 Samsung Electronics Co., Ltd. Directional acoustic sensor
KR20230095689A (ko) 2021-12-22 2023-06-29 Samsung Electronics Co., Ltd. Microphone package and electronic device including the same

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4343969A (en) * 1978-10-02 1982-08-10 Trans-Data Associates Apparatus and method for articulatory speech recognition
US4379949A (en) * 1981-08-10 1983-04-12 Motorola, Inc. Method of and means for variable-rate coding of LPC parameters
US5054085A (en) * 1983-05-18 1991-10-01 Speech Systems, Inc. Preprocessing system for speech recognition
GB8716194D0 (en) 1987-07-09 1987-08-12 British Telecomm Speech recognition
US5856722A (en) 1996-01-02 1999-01-05 Cornell Research Foundation, Inc. Microelectromechanics-based frequency signature sensor
US5729694A (en) * 1996-02-06 1998-03-17 The Regents Of The University Of California Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
SE515447C2 (sv) * 1996-07-25 2001-08-06 Telia Ab Method and apparatus for speaker verification
JPH1097274 (ja) 1996-09-24 1998-04-14 Kokusai Denshin Denwa Co Ltd (KDD) Speaker recognition method and apparatus
JP3248452B2 (ja) 1997-05-26 2002-01-21 Sumitomo Metal Industries, Ltd. Acoustic sensor
US6502066B2 (en) * 1998-11-24 2002-12-31 Microsoft Corporation System for generating formant tracks by modifying formants synthesized from speech units
US6751354B2 (en) * 1999-03-11 2004-06-15 Fuji Xerox Co., Ltd Methods and apparatuses for video segmentation, classification, and retrieval using image class statistical models
JP2002196784 (ja) * 2000-12-27 2002-07-12 Sumitomo Metal Ind Ltd Method and apparatus for identifying time-series signals
EP1246164A1 (en) * 2001-03-30 2002-10-02 Sony France S.A. Sound characterisation and/or identification based on prosodic listening
WO2004049283A1 (en) 2002-11-27 2004-06-10 Visual Pronunciation Software Limited A method, system and software for teaching pronunciation
JP2005202309 (ja) 2004-01-19 2005-07-28 Sony Corp Authentication method, authentication device, and MEMS filter bank
US20050171774A1 (en) * 2004-01-30 2005-08-04 Applebaum Ted H. Features and techniques for speaker authentication
DE102004013952A1 (de) 2004-03-22 2005-10-20 Infineon Technologies Ag Circuit arrangement and signal-processing device
US7454337B1 (en) * 2004-05-13 2008-11-18 The United States Of America As Represented By The Director, National Security Agency, The Method of modeling single data class from multi-class data
US7991167B2 (en) * 2005-04-29 2011-08-02 Lifesize Communications, Inc. Forming beams with nulls directed at noise sources
CN101051464A (zh) * 2006-04-06 2007-10-10 Toshiba Corporation Registration and verification method and apparatus for speaker authentication
US10154819B2 (en) * 2006-04-20 2018-12-18 Jack S. Emery Systems and methods for impedance analysis of conductive medium
US7863714B2 (en) 2006-06-05 2011-01-04 Akustica, Inc. Monolithic MEMS and integrated circuit device having a barrier and method of fabricating the same
US7953600B2 (en) * 2007-04-24 2011-05-31 Novaspeech Llc System and method for hybrid speech synthesis
US8103027B2 (en) 2007-06-06 2012-01-24 Analog Devices, Inc. Microphone with reduced parasitic capacitance
JP5203730B2 (ja) * 2008-01-28 2013-06-05 Toshiba Corporation Magnetic resonance diagnostic apparatus
US20090326939A1 (en) * 2008-06-25 2009-12-31 Embarq Holdings Company, Llc System and method for transcribing and displaying speech during a telephone call
DE112009002542A5 (de) 2008-10-14 2011-09-08 Knowles Electronics, Llc Microphone with a plurality of transducer elements
CN101436405A (zh) * 2008-12-25 2009-05-20 Beijing Vimicro Corporation Speaker recognition method and system
WO2011026247A1 (en) * 2009-09-04 2011-03-10 Svox Ag Speech enhancement techniques on the power spectrum
US8831942B1 (en) * 2010-03-19 2014-09-09 Narus, Inc. System and method for pitch based gender identification with suspicious speaker detection
US8756062B2 (en) * 2010-12-10 2014-06-17 General Motors Llc Male acoustic model adaptation based on language-independent female speech data
CN102655003B (zh) 2012-03-21 2013-12-04 Beihang University Method for recognizing emotion points of Chinese speech based on MFCCs of vocal tract modulation signals
DE112012006876B4 (de) * 2012-09-04 2021-06-10 Cerence Operating Company Method and speech-signal processing system for formant-dependent speech signal enhancement
US20140100839A1 (en) * 2012-09-13 2014-04-10 David Joseph Arendash Method for controlling properties of simulated environments
US9305559B2 (en) * 2012-10-15 2016-04-05 Digimarc Corporation Audio watermark encoding with reversing polarity and pairwise embedding
CN102968990B (zh) * 2012-11-15 2015-04-15 Zhu Donglai Speaker recognition method and system
US10203762B2 (en) * 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9621713B1 (en) * 2014-04-01 2017-04-11 Securus Technologies, Inc. Identical conversation detection method and apparatus
US10008216B2 (en) * 2014-04-15 2018-06-26 Speech Morphing Systems, Inc. Method and apparatus for exemplary morphing computer system background
KR102207928B1 (ko) * 2014-08-13 2021-01-26 Samsung Electronics Co., Ltd. Acoustic sensing element and method of acquiring frequency information
KR101718214B1 (ko) * 2015-06-09 2017-03-20 Korea Advanced Institute of Science and Technology Ultra-low-power flexible piezoelectric speech recognition sensor for the Internet of Things
US9558734B2 (en) * 2015-06-29 2017-01-31 Vocalid, Inc. Aging a text-to-speech voice
KR20180015482 (ko) 2016-08-03 2018-02-13 Samsung Electronics Co., Ltd. Acoustic spectrum analyzer and method of arranging resonators provided therein

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791433B1 (en) * 1999-07-14 2004-09-14 International Business Machines Corporation Signal processing by means of resonators
KR20010080735A (ko) * 1999-10-21 2001-08-22 Kashio Kazuo Speaker recognition using spectrogram correlation
KR20030013855A (ko) * 2001-08-09 2003-02-15 Samsung Electronics Co., Ltd. Voice registration method and system, and voice recognition method and system based thereon
KR20100115033A (ko) * 2009-04-17 2010-10-27 Korea University Industry-Academic Cooperation Foundation System and method for detecting speech intervals using vowel features, and method of measuring acoustic spectrum similarity used therein
KR20140050951A (ko) * 2012-10-22 2014-04-30 Electronics and Telecommunications Research Institute Speech recognition system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3598086A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10823814B2 (en) 2017-09-01 2020-11-03 Samsung Electronics Co., Ltd. Sound direction detection sensor including multi-resonator array
EP3614110A1 (en) * 2018-08-21 2020-02-26 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic apparatus including the same
CN110850360A (zh) * 2018-08-21 2020-02-28 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic device including the same
KR20200021780A (ko) * 2018-08-21 2020-03-02 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic apparatus including the same
US10645493B2 (en) 2018-08-21 2020-05-05 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic apparatus including the same
US10873808B2 (en) 2018-08-21 2020-12-22 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic apparatus including the same
KR102477099B1 (ko) 2018-08-21 2022-12-13 Samsung Electronics Co., Ltd. Sound direction detection sensor and electronic apparatus including the same

Also Published As

Publication number Publication date
EP3598086A1 (en) 2020-01-22
EP3598086A4 (en) 2020-08-26
US20190348050A1 (en) 2019-11-14
US11887606B2 (en) 2024-01-30
US20220270615A1 (en) 2022-08-25
KR102520858B1 (ko) 2023-04-13
EP3598086B1 (en) 2024-04-17
JP2020504329A (ja) 2020-02-06
CN110121633A (zh) 2019-08-13
CN110121633B (zh) 2023-04-04
US11341973B2 (en) 2022-05-24
JP7048619B2 (ja) 2022-04-05
KR20190092379A (ko) 2019-08-07

Similar Documents

Publication Publication Date Title
WO2018124590A1 (ko) Speaker recognition method and apparatus using resonator
US10477322B2 (en) MEMS device and process
Bala et al. Voice command recognition system based on MFCC and DTW
JP4249778B2 (ja) Micro-miniature microphone having a leaf-spring structure, speaker, and speech recognition and speech synthesis apparatuses using the same
JP4293377B2 (ja) Voice input device, method of manufacturing the same, and information processing system
JP5088950B2 (ja) Integrated circuit device, voice input device, and information processing system
WO2009142250A1 (ja) Integrated circuit device, voice input device, and information processing system
JP2009284110 (ja) Voice input device, method of manufacturing the same, and information processing system
KR101807069B1 (ko) Microphone and method of manufacturing the same
KR100785803B1 (ko) Micro-miniature microphone having a leaf-spring structure, speaker, and speech recognition/synthesis apparatus using the same
US20100244162A1 (en) Mems device with reduced stress in the membrane and manufacturing method
US11647338B2 (en) Flexible piezoelectric acoustic sensor fabricated integrally with Si as the supporting substrate, voice sensor using thin film polymer and voice sensor with different thickness and voice sensing method using same
JP2000148184 (ja) Speech recognition device
CN111261184 (zh) Sound source separation device and sound source separation method
WO2020091123 (ko) Method and apparatus for providing context-based speech recognition service
WO2020145509 (ko) Frequency extraction method by DJ transform
Kaspersen et al. Hydranet: A real-time waveform separation network
JP2002229592 (ja) Speech recognition device
WO2020096078 (ko) Method and apparatus for providing speech recognition service
CN111782860 (zh) Audio detection method and apparatus, and storage medium
Ozdogan Development of a Pull-In Free Electrostatic MEMS Microphone
Park et al. Zero-crossing-based feature extraction for voice command systems using neck-microphones
JPH04271398 (ja) Syllable recognition device using bone-conduction microphone detection
JP5097692B2 (ja) Voice input device, method of manufacturing the same, and information processing system
KR20240082741 (ko) Multi-resonance acoustic sensor, and speech recognition or speaker recognition system and method using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17888519; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20197013600; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2019534648; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2017888519; Country of ref document: EP; Effective date: 20190729)