CN110875057B - ICF framework-based articulation and speech function impairment grade converter - Google Patents


Info

Publication number
CN110875057B
CN110875057B (application CN201910789160.1A)
Authority
CN
China
Prior art keywords
speech
rate
duration
voiced
standard deviation
Prior art date
Legal status
Active
Application number
CN201910789160.1A
Other languages
Chinese (zh)
Other versions
CN110875057A (en)
Inventor
黄兰茗
黄兰婷
葛胜男
Current Assignee
Shanghai Huimin Medical Equipment Co ltd
Original Assignee
Shanghai Huimin Medical Equipment Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Huimin Medical Equipment Co ltd
Priority to CN201910789160.1A
Publication of CN110875057A
Application granted
Publication of CN110875057B

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Abstract

The invention discloses an ICF framework-based articulation and speech function impairment grade converter that uses the following function evaluation indices: S1, the b320 articulation function is reclassified and subdivided per the ICF; S2, b3300 speech fluency is reclassified and subdivided per the ICF; S3, b3301 speech rhythm is reclassified and subdivided per the ICF; S4, b3302 speech rate is reclassified and subdivided per the ICF; S5, b3303 intonation is reclassified and subdivided per the ICF. An accurate speech function assessment of the patient can thus be performed, which facilitates subsequent treatment.

Description

ICF framework-based articulation and speech function impairment grade converter
Technical Field
The invention relates to the technical field of speech therapy, in particular to an ICF framework-based articulation and speech function impairment grade converter.
Background
Based on the International Classification of Functioning, Disability and Health (ICF, part of the WHO Family of International Classifications, WHO-FIC) speech function evaluation standard combined with the core classification of speech disorders, articulation and speech function is subdivided into the b320 articulation function, b3300 speech fluency, b3301 speech rhythm, b3302 speech rate, and b3303 intonation. Existing speech disorder measurement equipment evaluates speech function alone and therefore cannot produce an accurate assessment of the patient's speech function, which is unfavorable for subsequent treatment.
Disclosure of Invention
The invention aims to remedy the defects of the prior art by providing an ICF framework-based articulation and speech function impairment grade converter.
In order to achieve the purpose, the invention adopts the following technical scheme:
an ICF framework-based constitutive speech function damage level converter comprises the following function evaluation indexes:
S1, the b320 articulation function is reclassified, the ICF subdividing it into: initial consonant phonemes acquired (count), initial consonant phoneme contrasts (pairs), articulation clarity (%), oral sensation (%), mandibular movement (%), lip movement (%), and tongue movement (%);
S2, b3300 speech fluency is reclassified, the ICF subdividing it into: syllable duration (ms), voiced duration (ms), and pause duration (ms) for each of /pa/, /ta/, /ka/, /pata/, /paka/, /taka/, and /pataka/, plus continuous speech ability syllable duration (ms) and continuous speech ability pause duration (ms);
S3, b3301 speech rhythm is reclassified, the ICF subdividing it into amplitude standard deviation (dB), stressed-syllable total duration (ms), and stress occurrence rate (%);
S4, b3302 speech rate is reclassified, the ICF subdividing it into: speech rate (syllables/s) and voiced rate (syllables/s) for each of /pa/, /ta/, /ka/, /pata/, /paka/, /taka/, and /pataka/, plus continuous speech ability speech rate (syllables/s) and continuous speech ability articulation rate (syllables/s);
and S5, b3303 intonation is reclassified, the ICF subdividing it into speech fundamental frequency standard deviation (Hz), speech fundamental frequency dynamic range (Hz), and fundamental-frequency abrupt-change occurrence rate (%).
Preferably, the reference values for the articulation function in S1 reside in an articulation database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total): oral sensation mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); mandibular movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); lip movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); tongue movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); articulation clarity mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); initial consonant phonemes acquired mean (count), variation range (Max-Min) (count), and limit value (count); and initial consonant phoneme contrasts mean (pairs), variation range (Max-Min) (pairs), and limit value (pairs).
Preferably, the reference values for speech fluency in S2 reside in a speech fluency database comprising, for each sex and age group (ages 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 7 groups in total), for the oral diadochokinesis rates (/pa/, /ta/, /ka/, /pata/, /paka/, /taka/, /pataka/): syllable duration mean (ms), standard deviation (ms), variation range (Max-Min) (ms), and limit value (ms); voiced duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); pause duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); and, for continuous speech ability: syllable duration mean (ms), standard deviation (ms), variation range (Max-Min) (ms), and limit value (ms), plus pause duration mean (ms), variation range (Max-Min) (ms), and limit value (ms).
Preferably, the reference values for speech rhythm in S3 reside in a speech rhythm database comprising, for each sex and age group (ages 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 7 groups in total): amplitude standard deviation mean (dB), variation range (Max-Min) (dB), and limit value (dB); stressed-syllable total duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); and stress occurrence rate mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%).
Preferably, the reference values for speech rate in S4 reside in a speech rate database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total), for the oral diadochokinesis rates: speech rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s); voiced rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s); and, for continuous speech ability: speech rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s), plus articulation rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s).
Preferably, the reference values for intonation in S5 reside in an intonation database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total): speech fundamental frequency standard deviation mean (Hz), variation range (Max-Min) (Hz), and limit value (Hz); speech fundamental frequency dynamic range mean (Hz), variation range (Max-Min) (Hz), and limit value (Hz); and fundamental-frequency abrupt-change occurrence rate mean (%), variation range (Max-Min) (%), and limit value (%).
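The claims above describe per-sex, per-age normative databases that store a mean, a variation range (Max-Min), and a limit value for each measure. A minimal sketch of one such record, using the intonation database as the example; all field names and numeric values here are illustrative assumptions, not values from the patent:

```python
# Hypothetical shape of one cell of the intonation reference database,
# keyed by (sex, age group). The numbers are made up for illustration;
# the patent stores a mean, a variation range (Max-Min), and a limit
# value for each measure, per sex and age group.
INTONATION_NORMS = {
    ("female", "8-17"): {
        "f0_sd_mean_hz": 28.0,              # mean of speech F0 standard deviation
        "f0_sd_range_hz": (12.0, 44.0),     # variation range (Max-Min)
        "f0_sd_limit_hz": 50.0,             # limit value
        "f0_dynamic_range_mean_hz": 160.0,  # mean of F0 dynamic range
        "f0_break_rate_mean_pct": 4.0,      # mean F0 abrupt-change occurrence rate
    },
}

def lookup_norms(sex: str, age_group: str) -> dict:
    """Fetch the normative record for one sex/age cell."""
    return INTONATION_NORMS[(sex, age_group)]
```

A converter implementation would hold one such table per evaluation index (articulation, fluency, rhythm, rate, intonation) and select the record matching the patient's sex and age before applying the conversion formulas below.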
The invention has the beneficial effects that:
the method carries out accurate evaluation aiming at an evaluation object, inputs the obtained evaluation result under a corresponding class, and obtains the final sound-constituting function damage grade through ICF converter (ICF-DrSpeech) conversion. Conversion can be carried out after the evaluation of all the sound-forming functions is finished, and the damage level of each sub-function is obtained; conversion can be carried out after each evaluation is finished to obtain the damage level of the subfunction, so that accurate language function evaluation can be carried out on the patient, and the subsequent treatment of the patient is facilitated.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely; the described embodiments are evidently only some, and not all, of the possible embodiments.
An ICF framework-based articulation and speech function impairment grade converter comprises the following function evaluation indices:
S1, the b320 articulation function is reclassified, the ICF subdividing it into: initial consonant phonemes acquired (count), initial consonant phoneme contrasts (pairs), articulation clarity (%), oral sensation (%), mandibular movement (%), lip movement (%), and tongue movement (%). The reference values for the articulation function reside in an articulation database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total): oral sensation mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); mandibular movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); lip movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); tongue movement mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); articulation clarity mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%); initial consonant phonemes acquired mean (count), variation range (Max-Min) (count), and limit value (count); and initial consonant phoneme contrasts mean (pairs), variation range (Max-Min) (pairs), and limit value (pairs);
S2, b3300 speech fluency is reclassified, the ICF subdividing it into: syllable duration (ms), voiced duration (ms), and pause duration (ms) for each of /pa/, /ta/, /ka/, /pata/, /paka/, /taka/, and /pataka/, plus continuous speech ability syllable duration (ms) and pause duration (ms). The reference values for speech fluency reside in a speech fluency database comprising, for each sex and age group (ages 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 7 groups in total), for the oral diadochokinesis rates (/pa/, /ta/, /ka/, /pata/, /paka/, /taka/, /pataka/): syllable duration mean (ms), standard deviation (ms), variation range (Max-Min) (ms), and limit value (ms); voiced duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); pause duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); and, for continuous speech ability: syllable duration mean (ms), standard deviation (ms), variation range (Max-Min) (ms), and limit value (ms), plus pause duration mean (ms), variation range (Max-Min) (ms), and limit value (ms);
S3, b3301 speech rhythm is reclassified, the ICF subdividing it into amplitude standard deviation (dB), stressed-syllable total duration (ms), and stress occurrence rate (%). The reference values for speech rhythm reside in a speech rhythm database comprising, for each sex and age group (ages 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 7 groups in total): amplitude standard deviation mean (dB), variation range (Max-Min) (dB), and limit value (dB); stressed-syllable total duration mean (ms), variation range (Max-Min) (ms), and limit value (ms); and stress occurrence rate mean (%), standard deviation (%), variation range (Max-Min) (%), and limit value (%);
S4, b3302 speech rate is reclassified, the ICF subdividing it into speech rate (syllables/s) and voiced rate (syllables/s) for each of /pa/, /ta/, /ka/, /pata/, /paka/, /taka/, and /pataka/, plus continuous speech ability speech rate (syllables/s) and articulation rate (syllables/s). The reference values for speech rate reside in a speech rate database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total), for the oral diadochokinesis rates: speech rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s); voiced rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s); and, for continuous speech ability: speech rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s), plus articulation rate mean (syllables/s), standard deviation (syllables/s), variation range (Max-Min) (syllables/s), and limit value (syllables/s);
S5, b3303 intonation is reclassified, the ICF subdividing it into speech fundamental frequency standard deviation (Hz), speech fundamental frequency dynamic range (Hz), and fundamental-frequency abrupt-change occurrence rate (%). The reference values for intonation reside in an intonation database comprising, for each sex and age group (ages 2, 3, 4, 5, 6, 7, 8-17, and 18-99 (adult), 8 groups in total): speech fundamental frequency standard deviation mean (Hz), variation range (Max-Min) (Hz), and limit value (Hz); speech fundamental frequency dynamic range mean (Hz), variation range (Max-Min) (Hz), and limit value (Hz); and fundamental-frequency abrupt-change occurrence rate mean (%), variation range (Max-Min) (%), and limit value (%).
The articulation limit values include: the oral sensation limit, mandibular movement limit, lip movement limit, tongue movement limit, articulation clarity limit, initial-consonant-phoneme acquisition limit, and initial-consonant-phoneme contrast limit.
First, the limit value is used to determine the degree of impairment.
Second, the limit values are discontinuous percentages: 0%, 5%, 25%, 50%, 96%, and 100% are fixed limits, while 4%, 24%, 49%, and 95% are auxiliary limits.
Third, grade 0 is defined as no impairment (0-4%), 1 as mild impairment (5-24%), 2 as moderate impairment (25-49%), 3 as severe impairment (50-95%), and 4 as complete impairment (96-100%).
The speech fluency limit values include: the syllable duration, voiced duration, and pause duration limits of the oral diadochokinesis rates (/pa/, /ta/, /ka/, /pata/, /paka/, /taka/, /pataka/), and the syllable duration and pause duration limits of continuous speech ability.
First, the limit value is used to determine the degree of impairment.
Second, the limit values are discontinuous percentages: 0%, 5%, 25%, 50%, 96%, and 100% are fixed limits, while 4%, 24%, 49%, and 95% are auxiliary limits.
Third, grade 0 is defined as no impairment (0-4%), 1 as mild impairment (5-24%), 2 as moderate impairment (25-49%), 3 as severe impairment (50-95%), and 4 as complete impairment (96-100%).
The speech rhythm limit values include: the amplitude standard deviation limit, the stressed-syllable total duration limit, and the stress occurrence rate limit.
First, the limit value is used to determine the degree of impairment.
Second, the limit values are discontinuous percentages: 0%, 5%, 25%, 50%, 96%, and 100% are fixed limits, while 4%, 24%, 49%, and 95% are auxiliary limits.
Third, grade 0 is defined as no impairment (0-4%), 1 as mild impairment (5-24%), 2 as moderate impairment (25-49%), 3 as severe impairment (50-95%), and 4 as complete impairment (96-100%).
The speech rate limit values include: the speech rate and voiced rate limits of the oral diadochokinesis rates (/pa/, /ta/, /ka/, /pata/, /paka/, /taka/, /pataka/), and the speech rate and articulation rate limits of continuous speech ability.
First, the limit value is used to determine the degree of impairment.
Second, the limit values are discontinuous percentages: 0%, 5%, 25%, 50%, 96%, and 100% are fixed limits, while 4%, 24%, 49%, and 95% are auxiliary limits.
Third, grade 0 is defined as no impairment (0-4%), 1 as mild impairment (5-24%), 2 as moderate impairment (25-49%), 3 as severe impairment (50-95%), and 4 as complete impairment (96-100%).
The intonation limit values include: the speech fundamental frequency standard deviation limit, the speech fundamental frequency dynamic range limit, and the fundamental-frequency abrupt-change occurrence rate limit.
First, the limit value is used to determine the degree of impairment.
Second, the limit values are discontinuous percentages: 0%, 5%, 25%, 50%, 96%, and 100% are fixed limits, while 4%, 24%, 49%, and 95% are auxiliary limits.
Third, grade 0 is defined as no impairment (0-4%), 1 as mild impairment (5-24%), 2 as moderate impairment (25-49%), 3 as severe impairment (50-95%), and 4 as complete impairment (96-100%).
Further, the conversion between articulation measurement data and the degree of articulation function impairment comprises the following steps:
the user's articulation data are converted into articulation limit values, and the degree of articulation function impairment is determined qualitatively and quantitatively; the following formulas are derived from the Chinese articulation function reference data for different sexes and age groups:
A. For each age group (8 groups in total), the oral sensation impairment degree formula for identifying oral sensation dysfunction: oral sensation mean - oral sensation standard deviation - (oral sensation mean - oral sensation standard deviation) × oral sensation limit;
B. For each age group (8 groups in total), the mandibular movement impairment degree formula for identifying mandibular movement dysfunction: mandibular movement mean - mandibular movement standard deviation - (mandibular movement mean - mandibular movement standard deviation) × mandibular movement limit;
C. For each age group (8 groups in total), the lip movement impairment degree formula for identifying lip movement dysfunction: lip movement mean - lip movement standard deviation - (lip movement mean - lip movement standard deviation) × lip movement limit;
D. For each age group (8 groups in total), the tongue movement impairment degree formula for identifying tongue movement dysfunction: tongue movement mean - tongue movement standard deviation - (tongue movement mean - tongue movement standard deviation) × tongue movement limit;
E. For each age group (8 groups in total), the phoneme acquisition impairment degree formula for identifying an initial-consonant-phoneme acquisition disorder: phoneme acquisition mean - phoneme acquisition mean × phoneme acquisition limit;
F. For each age group (8 groups in total), the articulation clarity impairment degree formula for identifying an articulation clarity disorder: articulation clarity mean - articulation clarity standard deviation - (articulation clarity mean - articulation clarity standard deviation) × articulation clarity limit;
G. For each age group (8 groups in total), the phoneme contrast impairment degree formula for identifying an initial-consonant-phoneme contrast disorder: phoneme contrast mean - phoneme contrast mean × phoneme contrast limit;
H. Data between the 5% fixed limit and the 4% auxiliary limit, between the 25% fixed limit and the 24% auxiliary limit, between the 50% fixed limit and the 49% auxiliary limit, and between the 96% fixed limit and the 95% auxiliary limit are joined by rule into continuous grades, so that the data in each set of limits neither overlap nor leave gaps. The joining rule takes the fixed limits 0%, 5%, 25%, 50%, 96%, and 100% as the baseline and extends the auxiliary-limit data at 4%, 24%, 49%, and 95% up to the corresponding fixed limits at 5%, 25%, 50%, and 96%, so that fixed and auxiliary limits connect into a continuous scale.
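Formulas A-D and F above share the form threshold = (mean - SD) - (mean - SD) × limit. Read in reverse, a patient's measured score can be converted to a limit percentage by solving for the limit. A sketch under that reading; the helper name and the clamping to the 0-100% band are our assumptions:

```python
def limit_from_score(value: float, mean: float, sd: float) -> float:
    """Invert formulas A-D/F: value = (mean - sd) - (mean - sd) * limit.

    `mean` and `sd` are the age- and sex-specific norms from the
    reference database; scores further below (mean - sd) yield higher
    limit percentages. The result is clamped to the 0-100% band used
    by the fixed/auxiliary grade table.
    """
    baseline = mean - sd
    limit = 1.0 - value / baseline
    return min(max(limit * 100.0, 0.0), 100.0)
```

For example, with an illustrative oral sensation norm of mean 90% and SD 10% (baseline 80%), a measured score of 40% solves to a 50% limit, which the grade table then places at the moderate/severe boundary.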
Further, the conversion between speech fluency measurement data and the degree of speech fluency function impairment comprises the following steps:
the user's speech fluency data are converted into speech fluency limit values, and the degree of speech fluency function impairment is determined qualitatively and quantitatively; the following formulas are derived from the Chinese speech fluency function reference data for different sexes and age groups:
A. For each age group (7 groups in total), the syllable duration impairment degree formula for identifying an abnormally short oral diadochokinesis syllable duration in speech fluency: syllable duration mean - syllable duration standard deviation - (syllable duration mean - syllable duration standard deviation) × syllable duration limit;
B. For each age group (7 groups in total), the syllable duration impairment degree formula for identifying an abnormally long oral diadochokinesis syllable duration in speech fluency: syllable duration mean + syllable duration standard deviation + (syllable duration mean + syllable duration standard deviation) × syllable duration limit;
C. For each age group (7 groups in total), the pause duration impairment degree formula for identifying an abnormally short oral diadochokinesis pause duration in speech fluency: pause duration mean - pause duration mean × pause duration limit;
D. For each age group (7 groups in total), the pause duration impairment degree formula for identifying an abnormally long oral diadochokinesis pause duration in speech fluency: pause duration mean + pause duration mean × pause duration limit;
E. For each age group (7 groups in total), the voiced duration impairment degree formula for identifying an abnormally short oral diadochokinesis voiced duration in speech fluency: voiced duration mean - voiced duration standard deviation - (voiced duration mean - voiced duration standard deviation) × voiced duration limit;
F. For each age group (7 groups in total), the voiced duration impairment degree formula for identifying an abnormally long oral diadochokinesis voiced duration in speech fluency: voiced duration mean + voiced duration standard deviation + (voiced duration mean + voiced duration standard deviation) × voiced duration limit;
G. For each age group (7 groups in total), the syllable duration impairment degree formula for identifying an abnormally short continuous speech ability syllable duration in speech fluency: syllable duration mean - syllable duration standard deviation - (syllable duration mean - syllable duration standard deviation) × syllable duration limit;
H. For each age group (7 groups in total), the syllable duration impairment degree formula for identifying an abnormally long continuous speech ability syllable duration in speech fluency: syllable duration mean + syllable duration standard deviation + (syllable duration mean + syllable duration standard deviation) × syllable duration limit;
I. For each age group (7 groups in total), the pause duration impairment degree formula for identifying an abnormally short continuous speech ability pause duration in speech fluency: pause duration mean - pause duration mean × pause duration limit;
J. For each age group (7 groups in total), the pause duration impairment degree formula for identifying an abnormally long continuous speech ability pause duration in speech fluency: pause duration mean + pause duration mean × pause duration limit;
K. Data between the 5% fixed limit and the 4% auxiliary limit, between the 25% fixed limit and the 24% auxiliary limit, between the 50% fixed limit and the 49% auxiliary limit, and between the 96% fixed limit and the 95% auxiliary limit are joined by rule into continuous grades, so that the data in each set of limits neither overlap nor leave gaps. The joining rule takes the fixed limits 0%, 5%, 25%, 50%, 96%, and 100% as the baseline and extends the auxiliary-limit data at 4%, 24%, 49%, and 95% up to the corresponding fixed limits at 5%, 25%, 50%, and 96%, so that fixed and auxiliary limits connect into a continuous scale.
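The paired too-short/too-long formulas for durations (A/B, E/F, G/H) bracket the normal band mean ± SD from both sides. A sketch of a two-sided conversion from a measured duration to a limit percentage; the helper name, the zero value inside the normal band, and the clamping are our assumptions:

```python
def duration_limit(value: float, mean: float, sd: float) -> float:
    """Two-sided conversion for duration measures.

    Values below mean - sd invert the too-short formula
    (value = lower - lower * limit); values above mean + sd invert the
    too-long formula (value = upper + upper * limit); anything inside
    the normal band maps to a 0% limit. Result clamped to 0-100%.
    """
    lower, upper = mean - sd, mean + sd
    if value < lower:
        limit = 1.0 - value / lower      # too short
    elif value > upper:
        limit = value / upper - 1.0      # too long
    else:
        return 0.0                       # within the normal band
    return min(max(limit * 100.0, 0.0), 100.0)
```

With illustrative norms of mean 200 ms and SD 50 ms, a 75 ms syllable and a 375 ms syllable both land at a 50% limit, reflecting that the scheme penalizes deviation in either direction.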
Further, the conversion between speech rhythm measurement data and the degree of speech rhythm function impairment comprises the following steps:
the user's speech rhythm data are converted into speech rhythm limit values, and the degree of speech rhythm function impairment is determined qualitatively and quantitatively; the following formulas are derived from the Chinese speech rhythm function reference data for different sexes and age groups:
A. For each age group (7 in total), the amplitude-standard-deviation impairment-degree formula for identifying an over-low amplitude standard deviation of speech rhythm: amplitude standard deviation mean − amplitude standard deviation mean × amplitude standard deviation limit value;
B. For each age group (7 in total), the amplitude-standard-deviation impairment-degree formula for identifying an over-high amplitude standard deviation of speech rhythm: amplitude standard deviation mean + amplitude standard deviation mean × amplitude standard deviation limit value;
C. For each age group (7 in total), the stressed-syllable total-duration impairment-degree formula for identifying an over-short total duration of stressed syllables in speech rhythm: stressed-syllable total duration mean − stressed-syllable total duration mean × stressed-syllable total duration limit value;
D. For each age group (7 in total), the stressed-syllable total-duration impairment-degree formula for identifying an over-long total duration of stressed syllables in speech rhythm: stressed-syllable total duration mean + stressed-syllable total duration mean × stressed-syllable total duration limit value;
E. For each age group (7 in total), the accent-occurrence-rate impairment-degree formula for identifying an over-low accent occurrence rate of speech rhythm: accent occurrence rate mean − accent occurrence rate standard deviation − (accent occurrence rate mean − accent occurrence rate standard deviation) × accent occurrence rate limit value;
F. For each age group (7 in total), the accent-occurrence-rate impairment-degree formula for identifying an over-high accent occurrence rate of speech rhythm: accent occurrence rate mean + accent occurrence rate standard deviation + (accent occurrence rate mean + accent occurrence rate standard deviation) × accent occurrence rate limit value;
G. The data between the 5% fixed limit value and the 4% auxiliary limit value, between the 25% fixed limit value and the 24% auxiliary limit value, between the 50% fixed limit value and the 49% auxiliary limit value, and between the 96% fixed limit value and the 95% auxiliary limit value are joined by rule into continuous grades, so that the data within each group of limit values neither repeat nor leave gaps; the joining rule takes the fixed limit values 0%, 5%, 25%, 50%, 96% and 100% as its basis and expands the data at the auxiliary limit values 4%, 24%, 49% and 95% up to the data at the corresponding fixed limit values 5%, 25%, 50% and 96%, so that the fixed and auxiliary limit values join into continuous grades.
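The connection rule above mirrors the ICF qualifier scale (0 none, 1 mild, 2 moderate, 3 severe, 4 complete). One possible mapping from a percentage limit value to a continuous grade, under the assumption that each auxiliary limit is expanded up to the adjacent fixed limit, might be:

```python
def icf_qualifier(limit_pct):
    """Map a limit value (in percent) to an ICF impairment qualifier.

    Fixed limits 0/5/25/50/96/100 anchor the bands; the auxiliary limits
    4/24/49/95 are expanded up to the adjacent fixed limits so the grades
    join without repetition or gaps (assumed reading of the rule):
      0: 0-4 %  (none)      1: 5-24 %  (mild)     2: 25-49 % (moderate)
      3: 50-95 % (severe)   4: 96-100 % (complete)
    """
    if limit_pct < 5:
        return 0
    if limit_pct < 25:
        return 1
    if limit_pct < 50:
        return 2
    if limit_pct < 96:
        return 3
    return 4
```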
Further, the conversion between speech-rate measurement data and the degree of speech-rate function impairment is implemented by the following steps:
The user's speech-rate data are converted into speech-rate limit values, and the degree and grade of speech-rate function impairment are judged qualitatively; the following formulas are obtained from Chinese speech-rate function reference data for different sexes and age groups:
A. For each age group (8 in total), the speech-rate impairment-degree formula for identifying an over-low speech rate of the oral alternation rate in speech rate: speech rate mean − speech rate standard deviation − (speech rate mean − speech rate standard deviation) × speech rate limit value;
B. For each age group (8 in total), the speech-rate impairment-degree formula for identifying an over-high speech rate of the oral alternation rate in speech rate: speech rate mean + speech rate standard deviation + (speech rate mean + speech rate standard deviation) × speech rate limit value;
C. For each age group (8 in total), the voiced-rate impairment-degree formula for identifying an over-low voiced rate of the oral alternation rate in speech rate: voiced rate mean − voiced rate standard deviation − (voiced rate mean − voiced rate standard deviation) × voiced rate limit value;
D. For each age group (8 in total), the voiced-rate impairment-degree formula for identifying an over-high voiced rate of the oral alternation rate in speech rate: voiced rate mean + voiced rate standard deviation + (voiced rate mean + voiced rate standard deviation) × voiced rate limit value;
E. For each age group (8 in total), the speech-rate impairment-degree formula for identifying an over-low speech rate of continuous speech ability in speech rate: speech rate mean − speech rate standard deviation − (speech rate mean − speech rate standard deviation) × speech rate limit value;
F. For each age group (8 in total), the speech-rate impairment-degree formula for identifying an over-high speech rate of continuous speech ability in speech rate: speech rate mean + speech rate standard deviation + (speech rate mean + speech rate standard deviation) × speech rate limit value;
G. For each age group (8 in total), the articulation-rate impairment-degree formula for identifying an over-low articulation rate of continuous speech ability in speech rate: articulation rate mean − articulation rate standard deviation − (articulation rate mean − articulation rate standard deviation) × articulation rate limit value;
H. For each age group (8 in total), the articulation-rate impairment-degree formula for identifying an over-high articulation rate of continuous speech ability in speech rate: articulation rate mean + articulation rate standard deviation + (articulation rate mean + articulation rate standard deviation) × articulation rate limit value;
I. The data between the 5% fixed limit value and the 4% auxiliary limit value, between the 25% fixed limit value and the 24% auxiliary limit value, between the 50% fixed limit value and the 49% auxiliary limit value, and between the 96% fixed limit value and the 95% auxiliary limit value are joined by rule into continuous grades, so that the data within each group of limit values neither repeat nor leave gaps; the joining rule takes the fixed limit values 0%, 5%, 25%, 50%, 96% and 100% as its basis and expands the data at the auxiliary limit values 4%, 24%, 49% and 95% up to the data at the corresponding fixed limit values 5%, 25%, 50% and 96%, so that the fixed and auxiliary limit values join into continuous grades.
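For the speech-rate measures, which all carry a standard-deviation term, the grade boundaries can be tabulated directly from the reference mean and SD. A small illustrative helper (the function name and the choice to tabulate only the fixed limits are assumptions):

```python
def rate_grade_boundaries(mean, sd, fixed_limits=(0.05, 0.25, 0.50, 0.96)):
    """Grade boundaries for a speech-rate measure from reference mean/SD.

    Too-low boundary:  (mean - sd) * (1 - limit)
    Too-high boundary: (mean + sd) * (1 + limit)
    Returns one (low, high) pair per fixed limit value.
    """
    return [((mean - sd) * (1 - lim), (mean + sd) * (1 + lim))
            for lim in fixed_limits]
```

With a reference speech rate of 4.0 syllables/s (SD 0.5), the 5% boundaries come out at 3.325 and 4.725 syllables/s; a measurement outside them falls at least into the mild band.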
Further, the conversion between intonation measurement data and the degree of intonation function impairment is implemented by the following steps:
The user's intonation data are converted into intonation limit values, and the degree and grade of intonation function impairment are judged qualitatively; the following formulas are obtained from Chinese intonation function reference data for different sexes and age groups:
A. For each age group (8 in total), the speech-fundamental-frequency-standard-deviation impairment-degree formula for identifying an over-low speech fundamental frequency standard deviation of intonation: speech fundamental frequency standard deviation mean − speech fundamental frequency standard deviation mean × speech fundamental frequency standard deviation limit value;
B. For each age group (8 in total), the speech-fundamental-frequency-standard-deviation impairment-degree formula for identifying an over-high speech fundamental frequency standard deviation of intonation: speech fundamental frequency standard deviation mean + speech fundamental frequency standard deviation mean × speech fundamental frequency standard deviation limit value;
C. For each age group (8 in total), the speech-fundamental-frequency-dynamic-range impairment-degree formula for identifying an over-small speech fundamental frequency dynamic range of intonation: speech fundamental frequency dynamic range mean − speech fundamental frequency dynamic range mean × speech fundamental frequency dynamic range limit value;
D. For each age group (8 in total), the speech-fundamental-frequency-dynamic-range impairment-degree formula for identifying an over-large speech fundamental frequency dynamic range of intonation: speech fundamental frequency dynamic range mean + speech fundamental frequency dynamic range mean × speech fundamental frequency dynamic range limit value;
E. For each age group (8 in total), the fundamental-frequency-mutation-occurrence-rate impairment-degree formula for identifying an over-low fundamental frequency mutation occurrence rate of intonation: fundamental frequency mutation occurrence rate mean − fundamental frequency mutation occurrence rate mean × fundamental frequency mutation occurrence rate limit value;
F. For each age group (8 in total), the fundamental-frequency-mutation-occurrence-rate impairment-degree formula for identifying an over-high fundamental frequency mutation occurrence rate of intonation: fundamental frequency mutation occurrence rate mean + fundamental frequency mutation occurrence rate mean × fundamental frequency mutation occurrence rate limit value;
G. The data between the 5% fixed limit value and the 4% auxiliary limit value, between the 25% fixed limit value and the 24% auxiliary limit value, between the 50% fixed limit value and the 49% auxiliary limit value, and between the 96% fixed limit value and the 95% auxiliary limit value are joined by rule into continuous grades, so that the data within each group of limit values neither repeat nor leave gaps; the joining rule takes the fixed limit values 0%, 5%, 25%, 50%, 96% and 100% as its basis and expands the data at the auxiliary limit values 4%, 24%, 49% and 95% up to the data at the corresponding fixed limit values 5%, 25%, 50% and 96%, so that the fixed and auxiliary limit values join into continuous grades.
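The intonation formulas all use the mean-only pattern mean × (1 ± limit), so the limit value recovered from a measurement is simply its relative deviation from the reference mean. A hedged sketch (the function name and the cap at 1.0 are assumptions):

```python
def intonation_limit_value(x, mean):
    """Limit value for a mean-only intonation measure (e.g. F0 dynamic range).

    too-small: x = mean * (1 - limit);  too-large: x = mean * (1 + limit)
    => limit = |x - mean| / mean, capped at 1.0 (i.e. the 100% fixed limit).
    """
    return min(1.0, abs(x - mean) / mean)
```

For instance, an F0 dynamic range of 80 Hz against a reference mean of 100 Hz yields a limit value of 0.2, i.e. within the 5-25% band.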
Wherein the sound-forming function report comprises: a description of the number of initial-consonant phonemes acquired, of the decision level (above or below the target range), of the relative age, and of the degree of initial-phoneme acquisition impairment; a description of the initial-consonant phoneme pairs mastered, of the decision level (above or below the target range), and of the degree of initial-consonant phoneme contrast impairment; a description of the articulation definition score, of the decision level (above or below the target range), of the relative age, and of the degree of articulation definition impairment; a description of the oral sensation score, of the decision level (above or below the target range), of the relative age, and of the degree of oral sensation impairment; a description of the mandibular movement score, of the decision level (above or below the target range), of the relative age, and of the degree of mandibular movement impairment; a description of the lip movement score, of the decision level (above or below the target range), of the relative age, and of the degree of lip movement impairment; and a description of the tongue movement score, of the decision level (above or below the target range), of the relative age, and of the degree of tongue movement impairment.
Wherein the speech fluency report comprises: a description of the syllable duration value of the oral alternation rate, of the decision level (above or below the target range), and of the degree of syllable duration impairment; a description of the voiced duration value of the oral alternation rate, of the decision level (above or below the target range), and of the degree of voiced duration impairment; a description of the pause duration value of the oral alternation rate, of the decision level (above or below the target range), and of the degree of pause duration impairment; a description of the syllable duration value of continuous speech ability, of the decision level (above or below the target range), and of the degree of syllable duration impairment; and a description of the pause duration value of continuous speech ability, of the decision level (above or below the target range), and of the degree of pause duration impairment.
Wherein the speech rhythm report comprises: a description of the amplitude standard deviation value, of the decision level (above or below the target range), and of the degree of amplitude standard deviation impairment; a description of the stressed-syllable total duration value, of the decision level (above or below the target range), and of the degree of stressed-syllable total duration impairment; and a description of the accent occurrence rate value, of the decision level (above or below the target range), and of the degree of accent occurrence rate impairment.
Wherein the speech rate report comprises: a description of the speech rate value of the oral alternation rate, of the decision level (above or below the target range), and of the degree of speech rate impairment; a description of the voiced rate value of the oral alternation rate, of the decision level (above or below the target range), and of the degree of voiced rate impairment; a description of the speech rate value of continuous speech ability, of the decision level (above or below the target range), and of the degree of speech rate impairment; and a description of the articulation rate value of continuous speech ability, of the decision level (above or below the target range), and of the degree of articulation rate impairment;
and the intonation report comprises: a description of the speech fundamental frequency standard deviation value, of the decision level (above or below the target range), and of the degree of speech fundamental frequency standard deviation impairment; a description of the speech fundamental frequency dynamic range value, of the decision level (above or below the target range), and of the degree of speech fundamental frequency dynamic range impairment; and a description of the fundamental frequency mutation occurrence rate value, of the decision level (above or below the target range), and of the degree of fundamental frequency mutation occurrence rate impairment.
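Each report item above pairs a measured value with a decision level relative to the target range and an impairment description. One way such an entry could be assembled (all names here are illustrative assumptions, not the patent's implementation):

```python
def report_entry(measure, value, target_low, target_high, impairment):
    """Build one report entry: value, decision level, impairment description."""
    if value < target_low:
        level = "below target range"
    elif value > target_high:
        level = "above target range"
    else:
        level = "within target range"
    return {"measure": measure, "value": value,
            "decision level": level, "impairment": impairment}
```

For example, a continuous-speech rate of 2.1 syllables/s against a 3.325-4.725 target range would be reported as below the target range with its associated impairment grade.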
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited thereto; any equivalent substitution or modification of the technical solution and inventive concept disclosed herein, made by a person skilled in the art within the technical scope of this disclosure, shall fall within the protection scope of the present invention.

Claims (1)

1. An ICF framework-based sound-forming speech function impairment level converter, used for implementing the following functions:
S1, reclassifying the b320 sound-forming function; the ICF subdivides the b320 sound-forming function into: initial-consonant phoneme acquisition (count), initial-consonant phoneme contrast (pairs), articulation definition (%), oral sensation (%), mandibular movement (%), lip movement (%), and tongue movement (%);
the reference values of the sound-forming functions in S1 are in a sound-forming database, which includes the average value (individual) obtained by the consonant phoneme of each age, the variation range (individual) obtained by the consonant phoneme, and the limit value (individual) obtained by the consonant phoneme; initial consonant phoneme comparison mean value (pair), initial consonant phoneme comparison variation range (pair) and initial consonant phoneme comparison limit value (pair); the method comprises the following steps of (1) forming a sound definition mean value (%), forming a sound definition standard deviation (%), forming a sound definition change range (%), and forming a sound definition limit value (%); a mouth feel mean (%), a mouth feel standard deviation (%), a mouth feel variation range (%), a mouth feel limit (%); a mandible movement mean value (%), a mandible movement standard deviation (%), a mandible movement variation range (%), and a mandible movement limit value (%); a lip movement mean (%), a lip movement standard deviation (%), a lip movement variation range (%), a lip movement limit (%); a tongue motion mean value (%), a tongue motion standard deviation (%), a tongue motion variation range (%), and a tongue motion limit (%);
wherein the 8 age segments comprise the 2-, 3-, 4-, 5-, 6-, 7- and 8-17-year-old segments, and the 18-99-year-old adult segment;
based on the parameters, respectively calculating and obtaining damage levels of consonant phoneme acquisition, sound construction definition, mouth feeling, lower jaw movement, lip movement and tongue movement;
s2, reclassifying the b3300 speech fluency, the ICF subdivides the b3300 speech fluency into/pa/syllable duration (ms),/ta/syllable duration (ms),/ka/syllable duration (ms),/pata/syllable duration (ms),/paka/syllable duration (ms),/taka/syllable duration (ms),/pataka/syllable duration (ms); a/pa/voiced duration (ms),/ta/voiced duration (ms),/ka/voiced duration (ms),/pata/voiced duration (ms),/paka/voiced duration (ms),/taka/voiced duration (ms),/pataka/voiced duration (ms); a/pa/dwell duration (ms),/ta/dwell duration (ms),/ka/dwell duration (ms),/pata/dwell duration (ms),/paka/dwell duration (ms),/taka/dwell duration (ms),/pataka/dwell duration (ms); continuous speech capability syllable duration (ms), continuous speech capability pause duration (ms);
the reference value of speech fluency in S2 is in a speech fluency database comprising syllable duration mean (ms), syllable duration standard deviation (ms), syllable duration variation range (ms), syllable duration limit value (ms) for different genders, oral rotation rates of each age group; a voiced sound time length mean value (ms), a voiced sound time length variation range (ms) and a voiced sound time length limit value (ms); the average value (ms) of the pause duration, the variation range (ms) of the pause duration and the limit value (ms) of the pause duration; and syllable duration mean (ms), syllable duration standard deviation (ms), syllable duration variation range (ms), syllable duration limit (ms), pause duration mean (ms), pause duration variation range (ms), pause duration limit (ms) of continuous speech capability;
wherein the 7 age segments comprise the 3-, 4-, 5-, 6-, 7- and 8-17-year-old segments, and the 18-99-year-old adult segment;
respectively calculating syllable duration, voiced duration and pause duration of the alternation rate and the damage levels of the syllable duration and the pause duration of the continuous voice capacity based on the parameters;
s3, reclassifying the b3301 speech rhythm, and ICF subdividing the b3301 speech rhythm into amplitude standard deviation (dB), total duration (ms) of stressed syllables and occurrence rate (%) of stressed syllables;
the reference value of the speech rhythm in S3 is in a speech rhythm database including a mean value (dB) of amplitude standard deviations for different genders, each age group, a variation range (dB) of the amplitude standard deviations, a limit value (dB) of the amplitude standard deviations; the method comprises the following steps of (1) obtaining a stress syllable total time length mean value (ms), a stress syllable total time length variation range (ms) and a stress syllable total time length limit value (ms); the method comprises the following steps of (1) determining the mean (%) of the occurrence rate of accents, the standard deviation (%) of the occurrence rate of accents, the variation range (%) of the occurrence rate of accents and the limit (%) of the occurrence rate of accents;
wherein the 7 age segments comprise the 3-, 4-, 5-, 6-, 7- and 8-17-year-old segments, and the 18-99-year-old adult segment;
respectively calculating damage levels of amplitude standard deviation, total time of stressed syllables and stress occurrence rate based on the parameters;
s4, reclassifying the b3302 speech rate, the ICF subdividing the b3302 speech rate into/pa/speech rate (S),/ta/speech rate (S),/ka/speech rate (S),/pata/speech rate (S),/paka/speech rate (S),/taka/speech rate (S),/pataka/speech rate (S); a/pa/voiced rate (in/s),/ta/voiced rate (in/s),/ka/voiced rate (in/s),/pata/voiced rate (in/s),/paka/voiced rate (in/s),/taka/voiced rate (in/s),/pataka/voiced rate (in/s); continuous speech capability speech rate(s), continuous speech capability articulation rate(s);
the reference value of the speech rate in S4 is in a speech rate database, which includes, for different genders and age groups: the speech rate mean (s/s), speech rate standard deviation (s/s), speech rate variation range (s/s) and speech rate limit value (s/s) of the oral alternation rate; the voiced rate mean (in/s), voiced rate standard deviation (in/s), voiced rate variation range (in/s) and voiced rate limit value (in/s); and the speech rate mean (s/s), speech rate standard deviation (s/s), speech rate variation range (s/s), speech rate limit value (s/s), articulation rate mean (s/s), articulation rate standard deviation (s/s), articulation rate variation range (s/s) and articulation rate limit value (s/s) of continuous speech ability; wherein the 8 age segments comprise the 2-, 3-, 4-, 5-, 6-, 7- and 8-17-year-old segments, and the 18-99-year-old adult segment;
respectively calculating the speech rate and the voiced rate of the alternative rate and the damage levels of the speech rate and the sound-forming rate of the continuous voice capacity based on the parameters;
s5, reclassifying the b3303 intonations, and subdividing the b3303 intonations into a speech fundamental frequency standard deviation (Hz), a speech fundamental frequency dynamic range (Hz) and a fundamental frequency mutation occurrence rate (%);
the reference value of the intonation in S5 is in an intonation database, which includes, for different genders and age groups: the mean (Hz), variation range (Hz) and limit value (Hz) of the speech fundamental frequency standard deviation; the mean (Hz), variation range (Hz) and limit value (Hz) of the speech fundamental frequency dynamic range; and the mean (%), variation range (%) and limit value (%) of the fundamental frequency mutation occurrence rate; wherein the 8 age segments comprise the 2-, 3-, 4-, 5-, 6-, 7- and 8-17-year-old segments, and the 18-99-year-old adult segment;
and respectively calculating the damage levels of the speech fundamental frequency standard deviation, the speech fundamental frequency dynamic range and the fundamental frequency mutation occurrence rate based on the parameters.
CN201910789160.1A 2019-08-26 2019-08-26 ICF frame-based sound-forming voice function damage level converter Active CN110875057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910789160.1A CN110875057B (en) 2019-08-26 2019-08-26 ICF frame-based sound-forming voice function damage level converter


Publications (2)

Publication Number Publication Date
CN110875057A CN110875057A (en) 2020-03-10
CN110875057B true CN110875057B (en) 2022-03-15

Family

ID=69716008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910789160.1A Active CN110875057B (en) 2019-08-26 2019-08-26 ICF frame-based sound-forming voice function damage level converter

Country Status (1)

Country Link
CN (1) CN110875057B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1089246A2 (en) * 1999-10-01 2001-04-04 Siemens Aktiengesellschaft Method and apparatus for speech impediment therapy
EP1785891A1 (en) * 2005-11-09 2007-05-16 Sony Deutschland GmbH Music information retrieval using a 3D search algorithm
CN104598758A (en) * 2015-02-12 2015-05-06 上海市徐汇区中心医院 System and method for evaluating hearing-speech rehabilitation training and curative effect of patients with post-stroke dysarthria
CN108670199A (en) * 2018-05-28 2018-10-19 暨南大学 A kind of dysarthrosis vowel assessment template and appraisal procedure
CN109360645A (en) * 2018-08-01 2019-02-19 太原理工大学 A kind of statistical classification method of dysarthrosis pronunciation movement spatial abnormal feature


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
《《国际功能、残疾和健康分类》应用指导(一)》;邱卓英;《中国康复理论与实践》;20030131;第9卷(第1期);全文 *
《Communication attitude and speech in 10-year-old children with cleft (lip and) palate: an ICF perspective》;Havstam Christina et al.;《INTERNATIONAL JOURNAL OF SPEECH-LANGUAGE PATHOLOGY》;20110430;第13卷(第2期);全文 *
《ICF 理论与方法在儿童听力语言残疾康复中的应用研究》;程凯 等;《中国康复理论于实践》;20070531;第13卷(第5期);全文 *

Also Published As

Publication number Publication date
CN110875057A (en) 2020-03-10

Similar Documents

Publication Publication Date Title
Flege The detection of French accent by American listeners
Sparks et al. Method: melodic intonation therapy for aphasia
Iskarous et al. Locus equations are an acoustic expression of articulator synergy
Vicenik An acoustic study of Georgian stop consonants
Nirgianaki Acoustic characteristics of Greek fricatives
Hilton et al. Syllable reduction and articulation rates in Danish, Norwegian and Swedish
Benjamin Speech production of normally aging adults
Möller et al. Analysis of infant cries for the early detection of hearing impairment
Rodríguez et al. A prelingual tool for the education of altered voices
Murdoch et al. Speech disorders in children treated for posterior fossa tumours: ataxic and developmental features
CN110875057B (en) ICF frame-based sound-forming voice function damage level converter
Whitehill Studies of Chinese speakers with dysarthria: informing theoretical models
Kuzla et al. Compensation for assimilatory devoicing and prosodic structure in German fricative perception
Lehiste Some acoustic correlates of accent in Serbo-Croatian
Jiang et al. Encoding and decoding confidence information in speech
CN110880370B (en) Speech and language function damage level converter based on ICF framework
Mitchell et al. Changes in syllable and boundary strengths due to irritation
McAuliffe et al. Variation in articulatory timing of three English consonants: An electropalatographic investigation
Padareva-Ilieva et al. F2 TRANSITION MEASUREMENT IN BULGARIAN ADULTS WHO STUTTER AND WHO DO NOT STUTTER.
Allison et al. Effect of prosodic manipulation on articulatory kinematics and second formant trajectories in children
Dmitrić et al. Articulation disorders in Serbian language in children with speech pathology
Astruc 8 Prosody
Sivaram et al. Enhancement of dysarthric speech for developing an effective speech therapy tool
Albuquerque et al. Age and gender effects in European Portuguese spontaneous speech
Ochi et al. Automatic Discrimination of Soft Voice Onset Using Acoustic Features of Breathy Voicing.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Functional Damage Level Converter for Pronunciation Speech Based on ICF Framework

Granted publication date: 20220315

Pledgee: China Construction Bank Corporation Shanghai Baosteel Baoshan Branch

Pledgor: Shanghai Huimin Medical Equipment Co.,Ltd.

Registration number: Y2024310000236