CN107705803A - Method for simulating sound
Method for simulating sound
- Publication number
- CN107705803A (application CN201710740419.4A)
- Authority
- CN
- China
- Prior art keywords
- sound
- voice signal
- patient
- species
- database
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/12—Audiometering
- A61B5/121—Audiometering evaluating hearing capacity
- A61B5/123—Audiometering evaluating hearing capacity subjective methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/637—Administration of user profiles, e.g. generation, initialization, adaptation or distribution
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Multimedia (AREA)
- Acoustics & Sound (AREA)
- Animal Behavior & Ethology (AREA)
- Theoretical Computer Science (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Pathology (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Otolaryngology (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a method for simulating sound, comprising the steps of: S1, determining the sound type from the patient's chief complaint; S2, selecting from a database a sound signal that matches the determined sound type; S3, adjusting the selected sound signal according to the audiological test results; S4, verifying the adjusted sound signal until it is consistent with the sound described in the chief complaint. The sound type is determined from the patient's chief complaint and then matched against the sound signals stored in the database; the described sound may correspond to a single sound in the database or to a compound of several sounds, and the selected sound signal is fine-tuned so that it agrees with the patient's hearing threshold. This overcomes the limitation of the prior art, in which only standard stimulus sounds are available and the true character of the patient's sound cannot be reproduced well; it improves the accuracy of sound matching, provides a new tool for examinations such as tinnitus sound matching, and supplies reliable data for fields such as precision sound therapy.
Description
Technical field
The invention belongs to the field of audiological examination and relates in particular to a method for matching sounds.
Background art
In examinations that require sound matching, in particular tinnitus sound matching, patients usually describe their own sound in terms of their personal experience, comparing it to natural sounds they encounter in everyday life, such as cicada calls, the wind, dripping water, rushing water, musical instruments or speech. Clinically, however, the sounds available for matching are limited, generally only standard stimulus sounds such as pure tones, pulsed pure tones, narrow-band noise, white noise and warble tones. Because patients have never been exposed to these standard stimuli, none of the stimuli presented during the test sounds identical or even close to the sound they describe. In order to obtain a match, the technician may prompt the patient, telling them that no fully matching sound exists and that the closest one should simply be chosen; alternatively, an inexperienced technician may be unable to infer a similar standard stimulus from the patient's description, making the test slow or the matching error large.
Even setting aside the patient and the operating technician, the test equipment itself may be the problem. At present many hospitals still use a pure-tone audiometer to match the patient's sound, but as is well known the function of a pure-tone audiometer is hearing-threshold testing; even research-grade audiometers lack a refined sound-matching function, so the matched sound may still differ greatly from the actual one.
In addition, sound matching is currently performed mainly by pitch matching: the patient listens to a series of standard stimulus sounds of different frequencies and intensities and then selects the one closest in pitch and loudness. In the patient's subjective description of their own sound, however, a simple standard stimulus seldom corresponds fully to reality. Although some patients' sounds do agree with a standard stimulus, many patients' sounds are more complex and are difficult to match accurately with the current method.
To address the proficiency of the operating technician, the suitability for the patient and the precision of the instrument, a new sound-simulation method is needed. Such a method not only solves the above problems but also greatly improves the accuracy of sound matching, provides a new tool for examinations such as tinnitus sound matching, and supplies reliable data for fields such as precision sound therapy.
Summary of the invention
It is an object of the invention to provide a method for simulating sound that can solve at least one of the above problems.
According to one aspect of the invention, there is provided a method for simulating sound, comprising the following steps:
S1, determining the sound type from the patient's subjective description;
S2, selecting from a database a sound signal that matches the determined sound type;
S3, adjusting the selected sound signal according to the audiological test results;
S4, verifying the adjusted sound signal until it is consistent with the patient's sound.
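The patent states the method only at this level of generality. As a rough illustration, the four steps might be strung together in software as in the following Python sketch; the database layout, the keyword rule standing in for step S1, and all function and field names are assumptions made for this example and are not taken from the patent.

```python
def simulate_sound(description: str, threshold_db: float, database: list) -> dict:
    # S1: determine the sound type from the patient's subjective description
    # (a toy keyword rule stands in for the clinical questioning).
    species = "single_tone" if "whistle" in description else "complex_sound"
    # S2: select from the database the stored signals matching that type.
    candidates = [s for s in database if s["kind"] == species]
    chosen = dict(candidates[0]) if candidates else {}
    # S3: adjust the selected signal according to the audiological test result,
    # e.g. set the presentation level slightly above the measured threshold.
    if chosen:
        chosen["intensity_db"] = threshold_db + 5
    # S4: in practice the adjusted signal is replayed to the patient and
    # fine-tuned until the patient judges it consistent with the described sound.
    return chosen

# Example call with a tiny illustrative database.
db = [{"kind": "single_tone", "frequency_hz": 4000, "intensity_db": 30}]
print(simulate_sound("a high whistle in the right ear", threshold_db=25, database=db))
```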
The beneficial effects of the invention are as follows: the sound type is determined from the patient's subjective description and then matched against the sound signals stored in the database; the sound reported by the patient may correspond to a single sound in the database or to a compound of several sounds, and the selected sound signal is fine-tuned so that it agrees with the patient's hearing threshold. This overcomes the limitation of the prior art, in which only standard stimulus sounds are available and the true character of the patient's sound cannot be reproduced well; it improves the accuracy of sound matching, provides a new tool for examinations such as tinnitus sound matching, and supplies reliable data for fields such as precision sound therapy.
In some embodiments, the sound in step S1 is determined according to the patient's subjective description.
In some embodiments, the sound type in step S1 is any one of: one single tone, two single tones, one complex sound, two complex sounds, or one single tone plus one complex sound. In this way the patient's actual sound type can be covered realistically.
In some embodiments, the database stores multiple single tones and multiple complex sounds, each sound having its corresponding frequency, intensity and duration attributes. In this way different types of sound can be accommodated and matched accurately to the determined sound type.
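A minimal sketch of what such database entries could look like is given below; the dictionary fields and example values are illustrative assumptions rather than data from the patent.

```python
# Illustrative database entries, each carrying the frequency, intensity and
# duration attributes described above (all names and values are assumptions).
SOUND_DATABASE = [
    {"name": "pure tone 4 kHz",         "kind": "single_tone",
     "frequency_hz": 4000, "intensity_db": 30, "duration_ms": 1000},
    {"name": "narrow-band noise 3 kHz", "kind": "complex_sound",
     "frequency_hz": 3000, "intensity_db": 35, "duration_ms": 1500},
    {"name": "cicada call",             "kind": "complex_sound",
     "frequency_hz": 6000, "intensity_db": 25, "duration_ms": 2000},
]

def select_by_kind(kind: str) -> list:
    """Step S2: return the stored signals whose kind matches the determined sound type."""
    return [s for s in SOUND_DATABASE if s["kind"] == kind]
```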
In some embodiments, the adjustment of the sound signal in step S3 includes adjustment of the ear side, frequency, intensity, interval and fluctuation of the sound, and the above parameters are fine-tuned according to the patient's hearing state. In this way the sound can be simulated accurately and the matching precision improved.
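Gathered into one record, the adjustable parameters named in this embodiment might be represented as in the following sketch (the dataclass and field names are assumptions for illustration only).

```python
from dataclasses import dataclass

@dataclass
class SoundAdjustment:
    ear: str               # "left", "right" or "both"
    frequency_hz: float    # centre frequency of the selected signal
    intensity_db: float    # presentation level, set relative to the patient's threshold
    interval_ms: float     # length of the silent gap for intermittent sounds
    fluctuation_ms: float  # cycle length of the slow loudness swell
```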
Brief description of the drawings
Fig. 1 is a schematic view of the operation interface of the operating module corresponding to the sound-simulation method of the invention.
Embodiment
The invention will now be described in further detail with reference to the accompanying drawings.
Referring to Fig. 1, the method for simulating sound according to the patient's chief complaint comprises the following steps:
S1, determining the sound type from the patient's subjective description;
S2, selecting from a database a sound signal that matches the determined sound type;
S3, adjusting the selected sound signal according to the audiological test results;
S4, verifying the adjusted sound signal until it is consistent with the patient's sound.
In step S1 the sound type is any one of: one single tone, two single tones, one complex sound, two complex sounds, or one single tone plus one complex sound.
The database stores multiple single tones and multiple complex sounds, each sound having its corresponding frequency, intensity and duration attributes.
In practical operation, the sounds in the database are stored in multiple memories, usually three, and the number of memories can be increased according to clinical requirements. Each memory holds a full set of sound signals which, when required, can be played out concurrently or separately. A full set of sound signals mainly comprises: standard stimulus sounds, such as pure tones, warble tones, white noise and narrow-band noise; simulated natural sounds, such as cicada calls, wind, running water and birdsong; instrument sounds, such as whistles, piano and violin; and speech sounds.
The adjustment of the sound signal in step S3 includes adjustment of the ear side, frequency, intensity, interval and fluctuation of the sound. To facilitate adjustment of the selected sound signal, each of the above memories is connected through a processor to an analog switch. By controlling the connection between each analog switch and its memory, the output of each memory's sound signal can be switched on or off, so that either a single tone or a compound of several tones is output. At the same time, the processor can adjust the ear side, frequency, intensity, interval and fluctuation of the sound, so that the final output signal achieves the desired simulation effect.
In step S1, when determining the sound type, the patient is asked to confirm it along several dimensions: the ear side of the sound, the symmetry of the sound, the sound type, the pitch of the sound, the continuity of the sound, the rhythmicity of the sound, and a detailed description of the sound.
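One way to record these dimensions as a structured main-complaint entry is sketched below; the dataclass and field names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class MainComplaint:
    ear: str            # "left", "right" or "both"
    symmetric: bool     # whether the two sides sound the same
    species: str        # e.g. "one single tone plus one complex sound"
    pitch: str          # "low", "medium" or "high"
    continuity: str     # "continuous", "intermittent" or "transient"
    rhythmicity: str    # e.g. "fluctuating", "pulse-synchronous", "none"
    description: str    # free-text description, e.g. "cicada-like ringing"
```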
A sound-simulation function module is created according to the method of the invention and embedded in the sound-testing software on a computer; the computer controls the hardware of the hearing-test software platform and thereby realises the functions of the sound-simulation module. The module has multiple sound selection buttons as well as adjustment buttons for the ear side, frequency, intensity, interval and fluctuation of each sound signal.
Specifically, the ear side of the sound includes the left ear, the right ear and both ears;
the sound type includes one single tone, two single tones, one complex sound, two complex sounds, and one single tone plus one complex sound;
the pitch of the sound includes low, medium and high;
the continuity of the sound includes continuous ringing, intermittent ringing and transient ringing, where intermittent ringing covers cases in which the sound comes and goes, is irregular, or is related to body position or limb movement;
the rhythmicity of the sound includes no rhythm, a steady tone, fluctuation (now louder, now softer, or a rising-and-falling ring), synchrony with the rhythm of breathing, and synchrony with the pulse;
the detailed description of the sound mainly includes ringing, humming, buzzing, hissing, rumbling, ticking, cicada calls, wind, running water, speech, musical instruments and the like.
Taking one patient as an example, the options selected and the sounds generated are as follows:
Ear side: both ears, with the sound described differently for the left and right ears.
After both ears are selected, the channels for sound 1 and sound 2 are opened; sound 1 is presented to the left ear and sound 2 to the right ear.
If the sound is unilateral, the left ear or the right ear can be selected instead.
Sound type: one single tone.
Under "type", a pure tone, pulsed pure tone or warble tone can be filtered out.
If a complex sound is selected instead, "type" offers narrow-band noise, white noise, pink noise, speech noise, pulsed noise and simulated natural sounds; some simulated natural sounds can also be filtered by their detailed description, the most common being cicada calls, wind, running water and instrument sounds.
Pitch of the sound: high, shrill.
Under "pitch", sounds in the 3-8 kHz range can be filtered out, and the technician can fine-tune the frequency to a specific value.
If low pitch is selected, sounds in the 0.125-1 kHz range are offered; if medium pitch, sounds in the 1-3 kHz range.
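A minimal sketch of this pitch-to-frequency-band mapping (the dictionary and function names are assumptions for illustration):

```python
PITCH_RANGES_HZ = {
    "low":    (125, 1_000),
    "medium": (1_000, 3_000),
    "high":   (3_000, 8_000),
}

def candidates_for_pitch(pitch: str, stored_sounds: list) -> list:
    """Pre-filter stored sounds whose centre frequency lies in the chosen band."""
    lo, hi = PITCH_RANGES_HZ[pitch]
    return [s for s in stored_sounds if lo <= s["frequency_hz"] <= hi]
```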
The operator can then fine-tune the sound so that its frequency, intensity, interval and fluctuation better match the patient's sound.
Whether the sound has rhythmicity:
Fluctuation, i.e. the sound is now louder, now softer, or rings with a rising and falling quality; the "fluctuation" item can be fine-tuned to adjust the amplitude of the swell. For example, if 1000 ms is entered in the "fluctuation" item, the output sound will gradually grow louder and softer within each input cycle.
Interruption, i.e. there are pauses within the sound, like a pulse or a heartbeat. For example, if 200 ms is entered in the "interval" item, the output sound will pause briefly every 200 ms.
And so on.
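The patent does not specify how these controls are implemented; the sketch below shows one plausible realisation of the 1000 ms fluctuation cycle and the 200 ms interval, with the sample rate, the sinusoidal envelope and the function names all being assumptions for illustration.

```python
import numpy as np

FS = 44_100  # assumed sample rate in Hz

def apply_fluctuation(signal: np.ndarray, cycle_ms: float) -> np.ndarray:
    """Slowly raise and lower the amplitude over each cycle (e.g. 1000 ms)."""
    t = np.arange(len(signal)) / FS
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * t / (cycle_ms / 1000.0)))
    return signal * envelope

def apply_interval(signal: np.ndarray, interval_ms: float) -> np.ndarray:
    """Insert a short pause every interval (e.g. a brief gap every 200 ms)."""
    period = int(FS * interval_ms / 1000.0)
    gated = signal.copy()
    for start in range(0, len(gated), period):
        gated[start + period // 2 : start + period] = 0.0  # silence half of each period
    return gated

# Example: a 1 kHz tone, swelling over 1000 ms cycles and pausing every 200 ms.
t = np.arange(int(FS * 2.0)) / FS
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
shaped = apply_interval(apply_fluctuation(tone, 1000), 200)
```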
The operator can then continue to fine-tune the sound so that the adjustments of frequency, intensity, interval and fluctuation better match the patient's sound. Note that the prerequisite for such fine-tuning is a refined picture of the patient's hearing threshold; a refined hearing-test result can be obtained with the auditory identification sensitivity test method.
Finally, the technician can present sound 1 and sound 2 to the patient at the same time and thereby simulate the sound that best matches the patient's perception.
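A minimal sketch of presenting the two signals together as one stereo buffer, with sound 1 on the left channel and sound 2 on the right (the helper name and channel layout are assumptions):

```python
import numpy as np

def to_stereo(sound_1: np.ndarray, sound_2: np.ndarray) -> np.ndarray:
    """Place sound 1 on the left channel and sound 2 on the right channel."""
    n = max(len(sound_1), len(sound_2))
    stereo = np.zeros((n, 2))
    stereo[: len(sound_1), 0] = sound_1  # left ear
    stereo[: len(sound_2), 1] = sound_2  # right ear
    return stereo
```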
The sound-simulation method of the invention determines the sound type from the patient's subjective description and then matches it against the sound signals stored in the database; the sound reported by the patient may correspond to a single sound in the database or to a compound of several sounds, and the selected sound signal is fine-tuned so that it agrees with the patient's hearing threshold. This overcomes the limitation of the prior art, in which only standard stimulus sounds are available and the true character of the patient's sound cannot be reproduced well; it improves the accuracy of sound matching, provides a new tool for examinations such as tinnitus sound matching, and supplies reliable data for fields such as precision sound therapy.
What has been described above is only a preferred embodiment of the invention. It should be noted that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all fall within the scope of protection of the invention.
Claims (5)
1. A method for simulating sound, characterised by comprising the following steps:
S1, determining the sound type from the patient's subjective description;
S2, selecting from a database a sound signal that matches the determined sound type;
S3, adjusting the selected sound signal according to the audiological test results;
S4, verifying the adjusted sound signal until it is consistent with the patient's sound.
2. The method for simulating sound according to claim 1, characterised in that the sound in step S1 is determined according to the patient's subjective description.
3. The method for simulating sound according to claim 1, characterised in that the sound type in step S1 is any one of: one single tone, two single tones, one complex sound, two complex sounds, or one single tone plus one complex sound.
4. The method for simulating sound according to claim 2, characterised in that the database stores multiple single tones and multiple complex sounds, each sound having its corresponding frequency, intensity and duration attributes.
5. The method for simulating sound according to claim 3, characterised in that the adjustment of the sound signal in step S3 includes adjustment of the ear side, frequency, intensity, interval and fluctuation of the sound.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710740419.4A CN107705803A (en) | 2017-08-24 | 2017-08-24 | The method of simulated sound |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107705803A true CN107705803A (en) | 2018-02-16 |
Family
ID=61171191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710740419.4A Pending CN107705803A (en) | 2017-08-24 | 2017-08-24 | The method of simulated sound |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107705803A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007031814A1 (en) * | 2005-09-14 | 2007-03-22 | Ramiro Vergara | Programmable electronic instrument known as a tinnitus suppressor |
CN103211600A (en) * | 2013-04-27 | 2013-07-24 | 江苏贝泰福医疗科技有限公司 | Hearing diagnosis and treatment device |
CN103239237A (en) * | 2013-04-27 | 2013-08-14 | 江苏贝泰福医疗科技有限公司 | Tinnitus diagnostic test device |
CN103239236A (en) * | 2013-04-27 | 2013-08-14 | 江苏贝泰福医疗科技有限公司 | Hearing test and auditory sense assessment device |
CN104783808A (en) * | 2015-04-09 | 2015-07-22 | 复旦大学附属眼耳鼻喉科医院 | Tinnitus detecting method and tinnitus therapeutic apparatus |
CN105997099A (en) * | 2016-06-15 | 2016-10-12 | 佛山博智医疗科技有限公司 | Tinnitus sound scanning method |
CN107049333A (en) * | 2017-06-15 | 2017-08-18 | 佛山博智医疗科技有限公司 | Auditory identification susceptibility test method |
Non-Patent Citations (1)
Title |
---|
Hossein Mahboubi et al.: "Accuracy of Tinnitus Pitch Matching Using a Web-Based Protocol", Annals of Otology, Rhinology & Laryngology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180216 |