CN114913873A - Tinnitus rehabilitation music synthesis method and system - Google Patents
- Publication number
- CN114913873A CN114913873A CN202210595873.6A CN202210595873A CN114913873A CN 114913873 A CN114913873 A CN 114913873A CN 202210595873 A CN202210595873 A CN 202210595873A CN 114913873 A CN114913873 A CN 114913873A
- Authority
- CN
- China
- Prior art keywords
- music
- block
- tinnitus rehabilitation
- synthesizing
- tinnitus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/02—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
Abstract
The invention relates to a tinnitus rehabilitation music synthesis method and system: S1, extracting the main melody part from MIDI music; S2, extracting the notes and chords of the main melody part; S3, numbering the notes and chords to obtain digitized music; S4, cutting the digitized music into music blocks; S5, constructing a Markov chain over the music blocks; S6, generating digitized music with the Markov chain; and S7, restoring the digitized music to MIDI format to obtain tinnitus rehabilitation music. The beneficial effects of the invention are that music generated with the self-updating Markov chain is highly similar to the original music, sounds natural and smooth, resembles human-composed music, and has low repetitiveness, so it better satisfies tinnitus patients' musical preferences; moreover, the duration of the music is unlimited. The invention is also simple to operate and highly integrated, so even medical staff who lack computer knowledge can quickly use the system, which facilitates clinical adoption.
Description
Technical Field
The invention belongs to the field of music and sound therapy for tinnitus. It relates to a music generation method that meets tinnitus treatment requirements, and in particular to a tinnitus rehabilitation music synthesis method and system.
Background
Tinnitus is a subjective auditory perception in the absence of external sound stimuli. Tinnitus may cause sleep disturbance, anxiety, and inattention, and affect the patient's quality of life. Because the pathogenesis of tinnitus is uncertain, there is currently no established means of treating it. Music therapy is a non-invasive treatment; current music therapies for tinnitus mainly include Neuromonics Tinnitus Therapy (NTT), tailor-made notched music training (TMNM), and Heidelberg neuro-music therapy. Music therapy is popular with patients because it has no side effects.
However, the music used in existing tinnitus music therapies is of limited duration and is often played repeatedly during treatment, which tends to provoke negative moods in patients and hinders relaxation; moreover, patients' individual musical preferences are mostly overlooked. Both factors may impede tinnitus recovery. Although existing methods can synthesize music of unlimited length that is not played repeatedly, the generated music is unpleasant owing to algorithmic and technical limitations: it sounds unnatural and does not satisfy patient preferences well. Furthermore, a generative model must be trained separately for each piece of music, which makes clinical operation difficult.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a method and a system for synthesizing tinnitus rehabilitation music.
In order to achieve the purpose, the invention adopts the technical scheme that:
the method for synthesizing the tinnitus rehabilitation music is characterized by comprising the following steps of:
s1, extracting the main melody part in the MIDI music;
s2, extracting notes and chords in the main melody part;
s3, numbering the musical notes and the chords to obtain digital music;
s4, cutting the digital music into music blocks;
s5, constructing a Markov chain for generating the music blocks;
s6, generating digital music by using the Markov chain of the music block;
s7, restoring the digital music into an MIDI format to obtain tinnitus rehabilitation music;
preferably, in S1, the main melody part is the first vocal part of the music;
preferably, in S3, the notes and the chords are numbered according to their appearance order;
preferably, in S4, the digitized music is segmented using byte pair encoding, and a segmentation stop condition is set;
preferably, the segmentation stop condition is that the highest occurrence frequency of adjacent sub-words does not exceed 1;
preferably, in S5, constructing a Markov transition matrix to represent the Markov chain;
preferably, constructing the Markov transition matrix comprises the following steps:
s51, counting the number N of distinct music-block types produced by byte pair encoding, and numbering the blocks from 1 to N in order of first appearance, so that a repeated block receives the same number as its first occurrence;
s52, counting the transition frequencies among the N music-block types to obtain the transition probabilities among them;
s53, constructing an N x N matrix whose top-left element is (1,1), where element (i, j) represents the probability of transitioning from the block numbered i to the block numbered j;
s54, checking whether element (N, N) of the N x N matrix is 0; if so, adding a new state transition using the update rule of the Markov chain and updating the state transition matrix;
preferably, when the state of the markov chain is transferred, any music block before the last music block is taken as the next state of the last music block, and the state transfer matrix is updated;
preferably, when the state of the markov chain is transferred, the state transfer matrix is updated by using the previous music block of the last music block as the next state of the last music block;
preferably, when the state of the markov chain is transferred, the minimum number of transfer states that a previous music block can be transferred to is set to 2, and if the number of transfer states is greater than or equal to 2, the music block is taken as the next transfer state, and the state transfer matrix is updated; if the number of the transition states is less than 2, tracing back a music block forward to serve as the next transition state, and updating the state transition matrix until the number of the transition states of the music block traced back forward is more than or equal to 2;
preferably, in S6, the generated digitized music is arbitrary in length;
a system for synthesizing tinnitus rehabilitation music, comprising:
an import module, wherein the import module is used for importing original music;
a processing module, wherein the processing module processes the original music using the music synthesis method of claims 1-10;
an export module, wherein the export module is used for exporting the tinnitus rehabilitation music.
The beneficial effects of the tinnitus rehabilitation music synthesis method are that music generated with the self-updating Markov chain is highly similar to the original music, sounds natural and smooth, resembles human-composed music, and has low repetitiveness, so it better satisfies tinnitus patients' musical preferences; moreover, the duration of the music is unlimited. The invention is also simple to operate and highly integrated, so even medical staff who lack computer knowledge can quickly use the system, which facilitates clinical adoption.
Description of the drawings:
FIG. 1 is a schematic diagram of the synthesis process of tinnitus rehabilitation music according to the present invention;
FIG. 2 is a flow chart of the BPE algorithm used in the present invention;
FIG. 3 is a schematic diagram of the self-updating Markov chain used in the present invention;
FIG. 4 is a 1/f fluctuation analysis graph of the pitch-value sequence and note-duration sequence according to the present invention;
FIG. 5 is a self-similarity analysis diagram according to the present invention;
FIG. 6 is the melody contour of short-duration music generated by the present invention;
FIG. 7 is the melody contour of long-duration music generated by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-7, the present invention provides the following embodiments:
example 1:
a tinnitus rehabilitation music synthesis method is characterized by comprising the following steps:
s1, extracting the main melody part in the MIDI music;
s2, extracting notes and chords in the main melody part;
s3, numbering the musical notes and the chords to obtain digital music;
s4, cutting the digital music into music blocks;
s5, constructing a Markov chain for generating the music blocks;
s6, generating digital music by using the Markov chain of the music block;
and S7, restoring the digital music into the MIDI format to obtain the tinnitus rehabilitation music.
When a patient receives tinnitus music therapy, the existing music has limited duration and is usually played repeatedly during treatment. If the patient dislikes the music, it may provoke negative emotions and make relaxation difficult; furthermore, the music generated by prior-art methods is unpleasant and unnatural to the ear, which reduces the therapeutic effect.
In this embodiment, the invention provides a tinnitus rehabilitation music synthesis method. It extracts the notes and chords of the main melody part of MIDI music, numbers them in order of appearance, and replaces the original notes and chords with those numbers, yielding digitized music information. The digitized music is then cut into music blocks; even when a passage the patient prefers is cut into different blocks, those blocks are linked back together by the Markov chain during generation, so melodious music can be produced and the patient's personalized preferences can be met. Finally, when the digitized music is restored to the original notes and chords, their musical attributes, i.e., timbre, note duration, and note velocity, are restored to the tinnitus rehabilitation music.
Because human-composed music exhibits self-similarity, and, referring to FIG. 5, the music synthesized by this method satisfies the fractal characteristic and is self-similar, the synthesized music closely resembles human-composed music and sounds better. Moreover, as shown in FIG. 4, the generated music exhibits 1/f fluctuation, which is a comfortable type of fluctuation, so the synthesized music is comfortable to listen to.
If the synthesized music is highly similar to the original music, it has the same therapeutic effect as the original. Comparing FIG. 6 and FIG. 7 shows that the melody contour of the music synthesized by this method is highly similar to that of the original music and contains no pitch jumps, so the therapeutic effect can be achieved.
Example 2:
in S1, the main melody vocal part is the first vocal part of the music.
Music consists of one or more vocal parts. In a vocal or instrumental solo there is only one part, while a chorus or instrumental ensemble has several; among multiple parts, usually only one carries the main melody and the others provide accompaniment, so the main melody part must be identified.
In this embodiment, the first vocal part of the music is taken as the main melody part. For single-part music, the whole piece, i.e., the first part, is selected directly; for multi-part music, the first part is usually the main melody, so the first part is likewise selected as the main melody part.
Example 3:
in S3, the notes and chords are numbered in the order of appearance.
In this embodiment, the notes and chords are numbered in their order of appearance. Because adjacent passages of a piece of music are similar, this numbering makes the interior of each segmented music block sound smoother, and in turn makes the synthesized music sound better.
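The numbering of S3 can be sketched in a few lines of Python (an illustration only; the tuple token representation for notes and chords is an assumption, not the patent's format):

```python
def number_events(events):
    """Number notes and chords in order of first appearance (S3).

    Each event is a hashable token, e.g. ('C4',) for a single note or
    ('C4', 'E4', 'G4') for a chord; a repeated event reuses the number
    assigned at its first occurrence.
    """
    numbering = {}
    digitized = []
    for ev in events:
        if ev not in numbering:
            numbering[ev] = len(numbering) + 1  # numbers start at 1
        digitized.append(numbering[ev])
    return digitized, numbering
```

For example, the sequence note C4, note E4, note C4, C-major chord would be digitized as [1, 2, 1, 3].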
Example 4:
in S4, segmenting the digitized music using byte pair encoding, and setting a segmentation stop condition;
preferably, the segmentation stop condition is that the highest occurrence frequency of adjacent sub-words does not exceed 1.
If a fixed segmentation scheme were preset, it would not necessarily suit every piece, because the notes and chords of each song differ, and the synthesized music would not be pleasant enough.
This embodiment uses byte pair encoding (BPE) to segment the music. For example, suppose the notes and chords of a passage are numbered in order of appearance as {3,3,3,28,16,3,3,3,28,3,10}, with the stop condition that the highest pair frequency does not exceed 1. Taking each number as a sub-word and counting occurrences of adjacent sub-word pairs, "3,3" occurs most often, so it is replaced by "-1" and the sequence becomes {-1,3,28,16,-1,3,28,3,10}. Now "3,28" occurs most often, so it is replaced by "-2", giving {-1,-2,16,-1,-2,3,10}. Then "-1,-2" occurs most often and is replaced by "-3", giving {-3,16,-3,3,10}. At this point every adjacent pair occurs only once, so the final sequence is {-3,16,-3,3,10}. Byte pair encoding learns automatically and finds the optimal groupings of notes and chords, so strong dependencies exist within each segmented music block and the synthesized music is more pleasant; testing shows that music synthesized with this stop condition sounds better.
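The worked example above can be reproduced with a short byte-pair-encoding sketch (an illustration of the idea, not the patent's implementation). New merged symbols are numbered -1, -2, -3 as in the example; when two pairs tie for the highest frequency, the merge order may differ from the text, but the final segmentation is the same:

```python
def bpe_segment(seq):
    """Merge the most frequent adjacent pair until no pair occurs more
    than once (the segmentation stop condition of this embodiment)."""
    merges = {}      # new symbol -> the pair of symbols it replaced
    new_sym = -1     # merged symbols are numbered -1, -2, -3, ...
    while True:
        # Count occurrences of each adjacent sub-word pair.
        counts = {}
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
        if not counts:
            break
        best = max(counts, key=counts.get)
        if counts[best] <= 1:        # stop: highest frequency is 1
            break
        merges[new_sym] = best
        # Replace every (non-overlapping) occurrence of the best pair.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                out.append(new_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        new_sym -= 1
    return seq, merges
```

Running it on the example sequence {3,3,3,28,16,3,3,3,28,3,10} yields the final segmentation {-3,16,-3,3,10}.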
Example 5:
in S5, constructing a Markov transition matrix to represent the self-updating Markov chain;
preferably, constructing the Markov transition matrix comprises the following steps:
s51, counting the number N of distinct music-block types produced by BPE, and numbering the blocks from 1 to N in order of first appearance, so that a repeated block receives the same number as its first occurrence;
s52, counting the transition frequencies among the N music-block types to obtain the transition probabilities among them;
s53, constructing an N x N matrix whose top-left element is (1,1), where element (i, j) represents the probability of transitioning from the block numbered i to the block numbered j;
s54, checking whether element (N, N) of the N x N matrix is 0; if so, adding a new state transition using the update rule of the Markov chain and updating the state transition matrix.
In this embodiment, the Markov chain is described by a Markov transition matrix in which each music block is a state: moving from one music block to the next is represented as a transition from one state to the next, and the probability of the next music block depends only on the immediately preceding one. The matrix not only shows which blocks may follow a given block, but also gives the probability of each possible next state.
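Steps S51-S53 can be sketched as follows (a minimal illustration in plain Python; the block tokens are assumed to be the output of the BPE segmentation, and zero-based indices are used internally for block numbers 1..N):

```python
def build_transition_matrix(blocks):
    """Build the N x N transition matrix of S51-S53.

    Distinct blocks are numbered 1..N in order of first appearance;
    row i-1, column j-1 holds the probability that block i is followed
    by block j, estimated from the observed transition frequencies.
    """
    # S51: number distinct block types by order of first appearance.
    index = {}
    for b in blocks:
        if b not in index:
            index[b] = len(index)
    n = len(index)
    # S52: count transition frequencies between consecutive blocks.
    counts = [[0] * n for _ in range(n)]
    for a, b in zip(blocks, blocks[1:]):
        counts[index[a]][index[b]] += 1
    # S53: normalize each row into transition probabilities.
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix, index
```

Note that the last-appearing block type has no observed successor, so its row is all zeros; this is the dead-end condition that S54 detects before applying the self-update rule of Examples 6 and 7.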
Example 6:
and when the Markov chain carries out state transition, taking any one music block before the last music block as the next state of the last music block, and updating the state transition matrix.
In the original music the music blocks appear in sequence, so when state transitions are performed with the Markov chain, the last music block has no next transition state.
In this embodiment, any music block preceding the last one is taken as the next state of the last block, and the state transition matrix is updated. This realizes self-updating of the Markov chain, allows state transitions to continue indefinitely, and removes any limit on the duration of the generated music. When the original music has few local transition states, this method enriches the transitions of local music blocks and generates better music.
Example 7:
and when the Markov chain carries out state transition, taking the previous music block of the last music block as the next state of the last music block, and updating the state transition matrix.
In this embodiment, the music block immediately preceding the last block is taken as the next state of the last block, and the state transition matrix is updated. Owing to the similarity between adjacent passages of music, the generated music sounds better, with no sudden pitch changes and no audible splicing, see FIG. 7.
However, if the preceding block has only one transition state, so that its only successor is the last block, music generation falls into an infinite loop.
Preferably, when the Markov chain performs a state transition, the minimum number of transition states of the preceding block is set to 2: if the block has at least 2 transition states, it is taken as the next transition state and the state transition matrix is updated; if it has fewer than 2, the method traces back one more block and repeats, until a block with at least 2 transition states is found and taken as the next transition state.
In this embodiment, the minimum number of transition states of the preceding block is 2, and blocks with fewer are skipped by tracing further back, which prevents the infinite loop that occurs when the preceding block's only successor is the last block. With the minimum set to 2, if a block has 2 transition states, the probability that its next state is the last block is 50%; with 3 transition states, the probability is 33.3%; and so on: the more transition states a block has, the lower the probability of transitioning back to the last block, the lower the risk of an infinite loop, and the less likely generation is to keep cycling among the same blocks.
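The back-off rule of this example can be sketched as follows (a hedged illustration, not the patent's code; `successors` is assumed to map each block to the set of its observed next states):

```python
def self_update(blocks, successors, min_out=2):
    """Choose a next state for the last block, which has no successor.

    Walk backwards from the block just before the last one until a
    block with at least `min_out` distinct successors is found; record
    it as the last block's successor (updating the chain) and return it.
    """
    last = blocks[-1]
    for i in range(len(blocks) - 2, -1, -1):
        candidate = blocks[i]
        if len(successors.get(candidate, ())) >= min_out:
            successors.setdefault(last, set()).add(candidate)
            return candidate
    raise ValueError("no block with enough transition states found")
```

For instance, with the block sequence [1, 2, 1, 3, 1, 4] and successors {1: {2, 3, 4}, 2: {1}, 3: {1}}, the last block 4 is given block 1 as its next state, since block 1 has three transition states.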
Example 8:
in S6, the generated digitized music is arbitrary in length.
Different tinnitus sound therapy patients need treatment of different durations, and thus music of different durations. The digitized music generated in this embodiment can be of any length; in clinical treatment, music of the corresponding duration is generated according to the patient's treatment time.
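Given a transition structure, arbitrary-length generation is a random walk over the blocks. The sketch below is an assumption about the overall loop, not the patent's code: it draws successors from the list of observed next states (so successors are weighted by observed frequency) and resolves a dead end by falling back to any earlier block, as in Example 6:

```python
import random

def generate(blocks, successors, length, seed=None):
    """Generate a digitized block sequence of arbitrary length (S6).

    `successors` maps each block to a list of observed next blocks;
    drawing from the list weights successors by observed frequency.
    """
    rng = random.Random(seed)
    state = blocks[0]            # assumption: start from the first block
    out = [state]
    while len(out) < length:
        nexts = successors.get(state)
        if not nexts:            # dead end: self-update fallback
            nexts = blocks[:-1]
        state = rng.choice(nexts)
        out.append(state)
    return out
```

Because the loop only stops when the requested length is reached, the duration of the generated music is unlimited; restoring the blocks to notes and chords (S7) then yields MIDI of exactly the length the treatment session requires.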
Example 9:
a system for synthesizing tinnitus rehabilitation music, comprising:
an import module, wherein the import module is used for importing original music;
a processing module, wherein the processing module processes the original music using the music synthesis method of claims 1-10;
an export module, wherein the export module is used for exporting the tinnitus rehabilitation music.
Existing approaches to generating therapy music require training a separate generative model for each piece, which is difficult for doctors to operate. The tinnitus rehabilitation synthesis system provided in this embodiment comprises an import module, a processing module, and an export module; the code is highly integrated and applicable to all music. The doctor only needs to place the original music in the import module to obtain the rehabilitation therapy music from the export module, making operation more convenient and facilitating clinical adoption.
In the description of the embodiments of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "center", "top", "bottom", "inner", "outer", and the like indicate an orientation or positional relationship.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "assembled" are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the embodiments of the invention, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the embodiments of the present invention, it should be understood that "~" and "-" indicate a range between two numerical values, inclusive of both endpoints. For example, "A-B" or "A to B" means a range greater than or equal to A and less than or equal to B.
In the description of the embodiments of the present invention, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (12)
1. A tinnitus rehabilitation music synthesis method is characterized by comprising the following steps:
s1, extracting the main melody part in the MIDI music;
s2, extracting notes and chords in the main melody part;
s3, numbering the musical notes and the chords to obtain digital music;
s4, cutting the digital music into music blocks;
s5, constructing a Markov chain for generating the music blocks;
s6, generating digital music by using the Markov chain of the music block;
and S7, restoring the digital music into the MIDI format to obtain the tinnitus rehabilitation music.
2. The method for synthesizing tinnitus rehabilitation music according to claim 1,
in S1, the melody vocal part is the first vocal part of the music.
3. The method for synthesizing tinnitus rehabilitation music according to claim 2,
in S3, the notes and chords are numbered in the order of appearance.
4. The method for synthesizing tinnitus rehabilitation music according to claim 3,
in S4, the digitized music is segmented using byte pair encoding, and a segmentation stop condition is set.
5. The method for synthesizing tinnitus rehabilitation music according to claim 4,
the segmentation stopping condition is that the highest occurrence frequency of adjacent sub-words does not exceed 1.
6. The method for synthesizing tinnitus rehabilitation music according to claim 1,
in S5, a markov transition matrix is constructed to represent the markov chain.
7. The method for synthesizing tinnitus rehabilitation music according to claim 6,
the method for constructing the Markov transfer matrix comprises the following steps:
s51, counting the number N of distinct music-block types produced by byte pair encoding, and numbering the blocks from 1 to N in order of first appearance, wherein a repeated block receives the same number as its first occurrence;
s52, counting the transfer frequency number among the N music blocks to obtain the transfer probability among the N music blocks;
s53, constructing a matrix of size N x N, wherein the top-left element of the matrix is (1,1) and element (i, j) represents the probability of transitioning from the music block numbered i to the music block numbered j;
and S54, observing whether the value of the matrix element (N, N) with the size of N x N is 0, if so, adding a new state transition by using the updating rule of the Markov chain, and updating the state transition matrix.
8. The method for synthesizing tinnitus rehabilitation music according to claim 7,
and when the Markov chain carries out state transition, taking any one music block before the last music block as the next state of the last music block, and updating the state transition matrix.
9. The method for synthesizing tinnitus rehabilitation music according to claim 8,
and when the Markov chain carries out state transition, taking the previous music block of the last music block as the next state of the last music block, and updating the state transition matrix.
10. The method for synthesizing tinnitus rehabilitation music according to claim 9,
when the Markov chain carries out state transition, setting the minimum number of transition states of a previous music block which can be transferred as 2, and if the number of the transition states is more than or equal to 2, updating a state transition matrix by taking the music block as the next transition state; if the number of the transition states is less than 2, tracing back a music block forward as the next transition state, and updating the state transition matrix until the number of the transition states of the music block traced back forward is more than or equal to 2.
11. The method for synthesizing tinnitus rehabilitation music according to claim 1,
in S6, the generated digitized music is arbitrary in length.
12. A system for synthesizing tinnitus rehabilitation music, comprising:
an import module, wherein the import module is used for importing original music;
a processing module, wherein the processing module processes the original music using the music synthesis method of claims 1-11;
an export module, wherein the export module is used for exporting the tinnitus rehabilitation music.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210595873.6A CN114913873B (en) | 2022-05-30 | 2022-05-30 | Tinnitus rehabilitation music synthesis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114913873A true CN114913873A (en) | 2022-08-16 |
CN114913873B CN114913873B (en) | 2023-09-01 |
Family
ID=82768467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210595873.6A Active CN114913873B (en) | 2022-05-30 | 2022-05-30 | Tinnitus rehabilitation music synthesis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114913873B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1173044A2 (en) * | 2000-06-30 | 2002-01-16 | Cochlear Limited | Implantable system for the rehabilitation of a hearing disorder |
US20020194984A1 (en) * | 2001-06-08 | 2002-12-26 | Francois Pachet | Automatic music continuation method and device |
WO2008106974A2 (en) * | 2007-03-07 | 2008-09-12 | Gn Resound A/S | Sound enrichment for the relief of tinnitus |
CN101950377A (en) * | 2009-07-10 | 2011-01-19 | Sony Corporation | Novel Markov sequence generator and new method of generating Markov sequences
EP2842530A1 (en) * | 2013-08-30 | 2015-03-04 | Neuromod Devices Limited | Method and system for generation of customised sensory stimulus |
CN105930480A (en) * | 2016-04-29 | 2016-09-07 | 苏州桑德欧声听觉技术有限公司 | Method for generating tinnitus rehabilitation music and tinnitus rehabilitation system |
CN105999509A (en) * | 2016-05-06 | 2016-10-12 | 苏州桑德欧声听觉技术有限公司 | A tinnitus treating music generating method and a tinnitus treating system |
CN107039025A (en) * | 2017-04-14 | 2017-08-11 | Sichuan University | Personalized tinnitus treatment sound generation method based on hyperchaos
CN107068166A (en) * | 2017-04-14 | 2017-08-18 | Sichuan University | Method for generating tinnitus treatment sound based on chords and chaotic sequences
CN108877749A (en) * | 2018-04-25 | 2018-11-23 | 杭州回车电子科技有限公司 | Generation method and system of EEG AI music
CN110960351A (en) * | 2019-12-05 | 2020-04-07 | 复旦大学附属眼耳鼻喉科医院 | Tinnitus treatment music generation method, medium, equipment and tinnitus treatment instrument |
CN111921061A (en) * | 2020-08-04 | 2020-11-13 | 四川大学 | Method and system for synthesizing tinnitus rehabilitation sound by combining fractal and masking |
CN112331221A (en) * | 2020-11-05 | 2021-02-05 | 佛山博智医疗科技有限公司 | Tinnitus sound treatment device and application method thereof |
CN112955948A (en) * | 2018-09-25 | 2021-06-11 | 宅斯楚蒙特公司 | Musical instrument and method for real-time music generation |
CN113010730A (en) * | 2021-03-22 | 2021-06-22 | 平安科技(深圳)有限公司 | Music file generation method, device, equipment and storage medium |
CN114141378A (en) * | 2017-01-19 | 2022-03-04 | 京东方科技集团股份有限公司 | Data analysis method and device |
Non-Patent Citations (2)
Title |
---|
HUI JUN et al.: "Efficacy of sound therapy interventions for tinnitus management:" *
FANG Yiming et al.: "Tinnitus rehabilitation music synthesis method based on BPE and self-updating Markov chain", vol. 41, no. 41 *
Also Published As
Publication number | Publication date |
---|---|
CN114913873B (en) | 2023-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Story et al. | The relationship of vocal tract shape to three voice qualities | |
Miller | Solutions for singers: Tools for performers and teachers | |
McClellan | The healing forces of music: History, theory and practice | |
Brosnahan et al. | Introduction to phonetics | |
Sundberg | Perception of singing | |
Moorer | Music and computer composition | |
Miller | Securing baritone, bass-baritone, and bass voices | |
Lieberman | On the evolution of human language | |
Welch et al. | Solo voice | |
Coffin | Coffin's sounds of singing: Principles and applications of vocal techniques with chromatic vowel chart | |
Mackenzie | The Hygiene of the Vocal Organs: A Practical Handbook for Singers and Speakers. Together with a List of American Singers and Singing-teachers | |
Sergeant et al. | Gender differences in long-term average spectra of children's singing voices | |
Kohler | Parameters of Speech Rate Perception in German Words and Sentences: Duration, F o Movement, and F o Level | |
Di Matteo | Performing theEntre-Deux: The capture of speech in (dis) embodied voices 1 | |
CN114913873B (en) | Tinnitus rehabilitation music synthesis method and system | |
Qi | Replacing tracheoesophageal voicing sources using LPC synthesis | |
CN107039025A (en) | Personalized managing irritating auditory phenomena sound generation method based on hyperchaos | |
Daikoku et al. | The Hierarchical Structure of Temporal Modulations in Music is Universal across Genres and matches Infant-Directed Speech | |
Tokumaru et al. | Membership functions in automatic harmonization system | |
Ting | Between Piano and Forte: Hearing with Aids | |
Browne | Voice, song and speech: a practical guide for singers and speakers; from the combined view of vocal surgeon and voice trainer | |
Van Wyk | The use of bel canto techniques to develop healthy vocal techniques in adolescent singers who belt | |
White | Singing and science | |
Lanz | Silence: Exploring Salvatore Sciarrino’s style through L’opera per flauto | |
Bishop | XXVII. On the physiology of the human voice |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||