CN1118764C - Speech information processor - Google Patents

Speech information processor

Info

Publication number
CN1118764C
CN1118764C CN94119941A
Authority
CN
China
Prior art keywords
actuating unit
memory storage
sound source
output signal
voice messaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN94119941A
Other languages
Chinese (zh)
Other versions
CN1112261A (en)
Inventor
古侨真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Computer Entertainment Inc filed Critical Sony Computer Entertainment Inc
Publication of CN1112261A publication Critical patent/CN1112261A/en
Application granted granted Critical
Publication of CN1118764C publication Critical patent/CN1118764C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 7/002 - Instruments in which the tones are synthesised from a data store, e.g. computer organs, using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof
    • G10H 7/004 - Instruments in which the tones are synthesised from a data store, e.g. computer organs, using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof, with one or more auxiliary processors in addition to the main processing unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 7/08 - Instruments in which the tones are synthesised from a data store, e.g. computer organs, by calculating functions or polynomial approximations to evaluate amplitudes at successive sample points of a tone waveform
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 - Handling requests for interconnection or transfer
    • G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 - Details of memory controller
    • G06F 13/1673 - Details of memory controller using buffers
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 - Speech synthesis; Text to speech systems
    • G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04 - Details of speech synthesis systems, e.g. synthesiser structure or memory management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Electrophonic Musical Instruments (AREA)
  • Multi Processors (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

An apparatus for processing speech information includes a first execution unit and a second execution unit that operate on respectively different execution cycles, and a first memory unit for reading and writing the speech information. The first and second execution units share the first memory unit for processing the speech information. The apparatus further includes a second memory unit for storing speech information supplied from the first execution unit or read out from the first memory unit. The first execution unit writes speech information to, or reads it from, the second memory unit during the execution cycle of the first execution unit. The second execution unit accesses the first memory unit during its own execution cycle so as to output the speech information to the outside. While the first memory unit is not being accessed, the speech information held in the second memory unit is read out and written into the first memory unit, or the speech information held in the first memory unit is read out and written into the second memory unit. Data transfer between the first execution unit and the second memory unit can thus be performed during the execution cycle of the first execution unit, while transfer between the second memory unit and the first memory unit is performed during the execution cycle of the second execution unit, so that the first execution unit can transfer data independently; by employing a high-speed device as the first execution unit, sound source data can be transferred at a higher speed.

Description

Speech information processor
The present invention relates to a speech information processor suitable for use in an electronic musical instrument or a video game machine.
Sound sources used in electronic musical instruments and electronic game machines can be roughly divided into analog sound sources, comprising a voltage-controlled oscillator (VCO), a voltage-controlled amplifier (VCA), a voltage-controlled filter (VCF) and the like, and digital sound sources, such as a programmable sound generator (PSG) or a waveform read-out type ROM.
As an example of a digital sound source, Japanese Patent Application (Kokai) No. 62-264099 (1987) and Japanese Patent Application (Kokai) No. 62-267798 (1987) disclose a sampling sound source in which the sounds of real musical instruments are sampled, digitized and stored in a memory for use as sound source data.
The above sound source (sampling sound source) stores sound source data of preset pitches after compression (for example, by non-linear quantization). Each item of sound source data is stored in two parts, namely a formant part (FR) containing a plurality of repetitive patterns, and a loop part (LP) consisting of one period of the basic cycle following the formant part, as shown in Fig. 9. The formant part is the signal waveform of the initial period of sound generation characteristic of each instrument; for a piano, for example, it is the sound produced when the hammer driven by a key on the keyboard strikes the string. When the sound source data is read out, the formant part is read first, and the one-period loop part is then read repeatedly.
Because the sound source data is compressed in this way, and only the required portions, namely the formant part and the one-period repeating part, are extracted and stored, a large amount of sound source data can be held in a small storage space.
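The read-out scheme described above can be sketched as follows (an illustrative model only, not part of the claimed hardware; the function and variable names are assumptions): the formant part is emitted once, and the one-period loop part is then repeated for as long as output is required.

```python
def play(formant, loop, n_samples):
    """Read out a sampled sound source: the formant (attack) part once,
    then repeat the one-period loop part until n_samples are produced."""
    out = []
    out.extend(formant[:n_samples])     # attack portion, read once
    i = 0
    while len(out) < n_samples:         # steady portion, read cyclically
        out.append(loop[i % len(loop)])
        i += 1
    return out
```

Storing only the attack waveform and a single loop period, as the sketch assumes, is what lets a large repertoire of sounds fit in a small memory.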
A known speech information processor for handling such sampling sound source data is an audio processing unit (APU) 107 comprising a digital signal processor (DSP) 101, a memory 102 and a central processing unit (CPU) 103, as shown in Fig. 10.
In the figure, the APU 107 is connected to a host machine 104 provided in an ordinary personal computer, digital electronic musical instrument or electronic game machine.
The host machine 104 includes a ROM cartridge in which the above sound source data, a control program and the like are stored. The control program stored in the ROM cartridge is read by the CPU 103 and stored in a working memory 103a provided therein.
Based on the control program, the CPU 103 reads sound source data from the ROM cartridge and, by controlling writing to the memory 102, stores it temporarily in the memory 102 via the DSP 101. The CPU 103 also controls the DSP 101 according to the control program. Under the control of the CPU 103, the DSP 101 reads the sound source data in the memory 102 and processes it, for example by pitch conversion. The DSP 101 also loops over the repeating part of the sound source data, reading it repeatedly. The sound source data processed and output by the DSP 101 is fed to a D/A converter 105, converted into an analog sound signal and supplied to a speaker unit 106. In this way, a sound output corresponding to the sound source data is produced by the speaker 106.
The access timing of the memory 102 by the CPU 103 and the DSP 101 is preset so that the CPU 103 accesses the memory 102 once for every two accesses by the DSP 101. Therefore, when part of the contents of the memory 102 is to be rewritten, the CPU 103 controls writing to the memory 102 so that it reads sound source data from the ROM cartridge and writes it into the memory 102 while the DSP 101 is not accessing the memory. This allows the sound output corresponding to the rewritten sound source data to continue to be produced without interruption. The present assignee has filed related applications: European Patent Application EP 0543667 and a corresponding US application (unexamined).
However, since the above speech information processor uses the APU 107, in which the memory 102 is shared by the DSP 101 and the CPU 103 and the access timing of the memory 102 by the DSP 101 and the CPU 103 is preset, the CPU 103 can access the memory only at the preset times, so that high-speed data transfer cannot be achieved.
Conversely, since high-speed data transfer cannot be obtained, a high-speed CPU cannot be exploited either.
If memory access were instead performed by interrupts, for example in order to obtain high-speed data transfer, the output of speech data would have to be interrupted while the DSP 101 is reading speech data, so that the speech data could not be output continuously.
In view of the above state of the art, it is an object of the present invention to provide a speech information processor in which, even though the memory is shared by the CPU and the DSP and the memory access timing is preset, high-speed data transfer can be obtained without interrupting the operation of the DSP.
In one aspect, the present invention provides an apparatus for processing speech information comprising a first execution unit and a second execution unit that operate on respectively different execution cycles, and a first memory unit for reading and writing speech information. The first and second execution units share the first memory unit for processing the speech information. The apparatus further comprises a second memory unit for storing speech information supplied from the first execution unit or read out from the first memory unit. The first execution unit writes speech information to, or reads it from, the second memory unit during the execution cycle of the first execution unit. The second execution unit accesses the first memory unit during its own execution cycle so as to output speech information to the outside. While the first memory unit is not being accessed, the speech information held in the second memory unit is read out and written into the first memory unit, or the speech information held in the first memory unit is read out and written into the second memory unit.
In another aspect, the apparatus of the invention further comprises a direct memory access controller for writing speech information to, and reading speech information from, the second memory unit.
In the speech information processing apparatus of the present invention, the execution cycles of the first execution unit, for example a central processing unit (CPU), and the second execution unit, for example a digital signal processor (DSP), are preset so that two execution cycles of the DSP fall within one execution cycle of the CPU, and the first memory unit, which writes and reads the speech information, is used alternately by the CPU and the DSP in these different execution cycles to process the speech information.
The second memory unit is placed in front of the first memory unit, and speech information is written into or read out of the first memory unit by way of the second memory unit.
Specifically, the CPU reads speech information from, for example, the sound source ROM of a video game machine, and controls writing so that the speech information is temporarily stored in the second memory unit. That is, writing speech information into the second memory unit is performed during the execution cycle of the CPU.
If speech information exceeding a preset amount is stored in the second memory unit, the DSP reads the speech information stored in the second memory unit while it is not accessing the first memory unit, for example during the execution cycle of the CPU, and controls writing to the first memory unit so that the speech information thus read is written into the first memory unit. That is, writing speech information into the first memory unit is performed during the execution cycle of the DSP.
During its execution cycle, the DSP reads the speech information stored in the first memory unit and processes it, for example by pitch conversion. The output of the DSP, that is, the processed speech information, is sent to a speaker unit or similar device, which produces a sound output corresponding to the speech information.
When the speech information stored in the first memory unit is to be read, the CPU requests the DSP to read the speech information stored in the first memory unit. The speech information is therefore read from the first memory unit during the execution cycle of the DSP and written into the second memory unit; that is, in this case writing into the second memory unit takes place within the DSP's time slot.
Once the speech information has been written into the second memory unit, the CPU reads it out of the second memory unit; that is, in this case reading from the second memory unit takes place within the CPU's time slot.
Although the execution cycles of the CPU and the DSP are preset and the first memory unit is shared by the CPU and the DSP, the transfer of speech information between the CPU and the second memory unit is performed, by operating the second memory unit, within the CPU's time slot; by employing a CPU with a high transfer speed, high-speed transfer therefore becomes possible.
On the other hand, since the transfer of speech information between the second memory unit and the first memory unit is performed within the DSP's time slot, during which the DSP is not accessing the first memory unit, speech information can be transferred without interrupting the information processing of the DSP, so that interruption of the continuous speech output can be avoided.
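The double-buffered transfer scheme described above can be sketched as a toy model (the names `cpu_push` and `drain_to_memory` and the structure are assumptions, not the patent's terminology): the CPU fills the second memory unit, modelled as a FIFO, at its own speed, and the FIFO is drained into the shared memory only in slots where the DSP is not accessing it.

```python
from collections import deque

class TransferModel:
    """Toy model of the two-stage transfer: CPU -> FIFO at CPU speed,
    FIFO -> shared memory only in slots the DSP leaves free."""
    def __init__(self):
        self.fifo = deque()     # second memory unit
        self.memory = []        # first (shared) memory unit

    def cpu_push(self, data):
        # Runs in the CPU's time slot, at the CPU's own transfer speed.
        self.fifo.extend(data)

    def drain_to_memory(self, dsp_busy):
        # Runs in the DSP's time slot; transfers only if the DSP is not
        # accessing the shared memory, so DSP output is never interrupted.
        if not dsp_busy and self.fifo:
            self.memory.append(self.fifo.popleft())

m = TransferModel()
m.cpu_push([10, 20, 30])
for busy in (True, False, False, True, False):
    m.drain_to_memory(busy)
```

The point of the intermediate buffer, as the model illustrates, is that the CPU side and the DSP side never need to agree on a common transfer speed.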
The speech information processing apparatus of the present invention includes a direct memory access controller (DMAC) provided for the second memory unit and used for writing and reading speech information.
To transfer speech information, the DMAC sends a bus request signal to the CPU, requesting permission to use the bus. On receiving the bus request signal, the CPU suspends the operation it is executing at a suitable point and sends an acknowledge signal granting use of the bus to the DMAC. On receiving the acknowledge signal, the DMAC transfers the speech information read from the CPU side to the second memory unit, or reads the speech information in the second memory unit and delivers it to the CPU side.
Unlike the CPU, which transfers speech information under a control program, the DMAC is hardware designed specifically for information transfer, and can therefore transfer information faster than the CPU.
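The bus request/acknowledge handshake described above can be sketched as a small state model (only the request and acknowledge signals come from the text; the class and method names are assumptions):

```python
class Bus:
    """Toy bus-arbitration model: the DMAC raises a request, the CPU
    acknowledges at a suitable point, and the DMAC then owns the bus."""
    def __init__(self):
        self.owner = "cpu"
        self.request = False

    def dmac_request(self):
        self.request = True     # DMAC asserts the bus request signal

    def cpu_step(self):
        # The CPU checks the request at a suitable point and, if asserted,
        # suspends its own operation and grants use of the bus.
        if self.request:
            self.owner = "dmac"     # acknowledge: DMAC may use the bus
            self.request = False

    def dmac_release(self):
        self.owner = "cpu"          # transfer done, bus returned

bus = Bus()
bus.dmac_request()
bus.cpu_step()
granted = bus.owner     # "dmac" while the transfer runs
bus.dmac_release()
```

Because the DMAC only holds the bus between the acknowledge and the release, the CPU resumes its program as soon as the transfer completes.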
In summary, with the speech information processing apparatus of the present invention, since the transfer of speech information between the first execution unit and the second memory unit can be performed within the time slot of the first execution unit, a first execution unit with a high transfer speed can be employed, making high-speed transfer possible.
On the other hand, since the transfer of speech information between the second memory unit and the first memory unit is completed within the time slot of the second execution unit, during which the second execution unit is not accessing the first memory unit, speech information can be transferred without interrupting the processing of the second execution unit, so that interruption of the continuous output of speech information can be avoided.
In addition, with the speech information processing apparatus of the present invention, by using the direct memory access controller (DMAC), information transfer can be performed without the intervention of the CPU, so that speech information can be transferred faster than by the CPU.
Furthermore, since the speech information can be transferred at high speed, high-speed transfer of speech information between the first execution unit and the first memory unit becomes possible, so that a spare area can be formed in the first memory unit. This spare area in the first memory unit can be used as data storage (a RAM disc) for the host machine.
Fig. 1 is a block diagram of a speech information processor embodying the present invention.
Fig. 2 is a block diagram of a synchronizing circuit used to control the CPU and the DSP provided in the speech information processor of Fig. 1 so that they use the local memory on a time-sharing basis.
Fig. 3 is a timing chart illustrating the operation of the synchronizing circuit.
Fig. 4 is a timing chart illustrating the operation of the synchronizing circuit.
Fig. 5 is a block diagram showing a part of the DSP provided in a speech information processor embodying the present invention.
Fig. 6 is a block diagram showing another part of the DSP provided in a speech information processor embodying the present invention.
Fig. 7 shows control data for the register RAM in the DSP.
Fig. 8 shows control data for the register RAM in the DSP.
Fig. 9 shows the non-loop part and the loop part of sound source data stored in the sampling sound source.
Fig. 10 is a block diagram of a conventional speech information processor.
A preferred embodiment of the speech information processor of the present invention will now be described with reference to the drawings.
This speech information processor comprises a central processing unit (CPU) 1 as the first execution unit, a main memory 2 in which a program for controlling the CPU 1 and the like is stored, and a first-in first-out device (FIFO) 3 as the second memory unit, interconnected by a bus 4.
The FIFO 3 is connected to fixed terminal 5b of a switch 5, whose fixed terminal 5c is connected to a sound source data input of a digital signal processor (DSP) 6. The movable contact 5a of the switch 5 is connected to a local memory 7 serving as the first memory unit. The sound source data output of the DSP 6 is connected to the input of a D/A converter 8, whose output is connected to a loudspeaker 9.
Connected to the bus 4 is a host computer 10, for example a video game machine, which has a sound source ROM in which sound source data is prestored.
In the sound source ROM of the host computer 10, 16-bit sound source data for various musical instruments, for example piano, saxophone or cymbals, is stored in 4-bit compressed form. Sound source data having a non-loop part, such as the formant part FR shown in Fig. 9 (as is the case for sounds like the piano), is stored divided into a non-loop part and a loop part (the repeating part LP shown in Fig. 9).
The local memory 7 has a storage capacity of, for example, 64K bytes, and its access time for each memory access operation is 330 nanoseconds. In addition to the sound source data, the program of the CPU 1 is also held in the local memory 7. The local memory 7 is used by the CPU 1 and the DSP 6 on a time-sharing basis, as explained below.
The speech information processor of the present embodiment operates as follows.
When a game starts, the CPU 1 reads sound source data and a control program from the sound source ROM of the host computer system 10, sends the control program to the main memory 2 over the bus 4, and at the same time sends part of the control program and the sound source data to the FIFO 3 over the bus 4. At this point the control program is stored in the main memory 2, while part of the control program and the sound source data are temporarily stored in the FIFO 3.
Until the amount of sound source data stored in the FIFO 3 exceeds a preset value, the DSP 6 operates the switch 5 so that the movable contact 5a rests on the fixed terminal 5c side. When the sound source data stored in the FIFO 3 exceeds the preset value, and if the DSP 6 is not currently accessing the local memory 7, the DSP 6 moves the movable contact 5a of the switch 5 to the fixed terminal 5b side. The DSP 6 also accesses the FIFO 3, so that the sound source data temporarily stored in the FIFO 3 is read out in the order in which it was written and transferred into the local memory 7, where it is stored.
On the other hand, during the execution cycles in which the DSP 6 is connected to the local memory 7, the DSP 6 operates the switch 5 so that the movable contact 5a rests on the fixed terminal 5c side. The sound source data in the local memory 7 is thus transferred to the DSP 6 through the switch 5.
That is, the transfer of data between the CPU 1 and the FIFO 3 can be performed at the transfer speed of the CPU 1. Therefore, to ensure high-speed data transfer, a CPU 1 with a high sound source data transfer speed can be used.
On the other hand, data transfers between the FIFO 3 and the local memory 7, and between the local memory 7 and the DSP 6, are performed at the transfer speed characteristic of the DSP 6. Moreover, the transfer of sound source data between the FIFO 3 and the local memory 7 is carried out while the DSP 6 is not accessing the local memory 7. Sound source data can therefore be transferred without interrupting the data processing of the DSP 6, which prevents the continuous sound output from being interrupted.
To process sound source data stored in the local memory 7, the CPU 1 controls the DSP 6 so that the sound source data stored in the local memory 7 is read out. The DSP 6 then operates the switch 5 so that the movable contact 5a rests on the fixed terminal 5b side, and while the DSP 6 is not reading sound source data from the local memory 7, sound source data is read out of the local memory 7 and sent to the FIFO 3. That is, the data transfer between the local memory 7 and the FIFO 3 is performed at the transfer speed characteristic of the DSP 6.
When the sound source data accumulated in the FIFO 3 exceeds a preset value, the CPU 1 reads the sound source data from the FIFO 3 and processes it in a preset manner while the DSP 6 is not accessing the sound source ROM. That is, the data transfer between the FIFO 3 and the CPU 1 is performed at the transfer speed characteristic of the CPU 1. This improves the data transfer speed from the local memory 7 to the CPU 1, so that an unoccupied area can be left in the local memory 7 and used, for example, for storing host computer data (a RAM disc).
Memory accesses by the CPU 1 and the DSP 6 are controlled by a synchronizing circuit, an example of which is shown in Fig. 2.
In the synchronizing circuit of Fig. 2, the frequency signal output from an oscillator 71 connected to a quartz resonator 71a is supplied to a first frequency divider 72 and a second frequency divider 73. The first frequency divider 72 divides the frequency signal in a preset manner to produce the DSP clock pulses shown in Fig. 3a. These DSP clock pulses are supplied to a time-division multiplexing control circuit 74 and to the clock signal input of the DSP 6.
The time-division multiplexing control circuit 74 produces a time-division multiplexed signal which alternately goes high and low at intervals of four periods of the DSP clock pulses, so that eight DSP clock periods correspond to one period of this signal. The signal is supplied to first to third switches 77 to 79 and to a comparator 75.
The division ratio of the second frequency divider 73 is set to four times that of the first frequency divider 72. Dividing the frequency signal from the oscillator 71 at this ratio produces the CPU clock pulses, whose frequency is one quarter of that of the DSP clock pulses output from the first frequency divider 72, as shown in Fig. 3c; these are delivered to the CPU 1 through an AND gate 76.
Based on the CPU clock pulses, the CPU 1 produces a machine cycle signal, shown in Fig. 3d, which changes in synchronism with the time-division multiplexed signal shown in Fig. 3b; the machine cycle signal is sent to the comparator 75.
The comparator 75 compares the phase of the time-division multiplexed signal produced by the time-division multiplexing control circuit with that of the machine cycle signal produced by the CPU 1. If the two signals are in phase, a high-level coincidence signal is fed to the AND gate 76; otherwise, a low-level coincidence signal is delivered to the AND gate 76. While the high-level coincidence signal is supplied, the AND gate passes the CPU clock pulses from the second frequency divider 73 to the clock pulse input of the CPU 1; while the low-level coincidence signal is applied, the AND gate blocks the clock pulses from the second frequency divider 73.
Therefore, when the two signals are out of phase, the CPU clock pulses to be delivered to the CPU 1 are cut off by the AND gate 76, whereby the machine cycle of the CPU 1 is shifted by half a period and brought into the assumed standard condition.
In this way, the synchronizing circuit controls the memory accesses so that the DSP 6 makes two accesses for every single access by the CPU 1.
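The clock relationship described above can be sketched behaviourally (a toy illustration of the dividers and the AND gate, not the circuit itself; the function names and division ratios used here are assumptions): the CPU clock is the DSP clock divided by four, and CPU pulses are gated off while the comparator reports a phase mismatch.

```python
def divide(pulses, ratio):
    """Frequency divider: emit one output pulse per `ratio` input pulses."""
    return [1 if (i % ratio) == 0 else 0 for i in range(len(pulses))]

def gate(clock, coincidence):
    """AND gate 76: pass CPU clock pulses only while the comparator's
    coincidence signal is high (the two phases agree)."""
    return [c & k for c, k in zip(clock, coincidence)]

master = [1] * 16               # oscillator output, abstracted to unit pulses
dsp_clock = master              # first divider output (ratio taken as 1 here)
cpu_clock = divide(master, 4)   # second divider: four times the division ratio

# Phases disagree for the first half, then agree; the gate drops the
# early CPU pulses, shifting the CPU machine cycle into alignment.
coincidence = [0] * 8 + [1] * 8
gated = gate(cpu_clock, coincidence)
```

Blocking pulses, rather than resetting the CPU, is what lets the machine cycle slip by half a period until it matches the time-division multiplexed signal.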
Specifically, the access time of the local memory 7 is about 330 nanoseconds, that of the DSP is about 240 nanoseconds, each machine cycle of the CPU 1 is about 1 microsecond, and the memory access time of the CPU 1 within its machine cycle is about 375 nanoseconds.
Assuming that the DSP clock pulses supplied to the DSP 6 by the synchronizing circuit, the CPU clock pulses supplied to the CPU 1, and the time-division multiplexed signal output by the time-division multiplexing control circuit are produced under the standard condition, as shown in Figs. 4a to 4c, the memory access period Mc of the CPU 1 is placed in the second half of each machine cycle S, as shown in Fig. 4a, and the two memory access periods MD1 and MD2 of the DSP 6 are placed in the first half of the machine cycle S, as shown in Fig. 4e.
On the other hand, since the access time of the local memory 7 is about 330 nanoseconds, three access periods MD1, MD2 and MC can be provided at equal intervals within one machine cycle S, as shown in Fig. 4g.
In this way, an offset arises between the access timings of the DSP 6 and the CPU 1 at the local memory 7. This offset in access timing is adjusted by the switching of the first to third switches 77 to 79 and the time-division multiplexing control circuit shown in Fig. 2, and sound source data is written and read through the FIFO 3.
That is, the time-division multiplexing control circuit 74 produces the switching control signal shown in Fig. 4f, referenced to the time-division multiplexed signal shown in Fig. 4c, and delivers it to the first to third switches 77 to 79. The first to third switches 77 to 79 then switch their movable contacts 77c to 79c so as to select fixed terminals 77a to 79a during the first access period MD1 and the second access period MD2 of the local memory 7, and fixed terminals 77b to 79b during the third access period MC, as shown in Fig. 4g.
In this way, sound source data is fetched and delivered to the DSP 6 over the address bus, data bus and control bus during the first access period MD1 and the second access period MD2 of the DSP 6.
On the other hand, during the memory access period MC of the CPU 1, the sound source data stored in the FIFO 3 is transferred to the local memory 7 over the address bus, data bus and control bus.
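The slot allocation within one machine cycle can be sketched as a toy scheduler (the labels MD1, MD2 and MC follow the text; the function itself and the 1-microsecond cycle taken as 1000 ns are assumptions for illustration):

```python
def slot_owner(t_ns, machine_cycle_ns=1000):
    """Toy model of the three equally spaced access periods within one
    machine cycle S: two DSP slots (MD1, MD2) followed by the slot (MC)
    in which FIFO contents are written into the local memory."""
    slot = int((t_ns % machine_cycle_ns) / (machine_cycle_ns / 3))
    return ("MD1", "MD2", "MC")[slot]
```

With three equal slots of about 333 nanoseconds each, the stated 330-nanosecond access time of the local memory just fits inside every slot.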
Thus, in the speech information processor of the present invention, the local memory 7 is shared by the DSP 6 and the CPU 1 on a time-sharing basis. This improves the utilization of the local memory and allows a local memory 7 of small storage capacity to be used, which is economical and reduces the manufacturing cost.
The local memory 7 stores sound source data under numbers, for example 0 to 255. Sound source data having a non-loop part (the formant part shown in Fig. 9) is stored under a number different from that of its loop part (the repeating part shown in Fig. 9). Sound source data is read out from the DSP 6 according to eight sound source selection data SRCa to SRCh. The sound source data read out according to the eight sound source selection data SRCa to SRCh is sent to signal processors 20A to 20H, as shown in Fig. 1.
If sound source data stored in the local memory divided into a non-loop part and a loop part is read out, the non-loop part of the sound source data is sent to the signal processor 20A, while the loop part of the sound source data is sent to the signal processors 20B to 20H. The DSP 6 performs the above processing under software control. For convenience of explanation, reference is made to the functional block diagrams shown in Figs. 5 and 6.
The DSP 6 processes eight items of sound source data (voice data) A to H on a time-sharing basis to form and output two channels (left and right). Specifically, the sampling frequency of the DSP 6 is set to 44.1 kHz, so that within each sampling period (1/fs) the processing operations for the eight items of sound source data and the two channels, totalling 128 cycles of about 170 nanoseconds each, are completed.
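The cycle budget stated above can be checked with a quick calculation; the exact per-cycle figure comes out close to the text's approximate 170 nanoseconds.

```python
fs = 44_100                            # sampling frequency in Hz
sample_period_ns = 1e9 / fs            # one sampling period, about 22_676 ns
cycles = 128                           # processing cycles per sampling period
cycle_ns = sample_period_ns / cycles   # about 177 ns per processing cycle
```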
That is to say, deliver to the sound source data of signal processor 20A to 20H and transported to switch S Ia to SIh.Each switch S Ia to SIn, by terminal 31a to 31h, from the register RA M in DSP6, feed-in control data KON, indicate the beginning (bonding) or the feed-in control data KOF of the sound generating of each sound source data, indicate the stopping of sound generating (key is disconnected) of each sound source data, thereby realize turning on and off.
Each control data consists of eight-bit data D0 to D7, the data D0 to D7 being associated with key-on and key-off of the sound source data A to H, respectively. These control data are written into respective registers.
Thus the user's request is satisfied simply by setting a flag "1" for the sound source data to be keyed on or off, so that the troublesome preparatory operation of invariably writing each tone temporarily into a buffer register can be dispensed with.
The sound source data passed through the switches S1a to S1h are routed to a data expansion circuit 21 provided in each of the signal processors 20A to 20H. Since the sound source data are compressed from 16 bits into 4 bits and stored in that form in the sound source RAM, the data expansion circuit 21 expands the 4-bit compressed sound source data to produce 16-bit sound source data, which are routed via a buffer RAM 22 to a pitch conversion circuit 23.
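The text does not specify how the 16-to-4-bit compression works, so the following is only a hypothetical sketch of what a data expansion step of this kind could look like, assuming (my assumption, not the patent's) a simple block scheme in which each 4-bit nibble is a signed value scaled by a per-block shift factor:

```python
def expand_block(nibbles, shift):
    """Expand 4-bit samples to 16-bit values.

    `nibbles` are integers 0..15; `shift` is a hypothetical per-block
    scale factor.  The actual compression scheme used by the device
    is not described in the text.
    """
    out = []
    for n in nibbles:
        s = n - 16 if n >= 8 else n     # sign-extend the 4-bit value
        out.append(s << shift)          # scale back toward 16-bit range
    return out

# A nibble of 0x7 with shift 12 expands to 0x7000 = 28672.
assert expand_block([0x7, 0x8], 12) == [28672, -32768]
```

Whatever the real scheme, the role of circuit 21 is the same: recover full-width 16-bit samples before pitch conversion.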
The pitch conversion circuit 23 is fed with pitch control data P(H) and P(L), processing parameters produced by the register RAM, via a terminal 33a and a control circuit 24. The pitch conversion circuit 23 performs pitch conversion at the same sampling frequency fs as that of the input sound source data by oversampling, interpolating from the four preceding samples and the four succeeding samples, the oversampling being based on the pitch control data P(H) and P(L).
If the lower-bit data P(L) is set to 0, the interpolated data can be prevented from being thinned out inconsistently, thus preventing subtle pitch fluctuations and producing high-quality playback sound.
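The mechanics of interpolating from four preceding and four succeeding samples can be sketched as follows. This is a minimal illustration, not the device's actual algorithm: the 8-tap windowed-sinc weights and the fractional-step representation of the P(H)/P(L) pitch data are my assumptions.

```python
import math

def resample(samples, ratio):
    """Pitch-convert by reading `samples` at a fractional step `ratio`,
    interpolating each output value from the 4 preceding and 4
    succeeding input samples (8-tap windowed sinc; the actual filter
    used by the device is not given in the text)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 5:
        i = int(pos)
        frac = pos - i
        acc = 0.0
        for k in range(-3, 5):          # 4 samples before, 4 after
            x = k - frac
            sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
            window = 0.5 + 0.5 * math.cos(math.pi * x / 4)  # Hann taper
            if 0 <= i + k < len(samples):
                acc += samples[i + k] * sinc * window
        out.append(acc)
        pos += ratio                     # ratio > 1 raises the pitch
    return out
```

With `ratio == 1.0` the interpolator passes samples through unchanged; a ratio of, say, 1.06 would shift the pitch up by roughly a semitone while the output stays at the input rate fs, which is the behavior the paragraph above describes.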
The switch S2a is adapted to be turned on and off by control data FMON (FM-on), supplied from the register RAM via a terminal 35a. When the switch S2a is turned on by the control data FMON, sound source data, for example the sound source data H, is fed to the control circuit 24. When such other sound source data is fed in, the control circuit 24 substitutes the sound source data for the pitch control data P(H) and P(L) and routes it to the pitch conversion circuit 23 for modulation.
At this time the sound source data A is frequency-modulated in the pitch conversion circuit 23. Thus, if the modulating signal is an extremely low-frequency signal of several hertz, vibrato can be applied, while if the modulating signal is of variable frequency, the pitch of the playback sound of the modulated signal can be varied in diversified ways. There is therefore no need to provide a special sound source for modulation, and an FM sound source can be produced by the sampling system.
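The vibrato case above amounts to letting a low-frequency signal sweep the pitch-conversion ratio around 1.0. A minimal sketch (the depth and rate values are made-up examples, not taken from the patent):

```python
import math

def vibrato_ratio(t, depth=0.01, rate_hz=5.0):
    """Illustrative vibrato as described above: a low-frequency
    (a few hertz) modulating signal varies the pitch-conversion
    ratio around 1.0."""
    return 1.0 + depth * math.sin(2 * math.pi * rate_hz * t)
```

Feeding `vibrato_ratio(t)` as the resampling ratio at each instant t wobbles the pitch by about ±1% at 5 Hz; replacing the sine with another sampled voice is exactly the FM configuration the switch S2a enables.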
The FMON control data is written into an eight-bit register, like the control data KON, so that the bits D0 to D7 correspond to the sound source data A to H, respectively.
The sound source data is routed from the pitch conversion circuit 23 to an amplifier 26. This amplifier is fed with control data ENV (envelope) for fade control from the register RAM via a terminal 36a, a control circuit 27, and a switch S3a, and is also fed with control data ADSR via a terminal 37a, a control circuit 28, and the switch S3a, for ADSR processing.
The switch S3a is changed over by the most significant bit (MSB) of the control data ADSR. Thus, if the MSB of the control data ADSR is "1", the switch S3a selects the control data ADSR from the control circuit 28 (ADSR mode), whereas if the MSB of the control data ADSR is "0", the switch is changed over to select the control data ENV from the control circuit 27 (ENV mode).
When the control data ENV is fed in, the amplifier 26 performs fade control, such as a fade-out, on the sound source data from the pitch conversion circuit 23. For this fade control, one of five modes can be selected by the upper three bits of the control data ENV, namely direct designation, linear fade-in, bent-line fade-in, linear fade-out, and exponential fade-out. The current waveform peak value is adopted as the initial value of each mode.
It may be noted that, for a sound source such as a drum or piano, the entire sound generation period can be divided into attack, decay, sustain, and release phases, with the signal amplitude exhibiting a variation state characteristic of each phase. Thus, when the control data ADSR is fed in, the amplifier 26 performs level-change control on the sound source data from the pitch conversion circuit 23 in accordance with each of these phases.
Specifically, with this control operation, the signal level rises linearly during the attack phase and decreases exponentially during the decay, sustain, and release phases. The durations of the fade-in and fade-out of each mode are suitably set in accordance with parameter values specified by the upper five bits of the control data ENV.
The durations of the attack and sustain phases are suitably set in accordance with parameter values specified by the upper and lower four bits of the control data ADSR, while the sustain level and the durations of the decay and release phases are set in accordance with parameter values each specified by two bits of the control data ADSR.
With this DSP 6, the signal level rises linearly only during the attack phase in the ADSR mode, so as to reduce the number of arithmetic and logical operations. In the ENV mode, the attack phase is switched from the ADSR mode to the bent-line fade-in, and the three phases of decay, sustain, and release are switched to exponential attenuation; an equivalent ADSR control operation can nevertheless be carried out manually.
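The envelope behavior described above (linear rise during attack, exponential fall thereafter) can be sketched as follows. This is an illustrative model only: the times are in seconds, and the 2-bit/4-bit register encoding of the actual ADSR parameters is not modeled.

```python
import math

def adsr_gain(t, attack, decay, sustain_level, release_start, release_tc):
    """Gain at time t for the ADSR shape described above: linear rise
    to 1.0 during attack, exponential fall to `sustain_level` during
    decay, hold during sustain, exponential fall toward zero after
    `release_start` with time constant `release_tc`."""
    if t < attack:                       # attack: linear rise to 1.0
        return t / attack
    if t < attack + decay:               # decay: exponential fall
        frac = (t - attack) / decay
        return sustain_level ** frac     # 1.0 -> sustain_level
    if t < release_start:                # sustain: hold the level
        return sustain_level
    # release: exponential fall toward zero
    return sustain_level * math.exp(-(t - release_start) / release_tc)
```

Multiplying each sample by `adsr_gain(t, ...)` is the level-change control the amplifier 26 applies per voice; keeping only the attack linear, as the text notes, avoids extra arithmetic in the other three phases.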
By outputting the sound source data of the amplifier 26 to the register RAM via a terminal 41a, outputting the control data ENV to the register RAM via a terminal 42a, and rewriting these every sampling period, a voice signal of an arbitrary envelope characteristic can be produced, covering a large number of different tones derived from the sound source data of one and the same musical instrument.
If noise is to be used as an effect sound, noise data from an M-series noise generator, not shown, is sent to the amplifier 26 in place of the sound source data from the pitch conversion circuit 23.
The sound source data from the amplifier 26 is routed to second and third amplifiers 29l, 29r. The second amplifier 29l is fed with left-channel volume control data LVL from the register RAM via a terminal 38a for controlling the left-channel volume, while the third amplifier 29r is fed with right-channel volume control data RVL from the register RAM via a terminal 39a for controlling the right-channel volume.
The second amplifier 29l amplifies the sound source data in accordance with the left-channel volume control data LVL to produce left-channel sound source data having a preset volume, and outputs the resulting data at a terminal TLa. The third amplifier 29r amplifies the sound source data in accordance with the right-channel volume control data RVL to produce right-channel sound source data having a preset volume, and outputs the resulting data at a terminal TRa.
Figs. 7 and 8 show all of the control data held in the register RAM.
The left-channel sound source data thus produced by the signal processors 20A to 20H are routed, via terminals TLa to TLh shown in Fig. 6, to a left-channel signal processor 50L, while the right-channel sound source data are routed via terminals TRa to TRh to a right-channel signal processor 50R.
In the left-channel signal processor 50L, the sound source data entering via the terminals TLa to TLh are routed to a main adder 51ml and, simultaneously, to an auxiliary adder 51el via switches S4a to S4h.
In the right-channel signal processor 50R, the sound source data entering via the terminals TRa to TRh are routed to a main adder 51mr and, simultaneously, to an auxiliary adder 51er via switches S5a to S5h.
The adders 51ml, 51mr sum the sound source data supplied via the terminals TLa to TLh and TRa to TRh, respectively, and route the resulting sum data to an amplifier 52.
The amplifier 52 is fed with control data MVL from the register RAM via a terminal 62 for controlling the main volume. The amplifier 52 amplifies the sound source data in accordance with the control data MVL so as to adjust the main volume of the sound source data, and routes the resulting data to an adder 53.
The switches S4a to S4h and S5a to S5h of the signal processors 50L, 50R are fed with control data EONa to EONh for adding echo (reverberant sound), from the register RAM via terminals 61a to 61h. The sound source data (sounds) to which echo is to be added are selected by these control data EONa to EONh.
When the sound A of the signal processor 20A is subjected to the signal processing of the non-looped component, the switches S4a and S5a are controlled so as to be turned off, so that no echo is added to the non-looped portion.
The control data EON is written into an eight-bit register, as shown in Fig. 8.
The auxiliary adders 51el, 51er sum the sound source data supplied from the switches S4a to S4h and S5a to S5h, and route the resulting sum data via an adder 54 to channel echo controllers 14El, 14Er.
The echo controllers 14El, 14Er are fed, via a terminal 64, with control data EDL (echo delay) for controlling the amount of echo and with control data ESA (echo start address) for indicating the sound source data to which echo is to be added. The echo controllers 14El, 14Er add echo, within a range of 255 milliseconds, to the sound source data from the auxiliary adders 51el, 51er, so that the left-channel echo and the right-channel echo are equal, and route the resulting data via a buffer RAM 55 to a digital low-pass filter, for example a finite impulse response (FIR) filter 56.
The FIR filter 56 is fed from the register RAM via a terminal 66 with signed 8-bit parameters C0 to C7, and is controlled so that its filtering characteristic is varied, thereby rendering the produced echo psychoacoustically natural. The sound source data is routed through the FIR filter 56 to amplifiers 57, 58.
The amplifier 57 is fed with control data EFB (echo feedback) from the register RAM via a terminal 67. The amplifier 57 amplifies the sound source data from the FIR filter 56 in accordance with the control data EFB and routes the resulting data to the adder 54. The adder 54 sums the sound source data from the auxiliary adders 51el, 51er with the sound source data from the amplifier 57, and routes the resulting sum to the echo controllers 14El, 14Er.
The amplifier 58 is fed with control data EVL from the register RAM via a terminal 68 for controlling the echo volume. The amplifier 58 amplifies the sound source data from the FIR filter 56 in accordance with the control data EVL so as to adjust the volume of the echo in the sound source data, and routes the resulting data to the adder 53.
The adder 53 sums the sound source data from the main adders 51ml, 51mr with the sound source data from the amplifier 58, so that echo is added to the sound source data from the main adders, and outputs the resulting sum signals, via an oversampling filter 59, at a left-channel sound source data output terminal Lout and a right-channel sound source data output terminal Rout, respectively.
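The echo path just described (delay via the echo controllers, low-pass FIR filter 56, feedback gain EFB through amplifier 57 and adder 54, echo volume EVL through amplifier 58 and adder 53) can be sketched as a single loop. This is a structural illustration only: the FIR coefficients and the delay/gain values are made up, and the real device works on left and right channels separately.

```python
def add_echo(dry, delay, efb, evl, fir=(0.25, 0.5, 0.25)):
    """Mix an echo into `dry` following the structure described above:
    a delay line (echo controller), an FIR low-pass (filter 56), a
    feedback gain `efb` (amplifier 57, adder 54) back into the loop,
    and an echo volume `evl` (amplifier 58, adder 53) into the mix."""
    loop = [0.0] * (delay + len(fir))     # delay line holding the wet path
    out = []
    for x in dry:
        # FIR low-pass over the oldest samples of the delay line
        wet = sum(c * loop[delay + k] for k, c in enumerate(fir))
        loop.pop()                        # advance the delay line
        loop.insert(0, x + efb * wet)     # adder 54: dry plus feedback
        out.append(x + evl * wet)         # adder 53: dry plus echo volume
    return out
```

With `efb = 0` the echo of an impulse appears once, smoothed by the FIR taps; a nonzero `efb` recirculates the filtered echo, producing the decaying reverberant tail, while `evl` scales the echo independently of the main (MVL-controlled) signal, as the text describes.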
The sound source data output from the DSP 6 at the output terminals Lout, Rout are routed to the D/A converter 8 shown in Fig. 1. The D/A converter 8 converts the sound source data into analog signals to form voice signals, which are applied to a loudspeaker 9. In this manner, the speech corresponding to the sound source data is reproduced by the loudspeaker 9.
The control data MVL for controlling the main volume and the control data EVL for controlling the echo volume are unsigned 8-bit data, independent for the left and right channels. Thus the main voice signal and the echo signal can be adjusted in level independently of each other, so that the voice atmosphere produced by the loudspeaker 9 is rich.
In the above description, the sound source data read out from the sound source ROM of the host computer system 10 are written into the FIFO 3 under the control of the CPU 1. However, a direct memory access controller (DMAC) 10 may also be provided, as indicated by the phantom lines in Fig. 1, for transferring the sound source data read out from the sound source ROM to the FIFO 3.
Since the DMAC is designed specifically for data transfer, the sound source data can be transferred without the intervention of the CPU 1, so that a faster data transfer can be achieved than with the CPU 1.
In addition, in the above description, the storage capacity of the local memory 7 is 64 Kbytes and the memory access time is 330 nanoseconds. These values, however, are merely illustrative and are not limitative of the invention. Thus the present invention is not limited to the numerical values given herein and may be modified in any desired manner without departing from the scope of the invention.

Claims (10)

1. A speech information processing apparatus comprising:
a first actuating unit and a second actuating unit, each executing operations with a different execution cycle, and
first memory means for reading and recording speech information,
said first memory means being used in common by said first actuating unit and said second actuating unit for processing the speech information,
said apparatus further comprising
second memory means for storing speech information from said first actuating unit or speech information read out from said first memory means,
said first actuating unit recording speech information in said second memory means or reading out speech information from said second memory means during execution by said first actuating unit,
said second actuating unit accessing said first memory means during execution by said second actuating unit for outputting speech information to outside,
said second actuating unit performing control, while said first memory means is not being accessed, such that the speech information recorded in said second memory means is read out and recorded in said first memory means, or the speech information recorded in said first memory means is read out and recorded in said second memory means.
2. The apparatus as claimed in claim 1, further comprising
a direct memory access controller for recording speech information in said second memory means or reproducing speech information from said second memory means.
3. The apparatus as claimed in claim 2, wherein a vacant area of said first memory means can be used as a RAM disc.
4. The apparatus as claimed in claim 3, further comprising
a synchronization circuit for controlling said first actuating unit and said second actuating unit.
5. The apparatus as claimed in claim 4, wherein said synchronization circuit comprises a transmission circuit and first and second frequency dividing circuits for frequency-dividing an output signal of said transmission circuit, said first and second frequency dividing circuits controlling said first actuating unit and said second actuating unit, respectively.
6. The apparatus as claimed in claim 5, wherein the output signals of said first and second frequency dividing circuits are fed to clock pulse input terminals of said first and second actuating units, respectively.
7. The apparatus as claimed in claim 5, wherein the machine cycle of said first actuating unit is output at an output terminal of said first actuating unit, the output signal of which is compared in a comparator with the output signal of said first frequency dividing circuit, the output signal of the comparator controlling said first actuating unit.
8. The apparatus as claimed in claim 7, further comprising
a logic circuit for performing a logical operation on the output signal of the comparator and the output signal of the second frequency dividing circuit, the output signal of said logic circuit controlling said first actuating unit.
9. The apparatus as claimed in claim 8, further comprising
a time-division multiplexing control circuit connected between said first frequency dividing circuit and said comparator, and a first-in first-out device for storing an output signal of said first actuating unit supplied over a bus, an output signal of said first-in first-out device and an output signal of said second actuating unit being switched instantaneously by an output signal of said time-division multiplexing control circuit.
10. The apparatus as claimed in claim 9, wherein the output signals of said first-in first-out device, said second actuating unit, and said first memory means are switched by said time-division multiplexing control circuit.
CN94119941A 1993-10-27 1994-10-27 Speech information processor Expired - Lifetime CN1118764C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP5268905A JPH07121181A (en) 1993-10-27 1993-10-27 Sound information processor
JP268905/93 1993-10-27
JP268905/1993 1993-10-27

Publications (2)

Publication Number Publication Date
CN1112261A CN1112261A (en) 1995-11-22
CN1118764C true CN1118764C (en) 2003-08-20

Family

ID=17464902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN94119941A Expired - Lifetime CN1118764C (en) 1993-10-27 1994-10-27 Speech information processor

Country Status (9)

Country Link
US (2) US5640489A (en)
EP (1) EP0653710B1 (en)
JP (1) JPH07121181A (en)
KR (1) KR100302030B1 (en)
CN (1) CN1118764C (en)
CA (1) CA2134308C (en)
DE (1) DE69425107T2 (en)
MY (1) MY113976A (en)
TW (1) TW256901B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3019767B2 * 1995-12-28 2000-03-13 Yamaha Corporation Digital signal processor
US5913258A (en) * 1997-03-11 1999-06-15 Yamaha Corporation Music tone generating method by waveform synthesis with advance parameter computation
US6401114B1 (en) * 1997-05-01 2002-06-04 Stratum Technologies Corporation Method and apparatus for dynamic programming across a computer network
JP2003316395A (en) * 2002-04-26 2003-11-07 Toshiba Corp Information reproducing device, and signal processing module and its program therefor
US7610200B2 (en) * 2004-08-30 2009-10-27 Lsi Corporation System and method for controlling sound data
JP2006126482A (en) * 2004-10-28 2006-05-18 Seiko Epson Corp Audio data processor
US20110191238A1 (en) * 2010-01-29 2011-08-04 Bank Of America Corporation Variable merchant settlement options
CN105049173B (en) * 2015-08-27 2017-12-22 南京南瑞继保电气有限公司 The synchronous method of asynchronous device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8824373D0 (en) * 1988-10-18 1988-11-23 Hewlett Packard Ltd Buffer memory arrangement
EP0421696A3 (en) * 1989-10-02 1992-01-29 Motorola Inc. Staggered access memory
JPH03147013A (en) * 1989-11-01 1991-06-24 Casio Comput Co Ltd Data updating device
US5404455A (en) * 1991-12-31 1995-04-04 Dictaphone Corporation Time division multiplexer chip for supporting alternating communication between a pair of RAMs and two different interfaces
CA2086386C (en) * 1991-12-31 1997-04-29 Daniel F. Daly Interface chip for a voice processing system

Also Published As

Publication number Publication date
US5761643A (en) 1998-06-02
DE69425107D1 (en) 2000-08-10
CA2134308A1 (en) 1995-04-28
US5640489A (en) 1997-06-17
EP0653710A1 (en) 1995-05-17
CN1112261A (en) 1995-11-22
JPH07121181A (en) 1995-05-12
CA2134308C (en) 2004-07-06
TW256901B (en) 1995-09-11
KR950012319A (en) 1995-05-16
MY113976A (en) 2002-07-31
KR100302030B1 (en) 2001-10-22
EP0653710B1 (en) 2000-07-05
DE69425107T2 (en) 2001-02-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SONY COMP ENTERTAINMENT INC.

Free format text: FORMER OWNER: SONY CORPORATION

Effective date: 20010629

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20010629

Applicant after: Sony Computer Entertainment, Inc.

Applicant before: Sony Corp

C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CX01 Expiry of patent term

Expiration termination date: 20141027

Granted publication date: 20030820