CN202307120U - Device for assisting deaf person to perceive environmental sound - Google Patents

Device for assisting deaf person to perceive environmental sound

Info

Publication number
CN202307120U
CN202307120U
Authority
CN
China
Prior art keywords
sound
circuit
deaf person
module
display module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2011204203168U
Other languages
Chinese (zh)
Inventor
杨丹
徐彬
李扬
张锡冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN2011204203168U priority Critical patent/CN202307120U/en
Application granted granted Critical
Publication of CN202307120U publication Critical patent/CN202307120U/en

Landscapes

  • Telephonic Communication Services (AREA)

Abstract

The utility model relates to a device for assisting a deaf person to perceive environmental sound, comprising sound acquisition modules and a sound processing and display module. The sound acquisition modules are arranged at a plurality of fixed nodes in the environment where the deaf person stays, and each comprises a microphone, a signal conditioning circuit, a microprocessor, a reset circuit, a joint test action group (JTAG) interface, a clock circuit, a power circuit and a wireless transmitting module. The sound processing and display module is carried about by the deaf person and comprises a processor, a memory, a wireless receiving module, a JTAG interface, a secure digital (SD) memory card interface, a clock circuit, a reset circuit, a power circuit and a liquid crystal display (LCD) screen. When a sound is produced in the environment where the deaf person stays, it is collected by the sound acquisition nodes and transmitted by wireless communication to the sound processing and display module, which performs preprocessing, feature extraction, class judgment and position display on the collected environmental sound, so that the deaf person can perceive sound changes in the surroundings in real time through lossless visual compensation.

Description

A device for assisting a deaf person to perceive environmental sound
Technical field
The utility model relates to the field of embedded signal processing technology, and in particular to a device for assisting a deaf person to perceive environmental sound.
Background technology
The second national sample survey of disabled persons shows that the total disabled population of China keeps increasing: disabled persons account for 6.34% of the national population, of whom about 20.04 million have hearing disabilities, 24.16% of the disabled total. Having lost their hearing, deaf people face serious difficulties in physical function, social life and other respects. In recent years, with the progress of science, technology and society, attention to the deaf has risen steadily, and various technologies and methods have been proposed to improve their perception of sound or restore hearing, such as digital hearing aids, cochlear implants, vibration sensors for perceiving footsteps, hearing dogs for perceiving most sounds, and flashing lights for perceiving alert tones. According to the feedback path for acoustic information, these fall into two classes. The first repairs the original auditory pathway, as with cochlear implants and digital hearing aids. The second substitutes another intact sensory pathway, such as vision, touch or pain, for the auditory pathway: the sound signal is converted into another form of information and delivered to the brain through the nerves, as with vibration receivers and flashing lights. Digital hearing aids and cochlear implants are invasive hearing compensation methods; although they rebuild hearing for some deafness patients, much remains to be improved: after implantation the subject needs a period of adaptation to the sounds heard and regular hospital visits to tune the processor, a series of intracranial and extracranial complications may arise, and the devices are expensive. Vibration receivers, flashing lights and other methods that use alternative pathways to perceive sound are non-invasive compensation: vision can distinguish color, shape, position and motion, and touch can distinguish vibration pattern or position, but these techniques are often restricted by the implementation environment, sound quality and other factors. At present many good non-invasive hearing compensation designs exist, but most are limited to speech signals and pay little attention to non-speech sounds.
Summary of the invention
To address the deficiencies of the prior art, the utility model provides a device for assisting a deaf person to perceive environmental sound. In use the deaf person carries the device. When a sound occurs in the surrounding environment, the sound acquisition modules collect the environmental alert sound and pass it by wireless communication to the embedded sound processing and display module, which completes preprocessing, feature extraction, class judgment and position display of the collected environmental sound, so that the deaf person can perceive sound changes in the surroundings in real time through lossless visual compensation.
The technical scheme of the utility model is a device for assisting a deaf person to perceive environmental sound, comprising sound acquisition modules and a sound processing and display module. The sound acquisition modules are placed at a plurality of fixed nodes in the environment where the deaf person stays and each comprises a microphone, a signal conditioning circuit, a microprocessor, a reset circuit, a JTAG interface, a clock circuit, a power circuit and a wireless transmitting module; their main task is real-time collection and transmission of environmental sound. The microphone is connected to the input of the signal conditioning circuit, whose output feeds a microprocessor port; the reset circuit, JTAG interface, clock circuit and power circuit are all externally connected to microprocessor ports, and the wireless transmitting module is connected to the microprocessor's communication port;
The sound processing and display module is carried by the deaf person and comprises a processor, a memory, a wireless receiving module, a JTAG interface, an SD card, a clock circuit, a reset circuit, a power circuit and an LCD screen; its main task is processing, storing and displaying the received sound signals. The memory, JTAG interface and SD card are externally connected to the processor; the power circuit is connected to the processor's power port, and its output pins also supply the LCD screen, the reset circuit and the memory; the wireless receiving module is connected to the processor's communication port. The module may also be fitted with a keypad, switches and LED lamps to set and indicate the operating status. Software programming provides the bootloader (Bootloader), the embedded Linux kernel, the root file system and the necessary device drivers, building the basic running environment of the embedded system; environmental sound processing and graphical display are completed under this embedded operating system environment.
The device works as follows. When a sound occurs somewhere in the deaf person's environment, the sound acquisition module at that position picks it up with its microphone; after amplification and filtering, the signal is sent by wireless communication, under the control of the low-power microprocessor, to the embedded sound processing and display module. There the wireless receiving module receives the sound data; the ARM core of the processor controls storage, display and communication with the slave modules, while the DSP core performs preprocessing of the sound signal, sound feature extraction and sound-source position determination, establishes the mapping between sound and graphics, and sends the sound class and position information to the LCD for real-time display.
The control method of the device proceeds as follows:
Step 1: For the specific environment in which the subject lives, build an environmental sound database (e.g., telephone ring, doorbell, computer start-up sound, human speech, footsteps) and create image files of the positions where these sounds occur;
Step 2: Process and train on the sounds in the environmental sound database and establish the correspondence between these sounds and their positions, as follows:
Step 2.1: Compute the energy E(m,k) of each sampling point in every frame according to Formula 1, the total signal energy E according to Formula 2, and the probability density P(m,k) of each sampling point in every frame according to Formula 3:

E(m,k) = [x(m)·ω(n−m)]²,  m = 1, …, N,  k = 1, …, M    (Formula 1)

where x(n) is the sound signal, m the sampling index, ω(n) a Hamming window function, k the frame index, N the number of sampling points per frame, and M the number of frames;

E = Σ_{k=1}^{M} Σ_{m=1}^{N/2} E(m,k)    (Formula 2)

P(m,k) = E(m,k)/E,  m = 1, …, N/2,  k = 1, …, M    (Formula 3)
Step 2.2: Compute the spectral entropy H_k of every frame of the sound signal according to Formula 4:

H_k = − Σ_{m=1}^{N/2} P(m,k)·log P(m,k)    (Formula 4)

Step 2.3: Set a threshold and compare frame by frame: when the spectral entropy exceeds the threshold H1, the frame is judged to enter a sound segment; otherwise the comparison continues. The end point of the sound segment is determined in the same way;
Step 2.4: When the start of a non-noise segment has been judged, compute the power spectral value of the current frame, beginning at the start frame, for 15 frames in total;
Step 2.5: Binarize the 15 computed frames of power spectral values: choose a reference value Base, set values greater than Base to 1 and values less than Base to 0, forming the network input feature vector for sound recognition;
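A matching sketch of the binarization in steps 2.4–2.5, reusing the per-frame spectra E returned by analyze_frames above (names again illustrative):

```python
import numpy as np

def binary_feature(E, base, start=0, n_frames=15):
    """Steps 2.4-2.5: binarize the power-spectrum values of the 15 frames
    beginning at the detected start frame against the reference value Base."""
    F = E[start:start + n_frames]                  # 15 frames of power spectral values
    return (F > base).astype(np.uint8).ravel()     # 1 if > Base, else 0
```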
Step 2.6: Use a neural network algorithm for sound-class recognition training, as follows:
Step 2.6.1: Initialization: initialize the forward connection weights W_ij(0) according to Formula 5 and the feedback connection weights t_ji(0) according to Formula 6, and set the vigilance threshold to ρ:

W_ij(0) = 1/(n+1)    (Formula 5)

t_ji(0) = 1,  i = 1, 2, …, n,  j = 1, 2, …, m    (Formula 6)

Step 2.6.2: Select a sound signal of a given class from the environmental sound database, extract the energy of its first 15 frames, construct the 15-frame input feature vector U_i, and feed it to the neural network input layer;
Step 2.6.3: Compute the activation S_j of each output-layer neuron according to Formula 7; the neuron g whose activation S_g is maximal (Formula 8) is preliminarily taken as the class neuron of the output layer corresponding to the input feature vector U_i:

S_j = Σ_{i=1}^{n} W_ij·U_i,  j = 1, 2, …, m    (Formula 7)

S_g = max_{j=1,…,m} [S_j]    (Formula 8)

Step 2.6.4: Compute the matching degree C_j between the input feature vector U_i and output-layer class neuron g according to Formula 9:

C_j = (Σ_{i=1}^{n} t_ji·U_i) / (Σ_{i=1}^{n} U_i)    (Formula 9)

where T_j = [t_j1, t_j2, …, t_jn]^T, j = 1, 2, …, m, is the feedback connection weight vector of neuron j, storing the input feature vectors memorized in previous learning.
When C_j ≥ ρ, output-layer neuron g is confirmed as the class neuron of input feature vector U_i; the neuron's connection weights are adjusted according to Formulas 10 and 11 and the result is memorized:

W_ij(t+1) = t_ji(t)·U_i / (0.5 + Σ_{i=1}^{n} t_ji(t)·U_i),  i = 1, 2, …, n    (Formula 10)

t_ji(t+1) = t_ji(t)·U_i    (Formula 11)

When C_j < ρ, the output-layer neuron is not the class neuron: the output of neuron g is set to 0 and the search continues among the remaining output-layer neurons, i.e., go to step 2.6.3.
Step 2.6.5: Exclude neuron g from the next round of identification and return to step 2.6.3. If none of the memorized neurons satisfies the condition, select an unused output-layer neuron as the classification result of input feature vector U_i, make it the class neuron g, and adjust the connection weights according to Formulas 10 and 11.
Step 2.6.6: Return to step 2.6.2 and recognize the next input feature vector.
Step 2.6.7: Training ends when class neurons g have been determined at the output layer for all environmental sounds in the established database.
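Sub-steps 2.6.1–2.6.7 describe an ART-1-style competitive network (the embodiment below calls it an ART network). The following compact sketch, with illustrative class and method names, follows Formulas 5–11 under the assumption of binary input features:

```python
# Hypothetical ART-1-style sketch of steps 2.6.1-2.6.7; names are illustrative.
import numpy as np

class ART1Classifier:
    def __init__(self, n_inputs, n_outputs, rho=0.5):
        self.rho = rho                                                 # vigilance threshold ρ
        self.W = np.full((n_outputs, n_inputs), 1.0 / (n_inputs + 1))  # Formula 5
        self.t = np.ones((n_outputs, n_inputs))                        # Formula 6
        self.used = np.zeros(n_outputs, dtype=bool)                    # committed neurons

    def train_one(self, U):
        """Classify one binary feature vector U, adjusting weights on resonance."""
        U = np.asarray(U, dtype=float)
        S = self.W @ U                                   # activations S_j, Formula 7
        for g in np.argsort(S)[::-1]:                    # search by descending S_j, Formula 8
            if not self.used[g]:
                continue
            C = (self.t[g] @ U) / (U.sum() + 1e-12)      # matching degree C_j, Formula 9
            if C >= self.rho:                            # C_j >= ρ: resonance, step 2.6.4
                self._update(g, U)
                return int(g)
        free = np.flatnonzero(~self.used)                # step 2.6.5: commit an unused neuron
        if free.size == 0:
            raise RuntimeError("no unused output neurons left")
        g = int(free[0])
        self.used[g] = True
        self._update(g, U)
        return g

    def _update(self, g, U):
        # Updating t first makes Formula 10's numerator and sum equal t(t+1).
        self.t[g] = self.t[g] * U                        # Formula 11
        self.W[g] = self.t[g] / (0.5 + self.t[g].sum())  # Formula 10
```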
Step 3: When a sound occurs in the deaf person's specific environment, the microphone of the sound acquisition module at that position picks up the sound signal, which, after amplification and filtering by the signal conditioning circuit, is sent to the sound processing and display module through the wireless transmitting module;
Step 4: When the environmental sound reaches the embedded sound processing and display module, extract the sound signal features and feed them into the neural network trained in steps 2.6.1–2.6.7 to determine the class neuron g and hence the sound class.
Step 5: According to the sound class, call up the image file of the position where that sound occurs in the deaf person's surroundings;
Step 6: Establish the graphical representation of the sound data: a sound occurring in the deaf person's surroundings is represented by continuously flashing concentric rings; the center of the rings indicates the position of the sound source in the room; the ring size is determined from the energy of the first 15 frames of the sound data, and the duration of the sound sets how long the rings are displayed.
Determining the ring size: the sound signal is first divided into frames and the energy density spectrum P(m,k) of each frame is computed; the maximum energy density value in each frame is then selected. To keep the displayed ring pattern clear and legible, concentric circles with too small a radius are not shown: a threshold Base is set and only sounds above it are displayed. This threshold must not be too high, or many useful sounds would go undetected. The selected values are then expressed in decibels as 20·lg(P(m,k)), and the dB value is used directly as the radius of the concentric circles.
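A short sketch of the ring-radius computation described above, reusing the per-frame spectra E from analyze_frames; the threshold Base is illustrative:

```python
import numpy as np

def ring_radii_db(E, base):
    """Step 6 sketch: take the maximum energy-density value of each frame,
    suppress values below Base, and use 20*lg(P) directly as ring radius."""
    peaks = E.max(axis=1)                 # max of P(m,k) per frame
    peaks = peaks[peaks > base]           # rings below Base are not drawn
    return 20.0 * np.log10(peaks)         # radius in dB for each displayed ring
```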
Beneficial effect:
The utility model organically combines embedded processing technology with practical biomedical engineering. Possessing both an ARM and a DSP core, it supports substantial computing power, achieves real-time, fast and accurate signal acquisition, and has low development cost, high practical value and good application prospects. The sound acquisition modules are placed at the fixed sound-producing nodes while the embedded sound processing and display module is carried by the user, so long-term real-time environmental sound monitoring can be carried out in homes, offices and similar environments without disturbing the deaf person's normal life, letting the deaf person perceive sound changes in the surroundings in real time through lossless visual compensation. The device has the low power consumption, small size, portability and personalized customization characteristic of embedded systems, and plays a positive role in promoting embedded technology for compensating deaf persons' hearing.
Description of drawings
Fig. 1 is a structural block diagram of the sound acquisition module of the utility model embodiment;
Fig. 2 is a structural block diagram of the sound processing and display module of the utility model embodiment;
Fig. 3 is a connection schematic of the microphone and signal conditioning circuit of the utility model embodiment;
Fig. 4 is a schematic of the 3 V power supply of the sound acquisition module of the utility model embodiment;
Fig. 5 is a schematic of the reset circuit of the sound acquisition module of the utility model embodiment;
Fig. 6 is a schematic of the clock circuit of the sound acquisition module of the utility model embodiment;
Fig. 7 is a schematic of the JTAG interface circuit of the sound acquisition module of the utility model embodiment;
Fig. 8 is a schematic of the wireless transmitting module circuit of the sound acquisition module of the utility model embodiment;
Fig. 9 is a schematic of the clock circuit of the sound processing and display module of the utility model embodiment;
Figure 10 is a connection schematic of the power chip TPS73701 of the sound processing and display module of the utility model embodiment;
Figure 11 is a schematic of the reset circuit of the sound processing and display module of the utility model embodiment;
Figure 12 is a power connection schematic of the power chip TPS65930 of the sound processing and display module of the utility model embodiment;
Figure 13 is a connection schematic of the memory of the sound processing and display module of the utility model embodiment;
Figure 14 is a connection schematic of the LCD and processor of the sound processing and display module of the utility model embodiment;
Figure 15 is a schematic of the wireless receiving module circuit of the sound processing and display module of the utility model embodiment;
Figure 16 is a connection schematic of the SD card of the sound processing and display module of the utility model embodiment;
Figure 17 is a schematic of the JTAG interface circuit of the sound processing and display module of the utility model embodiment;
Figure 18 is the workflow diagram of the device of the utility model embodiment;
Figure 19 is the flowchart of the neural network algorithm of the utility model embodiment;
Figure 20 is the simplified top view of the room of the utility model embodiment;
Figure 21 shows the spectrogram shapes of the four kinds of sound of the utility model embodiment;
Figure 22 shows the display results of the four kinds of sound of the utility model embodiment.
Embodiment
The utility model is further described below with reference to the accompanying drawings.
The device of the utility model for assisting a deaf person to perceive environmental sound comprises sound acquisition modules and a sound processing and display module. The sound acquisition modules are placed at a plurality of fixed nodes in the environment where the deaf person stays, and each comprises a microphone, a signal conditioning circuit, a microprocessor MSP430F22X4, a reset circuit, a JTAG interface, a power circuit and a wireless transmitting module; the structural block diagram is shown in Fig. 1. The sound processing and display module is carried by the deaf person and comprises a processor, memory, a wireless receiving module, a JTAG interface, an SD card, a clock circuit, a reset circuit, a power chip and an LCD screen; its main task is processing, storing and displaying the received sound signals. The processor is externally connected with FLASH memory, SDRAM memory, a USB interface, a JTAG interface and the SD card; the structural block diagram is shown in Fig. 2.
The microphone and the signal conditioning circuit are connected as shown in Fig. 3: the sound signal picked up by the microphone is fed to the inverting input (pin 2) of operational amplifier TLV2760, amplified and filtered, and applied via MicOut to pin 8 of the TI microprocessor MSP430F22X4. Since the microprocessor operates at 3 V, the module is powered by two 1.5 V dry cells connected to pin 14 of the MSP430F22X4; the circuit is shown in Fig. 4. The reset circuit of the sound acquisition module is shown in Fig. 5: when the button is pressed, the TRST reset terminal is pulled low, resetting the MSP430F22X4; the reset circuit output connects to pin 7 of the microprocessor. The sound acquisition module uses an 8 MHz passive crystal as master clock source, connected to the XOUT and XIN pins of the MSP430F22X4; the circuit is shown in Fig. 6. The module adopts a standard 14-pin JTAG interface circuit, in which TDI (test data input), TDO (test data output), TMS (test mode select) and TCK (test clock input) are the four signal lines required by standard boundary-scan testing; they connect through net labels to pins 35, 36, 34 and 33 of the MSP430F22X4, as shown in Fig. 7. The wireless transmitting module uses the CC2500 RF chip; its four signal pins SCLK (clock input), SO (data output), CSN (chip select) and SI (data input) connect to pins 12, 11, 9 and 10 of the MSP430F22X4, respectively, and RF data communication is completed under MSP430F22X4 control; the circuit is shown in Fig. 8.
The microphone picks up the environmental sound at its node position; the collected sound signal, after amplification and filtering by the signal conditioning circuit, undergoes analog-to-digital conversion at the microprocessor's internal A/D interface and, under microprocessor control, is sent through the radio communication circuit to the embedded processing and display module.
The sound processing and display module is based on an embedded processor chip and runs the free, open-source embedded Linux system to process and display the ambient sound data received over the RF link. The processor is an OMAP3530: its ARM core manages the basic peripheral interfaces and device controllers, while its DSP core handles sound signal processing and display;
The clock circuit of the sound processing and display module is shown in Fig. 9. Specifically, the TPS65930 receives the device's master clock signal on its HFCLKIN pin for system-wide synchronization and initialization, and generates a 26 MHz square-wave clock on its HFCLKOUT pin for the OMAP3530. A 32.768 kHz clock is produced by a passive crystal working with the PLL circuit of the TPS65930 and serves as the reference clock for the TPS65930's RTC circuit. McBSP_CLKS is generated by the TPS65930 and supplied to the OMAP3530 through E10.
The power circuit of the sound processing and display module uses the TPS65930 power management device, which also provides the clock and system keys for the OMAP3530; the two supply voltages the TPS65930 itself requires, 3.3 V and 4.2 V, are generated by the TI power chip TPS73701, as shown in Figure 10.
NRESPWRON, a signal generated by the TPS65930, is fed into the reset chip TC7SH08FU; the resulting RESET signal resets the OMAP3530, as shown in Figure 11.
The OMAP3530's 1.2 V core supplies (VDD1, VDD2), its 1.8 V I/O voltage (VIO) and the three external-interface supply voltages VDD_PLL1 (1.8 V), VDAC (1.2 V) and VMMC (3 V) are produced by the TPS65930 power chip, as shown in Figure 12.
The memory chip is an MT29C1G24MADLAJA-61T, which combines 128 MB of NAND FLASH and 128 MB of LPDDR SDRAM. The NAND FLASH pins ALE, CE#, CLE, LOCK, RE#, WE#, WP# and I/O[15:0] connect to the control pins of the GPMC module inside the OMAP3530; the DDR SDRAM pins A[13:0], BA0, BA1, CAS#, CK, CK#, CKE0, CKE1, CS0#, CS1#, DM[3:0], RAS#, WE#, DQ[31:0] and DQS[3:0] connect to the corresponding pins of the OMAP3530's SDRC interface, as shown in Figure 13.
The LCD screen is controlled by the LCD controller integrated in the OMAP3530. The LQ043T3DX02 LCD adopted by the utility model has 24 pixel-data output pins: R0~R7 of the LCD screen connect to DSS_D0~DSS_D7 of the LCD interface, G0~G7 to DSS_D8~DSS_D15, and B0~B7 to DSS_D16~DSS_D23. DSS_HSYNC is the LCD horizontal synchronization signal, DSS_VSYNC the vertical synchronization signal and DSS_PCLK the pixel clock; the hardware connection between the LCD and the OMAP3530 is shown in Figure 14. Touch control uses the TI TSC2046 touch-screen controller, which resolves the touch-point coordinates and is controlled through the SPI interface.
The wireless receiving module uses the CC2500 chip, whose SCLK (clock input), SO (data output), CSN (chip select) and SI (data input) pins are the four standard control signal lines; they connect to the SPI2 interface pins of the OMAP3530, as shown in Figure 15. The SD card connection is shown in Figure 16 and the JTAG interface circuit in Figure 17.
Through software programming the device completes the bootloader (Bootloader), the embedded Linux kernel, the root file system and the necessary device drivers, building the basic running environment of the embedded system; environmental sound processing and graphical display are completed under this embedded operating system environment.
The specific steps are as follows:
Step 1: For the specific environment in which the subject lives, build an environmental sound database (e.g., telephone ring, doorbell, computer start-up sound, human speech, footsteps) and create image files of the positions where these sounds occur;
Step 2: Process and train on the sounds in the environmental sound database and establish the correspondence between these sounds and their positions, as follows:
Step 2.1: Compute the energy E(m,k) of each sampling point in every frame according to Formula 1, the total signal energy E according to Formula 2, and the probability density P(m,k) of each sampling point in every frame according to Formula 3:

E(m,k) = [x(m)·ω(n−m)]²,  m = 1, …, N,  k = 1, …, M    (Formula 1)

where x(n) is the sound signal, m the sampling index, ω(n) a Hamming window function, k the frame index, N the number of sampling points per frame, and M the number of frames;

E = Σ_{k=1}^{M} Σ_{m=1}^{N/2} E(m,k)    (Formula 2)

P(m,k) = E(m,k)/E,  m = 1, …, N/2,  k = 1, …, M    (Formula 3)
Step 2.2: Compute the spectral entropy H_k of every frame of the sound signal according to Formula 4:

H_k = − Σ_{m=1}^{N/2} P(m,k)·log P(m,k)    (Formula 4)

Step 2.3: Set a threshold and compare frame by frame: when the spectral entropy exceeds the threshold H1, the frame is judged to enter a sound segment; otherwise the comparison continues. The end point of the sound segment is determined in the same way;
Step 2.4: When the start of a non-noise segment has been judged, compute the power spectral value of the current frame, beginning at the start frame, for 15 frames in total;
Step 2.5: Binarize the 15 computed frames of power spectral values: choose a reference value Base, set values greater than Base to 1 and values less than Base to 0, forming the network input feature vector for sound recognition;
Step 2.6: Use the ART neural network algorithm for sound-class recognition training, as follows:
Step 2.6.1: Initialization: initialize the forward connection weights W_ij(0) according to Formula 5 and the feedback connection weights t_ji(0) according to Formula 6, and set the vigilance threshold to ρ:

W_ij(0) = 1/(n+1)    (Formula 5)

t_ji(0) = 1,  i = 1, 2, …, n,  j = 1, 2, …, m    (Formula 6)

Step 2.6.2: Select a sound signal of a given class from the environmental sound database, extract the energy of its first 15 frames, construct the 15-frame input feature vector, and feed it to the ART neural network input layer;
Step 2.6.3: Compute the activation S_j of each output-layer neuron according to Formula 7; the neuron g whose activation S_g is maximal (Formula 8) is preliminarily taken as the class neuron of the output layer corresponding to the input feature vector U_i:

S_j = Σ_{i=1}^{n} W_ij·U_i,  j = 1, 2, …, m    (Formula 7)

S_g = max_{j=1,…,m} [S_j]    (Formula 8)

Step 2.6.4: Compute the matching degree C_j between the input feature vector U_i and output-layer class neuron g according to Formula 9:

C_j = (Σ_{i=1}^{n} t_ji·U_i) / (Σ_{i=1}^{n} U_i)    (Formula 9)

where T_j = [t_j1, t_j2, …, t_jn]^T, j = 1, 2, …, m, is the feedback connection weight vector of neuron j, storing the input feature vectors memorized in previous learning.
When C_j ≥ ρ, output-layer neuron g is confirmed as the class neuron of input feature vector U_i; the neuron's connection weights are adjusted according to Formulas 10 and 11 and the result is memorized:

W_ij(t+1) = t_ji(t)·U_i / (0.5 + Σ_{i=1}^{n} t_ji(t)·U_i),  i = 1, 2, …, n    (Formula 10)

t_ji(t+1) = t_ji(t)·U_i    (Formula 11)

When C_j < ρ, the output-layer neuron is not the class neuron: the output of neuron g is set to 0 and the search continues among the remaining output-layer neurons, i.e., go to step 2.6.3.
Step 2.6.5: Exclude neuron g from the next round of identification and return to step 2.6.3. If none of the memorized neurons satisfies the condition, select an unused output-layer neuron as the classification result of input feature vector U_i, make it the class neuron g, and adjust the connection weights according to Formulas 10 and 11.
Step 2.6.6: Return to step 2.6.2 and recognize the next input feature vector.
Step 2.6.7: Training ends when class neurons g have been determined at the output layer for all environmental sounds in the established database.
Step 3: When a sound occurs in the deaf person's specific environment, the microphone of the sound acquisition module at that position picks up the sound signal, which, after amplification and filtering by the signal conditioning circuit, is sent to the sound processing and display module through the wireless transmitting module;
Step 4: When the environmental sound reaches the embedded sound processing and display module, extract the sound signal features and feed them into the ART neural network trained in steps 2.6.1–2.6.7 to determine the class neuron g and hence the sound class.
Step 5: According to the sound class, call up the image file of the position where that sound occurs in the deaf person's surroundings;
Step 6: Establish the graphical representation of the sound data: a sound occurring in the deaf person's surroundings is represented by continuously flashing concentric rings; the center of the rings indicates the position of the sound source in the room; the ring size is determined from the energy of the first 15 frames of the sound data, and the duration of the sound sets how long the rings are displayed.
Determining the ring size: the sound signal is first divided into frames and the energy density spectrum P(m,k) of each frame is computed; the maximum energy density value in each frame is then selected. To keep the displayed ring pattern clear and legible, concentric circles with too small a radius are not shown: a threshold Base is set and only sounds above it are displayed. This threshold must not be too high, or many useful sounds would go undetected. The selected values are then expressed in decibels as 20·lg(P(m,k)), and the dB value is used directly as the radius of the concentric circles.
With this device, the deaf person creates image files of the sound occurrence positions in his or her own environment and saves them in .bmp format. When the environment changes, image files of the current sound occurrence positions can be created at any time through software; if the number of sound classes or their positions changes, sound-source positions can likewise be added or modified through software.
Taking a single room as an example of the deaf person's environment, the simplified top view of the room is shown in the left part of Figure 20; the black blocks in the right part are the positions of the fixed sound-source points: position 1 is the telephone, position 2 the alarm clock, position 3 the doorbell, and position 4 the display position for unknown sounds.
This embodiment uses four kinds of sound sources to demonstrate ART neural network recognition: telephone ring, doorbell, alarm clock and unknown sound (i.e., any sound other than these three; in this example, a conversation between two people).
The spectral features extracted for each sound are shown in Figure 21: (a) telephone ring, (b) alarm clock, (c) doorbell, (d) a segment of speech. As Figure 21 shows, the spectrogram shapes of different sounds differ greatly, so extracting spectrogram shape information can distinguish these sounds. The first 15 frames of sound data are used, with N = 128 samples per frame; during training the network has (N/2+1)×15 input neurons and 50 output neurons, vigilance parameter ρ = 0.5, and 20 training samples per sound class, each lasting about 1 s. After training, one of the four sound classes is chosen and tested through software; the result is shown in Figure 22.
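Tying the sketches above to the parameters stated in this embodiment (N = 128 samples per frame, 15 frames, (N/2+1)×15 input neurons, 50 output neurons, ρ = 0.5), a hypothetical training pass could look as follows; the random stand-in clips and threshold values are placeholders that merely keep the snippet self-contained:

```python
import numpy as np

FRAME_LEN, N_FRAMES, H1, BASE = 128, 15, 0.5, 1e-3   # H1 and BASE are illustrative

# (N/2 + 1) * 15 input neurons, 50 output neurons, vigilance rho = 0.5
net = ART1Classifier(n_inputs=(FRAME_LEN // 2 + 1) * N_FRAMES,
                     n_outputs=50, rho=0.5)

rng = np.random.default_rng(0)
clips = [rng.standard_normal(8000) for _ in range(20)]  # stand-ins for ~1 s clips

for x in clips:
    E, H = analyze_frames(x, FRAME_LEN)
    mask = sound_segments(H, H1)
    start = int(np.argmax(mask)) if mask.any() else 0   # first frame judged as sound
    U = binary_feature(E, BASE, start, N_FRAMES)
    net.train_one(U)
```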

Claims (1)

1. A device for assisting a deaf person to perceive environmental sound, characterized by comprising sound acquisition modules and a sound processing and display module;
the sound acquisition modules are placed at a plurality of fixed nodes in the environment where the deaf person stays and each comprise a microphone, a signal conditioning circuit, a microprocessor, a reset circuit, a JTAG interface, a clock circuit, a power circuit and a wireless transmitting module; the microphone is connected to the input of the signal conditioning circuit, whose output feeds a microprocessor port; the reset circuit, JTAG interface, clock circuit and power circuit are all externally connected to microprocessor ports, and the wireless transmitting module is connected to the microprocessor's communication port;
the sound processing and display module is carried by the deaf person and comprises a processor, a memory, a wireless receiving module, a JTAG interface, an SD card, a clock circuit, a reset circuit, a power circuit and an LCD screen; the memory, JTAG interface and SD card are externally connected to the processor; the power circuit is connected to the processor's power port, its output pins also supplying the LCD screen, the reset circuit and the memory; and the wireless receiving module is connected to the processor's communication port.
CN2011204203168U 2011-10-28 2011-10-28 Device for assisting deaf person to perceive environmental sound Expired - Fee Related CN202307120U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011204203168U CN202307120U (en) 2011-10-28 2011-10-28 Device for assisting deaf person to perceive environmental sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011204203168U CN202307120U (en) 2011-10-28 2011-10-28 Device for assisting deaf person to perceive environmental sound

Publications (1)

Publication Number Publication Date
CN202307120U true CN202307120U (en) 2012-07-04

Family

ID=46375968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011204203168U Expired - Fee Related CN202307120U (en) 2011-10-28 2011-10-28 Device for assisting deaf person to perceive environmental sound

Country Status (1)

Country Link
CN (1) CN202307120U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102499815A (en) * 2011-10-28 2012-06-20 东北大学 Device for assisting deaf people to perceive environmental sound and method
CN102499815B (en) * 2011-10-28 2013-07-24 东北大学 Method for assisting deaf people to perceive environmental sound
CN111127728A (en) * 2019-12-26 2020-05-08 星微科技(天津)有限公司 Intelligent entrance guard induction system for hearing-impaired people


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120704

Termination date: 20121028