CN103826194A - Method and device for rebuilding sound source direction and distance in multichannel system - Google Patents

Method and device for rebuilding sound source direction and distance in multichannel system

Info

Publication number
CN103826194A
CN103826194A
Authority
CN
China
Prior art keywords
sound
listening point
signal
sound source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410071545.1A
Other languages
Chinese (zh)
Other versions
CN103826194B (en)
Inventor
胡瑞敏
张茂胜
姚雪春
涂卫平
王晓晨
姜林
杨乘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN201410071545.1A
Publication of CN103826194A
Application granted
Publication of CN103826194B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Stereophonic System (AREA)

Abstract

The invention discloses a method and device for rebuilding the direction and distance of a sound source in a multichannel system. From the known source signal, the particle velocity and sound pressure that the source produces at a listening point located at distance r are calculated; four loudspeakers enclosing the source direction are selected according to the particle velocity; the source signal is multiplied by four weight factors and distributed to the four selected loudspeakers; at the playback end, the particle velocity and sound pressure of the sound image perceived at the listening point after the loudspeakers of the rebuilt sound field emit their signals are calculated; finally, direction and distance equivalence models are built from the particle velocity and sound pressure of the original source and those of the sound image in the rebuilt sound field, and the loudspeaker signals are distributed using the weight factors obtained by solving the model. Compared with the prior art, the direction and distance information of the sound source in the original source space can be accurately restored, the operation is simple, the calculation efficiency is high, and the stability is good.

Description

Method and apparatus for reconstructing sound source direction and distance in a multichannel system
Technical field
The invention belongs to the field of multimedia signal processing and relates to sound source reconstruction in audio signal processing, sound field reconstruction, and recovery of sound source spatial orientation and distance; it specifically relates to a method and apparatus for reconstructing sound source direction and distance in a multichannel system.
Background art
Although stereo is far superior to mono in sound quality and sound field effect, it still has obvious limitations. A two-channel stereo system can only reproduce the sound field within a sector in front of the listener and cannot give the reproduced sound a surround feeling, so multichannel technology began to develop. By increasing the number of channels, a multichannel digital audio system can recover sound arriving from multiple directions, bringing the audience a better sense of immersion and higher-quality audio enjoyment, far surpassing traditional mono and stereo systems. With the development of storage media such as DVD and SACD, multichannel audio has gradually spread from the cinema, which it initially monopolized, into many areas of everyday life.
Typical multichannel systems include the Dolby Digital AC-3 multichannel surround sound system, the THX Surround EX system, and the 22.2 multichannel stereo system.
The Dolby Digital audio compression standard (AC-3) is an international standard that solved the problem of digital and analog soundtracks coexisting on film. The Dolby Digital AC-3 multichannel surround sound system consists of six independent channels. The frequency range of the first five channels covers the full audio band of 20 Hz to 20 kHz, whereas the subwoofer channel covers only 15 Hz to 150 Hz, about one tenth of the whole spectrum; the Dolby Digital AC-3 multichannel surround sound system is therefore also called the 5.1-channel surround sound system.
The THX Surround EX system strictly defines standards for cinema audio-visual equipment and the listening environment; equipment that meets the THX standard and passes certification reaches a consistent level of quality, so consumers who choose a THX-certified cinema can count on an excellent audio-visual experience. THX was later transplanted to home theater to certify high-quality audio-visual equipment, with specific requirements adapted to the home environment. THX is not an audio format like Dolby Digital or DTS, but an audio post-processing scheme whose goal is the best possible listening experience. After the 6.1-channel Dolby Digital EX and DTS ES appeared, THX evolved it further into the THX Surround EX system. To remain compatible with the original bidirectional side channels and to further strengthen the surround effect, two more channels were added behind the original side channels, forming 7.1 channels.
The 22.2 multichannel stereo system was developed by NHK as the audio counterpart of "ultra-high-definition" video. With loudspeakers arranged in three layers of 9 upper-layer channels, 10 middle-layer channels and 3 lower-layer channels, plus two low-frequency effects (LFE) loudspeaker channels, it can reproduce rather faithfully sound propagating forward and backward, left and right, and up and down. The 22.2-channel arrangement places 10 loudspeakers at the level of the audience's ears, 9 above and 3 below that level, and is additionally equipped with 2 subwoofers each built from 36 small bass units. The whole surround system consists of three layers: the bottom sound field is formed by three front channels and two LFE channels; the middle-layer sound field is formed by four front channels, two side surround channels and three rear surround channels; the upper-layer sound field is formed by three front channels, two side surround channels, four rear surround channels and one top channel. In addition, to strengthen the main channels, two horn arrays of 36 small horns each can be used to guarantee the dynamics of the main channel output.
5.1 surround systems and 22.2 multichannel systems can give the audience a good sense of envelopment. Their reconstruction of sound is based entirely on the signal itself, and they achieve satisfactory results in timbral fidelity, but they lack recovery of the source direction and cannot restore the positional information of the sound source at the reconstruction end.
Summary of the invention
To overcome the deficiency of existing multichannel systems in reconstructing sound source direction and distance information, the invention provides a method and apparatus for reconstructing sound source direction and distance in a multichannel system, with which the direction and distance of a sound source can be accurately recovered in a multichannel system.
The technical scheme adopted by the method of the present invention is a method for reconstructing sound source direction and distance in a multichannel system, characterized in that it comprises the following steps:
Step 1: according to the known time-domain source signal s(t), set up a Cartesian rectangular coordinate system with the listening point as origin, and calculate the particle velocity pv_0 and sound pressure p_0 received at the listening point in the original sound field from the sound source located at distance r from the listening point;
Step 2: according to the particle velocity pv_0 calculated in step 1, select the 4 loudspeakers L_1, L_2, L_3, L_4 that enclose this direction;
Step 3: multiply the source signal s(t) by 4 weights w_1, w_2, w_3, w_4 and assign the results to the four loudspeakers L_1, L_2, L_3, L_4 selected in step 2;
Step 4: at the playback end, calculate the particle velocity pv and sound pressure p of the sound image perceived at the listening point after the loudspeakers L_1, L_2, L_3, L_4 of the reconstructed sound field emit their signals;
Step 5: from the particle velocity pv_0 and sound pressure p_0 of the original source calculated in step 1 and the particle velocity pv and sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the direction and distance equivalence model;
Step 6: in the direction and distance equivalence model of step 5, take the weights w_1, w_2, w_3, w_4 set in step 3 as unknowns and solve the model to obtain the weight values;
Step 7: use the weights w_1, w_2, w_3, w_4 solved in step 6 to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source.
Preferably, the calculation in step 1 of the particle velocity pv_0 and sound pressure p_0 received at the listening point from the sound source at distance r comprises the following sub-steps:
Step 1.1: use the Fourier transform to convert the time-domain source signal s(t) to the frequency domain, obtaining the frequency-domain signal s(ω):

s(\omega) = \int_{-\infty}^{+\infty} s(t)\, e^{-i\omega t}\, dt

Step 1.2: from the frequency-domain signal s(ω), the position vector of the source in the sound field ε = (ε_x, ε_y, ε_z) and the listening point coordinates r = (r_x, r_y, r_z), calculate, according to the definition of the acoustic quantity particle velocity, the particle velocity pv_0 received at the listening point:

pv_0(r,\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\left(1+\frac{1}{ik|r-\epsilon|}\right)\frac{1}{|r-\epsilon|}\begin{pmatrix} r_x-\epsilon_x \\ r_y-\epsilon_y \\ r_z-\epsilon_z \end{pmatrix} s(\omega);

and the sound pressure p_0 received at the listening point:

p_0(r,\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\, s(\omega);

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−ε| is the distance between the sound source and the listening point, and G(ω) is the source strength.
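For illustration, the two formulas above can be evaluated numerically. The following is a minimal sketch in Python, assuming a free-field point-source model with source strength G and speed of sound c = 343 m/s; the function name and default values are illustrative and not part of the patent.

```python
import numpy as np

def source_field_at_listener(s_omega, omega, eps, r, G=1.0, c=343.0):
    """Particle velocity pv_0 and sound pressure p_0 produced at listening
    point r by a point source at position eps (step 1 formulas)."""
    k = omega / c                                  # wave number k = omega / c
    d = np.asarray(r, dtype=float) - np.asarray(eps, dtype=float)
    dist = np.linalg.norm(d)                       # |r - eps|
    green = G * np.exp(-1j * k * dist) / dist      # G e^{-ik|r-eps|} / |r-eps|
    p0 = green * s_omega
    pv0 = green * (1.0 + 1.0 / (1j * k * dist)) * (d / dist) * s_omega
    return pv0, p0                                 # pv0 is a length-3 complex vector
```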
Preferably, the 4 loudspeakers L_1, L_2, L_3, L_4 of step 2 satisfy the following condition: when the listening point r(x, y, z) is connected to each of L_1, L_2, L_3, L_4, the unit vector of the source particle velocity pv_0 lies inside the polyhedron thus formed.
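A sketch of this selection test, assuming the direction to be matched is supplied as a real unit vector u derived from pv_0; the condition "u lies inside the polyhedron formed by the listening point and L_1..L_4" is checked here as u being a nonnegative combination of the loudspeaker direction vectors (SciPy's nonnegative least squares), which is one possible reading of the condition rather than a procedure prescribed by the patent.

```python
import numpy as np
from scipy.optimize import nnls

def speakers_enclose_direction(u, speaker_pos, listener_pos, tol=1e-6):
    """True if the unit vector u (derived from pv_0) lies inside the solid
    angle spanned by the directions from the listening point to the four
    candidate loudspeakers (the polyhedron condition of step 2)."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    dirs = np.asarray(speaker_pos, dtype=float) - np.asarray(listener_pos, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions to L_1..L_4
    # u is inside the cone of the four directions iff it can be written as a
    # nonnegative combination of them, i.e. the NNLS residual is (near) zero.
    _, residual = nnls(dirs.T, u)
    return residual < tol
```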
Preferably, the calculation in step 4 of the particle velocity pv and sound pressure p of the sound image perceived at the listening point after the loudspeakers L_1, L_2, L_3, L_4 of the reconstructed sound field emit their signals comprises the following sub-steps:
Step 4.1: at the playback end, set up a Cartesian coordinate system with the center of the multichannel surround system as origin; the coordinates of loudspeaker L_i are denoted L_i = (L_ix, L_iy, L_iz), and the coordinates of the listening point are r = (r_x, r_y, r_z);
Step 4.2: convert the signals q_1(t), q_2(t), q_3(t), q_4(t) emitted by loudspeakers L_1, L_2, L_3, L_4 to the frequency domain:

q(\omega) = \begin{pmatrix} q_1(\omega) \\ q_2(\omega) \\ q_3(\omega) \\ q_4(\omega) \end{pmatrix}, \qquad q_i(\omega) = \int_{-\infty}^{+\infty} q_i(t)\, e^{-i\omega t}\, dt

Step 4.3: calculate the frequency-domain particle velocity pv and sound pressure p at the listening point after the loudspeakers L_1, L_2, L_3, L_4 emit the signals q_1(t), q_2(t), q_3(t), q_4(t):

pv(r,\omega) = \sum_{i=1}^{4} G\,\frac{e^{-ik|r-L_i|}}{|r-L_i|^2}\left(1+\frac{1}{ik|r-L_i|}\right)\begin{pmatrix} r_x-L_{ix} \\ r_y-L_{iy} \\ r_z-L_{iz} \end{pmatrix} q_i(\omega), \qquad p = G \sum_{i=1}^{4} \frac{e^{-ik|r-L_i|}}{|r-L_i|}\, q_i(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−L_i| is the distance between loudspeaker L_i and the listening point, and G(ω) is the source strength.
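The reconstructed-field quantities of step 4 can be computed the same way by summing the point-source contributions of the four loudspeakers. A minimal sketch, under the same free-field assumptions and with the same illustrative conventions as the earlier snippet:

```python
import numpy as np

def reconstructed_field_at_listener(q_omega, omega, speaker_pos, r, G=1.0, c=343.0):
    """Particle velocity pv and sound pressure p of the sound image perceived
    at listening point r when loudspeakers at speaker_pos radiate the
    frequency-domain signals q_omega (step 4 formulas)."""
    k = omega / c
    pv = np.zeros(3, dtype=complex)
    p = 0.0 + 0.0j
    for L_i, q_i in zip(speaker_pos, q_omega):
        d = np.asarray(r, dtype=float) - np.asarray(L_i, dtype=float)
        dist = np.linalg.norm(d)
        green = G * np.exp(-1j * k * dist) / dist
        p += green * q_i
        pv += green * (1.0 + 1.0 / (1j * k * dist)) * (d / dist) * q_i
    return pv, p
```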
Preferably, establishing in step 5 the direction and distance equivalence model from the particle velocity pv_0 and sound pressure p_0 of the original source calculated in step 1 and the particle velocity pv and sound pressure p of the sound image in the reconstructed sound field calculated in step 4 comprises the following sub-steps:
Step 5.1: from the particle velocity pv_0 of the original source calculated in step 1 and the particle velocity pv of the sound image in the reconstructed sound field calculated in step 4, establish the direction equivalence relation:

pv = pv_0

that is,

\sum_{i=1}^{4} G\,\frac{e^{-ik|r-L_i|}}{|r-L_i|^2}\left(1+\frac{1}{ik|r-L_i|}\right)\begin{pmatrix} r_x-L_{ix} \\ r_y-L_{iy} \\ r_z-L_{iz} \end{pmatrix} q_i(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\left(1+\frac{1}{ik|r-\epsilon|}\right)\frac{1}{|r-\epsilon|}\begin{pmatrix} r_x-\epsilon_x \\ r_y-\epsilon_y \\ r_z-\epsilon_z \end{pmatrix} s(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−L_i| is the distance between loudspeaker L_i and the listening point, |r−ε| is the distance between the sound source and the listening point, and G(ω) is the source strength;
Step 5.2: from the sound pressure p_0 of the original source calculated in step 1 and the sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the distance equivalence relation:

p = p_0

that is,

G\sum_{i=1}^{4} \frac{e^{-ik|r-L_i|}}{|r-L_i|}\, q_i(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\, s(\omega)

where the symbols are as defined in step 5.1;
Step 5.3: from the direction equivalence relation of step 5.1 and the distance equivalence relation of step 5.2, establish the direction and distance equivalence model:

\begin{cases} pv = pv_0 \\ p = p_0 \end{cases}
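Written out, the model provides exactly four scalar equations, three from the velocity equality and one from the pressure equality, for the four unknown weights. With q_i(ω) = w_i s(ω) the factor s(ω) cancels from both sides and, under the point-source expressions above, the model can be arranged as a 4×4 linear system (a restatement of the equations already given, shown only to make the solving of step 6 explicit):

\sum_{i=1}^{4} g_i\left(1+\frac{1}{ik|r-L_i|}\right)\frac{r-L_i}{|r-L_i|}\, w_i = \frac{pv_0}{s(\omega)}, \qquad \sum_{i=1}^{4} g_i\, w_i = \frac{p_0}{s(\omega)}, \qquad g_i = G\,\frac{e^{-ik|r-L_i|}}{|r-L_i|},

i.e. A w = b, where the first three rows of A come from the velocity equation and the last row from the pressure equation.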
The technical scheme adopted by the device of the present invention is a device for reconstructing sound source direction and distance in a multichannel system using the above method, characterized in that it comprises: a source-space listening-point direction information computing module (1), a source-space listening-point distance information computing module (2), a loudspeaker selection module (3), a signal pre-assignment module (4), a reconstructed-sound-field sound image direction information computing module (5), a reconstructed-sound-field sound image distance information computing module (6), a model building module (7), a model solving module (8), and a signal distribution module (9);
The source-space listening-point direction information computing module (1) computes the direction of the sound source perceived at the listening point in the source space: it computes the particle velocity pv_0 at the listening point from the source signal s(t) and outputs pv_0 to the model building module (7);
The source-space listening-point distance information computing module (2) computes the distance of the sound source perceived at the listening point in the source space: it computes the sound pressure p_0 at the listening point from the source signal s(t) and outputs p_0 to the model building module (7);
The loudspeaker selection module (3) establishes the selection principle for the loudspeakers of the reconstructing multichannel system, determines the selected loudspeakers L_1, L_2, L_3, L_4, and outputs the selection to the signal pre-assignment module (4);
The signal pre-assignment module (4) multiplies the original audio signal s(t) by the weights w_1, w_2, w_3, w_4 and assigns the results to the loudspeakers L_1, L_2, L_3, L_4 selected by the loudspeaker selection module (3), then outputs the assigned signals to the reconstructed-sound-field sound image direction information computing module (5) and the reconstructed-sound-field sound image distance information computing module (6);
The reconstructed-sound-field sound image direction information computing module (5) computes the direction of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module (4), it uses acoustic theory to compute the particle velocity pv of the sound image at the listening point of the reconstructing multichannel system and outputs pv to the model building module (7);
The reconstructed-sound-field sound image distance information computing module (6) computes the distance of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module (4), it uses acoustic theory to compute the sound pressure p of the sound image at the listening point of the reconstructing multichannel system and outputs p to the model building module (7);
The model building module (7) establishes the model for consistency of the direction and distance perceived at the listening point: it sets up the direction equivalence relation from the source particle velocity pv_0 output by module (1) and the sound-image particle velocity pv output by module (5), sets up the distance equivalence relation from the source sound pressure p_0 output by module (2) and the sound-image sound pressure p output by module (6), builds the direction and distance equivalence model from these two relations, and outputs the model to the model solving module (8);
The model solving module (8) solves the model built by the model building module (7) to obtain the values of the four weights w_1, w_2, w_3, w_4, and outputs the weights to the signal distribution module (9);
The signal distribution module (9) uses the weights w_1, w_2, w_3, w_4 solved by the model solving module (8) to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source.
Compared with the prior art, the present invention can accurately recover the direction and distance information of the sound source in the original source space; it is simple to operate, computationally efficient and stable.
Brief description of the drawings
Fig. 1: work flow of the device according to the embodiment of the present invention.
Embodiment
The technical scheme and system of the present invention are further described below with reference to the accompanying drawing and a specific embodiment.
The technical scheme adopted by the method of the present invention is a method and device for reconstructing sound source direction and distance in a multichannel system, comprising the following steps:
Step 1: according to the known time-domain source signal s(t), set up a Cartesian rectangular coordinate system with the listening point as origin, and calculate the particle velocity pv_0 and sound pressure p_0 received at the listening point in the original sound field from the sound source located at distance r from the listening point;
Step 1.1: use the Fourier transform to convert the time-domain source signal s(t) to the frequency domain, obtaining the frequency-domain signal s(ω). In this example the time-domain signal s(t) is a sinusoidal signal and the sample rate is 48000 Hz. The computation is:

s(\omega) = \int_{-\infty}^{+\infty} s(t)\, e^{-i\omega t}\, dt

Step 1.2: in this example the listening point is the origin, so its coordinates r(x, y, z) are (0, 0, 0). From the frequency-domain signal s(ω), the position vector of the source in the sound field ε = (ε_x, ε_y, ε_z) and the listening point coordinates (0, 0, 0), calculate, according to the definition of the acoustic quantity particle velocity, the particle velocity pv_0 received at the listening point:

pv_0(r,\omega) = G\,\frac{e^{-ik|\epsilon|}}{|\epsilon|}\left(1+\frac{1}{ik|\epsilon|}\right)\frac{1}{|\epsilon|}\begin{pmatrix} -\epsilon_x \\ -\epsilon_y \\ -\epsilon_z \end{pmatrix} s(\omega)

and, at the same time, the sound pressure p_0 received at the listening point:

p_0(r,\omega) = G\,\frac{e^{-ik|\epsilon|}}{|\epsilon|}\, s(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |ε| is the distance between the sound source and the listening point, and G(ω) is the source strength.
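As a worked illustration of this embodiment step, the sketch below generates a 48000 Hz-sampled sinusoidal test signal, takes its discrete Fourier transform, and evaluates pv_0 and p_0 at the origin using the source_field_at_listener helper sketched earlier; the tone frequency and the source position are assumptions chosen for the example and are not specified in the patent.

```python
import numpy as np

fs = 48000                                   # sample rate of the embodiment
f0 = 1000.0                                  # assumed test-tone frequency (Hz)
t = np.arange(fs) / fs                       # one second of samples
s_t = np.sin(2 * np.pi * f0 * t)             # sinusoidal time-domain signal s(t)

S = np.fft.rfft(s_t)                         # discrete counterpart of s(omega)
freqs = np.fft.rfftfreq(len(s_t), d=1.0 / fs)
b = int(np.argmax(np.abs(S)))                # frequency bin of the test tone
omega = 2 * np.pi * freqs[b]

eps = np.array([2.0, 1.0, 0.5])              # assumed source position (metres)
r = np.zeros(3)                              # listening point at the origin
pv0, p0 = source_field_at_listener(S[b], omega, eps, r)
```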
Step 2: according to the particle velocity pv_0 calculated in step 1, select the 4 loudspeakers L_1, L_2, L_3, L_4 that enclose this direction, where L_1, L_2, L_3, L_4 satisfy the following condition: when the listening point r(x, y, z) is connected to each of L_1, L_2, L_3, L_4, the unit vector of the source particle velocity pv_0 lies inside the polyhedron thus formed.
Step 3: multiply the source signal s(t) by the 4 weights w_1, w_2, w_3, w_4 and assign the results to the four loudspeakers L_1, L_2, L_3, L_4 selected in step 2, obtaining the four pre-assigned loudspeaker signals q_1(t), q_2(t), q_3(t), q_4(t):

q(t) = \begin{pmatrix} q_1(t) \\ q_2(t) \\ q_3(t) \\ q_4(t) \end{pmatrix} = W\, s(t) = \begin{pmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \end{pmatrix} s(t)

where W = (w_1, w_2, w_3, w_4)^T is the weight vector, q(t) = (q_1(t), q_2(t), q_3(t), q_4(t))^T is the vector of signals assigned to loudspeakers L_1, L_2, L_3, L_4, and T denotes transposition.
Step 4: calculate the particle velocity pv and sound pressure p of the sound image perceived at the listening point after loudspeakers L_1, L_2, L_3, L_4 of the reconstructed sound field emit their signals; the detailed process comprises the following 3 sub-steps:
Step 4.1: at the playback end, set up a Cartesian coordinate system with the center of the multichannel surround system as origin. The coordinates of loudspeaker L_i are denoted L_i = (L_ix, L_iy, L_iz); in this example the listening point coincides with the central point, so its coordinates are r = (r_x, r_y, r_z) = (0, 0, 0);
Step 4.2: convert the signals q_1(t), q_2(t), q_3(t), q_4(t) emitted by loudspeakers L_1, L_2, L_3, L_4 to the frequency domain:

q(\omega) = \begin{pmatrix} q_1(\omega) \\ q_2(\omega) \\ q_3(\omega) \\ q_4(\omega) \end{pmatrix}, \qquad q_i(\omega) = \int_{-\infty}^{+\infty} q_i(t)\, e^{-i\omega t}\, dt

Step 4.3: calculate the frequency-domain particle velocity pv and sound pressure p at the listening point after the loudspeakers L_1, L_2, L_3, L_4 emit the signals q_1(t), q_2(t), q_3(t), q_4(t):

pv(r,\omega) = \sum_{i=1}^{4} G\,\frac{e^{-ik|L_i|}}{|L_i|^2}\left(1+\frac{1}{ik|L_i|}\right)\begin{pmatrix} -L_{ix} \\ -L_{iy} \\ -L_{iz} \end{pmatrix} q_i(\omega), \qquad p = G \sum_{i=1}^{4} \frac{e^{-ik|L_i|}}{|L_i|}\, q_i(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |L_i| is the distance between loudspeaker L_i and the listening point, and G(ω) is the source strength;
Step 5: from the particle velocity pv_0 and sound pressure p_0 of the original source calculated in step 1 and the particle velocity pv and sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the direction and distance equivalence model; this comprises the following 3 sub-steps:
Step 5.1: from the particle velocity pv_0 of the original source calculated in step 1 and the particle velocity pv of the sound image in the reconstructed sound field calculated in step 4, establish the direction equivalence relation:

pv = pv_0

that is,

\sum_{i=1}^{4} G\,\frac{e^{-ik|L_i|}}{|L_i|^2}\left(1+\frac{1}{ik|L_i|}\right)\begin{pmatrix} -L_{ix} \\ -L_{iy} \\ -L_{iz} \end{pmatrix} q_i(\omega) = G\,\frac{e^{-ik|\epsilon|}}{|\epsilon|}\left(1+\frac{1}{ik|\epsilon|}\right)\frac{1}{|\epsilon|}\begin{pmatrix} -\epsilon_x \\ -\epsilon_y \\ -\epsilon_z \end{pmatrix} s(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |L_i| is the distance between loudspeaker L_i and the listening point, |ε| is the distance between the sound source and the listening point, and G(ω) is the source strength.
Step 5.2: from the sound pressure p_0 of the original source calculated in step 1 and the sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the distance equivalence relation:

p = p_0

that is,

G\sum_{i=1}^{4} \frac{e^{-ik|L_i|}}{|L_i|}\, q_i(\omega) = G\,\frac{e^{-ik|\epsilon|}}{|\epsilon|}\, s(\omega)

where the symbols are as defined in step 5.1.
Step 5.3: from the direction equivalence relation of step 5.1 and the distance equivalence relation of step 5.2, establish the direction and distance equivalence model:

\begin{cases} pv = pv_0 \\ p = p_0 \end{cases}
Step 6: in the direction and distance equivalence model of step 5, take the weights w_1, w_2, w_3, w_4 set in step 3 as unknowns and solve the model to obtain the weight values. Expanding the model yields a system of linear equations; this system can be solved with, but not limited to, tool software such as Matlab or Mathematica, with other tools, or by hand, giving the solution (w_1, w_2, w_3, w_4).
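As one way of carrying out this step, the sketch below expands the equivalence model into the 4×4 linear system described above and solves it with NumPy; it reuses the source_field_at_listener helper sketched earlier, and the free-field point-source assumptions and function names remain illustrative. The resulting weights are in general complex and frequency dependent.

```python
import numpy as np

def solve_panning_weights(s_omega, omega, eps, speaker_pos, r, G=1.0, c=343.0):
    """Solve the direction/distance equivalence model for w_1..w_4 (step 6):
    three velocity equations plus one pressure equation in four unknowns."""
    k = omega / c
    A = np.zeros((4, 4), dtype=complex)
    for i, L_i in enumerate(speaker_pos):
        d = np.asarray(r, dtype=float) - np.asarray(L_i, dtype=float)
        dist = np.linalg.norm(d)
        green = G * np.exp(-1j * k * dist) / dist
        A[:3, i] = green * (1.0 + 1.0 / (1j * k * dist)) * (d / dist)  # velocity rows
        A[3, i] = green                                                # pressure row
    pv0, p0 = source_field_at_listener(s_omega, omega, eps, r)
    b = np.append(pv0, p0) / s_omega        # s(omega) cancels on both sides
    return np.linalg.solve(A, b)            # (w_1, w_2, w_3, w_4)
```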
Step 7: use the weights w_1, w_2, w_3, w_4 solved in step 6 to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source. In this example all loudspeakers lie on the same sphere, i.e. each loudspeaker is at the same distance from the listening point, so no delay is needed during signal assignment, and the signals distributed to loudspeakers L_1, L_2, L_3, L_4 are:

\begin{cases} q_1(t) = w_1\, s(t) \\ q_2(t) = w_2\, s(t) \\ q_3(t) = w_3\, s(t) \\ q_4(t) = w_4\, s(t) \end{cases}
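A corresponding sketch of the distribution step, assuming the weights solved above are effectively real gains at the frequency of interest so that they can be applied directly to the time-domain signal; for equidistant loudspeakers no per-channel delay is applied, as stated above.

```python
# w = solve_panning_weights(...) from the previous sketch; s_t is the
# time-domain source signal. Taking the real part is an assumption made for
# this sketch so the gains can be applied directly in the time domain.
gains = np.real(w)
q1_t, q2_t, q3_t, q4_t = [g * s_t for g in gains]   # signals for L_1..L_4
```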
Referring to Fig. 1, the technical scheme adopted by the device of the present invention is a device for recovering sound source direction and distance information in a multichannel system, characterized in that it comprises: a source-space listening-point direction information computing module 1, a source-space listening-point distance information computing module 2, a loudspeaker selection module 3, a signal pre-assignment module 4, a reconstructed-sound-field sound image direction information computing module 5, a reconstructed-sound-field sound image distance information computing module 6, a model building module 7, a model solving module 8, and a signal distribution module 9;
The source-space listening-point direction information computing module 1 computes the direction of the sound source perceived at the listening point in the source space: it computes the particle velocity pv_0 at the listening point from the source signal s(t) and outputs pv_0 to the model building module 7;
The source-space listening-point distance information computing module 2 computes the distance of the sound source perceived at the listening point in the source space: it computes the sound pressure p_0 at the listening point from the source signal s(t) and outputs p_0 to the model building module 7;
The loudspeaker selection module 3 establishes the selection principle for the loudspeakers of the reconstructing multichannel system, determines the selected loudspeakers L_1, L_2, L_3, L_4, and outputs the selection to the signal pre-assignment module 4;
The signal pre-assignment module 4 multiplies the original audio signal s(t) by the weights w_1, w_2, w_3, w_4 and assigns the results to the loudspeakers L_1, L_2, L_3, L_4 selected by the loudspeaker selection module 3, then outputs the assigned signals to the reconstructed-sound-field sound image direction information computing module 5 and the reconstructed-sound-field sound image distance information computing module 6;
The reconstructed-sound-field sound image direction information computing module 5 computes the direction of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module 4, it uses acoustic theory to compute the particle velocity pv of the sound image at the listening point of the reconstructing multichannel system and outputs pv to the model building module 7;
The reconstructed-sound-field sound image distance information computing module 6 computes the distance of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module 4, it uses acoustic theory to compute the sound pressure p of the sound image at the listening point of the reconstructing multichannel system and outputs p to the model building module 7;
The model building module 7 establishes the model for consistency of the direction and distance perceived at the listening point: it sets up the direction equivalence relation from the source particle velocity pv_0 output by module 1 and the sound-image particle velocity pv output by module 5, sets up the distance equivalence relation from the source sound pressure p_0 output by module 2 and the sound-image sound pressure p output by module 6, builds the direction and distance equivalence model from these two relations, and outputs the model to the model solving module 8;
The model solving module 8 solves the model built by the model building module 7 to obtain the values of the four weights w_1, w_2, w_3, w_4, and outputs the weights to the signal distribution module 9;
The signal distribution module 9 uses the weights w_1, w_2, w_3, w_4 solved by the model solving module 8 to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of protection of the present invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (6)

1. A method for reconstructing sound source direction and distance in a multichannel system, characterized in that it comprises the following steps:
Step 1: according to the known time-domain source signal s(t), set up a Cartesian rectangular coordinate system with the listening point as origin, and calculate the particle velocity pv_0 and sound pressure p_0 received at the listening point in the original sound field from the sound source located at distance r from the listening point;
Step 2: according to the particle velocity pv_0 calculated in step 1, select the 4 loudspeakers L_1, L_2, L_3, L_4 that enclose this direction;
Step 3: multiply the source signal s(t) by 4 weights w_1, w_2, w_3, w_4 and assign the results to the four loudspeakers L_1, L_2, L_3, L_4 selected in step 2;
Step 4: at the playback end, calculate the particle velocity pv and sound pressure p of the sound image perceived at the listening point after the loudspeakers L_1, L_2, L_3, L_4 of the reconstructed sound field emit their signals;
Step 5: from the particle velocity pv_0 and sound pressure p_0 of the original source calculated in step 1 and the particle velocity pv and sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the direction and distance equivalence model;
Step 6: in the direction and distance equivalence model of step 5, take the weights w_1, w_2, w_3, w_4 set in step 3 as unknowns and solve the model to obtain the weight values;
Step 7: use the weights w_1, w_2, w_3, w_4 solved in step 6 to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source.
2. The method for reconstructing sound source direction and distance in a multichannel system according to claim 1, characterized in that the calculation in step 1 of the particle velocity pv_0 and sound pressure p_0 received at the listening point from the sound source at distance r comprises the following sub-steps:
Step 1.1: use the Fourier transform to convert the time-domain source signal s(t) to the frequency domain, obtaining the frequency-domain signal s(ω):

s(\omega) = \int_{-\infty}^{+\infty} s(t)\, e^{-i\omega t}\, dt

Step 1.2: from the frequency-domain signal s(ω), the position vector of the source in the sound field ε = (ε_x, ε_y, ε_z) and the listening point coordinates r = (r_x, r_y, r_z), calculate, according to the definition of the acoustic quantity particle velocity, the particle velocity pv_0 received at the listening point:

pv_0(r,\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\left(1+\frac{1}{ik|r-\epsilon|}\right)\frac{1}{|r-\epsilon|}\begin{pmatrix} r_x-\epsilon_x \\ r_y-\epsilon_y \\ r_z-\epsilon_z \end{pmatrix} s(\omega);

and the sound pressure p_0 received at the listening point:

p_0(r,\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\, s(\omega);

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−ε| is the distance between the sound source and the listening point, and G(ω) is the source strength.
3. The method for reconstructing sound source direction and distance in a multichannel system according to claim 1, characterized in that the 4 loudspeakers L_1, L_2, L_3, L_4 of step 2 satisfy the following condition: when the listening point r(x, y, z) is connected to each of L_1, L_2, L_3, L_4, the unit vector of the source particle velocity pv_0 lies inside the polyhedron thus formed.
4. The method for reconstructing sound source direction and distance in a multichannel system according to claim 1, characterized in that the calculation in step 4 of the particle velocity pv and sound pressure p of the sound image perceived at the listening point after the loudspeakers L_1, L_2, L_3, L_4 of the reconstructed sound field emit their signals comprises the following sub-steps:
Step 4.1: at the playback end, set up a Cartesian coordinate system with the center of the multichannel surround system as origin; the coordinates of loudspeaker L_i are denoted L_i = (L_ix, L_iy, L_iz), and the coordinates of the listening point are r = (r_x, r_y, r_z);
Step 4.2: convert the signals q_1(t), q_2(t), q_3(t), q_4(t) emitted by loudspeakers L_1, L_2, L_3, L_4 to the frequency domain:

q(\omega) = \begin{pmatrix} q_1(\omega) \\ q_2(\omega) \\ q_3(\omega) \\ q_4(\omega) \end{pmatrix}, \qquad q_i(\omega) = \int_{-\infty}^{+\infty} q_i(t)\, e^{-i\omega t}\, dt

Step 4.3: calculate the frequency-domain particle velocity pv and sound pressure p at the listening point after the loudspeakers L_1, L_2, L_3, L_4 emit the signals q_1(t), q_2(t), q_3(t), q_4(t):

pv(r,\omega) = \sum_{i=1}^{4} G\,\frac{e^{-ik|r-L_i|}}{|r-L_i|^2}\left(1+\frac{1}{ik|r-L_i|}\right)\begin{pmatrix} r_x-L_{ix} \\ r_y-L_{iy} \\ r_z-L_{iz} \end{pmatrix} q_i(\omega), \qquad p = G \sum_{i=1}^{4} \frac{e^{-ik|r-L_i|}}{|r-L_i|}\, q_i(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−L_i| is the distance between loudspeaker L_i and the listening point, and G(ω) is the source strength.
5. The method for reconstructing sound source direction and distance in a multichannel system according to claim 1, characterized in that establishing in step 5 the direction and distance equivalence model from the particle velocity pv_0 and sound pressure p_0 of the original source calculated in step 1 and the particle velocity pv and sound pressure p of the sound image in the reconstructed sound field calculated in step 4 comprises the following sub-steps:
Step 5.1: from the particle velocity pv_0 of the original source calculated in step 1 and the particle velocity pv of the sound image in the reconstructed sound field calculated in step 4, establish the direction equivalence relation:

pv = pv_0

that is,

\sum_{i=1}^{4} G\,\frac{e^{-ik|r-L_i|}}{|r-L_i|^2}\left(1+\frac{1}{ik|r-L_i|}\right)\begin{pmatrix} r_x-L_{ix} \\ r_y-L_{iy} \\ r_z-L_{iz} \end{pmatrix} q_i(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\left(1+\frac{1}{ik|r-\epsilon|}\right)\frac{1}{|r-\epsilon|}\begin{pmatrix} r_x-\epsilon_x \\ r_y-\epsilon_y \\ r_z-\epsilon_z \end{pmatrix} s(\omega)

where k is the wave number, which depends on the frequency of the sound signal and the speed of sound, e is the base of the natural logarithm, i is the imaginary unit, ω is the angular frequency of the signal s(t), |r−L_i| is the distance between loudspeaker L_i and the listening point, |r−ε| is the distance between the sound source and the listening point, and G(ω) is the source strength.
Step 5.2: from the sound pressure p_0 of the original source calculated in step 1 and the sound pressure p of the sound image in the reconstructed sound field calculated in step 4, establish the distance equivalence relation:

p = p_0

that is,

G\sum_{i=1}^{4} \frac{e^{-ik|r-L_i|}}{|r-L_i|}\, q_i(\omega) = G\,\frac{e^{-ik|r-\epsilon|}}{|r-\epsilon|}\, s(\omega)

where the symbols are as defined in step 5.1;
Step 5.3: from the direction equivalence relation of step 5.1 and the distance equivalence relation of step 5.2, establish the direction and distance equivalence model:

\begin{cases} pv = pv_0 \\ p = p_0 \end{cases}.
6. A device for reconstructing sound source direction and distance in a multichannel system using the method for reconstructing sound source direction and distance in a multichannel system according to claim 1, characterized in that it comprises: a source-space listening-point direction information computing module (1), a source-space listening-point distance information computing module (2), a loudspeaker selection module (3), a signal pre-assignment module (4), a reconstructed-sound-field sound image direction information computing module (5), a reconstructed-sound-field sound image distance information computing module (6), a model building module (7), a model solving module (8), and a signal distribution module (9);
the source-space listening-point direction information computing module (1) computes the direction of the sound source perceived at the listening point in the source space: it computes the particle velocity pv_0 at the listening point from the source signal s(t) and outputs pv_0 to the model building module (7);
the source-space listening-point distance information computing module (2) computes the distance of the sound source perceived at the listening point in the source space: it computes the sound pressure p_0 at the listening point from the source signal s(t) and outputs p_0 to the model building module (7);
the loudspeaker selection module (3) establishes the selection principle for the loudspeakers of the reconstructing multichannel system, determines the selected loudspeakers L_1, L_2, L_3, L_4, and outputs the selection to the signal pre-assignment module (4);
the signal pre-assignment module (4) multiplies the original audio signal s(t) by the weights w_1, w_2, w_3, w_4 and assigns the results to the loudspeakers L_1, L_2, L_3, L_4 selected by the loudspeaker selection module (3), then outputs the assigned signals to the reconstructed-sound-field sound image direction information computing module (5) and the reconstructed-sound-field sound image distance information computing module (6);
the reconstructed-sound-field sound image direction information computing module (5) computes the direction of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module (4), it uses acoustic theory to compute the particle velocity pv of the sound image at the listening point of the reconstructing multichannel system and outputs pv to the model building module (7);
the reconstructed-sound-field sound image distance information computing module (6) computes the distance of the sound image received at the listening point of the multichannel system: from the signals pre-assigned to the loudspeakers by module (4), it uses acoustic theory to compute the sound pressure p of the sound image at the listening point of the reconstructing multichannel system and outputs p to the model building module (7);
the model building module (7) establishes the model for consistency of the direction and distance perceived at the listening point: it sets up the direction equivalence relation from the source particle velocity pv_0 output by module (1) and the sound-image particle velocity pv output by module (5), sets up the distance equivalence relation from the source sound pressure p_0 output by module (2) and the sound-image sound pressure p output by module (6), builds the direction and distance equivalence model from these two relations, and outputs the model to the model solving module (8);
the model solving module (8) solves the model built by the model building module (7) to obtain the values of the four weights w_1, w_2, w_3, w_4, and outputs the weights to the signal distribution module (9);
the signal distribution module (9) uses the weights w_1, w_2, w_3, w_4 solved by the model solving module (8) to distribute the loudspeaker signals so that the direction and distance of the reconstructed sound image are consistent with those of the original sound source.
CN201410071545.1A 2014-02-28 2014-02-28 Method and device for rebuilding sound source direction and distance in multichannel system Expired - Fee Related CN103826194B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410071545.1A CN103826194B (en) 2014-02-28 2014-02-28 Method and device for rebuilding sound source direction and distance in multichannel system


Publications (2)

Publication Number Publication Date
CN103826194A 2014-05-28
CN103826194B 2015-06-03

Family

ID=50760977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410071545.1A Expired - Fee Related CN103826194B (en) 2014-02-28 2014-02-28 Method and device for rebuilding sound source direction and distance in multichannel system

Country Status (1)

Country Link
CN (1) CN103826194B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6222930B1 (en) * 1997-02-06 2001-04-24 Sony Corporation Method of reproducing sound
WO2010113434A1 (en) * 2009-03-31 2010-10-07 パナソニック株式会社 Sound reproduction system and method
CN103021414A (en) * 2012-12-04 2013-04-03 武汉大学 Method for distance modulation of three-dimensional audio system
CN103037301A (en) * 2012-12-19 2013-04-10 武汉大学 Convenient adjustment method for restoring range information of acoustic images
CN103347245A (en) * 2013-07-01 2013-10-09 武汉大学 Method and device for restoring sound source azimuth information in stereophonic sound system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BO HANG ET AL: "Distributed System for Virtual Conference Audio Synthesis", 《2009 INTERNATIONAL CONFERENCE ON FUTURE BIOMEDICAL INFORMATION ENGINEERING》 *
SONG WANG ET AL: "Sound intensity and particle velocity based three-dimensional panning methods by five loudspeakers", 《MULTIMEDIA AND EXPO(ICME),2013 IEEE INTERNATIONAL CONFERENCE ON》 *
HANG BO ET AL: "Virtual conference audio reconstruction based on spatial objects", STEREO AND SURROUND SOUND *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104363555A (en) * 2014-09-30 2015-02-18 武汉大学深圳研究院 Method and device for reconstructing directions of 5.1 multi-channel sound sources
CN106205573A (en) * 2016-06-28 2016-12-07 青岛海信移动通信技术股份有限公司 A kind of audio data processing method and device
CN106017837A (en) * 2016-06-30 2016-10-12 北京空间飞行器总体设计部 Simulation method of equivalent sound simulation source
CN106454685A (en) * 2016-11-25 2017-02-22 武汉大学 Sound field reconstruction method and system
CN106454685B (en) * 2016-11-25 2018-03-27 武汉大学 A kind of sound field rebuilding method and system
CN109302660A (en) * 2017-07-24 2019-02-01 华为技术有限公司 The compensation method of audio signal, apparatus and system
CN109302660B (en) * 2017-07-24 2020-04-14 华为技术有限公司 Audio signal compensation method, device and system
CN109068262A (en) * 2018-08-03 2018-12-21 武汉大学 A kind of acoustic image personalization replay method and device based on loudspeaker
CN109618276A (en) * 2018-11-23 2019-04-12 武汉轻工大学 Sound field rebuilding method, equipment, storage medium and device based on non-central point
CN109618276B (en) * 2018-11-23 2020-08-07 武汉轻工大学 Sound field reconstruction method, device, storage medium and device based on non-central point
CN110366091A (en) * 2019-08-07 2019-10-22 武汉轻工大学 Sound field rebuilding method, equipment, storage medium and device based on acoustic pressure
CN110366091B (en) * 2019-08-07 2021-11-02 武汉轻工大学 Sound field reconstruction method and device based on sound pressure, storage medium and device
CN111464932A (en) * 2020-04-07 2020-07-28 武汉轻工大学 Sound field reconstruction method, device and equipment based on multiple listening points and storage medium
CN112073804A (en) * 2020-09-10 2020-12-11 深圳创维-Rgb电子有限公司 Television sound adjusting method, television and storage medium
CN113286252A (en) * 2021-07-23 2021-08-20 科大讯飞(苏州)科技有限公司 Sound field reconstruction method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN103826194B (en) 2015-06-03

Similar Documents

Publication Publication Date Title
CN103826194B (en) Method and device for rebuilding sound source direction and distance in multichannel system
CN106454685B (en) A kind of sound field rebuilding method and system
CN104363555A (en) Method and device for reconstructing directions of 5.1 multi-channel sound sources
CN104871566B (en) Collaborative sound system
CN101836249B (en) A method and an apparatus of decoding an audio signal
CN105264914B (en) Audio playback device and method therefor
CN103888889B (en) A kind of multichannel conversion method based on spheric harmonic expansion
CN105120418B (en) Double-sound-channel 3D audio generation device and method
CN105308988A (en) Audio decoder configured to convert audio input channels for headphone listening
JP5826996B2 (en) Acoustic signal conversion device and program thereof, and three-dimensional acoustic panning device and program thereof
CN102932730B (en) Method and system for enhancing sound field effect of loudspeaker group in regular tetrahedron structure
CN105392102A (en) Three-dimensional audio signal generation method and system for non-spherical speaker array
CN105637902A (en) Method for and apparatus for decoding an ambisonics audio soundfield representation for audio playback using 2D setups
CN103021414B (en) Method for distance modulation of three-dimensional audio system
CN106535059A (en) Method for rebuilding stereo, loudspeaker box, position information processing method, and pickup
JP2009077379A (en) Stereoscopic sound reproduction equipment, stereophonic sound reproduction method, and computer program
CN106303843B (en) A kind of 2.5D playback methods of multizone different phonetic sound source
CN105594227A (en) Matrix decoder with constant-power pairwise panning
US9066173B2 (en) Method for producing optimum sound field of loudspeaker
CN103347245B (en) Method and device for restoring sound source azimuth information in stereophonic sound system
CN102075832A (en) Method and apparatus for dynamic spatial audio zones configuration
CN102421054A (en) Spatial audio frequency configuration method and device of multichannel display
CN109923877B (en) Apparatus and method for weighting stereo audio signal
CN109036456B (en) Method for extracting source component environment component for stereo
CN109391896A (en) A kind of audio generation method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20150603; termination date: 20200228)