CN102209290A - Audio reproduction device and audio reproduction method - Google Patents
- Publication number
- CN102209290A CN102209290A CN2011100741039A CN201110074103A CN102209290A CN 102209290 A CN102209290 A CN 102209290A CN 2011100741039 A CN2011100741039 A CN 2011100741039A CN 201110074103 A CN201110074103 A CN 201110074103A CN 102209290 A CN102209290 A CN 102209290A
- Authority
- CN
- China
- Prior art keywords
- transmission characteristic
- data
- reference data
- loud speaker
- correction coefficient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
Landscapes
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
A method, an apparatus, and a computer-readable storage medium for processing a sound signal are provided. The method includes receiving first reference data associated with a positional relationship between reference locations on a first device, receiving second reference data associated with a positional relationship between reference locations on a second device, receiving a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data, determining, by a processor, an actual transfer characteristic based on acoustic data resulting from a test signal, and calculating, by the processor, a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
Description
Cross-reference to related applications
This application claims priority to Japanese Patent Application No. 2010-074490, filed on March 29, 2010, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to an audio reproduction device that, when connected to a speaker unit having a speaker, can correct the speaker characteristic according to the model of the speaker unit, and to an audio reproduction method for such a device.
Background Art
In recent years, mobile phones and portable digital music players with music reproduction capability have become widespread. As they have spread, such portable music players are often connected to a docking speaker (speaker dock) to reproduce sound. A portable music player usually has only a small-diameter speaker or no speaker at all. However, by connecting the portable music player to a docking speaker, which has relatively large-diameter speakers, the audio signal output from the portable music player can be reproduced with high quality or at a higher volume.
When sound is reproduced from such a docking speaker, signal processing can be applied to the audio signal inside the portable music player to correct the speaker characteristic. The speaker characteristic includes the frequency characteristic, distortion, transient response, and the directional characteristic that depends on the speaker structure. If these characteristics of the speaker serving as the audio output device are known in advance, they can be corrected by signal processing.
Even when the characteristic of the speaker serving as the audio output device is not known, the characteristic can be calculated by collecting the sound output from the speaker with a microphone, and can then be corrected by signal processing. For example, JP-A-2008-282042 (paragraph [0078], Fig. 7) discloses a "reproduction device" that includes a microphone and corrects the speaker characteristic based on a test sound output from the speaker and collected by the microphone.
When there is no object between the microphone and the speaker that affects the transmission of sound, the speaker characteristic can be corrected by the method disclosed in JP-A-2008-282042. However, if such an object exists between the microphone and the speaker, the correction may be impossible. In this case, in order to correct the speaker characteristic by the method of JP-A-2008-282042, the device that performs the correction (hereinafter called the correction device) needs to know the positional relationship between the microphone and the speaker. That is, unless the correction device knows the positional relationship, it may be difficult to separate the influence of the speaker characteristic on the sound collected by the microphone from the influence the sound wave receives while propagating through space.
When the characteristic of a docking speaker is corrected by a portable music player, the combination of docking speaker and portable music player can take various configurations. Moreover, with the portable music player mounted on the docking speaker, an object that affects the transmission of sound may well exist between the microphone of the portable music player and the speaker of the docking speaker as a result of the configuration. For this reason, in many cases the positional relationship between the speaker of the docking speaker and the microphone provided in the portable music player cannot be specified, and it is therefore difficult to correct the characteristic of the docking speaker by signal processing in the portable music player.
It is therefore desirable to provide an audio reproduction device and method capable of correcting the speaker characteristic according to the model of the speaker unit.
Summary of the invention
Accordingly, a method for processing a sound signal is disclosed. The method may include: receiving first reference data associated with a positional relationship between reference locations on a first device; receiving second reference data associated with a positional relationship between reference locations on a second device; receiving a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data; determining, by a processor, an actual transfer characteristic based on acoustic data resulting from a test signal; and calculating, by the processor, a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
According to an embodiment, an apparatus for processing a sound signal, the apparatus having first reference points, is provided. The apparatus may include a memory device that stores instructions, and a processing unit that executes the instructions to: receive first reference data associated with a positional relationship between the first reference points; receive second reference data associated with a positional relationship between second reference points; receive a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data; determine an actual transfer characteristic based on acoustic data resulting from a test signal; and calculate a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
According to an embodiment, a computer-readable storage medium comprising instructions is provided, which, when executed on a processor, cause the processor to perform a method for processing a sound signal. The method may include: receiving first reference data associated with a positional relationship between reference locations on a first device; receiving second reference data associated with a positional relationship between reference locations on a second device; receiving a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data; generating a test signal; determining an actual transfer characteristic based on acoustic data resulting from the test signal; and calculating a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
Description of drawings
Fig. 1 is a perspective view showing the external appearance of an audio reproduction device according to an embodiment of the present invention.
Fig. 2 is a perspective view showing the external appearance of a speaker dock.
Fig. 3 is a perspective view showing the external appearance of the audio reproduction device docked to the speaker dock.
Fig. 4 is a block diagram showing the functional structure of the audio reproduction device.
Fig. 5 is a block diagram showing the functional structure of the speaker dock.
Fig. 6 is a flowchart of the determination of the correction coefficient.
Fig. 7A to Fig. 7C are plan views of the audio reproduction device.
Fig. 8A to Fig. 8C are plan views of the speaker dock.
Fig. 9A and Fig. 9B are conceptual diagrams showing ideal transfer characteristic maps.
Fig. 10A and Fig. 10B are diagrams showing examples of ideal transfer characteristic candidates.
Fig. 11 is a conceptual diagram showing a method of approximating the ideal transfer characteristic.
Embodiment
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
Schematic structure of the audio reproduction device and the speaker dock
Fig. 1 is a perspective view showing the external appearance of an audio reproduction device 1 according to an embodiment of the present invention, Fig. 2 is a perspective view showing the external appearance of a speaker dock 2 to which the audio reproduction device 1 is docked, and Fig. 3 is a perspective view showing the external appearance of the audio reproduction device 1 docked to the speaker dock 2. In these figures, one direction in space is defined as the X direction, the direction perpendicular to the X direction is defined as the Y direction, and the direction perpendicular to both the X and Y directions is defined as the Z direction. In the present embodiment, the case where the audio reproduction device 1 is a portable music player is described as an example.
As shown in Fig. 1, the audio reproduction device 1 has reference positions such as an engaging recess 12 and a microphone 13. The audio reproduction device 1 is also provided with an earphone terminal 14 to which earphones can be connected, and an input button 15 for entering user operations. The audio reproduction device 1 is carried by the user and outputs audio signals stored in it from the earphone terminal 14 in response to user operations entered via the input button 15. The size of the audio reproduction device 1 may be, for example, 10 cm in the X direction, 2 cm in the Y direction, and 3 cm in the Z direction.
As shown in Fig. 2, the speaker dock 2 has reference positions such as a left speaker 21, a right speaker 22, and an engaging protrusion 23. The left and right speakers 21 and 22 are ordinary speakers with no special structure, and the number of speakers is not limited to two. The engaging protrusion 23 is formed in a shape that engages with the above-mentioned engaging recess 12 and is provided with a connection terminal (not shown) that is electrically connected to the audio reproduction device 1. The size of the speaker dock 2 may be, for example, 14 cm in the X direction, 6 cm in the Y direction, and 9 cm in the Z direction.
When the engaging recess 12 and the engaging protrusion 23 are engaged in this way, the audio reproduction device 1 and the speaker dock 2 are fixed to each other and electrically connected. The audio reproduction device 1 sends an audio signal to the speaker dock 2 side via the engaging recess 12 and the engaging protrusion 23, and the speaker dock 2 outputs sound corresponding to the audio signal from the left and right speakers 21 and 22. At this time, the audio reproduction device 1 applies the "correction processing" described later to the audio signal.
Functional structure of the audio reproduction device
The functional structure of the audio reproduction device 1 will now be described.
Fig. 4 is a block diagram showing the functional structure of the audio reproduction device 1. As shown in the figure, the audio reproduction device 1 includes an arithmetic processing unit 30, a storage unit 31, an operation input unit (the input button 15 and a general-purpose input port 37), an audio signal output unit (a D/A (digital-to-analog) converter 38, the earphone terminal 14, and the engaging recess 12), an audio signal input unit (the microphone 13, an amplifier 39, and an A/D (analog-to-digital) converter 40), and a communication unit 35. These components are connected to one another via a bus 36.
The operation input unit includes the input button 15 and the general-purpose input port 37. The input button 15 is connected to the bus 36 via the general-purpose input port 37, and supplies operation input signals to the arithmetic processing unit 30 via the general-purpose input port 37 and the bus 36.
The audio signal output unit includes the D/A converter 38, the earphone terminal 14, and the engaging recess 12. The earphone terminal 14 and the engaging recess 12 are connected to the bus 36 via the D/A converter 38. A content audio signal supplied by the arithmetic processing unit 30 is output to the earphone terminal 14 and to the speaker dock 2 side via the D/A converter 38. The content audio signal output to the speaker dock 2 side is denoted as the audio signal SigA.
The audio signal input unit includes the microphone 13, the amplifier 39, and the A/D converter 40. The microphone 13 is connected to the bus 36 via the amplifier 39 and the A/D converter 40, and supplies the collected audio signal (sound collection signal) to the arithmetic processing unit 30 via the amplifier 39, the A/D converter 40, and the bus 36.
The audio reproduction device 1 is constructed in this way, but its structure is not limited to the one shown. For example, the audio reproduction device 1 may be provided with a speaker so that it can reproduce sound without the help of any external device. In this case, the audio reproduction device 1 is connected to the speaker dock 2 in order to reproduce sound with higher quality and at a higher volume.
Functional structure of the speaker dock
The functional structure of the speaker dock 2 will now be described.
Fig. 5 is a block diagram showing the functional structure of the speaker dock 2.
As shown in the figure, the speaker dock 2 includes the engaging protrusion 23, an amplifier 24, and the left and right speakers 21 and 22.
The audio signal SigA, supplied from the audio reproduction device 1 side to the speaker dock 2 side through the engaging recess 12 and the engaging protrusion 23, is fed to the left and right speakers 21 and 22 via the amplifier 24 and output from the left and right speakers 21 and 22 as sound.
Operation of the audio reproduction device
The operation of the audio reproduction device 1 will now be described.
When the user operates the input button 15, the arithmetic processing unit 30 sends a request for audio content data D to the storage unit 31 and generates a content audio signal by decompression processing. At this time, the arithmetic processing unit 30 outputs a request signal to, for example, the connection terminal of the engaging recess 12 and detects whether the speaker dock 2 is connected.
When the speaker dock 2 is not detected, the arithmetic processing unit 30 supplies the content audio signal to the D/A converter 38 via the bus 36. In this case, the correction processing is not applied to the content audio signal. The D/A converter 38 performs D/A conversion on the content audio signal and outputs the converted signal to the earphone terminal 14, and the content audio signal is output as sound from earphones connected to the earphone terminal 14.
When the speaker dock 2 is detected, the arithmetic processing unit 30 applies the correction processing described later to the content audio signal and supplies the corrected content audio signal to the D/A converter 38 via the bus 36. The D/A converter 38 performs D/A conversion on the content audio signal and outputs the converted signal to the speaker dock 2 side through the engaging recess 12. The content audio signal (SigA) is supplied to the left and right speakers 21 and 22 and output from the speakers as sound.
Correction processing
The correction processing performed by the audio reproduction device 1 will now be described.
For example, when the audio reproduction device 1 is first connected to the speaker dock 2, the "correction coefficient" used in the correction processing is determined. The correction coefficient is determined for the combination of the audio reproduction device 1 and the speaker dock 2, and is reused when the audio reproduction device 1 is separated from and docked to the speaker dock 2 again. When the audio reproduction device 1 is connected to another speaker dock different from the speaker dock 2, a correction coefficient is determined for that speaker dock. The determination of the correction coefficient will be described later.
The correction processing is performed by passing the content audio signal through a digital filter, whose operation can be expressed by Expression 1 below.
Expression 1
y(s) = G(s) · x(s)
In Expression 1, y(s) is the Laplace transform (output function) of the content audio signal output from the digital filter, x(s) is the Laplace transform (input function) of the content audio signal input to the digital filter, and G(s) is the Laplace transform of the impulse response of the filter. G(s) is called the "correction coefficient". Expression 1 means that the output function is the input function modified by the impulse response corresponding to the correction coefficient.
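As a minimal illustration of Expression 1 (not taken from the original disclosure), the relation y(s) = G(s)·x(s) corresponds in the time domain to convolving the input signal with the impulse response of the correction filter. The sketch below assumes an FIR realization; the function and variable names are illustrative only.

import numpy as np

def apply_correction(x: np.ndarray, g: np.ndarray) -> np.ndarray:
    # Time-domain counterpart of Expression 1: y = g * x (convolution), where g is
    # the impulse response whose transform is the correction coefficient G(s).
    return np.convolve(x, g, mode="full")

# Example: a short 3-tap correction filter applied to a short input signal.
x = np.array([1.0, 0.0, 0.0, 0.5])
g = np.array([0.9, 0.05, 0.05])
y = apply_correction(x, g)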
Next, the determination of the correction coefficient will be described.
Fig. 6 is a flowchart of the determination of the correction coefficient. In the following description, the processing that determines the correction coefficient of the left speaker 21 is explained; the same applies to the processing that determines the correction coefficient of the right speaker 22.
As shown in Fig. 6, the audio reproduction device 1 obtains first data (St1) (i.e., the first reference data). The first data specify the position and orientation of the microphone 13 (i.e., an input device) with respect to the engaging recess 12 (i.e., a device receiving portion). Next, the audio reproduction device 1 obtains second data (St2) (i.e., the second reference data). The second data specify the position and orientation of a sound-generating device (here, the left speaker 21) with respect to the engaging protrusion 23 (i.e., a device receiving portion). Then, from the first and second data obtained in steps St1 and St2, the audio reproduction device 1 determines the "ideal transfer characteristic" (i.e., the reference transfer characteristic) for the position and orientation specified by these data (hereinafter called the positional relationship) (St3). The ideal transfer characteristic is the transfer characteristic that would be measured under that positional relationship if the speaker characteristic were ideally corrected.
Next, the audio reproduction device 1 measures the transfer characteristic (actual transfer characteristic) of the left speaker 21 under that positional relationship (St4). The transfer characteristic is the ratio of the signal of the sound collected by the microphone 13 (the sound collection signal, i.e., the acoustic data) to the test sound signal output to the left speaker 21. Then, the audio reproduction device 1 calculates the correction coefficient that makes the actual transfer characteristic identical to the ideal transfer characteristic (St5).
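The flow of steps St3 to St5 can be summarized by the following sketch, in which the positional data from St1 and St2 arrive as a lookup key and the frequency-domain division follows Expressions 2 and 5 described below. All names are placeholders for illustration, not an actual implementation of the device.

import numpy as np

def determine_correction_coefficient(positional_key, ideal_map, test_signal, recorded_signal):
    # St3: look up the ideal (reference) transfer characteristic for this positional relationship.
    hi = ideal_map[positional_key]
    # St4: estimate the actual transfer characteristic H from the test signal and the
    # sound collected by the microphone, evaluated per frequency bin (Expression 2).
    n = len(test_signal)
    h = np.fft.rfft(recorded_signal, n) / np.fft.rfft(test_signal, n)
    # St5: the correction coefficient that maps H onto Hi (Expression 5).
    return hi / h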
Each step will now be described in detail.
First, the first data acquisition step (St1) will be described.
Fig. 7A to Fig. 7C are plan views of the audio reproduction device 1. Fig. 7A is a top view seen from the Z direction, Fig. 7B is a front view seen from the Y direction, and Fig. 7C is a side view seen from the X direction. As shown in these figures, with the origin Om taken at a point on the engaging recess 12, the position coordinates of the microphone 13 (hereinafter Pm) are the coordinates of the microphone 13. In Fig. 7A to Fig. 7C, the X, Y, and Z coordinates of the position coordinates Pm of the microphone 13 are shown as Xm, Ym, and Zm, respectively. The orientation of the microphone 13 (sound-collecting direction) can be expressed as a direction vector. In Fig. 7A to Fig. 7C, the direction vector of the microphone 13 is denoted by Vm.
In the present embodiment, the first data E are stored in the storage unit 31, and the arithmetic processing unit 30 obtains the first data E from the storage unit 31. When the first data are not stored in the storage unit 31, the arithmetic processing unit 30 may obtain the first data from a network via the communication unit 35. The arithmetic processing unit 30 may also obtain first data entered directly by the user via the input button 15. In this way, the arithmetic processing unit 30 obtains the first data.
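As a hedged illustration of the data involved in St1 (the structure and field names are assumptions, not taken from the patent), the first data can be modeled as a position coordinate plus a direction vector relative to the connector origin, obtained from the storage unit, the network, or user input, in that order.

from dataclasses import dataclass
from typing import Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ReferenceData:
    position: Vec3   # e.g. (Xm, Ym, Zm), origin Om at the engaging recess 12
    direction: Vec3  # e.g. the direction vector Vm (sound-collecting direction)

def get_first_data(stored: Optional[ReferenceData],
                   from_network: Optional[ReferenceData] = None,
                   from_user: Optional[ReferenceData] = None) -> Optional[ReferenceData]:
    # Prefer the storage unit 31, then the network, then direct user input.
    for candidate in (stored, from_network, from_user):
        if candidate is not None:
            return candidate
    return None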
Next, the second data acquisition step (St2) will be described.
Fig. 8A to Fig. 8C are plan views of the speaker dock 2. Fig. 8A is a top view seen from the Z direction, Fig. 8B is a front view seen from the Y direction, and Fig. 8C is a side view seen from the X direction. As shown in these figures, with the origin Os taken at a point on the engaging protrusion 23, the position coordinates of the left speaker 21 (hereinafter Ps) are the coordinates of the left speaker 21. Here, when the engaging protrusion 23 is connected to the engaging recess 12, the origin Os is assumed to coincide with the origin Om. In Fig. 8A to Fig. 8C, the X, Y, and Z coordinates of the position coordinates Ps of the left speaker 21 are shown as Xs, Ys, and Zs, respectively. The orientation of the left speaker 21 (sound output direction) can be expressed as a direction vector. In Fig. 8A to Fig. 8C, the direction vector of the left speaker 21 is denoted by Vs.
Second data for speaker docks of various models (types) may be stored in the storage unit 31 in advance. In this case, the arithmetic processing unit 30 can obtain the second data of a speaker dock of the same model from the storage unit 31 by referring to "model information" of the speaker dock 2 entered by the user via the input button 15. The model information is information that can specify the model of a speaker dock; for example, the model number of the speaker dock can be used. The arithmetic processing unit 30 may also obtain the second data of the corresponding model from a network via the communication unit 35 based on the entered model information. Furthermore, when, for example, a camera or a bar code reader is mounted on the audio reproduction device 1 and a bar code, a QR code (registered trademark), or the like is printed on the speaker dock 2, the arithmetic processing unit 30 can obtain the second data from the storage unit 31 by referring to the model information acquired from the QR code or the like using the camera or the like.
When the second data are not stored in the storage unit 31, the arithmetic processing unit 30 may obtain the second data of the speaker dock 2 from a network via the communication unit 35. The arithmetic processing unit 30 may also obtain second data entered directly by the user via the input button 15. In this way, the arithmetic processing unit 30 obtains the second data.
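A sketch of the model-information lookup described above, assuming the second data are keyed by a model number string (the dictionary layout and the fallback callable are illustrative assumptions):

def get_second_data(model_info, stored_by_model, fetch_from_network=None):
    # Second data pre-stored per speaker dock model in the storage unit 31.
    data = stored_by_model.get(model_info)
    if data is None and fetch_from_network is not None:
        # Fall back to retrieving the second data over the network via the communication unit 35.
        data = fetch_from_network(model_info)
    return data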
The order of the first data acquisition step (St1) and the second data acquisition step (St2) may be reversed.
Next, the ideal transfer characteristic determination step (St3) will be described.
The arithmetic processing unit 30 determines the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) from the position coordinates Pm and direction vector Vm of the microphone 13 obtained in step St1 and the position coordinates Ps and direction vector Vs of the left speaker 21 obtained in step St2. The ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) is the transfer characteristic that would be measured under the positional relationship (Pm, Vm, Ps, Vs) if the speaker characteristic were ideally corrected. The ideal speaker characteristic may be, for example, a flat frequency characteristic, a linear phase characteristic, or a minimum phase characteristic.
The arithmetic processing unit 30 can determine the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) using an "ideal transfer characteristic map". The ideal transfer characteristic map F is stored in the storage unit 31. Fig. 9A and Fig. 9B are conceptual diagrams showing ideal transfer characteristic maps; in the two figures the direction vector Vs of the left speaker 21 differs, and the Z direction is omitted. The ideal transfer characteristic map is a map in which, for each position coordinate Pm and direction vector Vm of the microphone 13, an ideal transfer characteristic candidate is assigned to each lattice point (grid point) of the position coordinates of the speaker (here, the left speaker 21) relative to the origin (Os). The ideal transfer characteristic candidates are measured in advance using, for example, a speaker having the ideal speaker characteristic. For example, as shown in Fig. 9A and Fig. 9B, when the position coordinates Pm of the microphone 13 are (Xm, Ym) = (3, -1) and the direction vector Vm is parallel to the Y axis, the corresponding map is requested. The appropriate map is then selected according to the direction vector Vs of the left speaker 21. The coordinate values ((3, -1) and so on) are arbitrary, and their unit is, for example, cm.
Fig. 9A shows an example of the map when the direction vector Vs of the left speaker 21 is parallel to the Y axis, and Fig. 9B shows an example of the map when the direction vector Vs is inclined with respect to the Y axis. In each map, for example, when the position coordinates Ps are (Xs, Ys) = (3, 3), the ideal transfer characteristic candidate assigned to that lattice point is determined as the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs).
Fig. 10A and Fig. 10B illustrate how the ideal transfer characteristic differs when the position coordinates Ps of the left speaker 21 differ in the map shown in Fig. 9A. Fig. 10A shows the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) when the position coordinates Ps1 are (Xs, Ys) = (3, 3), and Fig. 10B shows the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) when the position coordinates Ps2 are (Xs, Ys) = (2, -3).
If the audio reproduction device 1 were to determine the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) directly from the first and second data without using the ideal transfer characteristic map, it would be difficult to calculate the characteristic because of diffraction caused by the housing of the audio reproduction device 1 and other effects. By instead selecting, from the candidates measured in advance for the map, the ideal transfer characteristic candidate that most closely matches the first and second data, the arithmetic processing unit 30 can determine the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs).
In the above example, the position coordinates Ps lie on a lattice point, but the position coordinates Ps may also fall between lattice points. In that case, the ideal transfer characteristic candidate of the lattice point nearest to Ps may be determined as the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs). Alternatively, the ideal transfer characteristic may be approximated from the ideal transfer characteristic candidates of the neighboring lattice points.
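The nearest-lattice-point rule mentioned above can be sketched as follows; the array layout (one row per lattice point) is an assumption made for illustration.

import numpy as np

def nearest_candidate(ps, lattice_points, candidates):
    # Return the ideal transfer characteristic candidate of the lattice point closest to Ps.
    distances = np.linalg.norm(np.asarray(lattice_points, dtype=float) - np.asarray(ps, dtype=float), axis=1)
    return candidates[int(np.argmin(distances))]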
Fig. 11 is a conceptual diagram showing a method of approximating the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs).
For example, as shown in the figure, when the position coordinates Ps lie among the lattice points Pa1 to Pa8 (PaN), the distances between the position coordinates Ps and the respective lattice points PaN are Da1 to Da8 (DaN), and the ideal transfer characteristic candidates of the respective lattice points PaN are Ha1 to Ha8 (HaN), the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) to be determined can be expressed by Equation 1 below. In Equation 1, Dsum is the sum of Da1 to Da8.
[equation 1]
This approximation is particularly effective when the audio reproduction device 1 is small relative to the left speaker 21 and the transfer characteristic varies greatly with distance. In addition, when the map is generated in advance, the spacing between lattice points can be increased to reduce the number of measurement points. In this way, the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) for the positional relationship (Pm, Vm, Ps, Vs) is determined.
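Since the equation image itself is not reproduced here, the following sketch shows one common way to realize such a distance-based approximation, namely inverse-distance weighting of the neighboring candidates; the exact weighting defined by Equation 1 in the original publication may differ.

import numpy as np

def interpolate_candidates(ps, lattice_points, candidates, eps=1e-9):
    # Blend the candidates Ha1..HaN of the neighboring lattice points Pa1..PaN,
    # giving larger weights to lattice points closer to Ps (assumed weighting).
    ps = np.asarray(ps, dtype=float)
    d = np.linalg.norm(np.asarray(lattice_points, dtype=float) - ps, axis=1) + eps
    w = (1.0 / d) / np.sum(1.0 / d)
    return np.tensordot(w, np.asarray(candidates, dtype=float), axes=1)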
Next, the actual transfer characteristic measurement step (St4) will be described.
Expression 2
Y(s) = H(s) · X(s)
In Expression 2, Y(s) is the Laplace transform (output function) of the sound collection signal, and X(s) is the Laplace transform (input function) of the test sound signal. That is, the actual transfer characteristic H(s) represents the change in impulse response of the sound collection signal with respect to the test sound signal. The arithmetic processing unit 30 can calculate the actual transfer characteristic H(s) by dividing Y(s) by X(s) as shown in Expression 2. The calculated actual transfer characteristic H(s) includes the speaker characteristic of the left speaker 21 and the spatial transfer characteristic between the left speaker 21 and the microphone 13 (the change in impulse response that the sound wave undergoes while propagating through space).
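A frequency-domain sketch of this measurement, assuming the test sound signal and the microphone recording are available as sample arrays (the estimator is a plain spectral division; the patent does not prescribe a specific one):

import numpy as np

def estimate_actual_transfer(test_signal: np.ndarray, recorded: np.ndarray) -> np.ndarray:
    # Expression 2 rearranged: H = Y / X, evaluated on a common FFT grid.
    # H contains both the speaker characteristic and the spatial transfer path.
    n = max(len(test_signal), len(recorded))
    X = np.fft.rfft(test_signal, n)
    Y = np.fft.rfft(recorded, n)
    return Y / X   # assumes X has no zero bins over the band of interest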
Next, the correction coefficient calculation step (St5) will be described.
As described above, the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) obtained in step St3 is the transfer characteristic that would be measured under the positional relationship (Pm, Vm, Ps, Vs) if a speaker having the ideal speaker characteristic output the sound. The idealized system can therefore be expressed by Expression 3 below using the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs).
Expression 3
Y(s) = Hi(Pm, Vm, Ps, Vs) · X(s)
Here, as shown in Expression 1, when the test sound signal X(s) is passed through the correction processing of the digital filter, the relationship between the test sound signal X(s) and the sound collection signal Y(s) can be expressed by Expression 4 below.
Expression 4
Y(s) = H(s) · G(s) · X(s)
When Expression 3 and Expression 4 are made identical, the correction coefficient G(s) corrects the speaker characteristic of the left speaker 21 to the ideal speaker characteristic. Therefore, the correction coefficient G(s) can be determined, as shown in Expression 5 below, from the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) for the positional relationship (Pm, Vm, Ps, Vs) determined in step St3 and the actual transfer characteristic H(s) measured in step St4.
Expression 5
G(s) = Hi(Pm, Vm, Ps, Vs) / H(s)
In this way, the audio reproduction device 1 determines the correction coefficient G(s).
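A discrete-frequency sketch of Expression 5 follows, with a small regularization term added to avoid dividing by near-zero bins (the regularization is an implementation choice, not part of the patent), and a conversion back to an impulse response usable by the digital filter of Expression 1.

import numpy as np

def correction_coefficient(hi: np.ndarray, h: np.ndarray, beta: float = 1e-3) -> np.ndarray:
    # G = Hi / H per frequency bin; beta keeps the division stable where |H| is small.
    return hi * np.conj(h) / (np.abs(h) ** 2 + beta)

def correction_impulse_response(g: np.ndarray, n_taps: int) -> np.ndarray:
    # Truncate the inverse FFT of G to obtain FIR coefficients for the correction filter.
    return np.fft.irfft(g)[:n_taps]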
If the audio reproduction device 1 is connected to a speaker dock whose model, and hence whose second data, differ from those of the speaker dock 2, a correction coefficient is determined for each speaker of that dock in the manner described above and used for the correction processing. The audio reproduction device 1 stores the correction coefficients of the speakers obtained in this way in the storage unit 31 or the like, so that the same correction coefficients can be used when it is connected to a speaker dock of the same model.
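The re-use of coefficients for docks of the same model could be organized as a simple cache keyed by model information and speaker; the class and method names below are illustrative only.

class CorrectionCache:
    def __init__(self):
        self._by_model = {}  # model information -> {speaker id: correction coefficient}

    def store(self, model_info, speaker_id, g):
        self._by_model.setdefault(model_info, {})[speaker_id] = g

    def lookup(self, model_info, speaker_id):
        return self._by_model.get(model_info, {}).get(speaker_id)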
As described above, according to the present embodiment, the arithmetic processing unit 30 applies the correction processing to the content audio signal based on the first and second data. The component corresponding to the spatial transfer characteristic can thus be removed from the actual transfer characteristic H(s), and the speaker characteristic can be corrected according to the model of the speaker dock.
The ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) determined from the first and second data includes the speaker characteristic of an ideal speaker and the spatial transfer characteristic under the positional relationship. For this reason, the correction coefficient G(s) that converts the actual transfer characteristic H(s) into the ideal transfer characteristic Hi(Pm, Vm, Ps, Vs) can be regarded as a correction coefficient that converts the speaker characteristic of the speaker dock 2 into the ideal speaker characteristic. Therefore, by applying the correction coefficient G(s) to the content audio signal, the speaker characteristic can be corrected according to the model of the speaker dock.
The present invention is not limited to the embodiment described above and can be modified without departing from the spirit of the present invention.
In the embodiment described above, the correction coefficient is determined by the arithmetic processing unit, but the present invention is not limited to this. The audio reproduction device may use the communication unit to send the first and second data and the actual transfer characteristic to a network, so that the ideal transfer characteristic is determined on the network, and may then receive the correction coefficient.
In the embodiment described above, the audio reproduction device uses the model information of the speaker dock to obtain the second data, but the present invention is not limited to this. The audio reproduction device may, for example, use the model information of the speaker dock to obtain the correction coefficient itself from the storage unit or from the network.
In the embodiment described above, the first and second data are described as data that specify a position and orientation with respect to the connection terminal, but the present invention is not limited to this. For example, the first and second data may specify only a position with respect to the connection terminal.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof. The foregoing embodiment and the other embodiments outlined above are given by way of example; the present invention can also be applied to various other embodiments.
Claims (20)
1. A computer-implemented method for processing a sound signal, comprising:
receiving first reference data associated with a positional relationship between reference locations on a first device;
receiving second reference data associated with a positional relationship between reference locations on a second device;
receiving a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data;
determining, by a processor, an actual transfer characteristic based on acoustic data resulting from a test signal; and
calculating, by the processor, a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
2. The method according to claim 1, further comprising:
processing an audio signal based on the correction coefficient.
3. The method according to claim 1, wherein receiving the reference transfer characteristic comprises receiving the reference transfer characteristic in response to a determination performed by the first device based on the first and second reference data.
4. The method according to claim 1, wherein receiving the reference transfer characteristic comprises receiving the reference transfer characteristic in response to a determination performed by the second device based on the first and second reference data.
5. The method according to claim 1, wherein the first reference data and the second reference data correspond to predetermined data stored in a storage device.
6. The method according to claim 1, wherein receiving the first reference data and the second reference data comprises receiving at least one of the first reference data or the second reference data from a network.
7. The method according to claim 1, wherein the reference locations on the first device include a first location corresponding to an input device and a second location corresponding to a device receiving portion.
8. The method according to claim 1, wherein the reference locations on the second device include a first location corresponding to a sound-generating device and a second location corresponding to a device receiving portion.
9. The method according to claim 1, wherein the first device is a mobile phone, a music player, a handheld computer, a navigation system, or a personal digital assistant.
10. The method according to claim 9, wherein one of the reference locations on the first device corresponds to a position of a microphone, and the first device performs one or more functions using the microphone.
11. The method according to claim 1, further comprising:
processing an audio signal through a digital filter based on the correction coefficient.
12. The method according to claim 1, further comprising:
receiving identification information corresponding to the second device,
wherein:
the request includes the identification information; and
receiving the second reference data comprises receiving the second reference data in response to the request.
13. The method according to claim 1, wherein the first reference data comprises:
a spatial coordinate; and
a direction vector associated with a reference location on the first device.
14. The method according to claim 1, wherein the second reference data comprises a direction vector associated with a reference location on the second device.
15. An apparatus for processing a sound signal, the apparatus having first reference points, comprising:
a memory device that stores instructions; and
a processing unit that executes the instructions to:
receive first reference data associated with a positional relationship between the first reference points;
receive second reference data associated with a positional relationship between second reference points;
receive a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data;
determine an actual transfer characteristic based on acoustic data resulting from a test signal; and
calculate a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
16. The apparatus according to claim 15, wherein the processing unit executes the instructions to process an audio signal based on the correction coefficient.
17. The apparatus according to claim 15, further comprising a communication unit for sending a request over a network, wherein the processing unit receives the second reference data in response to the request.
18. The apparatus according to claim 15, wherein the memory device stores the first reference data and the second reference data as predetermined data.
19. The apparatus according to claim 15, wherein the first reference data comprises:
a spatial coordinate; and
a direction vector associated with one of the first reference points.
20. A computer-readable storage medium comprising instructions that, when executed on a processor, cause the processor to perform a method for processing a sound signal, the method comprising:
receiving first reference data associated with a positional relationship between reference locations on a first device;
receiving second reference data associated with a positional relationship between reference locations on a second device;
receiving a reference transfer characteristic, wherein the reference transfer characteristic is based on the first and second reference data;
generating a test signal;
determining an actual transfer characteristic based on acoustic data resulting from the test signal; and
calculating, by the processor, a correction coefficient based on a difference between the reference transfer characteristic and the actual transfer characteristic.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-074490 | 2010-03-29 | ||
JP2010074490A JP5387478B2 (en) | 2010-03-29 | 2010-03-29 | Audio reproduction apparatus and audio reproduction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102209290A true CN102209290A (en) | 2011-10-05 |
CN102209290B CN102209290B (en) | 2015-07-15 |
Family
ID=44656511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110074103.9A Expired - Fee Related CN102209290B (en) | 2010-03-29 | 2011-03-22 | Audio reproduction device and audio reproduction method |
Country Status (3)
Country | Link |
---|---|
US (1) | US8964999B2 (en) |
JP (1) | JP5387478B2 (en) |
CN (1) | CN102209290B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113055801A (en) * | 2015-07-16 | 2021-06-29 | 索尼公司 | Information processing apparatus, information processing method, and computer readable medium |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5644748B2 (en) * | 2011-12-15 | 2014-12-24 | ヤマハ株式会社 | Audio equipment |
US9084058B2 (en) | 2011-12-29 | 2015-07-14 | Sonos, Inc. | Sound field calibration using listener localization |
US9020623B2 (en) | 2012-06-19 | 2015-04-28 | Sonos, Inc | Methods and apparatus to provide an infrared signal |
US9706323B2 (en) | 2014-09-09 | 2017-07-11 | Sonos, Inc. | Playback device calibration |
US9106192B2 (en) | 2012-06-28 | 2015-08-11 | Sonos, Inc. | System and method for device playback calibration |
US9219460B2 (en) | 2014-03-17 | 2015-12-22 | Sonos, Inc. | Audio settings based on environment |
US9439010B2 (en) * | 2013-08-09 | 2016-09-06 | Samsung Electronics Co., Ltd. | System for tuning audio processing features and method thereof |
KR20150049966A (en) * | 2013-10-31 | 2015-05-08 | 삼성전자주식회사 | Audio output apparatus and method for audio correction |
US9264839B2 (en) | 2014-03-17 | 2016-02-16 | Sonos, Inc. | Playback device configuration based on proximity detection |
US9952825B2 (en) | 2014-09-09 | 2018-04-24 | Sonos, Inc. | Audio processing algorithms |
EP3101920B1 (en) * | 2014-11-06 | 2017-06-14 | Axis AB | Method and peripheral device for providing a representation of how to alter a setting affecting audio reproduction of an audio device |
US9678707B2 (en) | 2015-04-10 | 2017-06-13 | Sonos, Inc. | Identification of audio content facilitated by playback device |
US10664224B2 (en) | 2015-04-24 | 2020-05-26 | Sonos, Inc. | Speaker calibration user interface |
WO2016172593A1 (en) | 2015-04-24 | 2016-10-27 | Sonos, Inc. | Playback device calibration user interfaces |
US9538305B2 (en) | 2015-07-28 | 2017-01-03 | Sonos, Inc. | Calibration error conditions |
JP6437695B2 (en) | 2015-09-17 | 2018-12-12 | ソノズ インコーポレイテッド | How to facilitate calibration of audio playback devices |
US9693165B2 (en) | 2015-09-17 | 2017-06-27 | Sonos, Inc. | Validation of audio calibration using multi-dimensional motion check |
US9743207B1 (en) | 2016-01-18 | 2017-08-22 | Sonos, Inc. | Calibration using multiple recording devices |
US11106423B2 (en) | 2016-01-25 | 2021-08-31 | Sonos, Inc. | Evaluating calibration of a playback device |
US10003899B2 (en) | 2016-01-25 | 2018-06-19 | Sonos, Inc. | Calibration with particular locations |
US9860662B2 (en) | 2016-04-01 | 2018-01-02 | Sonos, Inc. | Updating playback device configuration information based on calibration data |
US9864574B2 (en) | 2016-04-01 | 2018-01-09 | Sonos, Inc. | Playback device calibration based on representation spectral characteristics |
US9763018B1 (en) | 2016-04-12 | 2017-09-12 | Sonos, Inc. | Calibration of audio playback devices |
US9794710B1 (en) | 2016-07-15 | 2017-10-17 | Sonos, Inc. | Spatial audio correction |
US10372406B2 (en) | 2016-07-22 | 2019-08-06 | Sonos, Inc. | Calibration interface |
US10459684B2 (en) | 2016-08-05 | 2019-10-29 | Sonos, Inc. | Calibration of a playback device based on an estimated frequency response |
US10299061B1 (en) | 2018-08-28 | 2019-05-21 | Sonos, Inc. | Playback device calibration |
US11206484B2 (en) | 2018-08-28 | 2021-12-21 | Sonos, Inc. | Passive speaker authentication |
US10734965B1 (en) | 2019-08-12 | 2020-08-04 | Sonos, Inc. | Audio calibration of a portable playback device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1659927A (en) * | 2002-06-12 | 2005-08-24 | 伊科泰克公司 | Method of digital equalisation of a sound from loudspeakers in rooms and use of the method |
CN1751540A (en) * | 2003-01-20 | 2006-03-22 | 特因诺夫音频公司 | Method and device for controlling a reproduction unit using a multi-channel signal |
WO2007028094A1 (en) * | 2005-09-02 | 2007-03-08 | Harman International Industries, Incorporated | Self-calibrating loudspeaker |
CN101296529A (en) * | 2007-04-25 | 2008-10-29 | 哈曼贝克自动系统股份有限公司 | Sound tuning method and apparatus |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005089018A1 (en) * | 2004-03-16 | 2005-09-22 | Pioneer Corporation | Stereophonic reproducing system and stereophonic reproducing device |
JP2007110294A (en) * | 2005-10-12 | 2007-04-26 | Yamaha Corp | Mobile phone terminal and speaker unit |
US8086332B2 (en) * | 2006-02-27 | 2011-12-27 | Apple Inc. | Media delivery system with improved interaction |
JP2008282042A (en) * | 2008-07-14 | 2008-11-20 | Sony Corp | Reproduction device |
JP6031930B2 (en) * | 2012-10-02 | 2016-11-24 | ソニー株式会社 | Audio processing apparatus and method, program, and recording medium |
2010
- 2010-03-29 JP JP2010074490A patent/JP5387478B2/en not_active Expired - Fee Related
2011
- 2011-03-11 US US13/046,268 patent/US8964999B2/en not_active Expired - Fee Related
- 2011-03-22 CN CN201110074103.9A patent/CN102209290B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1659927A (en) * | 2002-06-12 | 2005-08-24 | 伊科泰克公司 | Method of digital equalisation of a sound from loudspeakers in rooms and use of the method |
CN1751540A (en) * | 2003-01-20 | 2006-03-22 | 特因诺夫音频公司 | Method and device for controlling a reproduction unit using a multi-channel signal |
WO2007028094A1 (en) * | 2005-09-02 | 2007-03-08 | Harman International Industries, Incorporated | Self-calibrating loudspeaker |
CN101296529A (en) * | 2007-04-25 | 2008-10-29 | 哈曼贝克自动系统股份有限公司 | Sound tuning method and apparatus |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113055801A (en) * | 2015-07-16 | 2021-06-29 | 索尼公司 | Information processing apparatus, information processing method, and computer readable medium |
CN113055801B (en) * | 2015-07-16 | 2023-04-07 | 索尼公司 | Information processing apparatus, information processing method, and computer readable medium |
Also Published As
Publication number | Publication date |
---|---|
CN102209290B (en) | 2015-07-15 |
US20110235808A1 (en) | 2011-09-29 |
US8964999B2 (en) | 2015-02-24 |
JP5387478B2 (en) | 2014-01-15 |
JP2011211296A (en) | 2011-10-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102209290B (en) | Audio reproduction device and audio reproduction method | |
US10979805B2 (en) | Microphone array auto-directive adaptive wideband beamforming using orientation information from MEMS sensors | |
US10873814B2 (en) | Analysis of spatial metadata from multi-microphones having asymmetric geometry in devices | |
EP1856948B1 (en) | Position-independent microphone system | |
CN103827959B (en) | For the electronic installation of control noises | |
CN101378607B (en) | Sound processing apparatus and method for correcting phase difference | |
CN106489130B (en) | System and method for making audio balance to play on an electronic device | |
US20160044410A1 (en) | Audio Apparatus | |
EP1841279A2 (en) | Electronic apparatus for a vehicle, method and system for optimally correcting sound field in a vehicle | |
CN101601082B (en) | Touch detection system | |
CN108319445B (en) | Audio playing method and mobile terminal | |
EP3745399B1 (en) | Electronic devices for generating an audio signal with noise attenuated on the basis of a phase change rate according to change in frequency of an audio signal | |
CN111683325B (en) | Sound effect control method and device, sound box, wearable device and readable storage medium | |
CN108614263B (en) | Mobile terminal, position detection method and related product | |
CN108234045B (en) | Received signal strength adjusting method and device, terminal testing system and electronic terminal | |
JP4962572B2 (en) | Sound receiver | |
CN108234046B (en) | Received signal strength adjusting method and device, terminal testing system and electronic terminal | |
US10390167B2 (en) | Ear shape analysis device and ear shape analysis method | |
CN114125624B (en) | Active noise reduction method, noise reduction earphone and computer readable storage medium | |
JP6728961B2 (en) | Received power estimation device, received power estimation method, and received power estimation program | |
CN113938792B (en) | Audio playing optimization method and device and readable storage medium | |
CN211047148U (en) | Recording circuit control panel and recording equipment | |
CN115150712A (en) | Vehicle-mounted microphone system and automobile | |
CN111078178A (en) | Method, device and equipment for determining bending angle and storage medium | |
CN109417666A (en) | Noise remove device, echo cancelling device, abnormal sound detection device and noise remove method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20150715 Termination date: 20210322 |
CF01 | Termination of patent right due to non-payment of annual fee |