WO2016089049A1 - Method and device for outputting an audio signal on the basis of location information of a speaker - Google Patents

Method and device for outputting an audio signal on the basis of location information of a speaker

Info

Publication number
WO2016089049A1
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
signal
main
auxiliary
background signal
Prior art date
Application number
PCT/KR2015/012853
Other languages
English (en)
Korean (ko)
Inventor
이윤재
김한기
김상윤
송영석
오은미
임동현
장지호
조재연
Original Assignee
삼성전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자 주식회사 filed Critical 삼성전자 주식회사
Priority to CN201580075082.9A priority Critical patent/CN107211213B/zh
Priority to US15/531,916 priority patent/US10171911B2/en
Priority to KR1020177014532A priority patent/KR102343330B1/ko
Publication of WO2016089049A1 publication Critical patent/WO2016089049A1/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/02Spatial or constructional arrangements of loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/04Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/024Positioning of loudspeaker enclosures for spatial sound reproduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01Aspects of volume control, not necessarily automatic, in sound systems

Definitions

  • the present invention relates to a method and a device for outputting an audio signal based on location information of a speaker.
  • a multimedia device may include a built-in speaker having relatively low acoustic performance. Accordingly, a user may additionally use at least one high-performance speaker device to enhance the sound output performance of the multimedia device when viewing an image on the multimedia device.
  • the user may listen to the sound by using the plurality of speaker devices.
  • because of the characteristics of a wireless connection, the position of a speaker device may change frequently. Since the sound effect provided to the user may vary with the position of the speaker device, outputting sound without considering the position of the speaker device may make it difficult to provide the user with an optimal sound effect.
  • a method of outputting an audio signal may be required to provide an optimal sound effect in consideration of the positional information of each speaker.
  • An object of the present invention is to provide a method and device for processing and outputting an audio signal to provide an optimal sound effect in consideration of location information of each speaker when using a plurality of speakers.
  • the audio signal when using a plurality of speakers, may be processed and output to provide an optimal sound effect in consideration of the location information of each speaker.
  • FIG. 1 is a diagram illustrating an example of a speaker and a multimedia apparatus according to an exemplary embodiment.
  • FIG. 2 is a flowchart illustrating a method of outputting an audio signal based on location information of a speaker, according to an exemplary embodiment.
  • FIG. 3 is a block diagram illustrating an internal structure of a device that outputs an audio signal based on speaker location information according to an exemplary embodiment.
  • FIG. 4 is a flowchart illustrating a method of outputting an audio signal based on speaker location information according to an exemplary embodiment.
  • FIG. 5 is an exemplary diagram illustrating an example in which a plurality of speaker devices are connected to each other through wireless communication.
  • FIG. 6 is a diagram illustrating an example of location information of a main speaker and an auxiliary speaker according to an exemplary embodiment.
  • FIG. 7 is a diagram illustrating an example of a plurality of auxiliary speakers and a main speaker according to an embodiment.
  • FIG. 8 is a diagram illustrating an example of a main speaker and an auxiliary speaker according to an exemplary embodiment.
  • FIG. 9 is an exemplary view illustrating an example of a speaker according to an exemplary embodiment.
  • FIG. 10 is a diagram illustrating an example of a speaker that outputs an audio signal in consideration of a wall position according to an exemplary embodiment.
  • FIG. 11 is a diagram illustrating an example of a speaker that outputs an audio signal in consideration of a user's location according to an exemplary embodiment.
  • FIG. 12 is an exemplary diagram illustrating an example of a method of obtaining location information of a speaker, according to an exemplary embodiment.
  • FIG. 13 is a diagram illustrating an example of a method of obtaining location information of a speaker, according to an exemplary embodiment
  • FIG. 14 is an exemplary diagram illustrating an example of a method of measuring a distance between speaker devices according to an exemplary embodiment.
  • FIG. 15 is a block diagram illustrating an internal structure of a device according to an embodiment.
  • a method of processing an audio signal, according to an embodiment, includes: separating the audio signal into a main signal and a background signal; obtaining location information of a main speaker and an auxiliary speaker; mixing the main signal and the background signal based on the location information; and outputting the mixed main signal and the mixed background signal to the main speaker and the auxiliary speaker, respectively.
  • the mixing may include: determining a gain for the main signal and a gain for the background signal based on the location information; generating a mixed background signal by mixing the background signal with the main signal to which the gain for the main signal is applied; and generating a mixed main signal by mixing the main signal with the background signal to which the gain for the background signal is applied.
  • the determining of the gain may further include determining gains for the main signal and the background signal based on a difference between a sound output direction of the main speaker and a sound output direction of the auxiliary speaker.
  • the determining of the gains may include: setting a central axis based on the position of the main speaker and a predetermined direction; and determining the gain value for the main signal as a value inversely proportional to the distance between the auxiliary speaker and the central axis.
  • the determining of the gain may include determining a gain value of the background signal as a value proportional to a distance between the auxiliary speaker and the central axis.
  • the separating may include separating the audio signal into a main signal and a background signal based on at least one of a correlation with a display screen corresponding to the audio signal, the reproduction performance of the main speaker and the auxiliary speaker, and a correlation between channels.
  • a device for processing an audio signal, according to an embodiment, may include: a receiver configured to receive the audio signal; a controller configured to separate the audio signal into a main signal and a background signal, obtain location information of the main speaker and the auxiliary speaker, and mix the main signal and the background signal based on the location information; and an output unit configured to output the mixed main signal and the mixed background signal to the main speaker and the auxiliary speaker, respectively.
  • when any part of the specification is described as "including" a certain component, this means that the part may further include other components rather than excluding them, unless stated otherwise.
  • when a part is "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed between them.
  • the term "unit" used herein refers to software or a hardware component such as an FPGA or an ASIC, and a "unit" performs certain roles. However, a "unit" is not limited to software or hardware.
  • a "unit" may be configured to reside in an addressable storage medium and may be configured to run on one or more processors.
  • thus, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units".
  • FIG. 1 is a diagram illustrating an example of a speaker and a multimedia apparatus according to an exemplary embodiment.
  • a user may watch a multimedia image using the multimedia apparatus 110 together with the auxiliary speaker 120, which is used to improve sound performance.
  • the multimedia apparatus 110 for displaying a multimedia image may include a speaker for outputting an audio signal inside the apparatus 110.
  • the multimedia apparatus 110 may output an audio signal corresponding to the multimedia image being displayed through a speaker provided in the multimedia apparatus 110.
  • the audio signal corresponding to the multimedia image may also be output through the auxiliary speaker 120 connected to the multimedia apparatus 110.
  • the high performance auxiliary speaker 120 may assist the speaker inside the low performance multimedia device 110.
  • the sound of the multimedia image may be output through the auxiliary speaker 120 as well as the speaker of the multimedia apparatus 110, so that a sound effect having a better performance may be provided to the user.
  • the audio signal may be output only to the auxiliary speaker 120.
  • the audio signal may be divided into an audio signal to be output to the multimedia apparatus 110 and the auxiliary speaker 120, and each of the separated audio signals may be output to the multimedia apparatus 110 and the auxiliary speaker 120.
  • when the multimedia apparatus 110 is located in front of the user, it is assumed that sound is preferably perceived as coming from the direction of the multimedia apparatus 110, in accordance with the intention of the producer of the multimedia image. If the sound is output through the auxiliary speaker 120, the sound may instead be perceived as coming from the direction of the auxiliary speaker 120, contrary to the producer's intention. As sound is output in an unintended direction because the auxiliary speaker 120 is located away from the multimedia apparatus 110, the user may feel awkwardness, a sense of dislocation, and the like from the output sound.
  • therefore, the device may process the audio signal such that a directional component of the audio signal is output from the multimedia apparatus 110 and a non-directional component of the audio signal is output from the auxiliary speaker 120.
  • in addition, the device may process the audio signal such that, according to the sound output performance of each speaker, a component of the audio signal that can be reproduced more effectively by the auxiliary speaker 120 than by the multimedia apparatus 110 is output from the auxiliary speaker 120.
  • FIG. 2 is a flowchart illustrating a method of outputting an audio signal based on location information of a speaker, according to an exemplary embodiment.
  • a device for processing an audio signal may be an element that may be included in the above-described multimedia apparatus 110 or the auxiliary speaker 120.
  • the present invention is not limited thereto, and the device may be an external device.
  • the audio signal may be output through the main speaker and one or more auxiliary speakers.
  • the main speaker may refer to a speaker provided in the multimedia apparatus 110, but is not limited thereto and may be one of various kinds of speakers.
  • the main speaker may be a device that is located in front of the user and outputs sound in a user direction. Alternatively, the present invention is not limited thereto, and the main speaker may be a device that outputs sound toward at least one direction as a reference.
  • the auxiliary speaker may be used as a device for assisting the sound output performance of the main speaker.
  • the device may process and output audio signals to be output to the main speaker and the auxiliary speaker so as to provide an optimal sound effect to the user.
  • the device may separate an audio signal to be output into a main signal and a background signal.
  • the device may separate the audio signal component which is preferably output from the main speaker into the main signal and the audio signal component which is preferably output from the auxiliary speaker into the background signal.
  • for example, a voice or an object click sound having a relatively high correlation with the image or screen shown on a display corresponds to a signal component whose direction is important; such audio signal components may therefore be included in the main signal so that they are output in the reference direction.
  • audio signal components such as an effect sound or an ambient sound whose direction is not important may be separated into a background signal.
  • a signal having a high correlation between the left and right channels may be separated into the main signal.
  • a signal having a low correlation between the left and right channels may be separated into the background signal.
  • audio signal components of a particular band (e.g., an ultra-high-frequency or ultra-low-frequency band) may be assigned to the main signal or the background signal according to the reproduction performance of the main speaker and the auxiliary speaker.
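  • As an illustration of the channel-correlation criterion described above, the following sketch (not the patented method itself) splits a stereo signal into a main signal and left/right background signals using short-time inter-channel correlation; the frame length, the weighting rule, and the assumption of floating-point samples are illustrative.

```python
import numpy as np

def separate_main_background(stereo, frame=1024):
    """Illustrative primary/ambient split of a stereo signal (shape [2, N]).

    Frames whose L/R channels are highly correlated contribute mostly to the
    main (directional) signal; weakly correlated frames contribute mostly to
    the background signals. This only sketches the correlation criterion
    described above."""
    left, right = stereo[0], stereo[1]
    main = np.zeros_like(left)
    background_l = np.zeros_like(left)
    background_r = np.zeros_like(right)
    for start in range(0, len(left) - frame + 1, frame):
        l = left[start:start + frame]
        r = right[start:start + frame]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r)) + 1e-12
        corr = abs(np.sum(l * r)) / denom                     # inter-channel correlation in [0, 1]
        main[start:start + frame] = corr * 0.5 * (l + r)      # correlated part -> main signal
        background_l[start:start + frame] = (1.0 - corr) * l  # weakly correlated part -> background
        background_r[start:start + frame] = (1.0 - corr) * r
    return main, background_l, background_r
```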
  • the device may acquire location information of the main speaker and the auxiliary speaker. For example, the device may set a central axis based on the positions of the main speaker and the user and then obtain the distance between the auxiliary speaker and the central axis. To determine the user's location, the device may assume that the user is located in front of the main speaker. Alternatively, the device may set the central axis based on a predetermined position. The device may determine the gain values used in mixing based on the distance between the auxiliary speaker and the central axis.
  • the device may generate the mixed main signal and the background signal by mixing the main signal and the background signal based on the location information of the main speaker and the auxiliary speaker acquired in step S220.
  • the device may determine a gain value based on the distance between the auxiliary speaker and the center axis obtained based on the positional information of the main speaker and the auxiliary speaker, and mix the main signal and the background signal according to the determined gain value.
  • the device may determine the gain to be applied to the main signal and the background signal based on the location information of the speakers, and then perform mixing using the main signal and the background signal to which different gains are applied.
  • the main signal to which the gain for the main signal is applied may be mixed with the background signal.
  • the background signal to which the gain for the background signal is applied may be mixed with the main signal.
  • the gain value that may be determined based on the location information may be determined to be less than one.
  • the gain value for the main signal may be determined as a value inversely proportional to the distance between the center axis and the auxiliary speaker, and the gain value for the background signal may be determined as a value proportional to the distance between the center axis and the auxiliary speaker.
  • as the distance between the central axis and the auxiliary speaker decreases, the difference between the sound output directions of the auxiliary speaker and the main speaker, as seen from the user position or a predetermined position, becomes smaller. Accordingly, when the auxiliary speaker is close to the central axis, the awkwardness or sense of dislocation that the user may feel with respect to the main signal output from the auxiliary speaker is reduced. Therefore, as the distance between the central axis and the auxiliary speaker decreases, the gain value applied to the main signal that is mixed with the background signal and output from the auxiliary speaker may be determined to be a larger value.
  • the background signal includes audio components for which directionality carries less weight than for the main signal; nevertheless, as the difference in direction from the displayed image increases, the background signal may still cause awkwardness and a sense of dislocation.
  • accordingly, the device may determine the gains such that the proportion of the background signal output from the main speaker increases, thereby reducing the awkwardness and sense of dislocation that the user may feel due to the directional difference of the sound.
  • in other words, as the distance between the central axis and the auxiliary speaker increases, the gain value applied to the background signal that is mixed with the main signal and output from the main speaker may be determined to be a larger value.
  • the device may generate the mixed main signal that may be output from the main speaker and the mixed background signal that may be output from the auxiliary speaker using the determined gain value.
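  • A compact sketch of the gain rule and the mixing step described above is given below. It assumes the distance d between the auxiliary speaker and the central axis is already known; the normalization distance d_ref and the exact clipping values are illustrative assumptions (the description only states that the main-signal gain is inversely proportional to d, the background-signal gain proportional to d, and that the gains are less than 1).

```python
def determine_gains(d, d_ref=1.0, eps=1e-6):
    """Gain for the main signal falls, and gain for the background signal rises,
    with the distance d between the auxiliary speaker and the central axis.
    Both gains are kept below 1. d_ref is an assumed normalization distance."""
    gain_main = min(d_ref / (d + eps), 0.99)    # inversely proportional to d
    gain_background = min(d / d_ref, 0.99)      # proportional to d
    return gain_main, gain_background

def mix(main, background, gain_main, gain_background):
    """Mixed main signal goes to the main speaker; mixed background signal goes
    to the auxiliary speaker (cf. the adders 340 and 350 of FIG. 3)."""
    mixed_main = main + gain_background * background
    mixed_background = background + gain_main * main
    return mixed_main, mixed_background
```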
  • the device may output the mixed main signal and the mixed background signal acquired in operation S250 to the main speaker and the auxiliary speaker, respectively. By adjusting, based on the location information of the speakers, the ratio of the main signal to the background signal included in each mixed signal, the audio signal may be processed and output so as to minimize the discomfort and awkwardness that the user may feel due to the directional difference of the audio signal.
  • FIG. 3 is a block diagram illustrating an internal structure of a device that outputs an audio signal based on speaker location information according to an exemplary embodiment.
  • the device 300 may include a signal separator 310 that separates an audio signal, gain determiners 320 and 330 that determine gains for a main signal and a background signal, respectively, and adders 340 and 350.
  • the device 300 of FIG. 3 may correspond to the device of FIG. 2.
  • the signal separator 310 may separate the audio signal input to the device 300 into a main signal and a background signal.
  • the signal separator 310 may separate an audio signal component, which is preferably output from the main speaker, as a main signal, and an audio signal component, which is preferably output from the auxiliary speaker, as a background signal.
  • the gain determiner 320 for the main signal may determine a gain that may be applied to the main signal based on location information of the main speaker and the auxiliary speaker.
  • the gain value for the main signal may be determined as a value inversely proportional to the distance between the central axis and the auxiliary speaker. As the distance between the central axis and the auxiliary speaker decreases, the awkwardness and sense of dislocation caused by the main signal output from the auxiliary speaker decrease, so the gain determiner 320 for the main signal may determine a larger gain value to be applied to the main signal that is mixed with the background signal and output from the auxiliary speaker.
  • the gain determiner 330 for the background signal may determine a gain that may be applied to the background signal based on location information of the main speaker and the auxiliary speaker.
  • the gain value for the background signal may be determined as a value proportional to the distance between the center axis and the auxiliary speaker. As the distance between the central axis and the auxiliary speaker increases, the gain is determined to increase the ratio of the background signal output from the main speaker, and thus, awkwardness, dislocation, and the like that the user may feel due to the difference in the directionality of the sound may be reduced.
  • in other words, as the distance between the central axis and the auxiliary speaker increases, the gain determiner 330 for the background signal may determine a larger gain value to be applied to the background signal that is mixed with the main signal and output from the main speaker.
  • the main signal separated from the audio signal by the signal separator 310 may be mixed with the background signal to which the gain for the background signal is applied by the adder 340 and output.
  • the main signal mixed by the adder 340 may be output to the main speaker.
  • the background signal separated from the audio signal by the signal separator 310 may be mixed with the main signal to which the gain for the main signal is applied and output by the adder 350.
  • the background signal mixed by the adder 350 may be output to the auxiliary speaker.
  • as described above, the ratio of the main signal to the background signal included in each mixed signal is adjusted based on the location information of the speakers, so that the audio signal may be processed and output in a way that minimizes the discomfort and awkwardness the user may feel due to the directional difference of the sound.
  • FIG. 4 is a flowchart illustrating a method of outputting an audio signal based on speaker location information according to an exemplary embodiment.
  • the method illustrated in FIG. 4 may correspond to the method illustrated in FIG. 2, and overlapping description may be omitted.
  • the speaker devices may be connected to each other to perform communication.
  • the display device including the speaker and the wireless speaker may be connected to each other through wired or wireless communication.
  • in the following description of steps S420 to S470, it is assumed that the device 300 for processing the audio signal based on the speaker location information is included in the display apparatus. Steps S420 to S470 may be performed by the device 300.
  • in operation S420, sound profile information may be exchanged between the speaker devices connected to each other in operation S410.
  • acoustic profile information of another speaker device may be transmitted to a speaker device including a device that processes an audio signal among speaker devices connected to each other.
  • the sound profile information may include information about sound output performance of the speaker device. Based on the information about the sound output performance, the audio signal can be separated into a main signal and a background signal.
  • the main speaker and the auxiliary speaker may be determined based on the sound profile information exchanged in step S420.
  • the speaker of the multimedia device including the display may be determined as the main speaker, and the remaining speaker device may be determined as the auxiliary speaker.
  • a speaker capable of outputting an audio signal in a reference direction may be determined as a main speaker, and the remaining speaker device may be determined as an auxiliary speaker.
  • a speaker having a relatively high performance of sound output may be determined as an auxiliary speaker.
  • one or more auxiliary speakers may be present.
  • a process for recognizing the speaker location may be performed to obtain location information of the speaker.
  • location information of each speaker may be obtained by receiving an audio signal output from the auxiliary speaker using a microphone of the display device.
  • the location information of each speaker may be obtained from the time when the audio signal output from the auxiliary speaker is received in the microphone of the display device and the strength of the received audio signal.
  • gain for the main signal and the background signal may be determined based on the location information of the speaker.
  • Gain for the main signal and the background signal may be determined according to the method of FIGS. 2 to 3 described above.
  • the device 300 may generate the mixed main signal and the background signal using the gain value determined in operation S450, and output the mixed main signal and the background signal to the main speaker and the auxiliary speaker.
  • FIG. 5 is an exemplary diagram illustrating an example in which a plurality of speaker devices are connected to each other through wireless communication.
  • FIG. 5 illustrates an example of the speaker devices being connected to each other through wired or wireless communication, as in step S410 of FIG. 4.
  • the speaker device 520 may be in a state of being previously connected to an access point 530.
  • the speaker device 520 connected in advance to the AP 530 may be a terminal device such as a smart phone, a smart TV, or the like that can be used by a user.
  • when the new speaker device 510 is detected through a proximity sensor or Bluetooth Low Energy (BLE) broadcasting provided in the existing speaker device 520, the detected speaker device 510 may be connected to the existing speaker device 520. An authentication procedure for the new speaker device 510 may be performed by the existing speaker device 520. When the new speaker device 510 is authenticated, it may receive, from the existing speaker device 520, a service set identifier (SSID), an identifier (ID), and a password for accessing the AP 530, and may use the received information to access the AP 530.
  • the new speaker device 510 and the existing speaker device 520 may be connected to each other.
  • the user may control the new speaker device 510 using the control means of the existing speaker device 520.
  • the user may control the processed audio signal to be output to the speaker devices 510 and 520 based on the location information of the speaker devices 510 and 520, according to an exemplary embodiment.
  • the new speaker device 510 is determined to be an auxiliary speaker and the existing speaker device 520 is a main speaker, and each of the speaker devices 510 and 520 may output the mixed background signal and the mixed main signal.
  • FIG. 6 is a diagram illustrating an example of location information of a main speaker and an auxiliary speaker according to an exemplary embodiment.
  • the location information of the main speaker 620 and the auxiliary speaker 610 may include the distance r between the main speaker 620 and the auxiliary speaker 610 and the angle θ between the central axis 640 and the auxiliary speaker 610.
  • the distance r·sin θ between the auxiliary speaker 610 and the central axis 640 may be obtained from the location information of the main speaker 620 and the auxiliary speaker 610.
  • the central axis 640 may be set based on the main speaker 620 and the user location 630.
  • the user location 630 may be determined based on location information measured by a terminal device that the user is carrying, for example, a smart watch or smart glasses.
  • the user location 630 may be regarded as being located in front of the main speaker 620 in that the user may face the display screen.
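  • As a worked illustration of the geometry of FIG. 6, the sketch below sets the central axis from the main-speaker position toward the user position assumed to be in front of it and computes the perpendicular distance of the auxiliary speaker from that axis, which equals r·sin θ. The 2-D coordinates in the example are assumptions for illustration.

```python
import numpy as np

def distance_to_center_axis(main_pos, user_pos, aux_pos):
    """Perpendicular distance of the auxiliary speaker from the central axis
    defined by the main-speaker position and the user position (equals r*sin(theta))."""
    main_pos, user_pos, aux_pos = (np.asarray(p, dtype=float) for p in (main_pos, user_pos, aux_pos))
    axis = user_pos - main_pos
    axis = axis / np.linalg.norm(axis)            # unit vector along the central axis
    to_aux = aux_pos - main_pos                   # vector of length r toward the auxiliary speaker
    along = np.dot(to_aux, axis) * axis           # component along the axis
    return float(np.linalg.norm(to_aux - along))  # r * sin(theta)

# Example: main speaker at the origin, user 2 m straight ahead, auxiliary speaker
# 1 m away at 30 degrees from the axis -> distance is about 1 * sin(30 deg) = 0.5 m.
d = distance_to_center_axis((0.0, 0.0), (0.0, 2.0), (np.sin(np.pi / 6), np.cos(np.pi / 6)))
```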
  • FIG. 7 is a diagram illustrating an example of a plurality of auxiliary speakers and a main speaker according to an embodiment.
  • FIG. 7 illustrates auxiliary speakers 710 and 720 capable of outputting an audio signal corresponding to an image displayed on the multimedia apparatus that includes the main speaker 730.
  • the left auxiliary speaker 710 may output an L (left) channel audio signal.
  • the right auxiliary speaker 720 may output an R (right) channel audio signal.
  • the device 300 may separate the background signal from the audio signal by the number of auxiliary speakers 710 and 720.
  • the device 300 may separate an L background signal that may be output to the left side speaker 710 and an R background signal that may be output to the right side speaker 720 from the audio signal, respectively.
  • the device 300 may determine the gain for the main signal based on the distance between the auxiliary speakers 710 and 720 and the center axis. For example, the gain for the main signal may be determined based on the average distance between the auxiliary speakers 710 and 720 and the center axis. In addition, the device 300 may determine the gain for the L background signal and the R background signal based on the distance between the corresponding auxiliary speakers 710 and 720 and the center axis, respectively.
  • the device 300 may generate the mixed main signal to be output from the main speaker 730 by mixing the L background signal and the R background signal to which the gain for each background signal is applied, and the main signal.
  • the device 300 may generate the mixed L background signal to be output from the left auxiliary speaker 710 by mixing the main signal to which the gain with respect to the main signal is applied and the L background signal.
  • the device 300 may generate the mixed R background signal to be output from the right auxiliary speaker 720 by mixing the main signal to which the gain with respect to the main signal is applied and the R background signal.
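  • The sketch below extends the mixing step to the two auxiliary speakers of FIG. 7. The gain rule is the same illustrative one used in the earlier sketch (main-signal gain inversely proportional to the distance from the central axis, background gain proportional to it); the distances d_left and d_right are assumed to be known.

```python
def _gains(d, d_ref=1.0, eps=1e-6):
    # Same illustrative rule as before: main gain ~ 1/d, background gain ~ d, both kept below 1.
    return min(d_ref / (d + eps), 0.99), min(d / d_ref, 0.99)

def mix_for_two_auxiliaries(main, bg_left, bg_right, d_left, d_right):
    """Mixing for the layout of FIG. 7: the main-signal gain uses the average
    distance of the two auxiliary speakers from the central axis, while each
    background gain uses the distance of its own auxiliary speaker."""
    gain_main, _ = _gains((d_left + d_right) / 2.0)
    _, gain_bg_left = _gains(d_left)
    _, gain_bg_right = _gains(d_right)
    mixed_main = main + gain_bg_left * bg_left + gain_bg_right * bg_right  # to the main speaker 730
    mixed_bg_left = bg_left + gain_main * main                             # to the left auxiliary speaker 710
    mixed_bg_right = bg_right + gain_main * main                           # to the right auxiliary speaker 720
    return mixed_main, mixed_bg_left, mixed_bg_right
```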
  • FIG. 8 is a diagram illustrating an example of a main speaker and an auxiliary speaker according to an exemplary embodiment.
  • the built-in speaker 811 of the smartphone or the built-in speaker 851 of the TV (television) may be set as the main speaker.
  • the built-in speaker of the terminal device capable of displaying an image may be determined as the main speaker.
  • the wireless speaker 812, the built-in speaker 851 of the TV, the sound bar 831, the subwoofer 841, and the like may be set as auxiliary speakers.
  • a speaker having a higher sound output performance than the main speaker may be set as an auxiliary speaker.
  • FIG. 9 is an exemplary view illustrating an example of a speaker according to an exemplary embodiment.
  • the speaker 900 illustrated in FIG. 9 may be set as the above-described main speaker or auxiliary speaker to output an audio signal.
  • the speaker 900 may radiate in all directions.
  • the speaker 900 may output audio signals in up, left, right, and down directions.
  • the speaker 900 may output different audio signals in the left/right or up/down directions, and may be designed so that a different sound field is perceived at each listening position.
  • the horizontal unit among the output units of the speaker 900 may output audio signals of left and right channels along the left and right directions.
  • the unit in the vertical direction of the speaker 900 may output a mixed signal of the audio signal of the left channel and the audio signal of the right channel.
  • the units in each direction of the speaker 900 may output audio signals of left and right channels based on the user position so that the user may feel an optimal sense of presence.
  • FIG. 10 is a diagram illustrating an example of a speaker that outputs an audio signal in consideration of a wall position according to an exemplary embodiment.
  • the speaker 900 may include a wall recognition sensor 910 to detect a position of a wall.
  • the wall recognition sensor 910 outputs a signal such as an ultrasonic or infrared signal in a predetermined direction and measures the time taken for the output signal to be reflected and returned to the wall recognition sensor 910, whereby the distance between the speaker 900 and the wall can be obtained.
  • the speaker 900 may adjust the size of the audio signal radiated in the wall direction based on the distance between the speaker 900 and the wall.
  • the speaker 900 may adjust and output the audio signal radiated to the wall according to the distance between the wall and the speaker 900.
  • the output of the audio signal radiated to the wall may be minimized.
  • the output of the audio signal radiated to the wall may be adjusted.
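  • A small sketch of the time-of-flight distance estimate for the wall recognition sensor 910, and of attenuating the output radiated toward the wall, is given below. The speed of sound, the linear attenuation ramp, and the 1 m reference are assumptions for illustration; the description only states that the level radiated toward the wall is adjusted according to the distance.

```python
def wall_distance(round_trip_time_s, speed_of_sound=343.0):
    """Distance to the wall from the round-trip time of an ultrasonic pulse."""
    return speed_of_sound * round_trip_time_s / 2.0

def wall_direction_gain(distance_m, full_level_at_m=1.0):
    """Attenuate the audio radiated toward the wall when the wall is close;
    the linear ramp up to full_level_at_m is only an assumed example."""
    return max(0.0, min(distance_m / full_level_at_m, 1.0))

# Example: a pulse returning after about 5.8 ms corresponds to a wall roughly 1 m
# away, so the wall-facing unit would be driven at (almost) full level.
gain = wall_direction_gain(wall_distance(0.0058))
```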
  • FIG. 11 is a diagram illustrating an example of a speaker that outputs an audio signal in consideration of a user's location according to an exemplary embodiment.
  • when the user's listening position is above the speaker 900, the speaker 900 may radiate an audio signal 920 directly in the direction in which the user is located, or may radiate an audio signal 930 toward the ceiling. Since the audio signal 930 radiated toward the ceiling may be reflected from the ceiling and delivered to the user, the user may hear the sound more clearly.
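  • As a worked example of the ceiling-reflection option of FIG. 11, the extra path length of the reflected signal 930 can be computed with an image-source construction (mirroring the speaker across the ceiling plane); the room dimensions used below are assumptions for illustration.

```python
import math

def reflected_path_length(speaker_xz, user_xz, ceiling_height):
    """Length of the speaker -> ceiling -> user path via the image-source method:
    mirror the speaker across the ceiling plane and take the straight line to the
    user. Points are (horizontal distance, height) pairs in metres."""
    sx, sz = speaker_xz
    ux, uz = user_xz
    image_z = 2.0 * ceiling_height - sz          # speaker mirrored across the ceiling
    return math.hypot(ux - sx, uz - image_z)

# Example: speaker at height 0.5 m, listener 3 m away at ear height 1.2 m,
# ceiling at 2.5 m: the bounced path is about 4.5 m versus about 3.1 m direct.
bounced = reflected_path_length((0.0, 0.5), (3.0, 1.2), 2.5)
direct = math.hypot(3.0, 1.2 - 0.5)
```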
  • FIG. 12 is an exemplary diagram illustrating an example of a method of obtaining location information of a speaker, according to an exemplary embodiment.
  • location information of the speakers 1210, 1220, and 1230 may be obtained by the terminal device 1240 including the microphones 1241 and 1242.
  • the terminal device 1240 may acquire location information of the speakers 1210, 1220 and 1230 by sensing the sound output from the speakers 1210, 1220 and 1230 through the microphones 1241 and 1242.
  • the sound output from the speaker 1220 may be sensed through the microphones 1241 and 1242 of the terminal device 1240. Since the distances between the speaker and the microphones 1241 and 1242 differ according to the positions of the microphones, the sensing times for the same sound differ from each other. Based on the time difference of arrival obtained from the sensing times T1 and T2, the distance between the speaker 1220 and each of the microphones 1241 and 1242 may be obtained. In addition, based on the spacing between the microphones 1241 and 1242, the angle between the central axis of the terminal device 1240 and the speaker 1220 may be obtained. The central axis of the terminal device 1240 may be set based on the front direction of the terminal device 1240.
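  • A sketch of the two-microphone estimate described above is given below. From the sensing times T1 and T2 and the spacing between the microphones 1241 and 1242 it derives the time difference of arrival and a bearing relative to the central axis of the terminal device; the far-field approximation and the speed of sound are assumptions, not part of the disclosure.

```python
import math

def bearing_from_tdoa(t1_s, t2_s, mic_spacing_m, speed_of_sound=343.0):
    """Angle of the speaker relative to the terminal's central axis, estimated
    from the arrival times T1 and T2 at the two microphones (far-field model)."""
    path_difference = (t2_s - t1_s) * speed_of_sound           # metres
    sin_theta = max(-1.0, min(1.0, path_difference / mic_spacing_m))
    return math.degrees(math.asin(sin_theta))

# Example: with microphones 15 cm apart, an arrival-time difference of 0.22 ms
# corresponds to a bearing of roughly 30 degrees off the central axis.
angle = bearing_from_tdoa(0.0, 0.00022, 0.15)
```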
  • FIG. 13 is a diagram illustrating an example of a method of obtaining location information of a speaker, according to an exemplary embodiment
  • distance information between the speaker devices 1320 and 1330 and the terminal device 1310 may be obtained through a received signal strength indication (RSSI) of a wireless signal. By comparing the measured RSSI, it may be determined in which direction the speaker devices 1320 and 1330 are located with respect to the terminal device 1310.
  • the terminal device 1310 may process an audio signal and output the same to the speaker devices 1320 and 1330 according to the positions of the speaker devices 1320 and 1330.
  • Radio frequency (RF) modules capable of measuring RSSI may be provided in the terminal device 1310 and the speaker devices 1320 and 1330, respectively.
  • the RF modules provided in the terminal device 1310 and the speaker devices 1320 and 1330 will be referred to as RF module_TV, RF module_WA1, and RF module_WA2, respectively.
  • TV, WA1, and WA2 represent terminal device 1310 and speaker devices 1320 and 1330, respectively.
  • the RF module_TV, which is the RF module of the terminal device 1310, may be mounted toward the right or left side of the terminal device 1310, so that the direction in which each of the speaker devices 1320 and 1330 is located relative to the terminal device 1310 can be determined through RSSI comparison.
  • for example, the speaker device corresponding to the larger of RSSI(WA1, TV) and RSSI(WA2, TV) may be the one closer to the RF module_TV.
  • RSSI(TV, WA1), RSSI(WA2, WA1), and RSSI(TV, WA2) illustrated in FIG. 13 indicate the RSSIs detected among the terminal device 1310 and the speaker devices 1320 and 1330. By comparing the detected RSSIs with each other, the direction in which each of the speaker devices 1320 and 1330 is located relative to the terminal device 1310 may be determined.
  • for example, the speaker devices 1320 and 1330 may be determined to be located at the left and the right of the terminal device 1310, respectively.
  • alternatively, the speaker devices 1320 and 1330 may be determined to be located at the right and the left of the terminal device 1310, respectively.
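  • The left/right decision described for FIG. 13 can be sketched as a simple comparison of the measured RSSI values. The dictionary keys, the dBm figures, and the decision rule below are illustrative assumptions.

```python
def closer_speaker_to_rf_module_tv(rssi):
    """Compare RSSI(WA1, TV) and RSSI(WA2, TV): a larger (less negative) value
    means a shorter distance to RF module_TV, which is mounted toward one side
    of the terminal device 1310. `rssi` maps (device, device) pairs to dBm."""
    closer = "WA1" if rssi[("WA1", "TV")] > rssi[("WA2", "TV")] else "WA2"
    farther = "WA2" if closer == "WA1" else "WA1"
    return closer, farther

# Example: RSSI(WA1, TV) = -48 dBm and RSSI(WA2, TV) = -61 dBm -> WA1 is judged to be
# on the side of the terminal device where RF module_TV is mounted.
near, far = closer_speaker_to_rf_module_tv({("WA1", "TV"): -48, ("WA2", "TV"): -61})
```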
  • FIG. 14 is an exemplary diagram illustrating an example of a method of measuring a distance between speaker devices according to an exemplary embodiment.
  • the distance between the speaker devices may be measured by a proximity sensor provided in the speaker devices 1410 and 1420, for example, an RF module.
  • for example, the distance between the speaker devices 1410 and 1420 may be estimated from the RSSI measured between them: the smaller the RSSI value, the longer the distance between the speaker devices 1410 and 1420 may be determined to be.
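  • Converting the measured RSSI into a distance between the speaker devices 1410 and 1420 can be sketched with a log-distance path-loss model. This model, the reference power at 1 m, and the path-loss exponent are assumptions for illustration; the description only states that a smaller RSSI corresponds to a longer distance.

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m_dbm=-45.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: the smaller the RSSI, the longer the
    estimated distance between the speaker devices."""
    return 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: -45 dBm maps to about 1 m, while -65 dBm maps to about 10 m.
d_near = distance_from_rssi(-45.0)   # ~1.0
d_far = distance_from_rssi(-65.0)    # ~10.0
```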
  • FIG. 15 is a block diagram illustrating an internal structure of a device according to an embodiment.
  • the device 1500 may include a receiver 1510, a controller 1520, and an output unit 1530.
  • the receiver 1510 may receive an audio signal to be output through a speaker.
  • the audio signal that may be received by the receiver 1510 may correspond to the image or screen being displayed.
  • the controller 1520 may separate the audio signal received from the receiver 1510 into a main signal and a background signal.
  • the controller 1520 may generate the mixed main signal and the background signal by mixing the main signal and the background signal based on the location information of the main speaker and the auxiliary speaker.
  • the location information of the main speaker and the auxiliary speaker may be received from the outside by the receiver 1510 or may be obtained according to the method of acquiring the location information of the speaker described above.
  • the controller 1520 may determine a gain value based on location information of the main speaker and the auxiliary speaker, and mix the main signal and the background signal according to the determined gain value.
  • for example, as the distance between the central axis and the auxiliary speaker decreases, the gain value applied to the main signal that is mixed with the background signal and output from the auxiliary speaker may be determined to be a larger value.
  • conversely, as the distance between the central axis and the auxiliary speaker increases, the gain value applied to the background signal that is mixed with the main signal and output from the main speaker may be determined to be a larger value.
  • the output unit 1530 may output the mixed main signal to the main speaker and the mixed background signal to the auxiliary speaker.
  • the audio signal when using a plurality of speakers, may be processed and output to provide an optimal sound effect in consideration of the location information of each speaker.
  • the method according to some embodiments may be embodied in the form of program instructions that may be executed by various computer means and recorded on a computer readable medium.
  • the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, and magneto-optical media such as floptical disks.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.

Abstract

Disclosed is a method of processing an audio signal in a device, the method comprising: separating an audio signal into a main signal and a background signal; acquiring location information of a main speaker and an auxiliary speaker; mixing the main signal and the background signal on the basis of the location information; and outputting the mixed main signal and background signal to the main speaker and the auxiliary speaker, respectively.
PCT/KR2015/012853 2014-12-01 2015-11-27 Method and device for outputting an audio signal on the basis of location information of a speaker WO2016089049A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201580075082.9A CN107211213B (zh) 2014-12-01 2015-11-27 Method and device for outputting an audio signal based on location information of a speaker
US15/531,916 US10171911B2 (en) 2014-12-01 2015-11-27 Method and device for outputting audio signal on basis of location information of speaker
KR1020177014532A KR102343330B1 (ko) 2014-12-01 2015-11-27 Method and device for outputting an audio signal based on location information of a speaker

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462085729P 2014-12-01 2014-12-01
US62/085,729 2014-12-01

Publications (1)

Publication Number Publication Date
WO2016089049A1 true WO2016089049A1 (fr) 2016-06-09

Family

ID=56091954

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/012853 WO2016089049A1 (fr) 2014-12-01 2015-11-27 Method and device for outputting an audio signal on the basis of location information of a speaker

Country Status (4)

Country Link
US (1) US10171911B2 (fr)
KR (1) KR102343330B1 (fr)
CN (1) CN107211213B (fr)
WO (1) WO2016089049A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586539A (zh) * 2016-09-23 2020-08-25 Apple Inc. Speaker back volume extending through a speaker diaphragm
US11256338B2 (en) 2014-09-30 2022-02-22 Apple Inc. Voice-controlled electronic device

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432503B (zh) * 2015-01-06 2023-12-05 Samsung Electronics Co., Ltd. Electronic device and method for setting up a network of audio devices
JP7176194B2 (ja) * 2018-02-09 2022-11-22 Yamaha Corporation Information processing device, information processing method, and information processing program
CN108712706B (zh) * 2018-05-17 2020-09-22 OPPO Guangdong Mobile Telecommunications Co., Ltd. Sound production method and apparatus, electronic device, and storage medium
CN108901080A (zh) * 2018-07-02 2018-11-27 OPPO Guangdong Mobile Telecommunications Co., Ltd. Communication connection establishment method and related device
KR20210015540A (ko) * 2019-08-02 2021-02-10 LG Electronics Inc. Display device and surround sound system
KR102456748B1 (ko) * 2020-12-17 2022-10-20 LG Uplus Corp. Set-top terminal and operating method thereof
US11659331B2 (en) * 2021-01-22 2023-05-23 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for audio balance adjustment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004056418A (ja) * 2002-07-19 2004-02-19 Yamaha Corp Sound reproduction apparatus
KR100739762B1 (ko) * 2005-09-26 2007-07-13 Samsung Electronics Co., Ltd. Crosstalk cancellation apparatus and three-dimensional sound generation system employing the same
KR20070108341A (ko) * 2007-10-22 2007-11-09 주식회사 이머시스 Apparatus and method for reproducing three-dimensional sound using virtual speaker technology in a stereo speaker environment
US20100226500A1 (en) * 2006-04-03 2010-09-09 Srs Labs, Inc. Audio signal processing
US20120140959A1 (en) * 2001-02-09 2012-06-07 Fincham Lawrence R Sound system and method of sound reproduction

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7006645B2 (en) 2002-07-19 2006-02-28 Yamaha Corporation Audio reproduction apparatus
US20090103737A1 (en) 2007-10-22 2009-04-23 Kim Poong Min 3d sound reproduction apparatus using virtual speaker technique in plural channel speaker environment
CN101640831A (zh) * 2008-07-28 2010-02-03 Shenzhen Huawei Communication Technologies Co., Ltd. Loudspeaker array device and driving method thereof
JP2011124723A (ja) * 2009-12-09 2011-06-23 Sharp Corp Audio data processing device, audio device, audio data processing method, program, and recording medium on which the program is recorded
US9408011B2 (en) 2011-12-19 2016-08-02 Qualcomm Incorporated Automated user/sensor location recognition to customize audio performance in a distributed multi-sensor environment
KR102028122B1 (ko) * 2012-12-05 2019-11-14 Samsung Electronics Co., Ltd. Audio apparatus, signal processing method thereof, and computer-readable medium on which a program for performing the method is recorded
WO2014122550A1 (fr) 2013-02-05 2014-08-14 Koninklijke Philips N.V. Appareil audio et procédé correspondant
US20160066118A1 (en) 2013-04-15 2016-03-03 Intellectual Discovery Co., Ltd. Audio signal processing method using generating virtual object
KR20140127022A (ko) 2013-04-24 2014-11-03 Intellectual Discovery Co., Ltd. Audio signal processing method using virtual object generation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120140959A1 (en) * 2001-02-09 2012-06-07 Fincham Lawrence R Sound system and method of sound reproduction
JP2004056418A (ja) * 2002-07-19 2004-02-19 Sound reproduction apparatus
KR100739762B1 (ko) * 2005-09-26 2007-07-13 Samsung Electronics Co., Ltd. Crosstalk cancellation apparatus and three-dimensional sound generation system employing the same
US20100226500A1 (en) * 2006-04-03 2010-09-09 Srs Labs, Inc. Audio signal processing
KR20070108341A (ко) * 2007-10-22 2007-11-09 주식회사 이머시스 Apparatus and method for reproducing three-dimensional sound using virtual speaker technology in a stereo speaker environment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11256338B2 (en) 2014-09-30 2022-02-22 Apple Inc. Voice-controlled electronic device
USRE49437E1 (en) 2014-09-30 2023-02-28 Apple Inc. Audio driver and power supply unit architecture
CN111586539A (zh) * 2016-09-23 2020-08-25 Apple Inc. Speaker back volume extending through a speaker diaphragm
CN111586539B (zh) * 2016-09-23 2022-07-19 Apple Inc. Speaker back volume extending through a speaker diaphragm
US11693487B2 (en) 2016-09-23 2023-07-04 Apple Inc. Voice-controlled electronic device
US11693488B2 (en) 2016-09-23 2023-07-04 Apple Inc. Voice-controlled electronic device

Also Published As

Publication number Publication date
KR102343330B1 (ko) 2021-12-24
CN107211213B (zh) 2019-06-14
US20170325028A1 (en) 2017-11-09
KR20170089861A (ko) 2017-08-04
CN107211213A (zh) 2017-09-26
US10171911B2 (en) 2019-01-01

Similar Documents

Publication Publication Date Title
WO2016089049A1 (fr) Method and device for outputting an audio signal on the basis of location information of a speaker
WO2015053485A1 (fr) Audio system, audio output method, and speaker apparatus
WO2018008885A1 (fr) Image processing device, method for controlling image processing device, and computer-readable recording medium
WO2014119857A1 (fr) System and method for adjusting the audio output channels of speakers
WO2018004163A1 (fr) Acoustic output device and control method therefor
WO2017052056A1 (fr) Electronic device and audio processing method thereof
WO2013147547A1 (fr) Audio device and corresponding method of converting an audio signal
WO2014021670A1 (fr) Mobile apparatus and control method therefor
WO2017119644A1 (fr) Electronic device and operating method for the electronic device
WO2017039255A1 (fr) Earphone, earphone system, and earphone control method
CN109121047B (zh) Method for implementing stereo sound on a dual-screen terminal, terminal, and computer-readable storage medium
WO2015147435A1 (fr) Audio signal processing system and method
WO2020050473A1 (fr) Device and method for adaptively controlling a preamble in a UWB network
US20220021980A1 (en) Terminal, audio cooperative reproduction system, and content display apparatus
WO2018164547A1 (fr) Image display apparatus and operation method therefor
EP2668790A2 (fr) Procédé et appareil pour contrôler à distance un dispositif électronique grand public au moyen d'un réseau personnel sans fil
WO2017057866A1 (fr) Audio output device and method for controlling an audio output device
WO2018084483A1 (fr) Speaker apparatus, electronic apparatus connected thereto, and control method therefor
WO2015020418A1 (fr) Function upgrade device, display apparatus, and method for controlling the display apparatus
WO2013187688A1 (fr) Audio signal processing method and audio signal processing apparatus adopting the same
WO2016053019A1 (fr) Method and apparatus for processing an audio signal containing noise
WO2018084468A1 (fr) Electronic device and method for controlling a wireless connection of the electronic device
WO2019074238A1 (fr) Microphone, electronic apparatus including a microphone, and method for controlling an electronic apparatus
WO2016167464A1 (fr) Method and apparatus for processing audio signals on the basis of speaker information
WO2015088149A1 (fr) Sound output device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15866097

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20177014532

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15531916

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15866097

Country of ref document: EP

Kind code of ref document: A1