EP1393591A2 - Audio system with automatic adjustment - Google Patents
Audio system with automatic adjustment
Info
- Publication number
- EP1393591A2 (application EP01989480A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- user
- speakers
- audio
- image
- generating system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
Definitions
- the invention relates to audio systems, such as stereo systems, television audio systems and home theater systems.
- the invention relates to systems and methods for adjusting audio systems.
- GB 2,228,324 describes a system that adjusts the balance of a stereo system as a user moves, in order to maintain the stereo effect for the listener.
- a signal emitter carried by the user emits signals to two separate receivers that are adjacent to two stereo speakers.
- the signal emitted may be an ultrasonic signal, infra-red signal or radio signal and may be emitted in response to an initiating signal. (It may also be a wired electrical signal.)
- the system uses the time it takes a respective receiver (adjacent to a speaker) to receive the signal from the signal emitter to determine the distance between the user and that speaker. The distance between the user and each of the two speakers is thus calculated.
- GB 2,228,324 refers to the system determining the position of the user by determining the point where the user's distance from each speaker overlaps, but notes that determining position is not necessary for adjusting stereo balance.
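The GB 2,228,324 scheme described above can be sketched as follows. This is an illustration only, not the patent's implementation: the function names, the 2-D geometry, and placing the speakers at (0, 0) and (baseline, 0) are my assumptions. A one-way signal travel time gives a distance, and intersecting the two distance circles centred on the speakers gives the overlap point the abstract mentions:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of an ultrasonic pulse in air

def distance_from_flight_time(seconds: float) -> float:
    """Distance implied by a one-way signal travel time."""
    return SPEED_OF_SOUND * seconds

def listener_position(d_left: float, d_right: float, baseline: float):
    """Intersect the two distance circles centred on the speakers.

    Speakers are assumed at (0, 0) and (baseline, 0); the listener is
    assumed in front of them (y >= 0). Returns (x, y), or None if the
    distances are inconsistent with the speaker separation.
    """
    x = (d_left**2 - d_right**2 + baseline**2) / (2 * baseline)
    y_squared = d_left**2 - x**2
    if y_squared < 0:
        return None  # the two circles do not intersect
    return (x, math.sqrt(y_squared))
```

As the patent notes, the full position is not needed for balance adjustment; the two distances alone suffice for that.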
- Japanese Patent Abstract 5-137200 detects the position of a viewer in one of five angular zones with respect to the front of a television by pointing a separate infra-red detector at each zone. The balance of the stereo speakers flanking the television screen is said to be adjusted based on the zone the viewer is in.
- Japanese Patent Abstract 4-130900 uses elapsed time of light transmission to calculate the distances between a listener and two light emitting and detecting parts. The distances between the user and the two parts and the distance between the two parts is used to calculate the position of the listener and to adjust the balance of the audio signal.
- Japanese Patent Abstract 7-302210 uses an infra-red signal to measure the distance between a listening position and a series of speakers and to adjust an appropriate delay time for each speaker based on the distance between the speaker and the listening position.
- One obvious difficulty with the prior art systems is that they either require a user to wear or carry a signal emitter (as in GB 2,228,324) in order to enjoy automatic adjustment of the balance of a stereo system, or else rely on sensors (such as infra-red sensors) that are unreliable and/or crude in detecting the position of a listener.
- sensors such as infra-red sensors
- use of infra-red detectors may fail to detect the listener, resulting in the above-mentioned systems failing to balance properly for the user's position.
- other people or other items, such as pets
- may be sensed by the sensors, resulting in the balance being adjusted for someone or something other than the listener.
- a home theater system typically has a multiplicity of speakers positioned about a room that are used to project audio, including audio effects, to a listener.
- the audio is not simply "balanced" between speakers. Rather, the output of a particular speaker location may be raised and lowered or otherwise coordinated based on the audio effect to be projected to the listener at his or her location. For example, two speakers of a multiplicity of speakers may be driven in phase or out of phase, in order to project a particular audio effect to a listener at the listener's position.
- the invention provides an audio system (including an audiovisual system) that can automatically adjust to the position of the listener or user of the system, including a change in position of the user.
- the system uses image capturing and recognition that recognizes all or part of the contours of a human body, i.e., the user. Based on the position of the user in the field of view, the system determines position information of the user. In one embodiment of the system, for example, the angular position of the user is determined based on the location of the image of the user in the field of view of an image capturing device, and the system may adjust the output of two or more speakers based on the determined angle.
- the image capturing device may be, for example, a video camera connected to a control unit or CPU that has image recognition software programmed to recognize all or part of the shape of a human body.
- image recognition software programmed to recognize all or part of the shape of a human body.
- Various methods of detecting and tracking active contours such as the human body have been developed. For example, a "person finder” that finds and follows people's bodies (or head or hands, for example) in a video image is described in "Pfinder: Real-Time Tracking Of the Human Body” by Wren et al., M.I.T. Media Laboratory Perceptual Computing Section Technical Report No. 353, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp 780-85 (July 1997), the contents of which are hereby incorporated by reference.
- control unit or CPU may be programmed to recognize the contours of a human head or even the contours of a particular user's face.
- Software that can recognize faces in images is commercially available, such as the "FaceIt" software sold by Visionics and described at www.faceit.com.
- Software incorporating such algorithms, which may be used to detect human bodies, faces, etc., will generally be referred to as image recognition software, an image recognition algorithm and the like in the description below.
- the position of the recognized body or head relative to the field of view of the camera may be used, for example, to determine the angle of the user's location with respect to the camera.
- the determined angle may be used to balance or otherwise adjust the audio output and effects to be projected by each speaker to the user's location.
- the use of an image capturing device and related image sensing software that identifies the contour of a human body or a particular face makes the detection of the user more accurate and reliable.
- Two or more such programmed image capturing devices having overlapping fields of view may be used to accurately determine the location of the user.
- two separate cameras as described above may be separately located and each may be used to determine the user's position in a reference coordinate system.
- the user's location may be used by the audio system, for example, to determine the distance between the user's present location and the fixed (known) position of each speaker in the reference coordinate system and to make the appropriate adjustments to the speaker output to provide the proper audio mix to the user's location, such as audio effects in a home theater system.
- the invention comprises an audio generating system that outputs audio through two or more speakers.
- the audio output of each of the two or more speakers is adjustable based upon the position of a user with respect to the positions of the two or more speakers.
- the system includes at least one image capturing device (such as a video camera) that is trainable on a listening region and coupled to a processing section having image recognition software.
- the processing section uses the image recognition software to identify the user in an image generated by the image capturing device.
- the processing section also has software that generates at least one measurement of the position of the user based upon the position of the user in the image.
- Fig. 1 is a perspective view of a home theater system including automatic detection and locating of a user and adjustment of output in accordance with a first embodiment of the invention;
- Fig. 1a is a diagram of portions of the control system of the system of Fig. 1;
- Fig. 2a is an image that includes an image of a user captured by a first camera of the system of Fig. 1;
- Fig. 2b is an image that includes an image of the user captured by a second camera of the system of Fig. 1;
- Fig. 3 is a representative view of a stereo system including automatic detection and locating of a user and adjustment of output in accordance with a second embodiment of the invention; and
- Fig. 3a is an image that includes an image of the user captured by a camera of the system of Fig. 3.
- a user 10 is shown positioned amongst audio and visual components of a home theater system.
- the home theater system is comprised of a video display screen 14 and a series of audio speakers 18a-e surrounding the perimeter of a comfortable viewing area for the display screen 14.
- the system is also comprised of a control unit 22, shown in Fig. 1 positioned atop the display screen 14.
- the control unit 22 may be positioned elsewhere or may be incorporated within the display unit 14 itself.
- the control unit 22, display screen 14 and speakers 18a-e are all electrically connected with electrical wires and connectors. The wires are typically run beneath carpet in a room or within an adjacent wall, so they are not shown in Fig. 1.
- the home theater system of Fig. 1 includes electrical components that produce visual output from display screen 14 and corresponding audio output from speakers 18a-e.
- the audio and video processing for the home theater output typically occurs in the control unit 22, which may include a processor, memory and related processing software.
- control units and related processing components are known and available in various commercial formats.
- Audio and video input provided to the control unit 22 may come from a television signal, a cable signal, a satellite signal, a DVD and/or a VCR.
- the control unit 22 processes the input signal and provides appropriate signals to the driving circuitry of the display screen 14, resulting in a video display, and also processes the input signal and provides appropriate driving signals to the speakers 18a-e, as shown in Fig. 1a.
- the audio portion of the signal input to the control unit 22 may be a stereophonic signal or may support more complex audio processing, such as audio effects processing by the control unit 22.
- the control unit 22 may drive speakers 18b, 18c, 18d in an overlapping sequence in order to simulate a car passing by on the right hand portion of the display.
- the amplitude and phase of each speaker 18b, 18c, 18d is driven based on the received audio signal by the control unit 22, as well as the position of the speaker 18b, 18c, 18d relative to the user 10 as stored in the memory of control unit 22.
- the control unit 22 may receive and store the positions of the speakers 18a-e and the position of the user 10 with respect to a common reference system, such as the one defined by origin O and unit vectors (x,y,z) in Fig. 1.
- the x, y and z coordinates of each speaker 18a-e and the user 10 in the reference coordinate system may be physically measured or otherwise determined and input to the control unit 22.
- the position of user 10 in Fig. 1 is shown to have coordinates (Xp,Yp, Zp) in the reference coordinate system.
- the reference coordinate system in general may be located in positions other than shown in Fig. 1. (As described further below, the origin O of the reference coordinate system in Fig. 1 is located at the position of camera 26a.)
- control unit 22 may alternatively translate the coordinates to an internal reference coordinate system.
- the position of the user 10 and the speakers 18a-e in such a common reference coordinate system enables the control unit 22 to determine the position of the user 10 with respect to each speaker 18a-e. (It is well known that subtracting the coordinates of the user 10 from the coordinates of the speaker 18a determines their relative positions in the reference coordinate system.)
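The coordinate subtraction the text refers to is componentwise; a minimal sketch (the helper names are mine, not the patent's):

```python
import math

def relative_position(speaker, user):
    """Position of a speaker relative to the user: componentwise
    subtraction in the shared reference coordinate system."""
    return tuple(s - u for s, u in zip(speaker, user))

def separation(speaker, user):
    """Euclidean distance between the user and a speaker."""
    return math.sqrt(sum((s - u) ** 2 for s, u in zip(speaker, user)))
```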
- Software within the control unit 22 electronically adjusts the driving signals for the audio output (such as volume, frequency, phase) of each speaker based upon the received audio signal, as well as the position of the user 10 relative to the speaker.
- Electronic adjustment of the audio output by the control unit 22 based on the relative positions of the speakers 18a-e with respect to the user 10 is known in the art.
- the control system may allow the user to manually adjust the audio output of each speaker 18a-e.
- Such manual controls of the audio components via the control unit 22 is also known in the art.
- input may be provided to the control unit 22 through a remote that wirelessly interfaces with the control unit 22 and projects a menu on the display screen 14 that allows, for example, input of positional data.
- the home theater system of Fig. 1 can also automatically identify the user and the user's location in the reference coordinate system.
- the locations of the user 10 and the speakers 18a-e in the reference coordinate system at origin O were presumed to be known by the control unit 22 based, for example, on manual input provided by the user.
- the positions of the speakers 18a-e will still normally be known to the control unit 22, since they usually will remain fixed after they are placed.
- the positions of the speakers 18a-e in the reference coordinate system are each manually input to the control system 22 during the initial system set-up and generally remain fixed thereafter.
- the speaker location may be changed, of course, and a new position(s) may be input, but this does not occur during normal usage of the system.
- the control unit 22 adjusts the audio output to each speaker 18a-e based on the relative locations of the user 10 and the speakers 18a-e, as in the case of manual input of positions, as previously described.
- the system is further comprised of two video cameras 26a, 26b located atop display screen 14 and directed toward the normal viewing area of the display screen 14.
- Camera 26a is located at the origin O of the common reference coordinate system.
- video cameras 26a, 26b may be positioned at other locations; the reference coordinate system may also be re-positioned to a different location of camera 26a or elsewhere.
- Video cameras 26a, 26b interface with the control unit 22 and provide it with images captured in the viewing area.
- Image recognition software is loaded in control unit 22 and is used by a processor therein to process the video images received from the cameras 26a, 26b.
- the components, including memory, of the control unit 22 used for image recognition may be separate or may be shared with the other functions of the control unit 22, such as those shown in Fig. 1a. Alternatively, the image recognition may take place in a separate unit.
- Fig. 2a depicts the image in the field of view of camera 26a on one side of the display screen of Fig. 1.
- the image of Fig. 2a is transmitted to control unit 22, where it is processed using, for example, known image recognition software loaded therein.
- An image recognition algorithm may be used to recognize the contours of a human body, such as the user 10.
- image recognition software may be used that recognizes faces or may be programmed to recognize a particular face or faces, such as the face of user 10.
- the control unit 22 is programmed to determine the point P1' at the center of the head of the user 10 in the image and its coordinates (x',y') with respect to the point O1' in the upper left-hand corner of the image.
- the point O1' in the image of Fig. 2a corresponds approximately to the point (0,0,Zp) in the reference coordinate system of Fig. 1.
- Fig. 2b depicts the image in the field of view of camera 26b on the other side of the display screen of Fig. 1.
- the image of Fig. 2b is transmitted to control unit 22, where it is processed using image recognition software to recognize the user 10 or the image of the user's face.
- the control unit determines the point P1'' at the center of the head of the user 10 in the image of Fig. 2b and the coordinates (x'',y'') with respect to the point O1'' in the upper left-hand corner of the image.
- the coordinates (Xp,Yp, Zp) of the position P of the user 10 in the reference coordinate system of Fig. 1 may be uniquely determined using standard techniques of computer vision known as the "stereo problem".
- Basic stereo techniques of three dimensional computer vision are described for example, in “Introductory Techniques for 3-D Computer Vision” by Trucco and Verri, (Prentice Hall, 1998) and, in particular, Chapter 7 of that text entitled “Stereopsis", the contents of which are hereby incorporated by reference.
- D is the distance between cameras 26a, 26b.
- Eqs. 1-4 are accurate up to linear transformations defined by the camera geometry.
- Equations 1-4 have three unknown variables (the coordinates Xp, Yp, Zp); thus their simultaneous solution gives the values of Xp, Yp and Zp and hence the position of the user 10 in the reference coordinate system of Fig. 1.
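Equations 1-4 themselves did not survive extraction from the original document. Under the standard pinhole stereo model (a reconstruction, not the patent's own equations: the shared focal length f and the choice of the x axis along the camera baseline are assumptions), the four image-coordinate relations would read:

```latex
x'  = f\,\frac{X_p}{Z_p}, \qquad
y'  = f\,\frac{Y_p}{Z_p}, \qquad
x'' = f\,\frac{X_p - D}{Z_p}, \qquad
y'' = f\,\frac{Y_p}{Z_p}
```

so that the disparity x' - x'' yields Z_p = f D / (x' - x''), after which X_p and Y_p follow from the first pair.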
- the coordinates (Xp, Yp, Zp) may be translated to another internal coordinate system of the control unit 22.
- the processing required to determine the position (Xp,Yp, Zp) of the user and to translate the coordinates to another reference coordinate system, if necessary, may also take place in a processing unit other than control unit 22. For example, it may take place in a processing unit that also supports the image recognition processing, thus comprising a separate processing unit dedicated to the tasks of image detection and location.
- the fixed positions of speakers 18a-e are known to the control unit 22 based on prior input. For example, once each speaker 18a-e is placed in the room as shown in Fig. 1, the coordinates (x,y,z) of each speaker 18a-e in the reference coordinate system, and the distance D between cameras 26a, 26b, may be measured and input in memory in the control unit 22.
- the coordinates (Xp,Yp, Zp) of the user 10 as determined using the image recognition software (along with the post-recognition processing of the stereo problem described above) and the pre-stored coordinates of each speaker may then be used to determine the position of the user 10 with respect to each speaker 18a-e.
- the audio processing of the control unit 22 may then appropriately adjust the audio output (including amplitude, frequency and phase) of each speaker 18a-e based upon the input audio signal and the position of the user 10 with respect to the speakers 18a-e.
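One way such a per-speaker adjustment could look is sketched below. This is not the patent's algorithm: the 1/r amplitude compensation and the arrival-time alignment are assumptions chosen for illustration.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def speaker_drive(user, speakers, ref_dist=2.0):
    """For each speaker, a (gain, delay) pair: the gain compensates the
    1/r amplitude falloff relative to ref_dist, and the delay lines up
    arrival times at the user's position (Xp, Yp, Zp)."""
    dists = [math.dist(user, s) for s in speakers]
    t_farthest = max(dists) / SPEED_OF_SOUND
    return [(d / ref_dist, t_farthest - d / SPEED_OF_SOUND) for d in dists]
```

Phase and frequency shaping for audio effects would sit on top of this, driven by the input audio signal as the text describes.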
- the use of the video cameras 26a, 26b, image recognition software, and post- recognition processing to determine a detected user's position thus allows the location of the user of the home theater system of Fig. 1 to be automatically detected and determined. If the user moves, the processing is repeated and a new position is determined for the user, and the control unit 22 uses the new location to adjust the audio signals output by speakers 18a-e.
- the automatic detection feature may be turned off so that the output of the speakers is based on a default or a manual input of the location of the user 10.
- the image recognition software may also be programmed to recognize, for example, a number of different faces and the face of a particular user may be selected for recognition and automatic adjustment. Thus, the system may adjust to the position of a particular user in the viewing area.
- the image recognition software may be used to detect all faces or human bodies in the viewing area and the processing may then automatically determine each of their respective locations.
- the adjustment of the audio output of each speaker 18a-e may be determined by an algorithm that attempts to optimize the aural experience at the location of each detected user.
- Fig. 1 depicted a home theater system
- the automatic detection and adjustment may be used by other audiovisual systems or other purely audio systems. It may be used, for example, with a stereo system having a number of speakers to adjust the volume at each speaker location based on the determined location of the user with respect to the speakers in order to maintain a proper (or pre-determined) balance of the stereophonic sound at the location of the user.
- a simpler embodiment of the invention applied to a two speaker stereo system is shown in Fig. 3.
- the basic components of the stereo system comprise a stereo amplifier 130 attached to two speakers 100a, 100b.
- a camera 110 is used to detect an image of a listening region, including the image of a listener 140 in the listening region.
- Fig. 3 shows a simple reference coordinate system in the plane, having an origin O at the camera and comprised of the angle of an object with respect to the axis A of the camera 110.
- the angle α is the angular position of speaker 100a
- the angle β is the angular position of speaker 100b
- the angle θ is the angular position of the user 140.
- Fig. 3 shows the top of the user's head.
- the user 140 is assumed to listen to the stereo in the central region of Fig. 3 at an approximate distance D from the origin O.
- the speakers 100a, 100b have a default balance at the position D along the axis A, which is approximately at the center of the listening area.
- the angles α and β of the positions of speakers 100a, 100b are measured and pre-stored in processing unit 120.
- the image captured by the camera 110 is transferred to the processing unit 120 that includes image recognition software that detects the contour of a human body, a particular face, etc., as described in the embodiment above.
- the location of the detected body or face in the image is used by the processing unit to determine the angle θ corresponding to the position of the user 140 in the reference coordinate system.
- a first order determination of the angle θ is:
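The formula itself is missing from the extracted text. A plausible first-order sketch (an assumption; the image width W and the camera's horizontal field of view Φ are not defined in the patent) scales the horizontal offset of the detected head from the image center:

```latex
\theta \approx \left(x' - \frac{W}{2}\right)\frac{\Phi}{W}
```

which is accurate for users near the camera axis A, consistent with the first-order qualification in the text.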
- the processing unit 120 in turn sends a signal to the amplifier that adjusts the balance of speakers 100a, 100b based on the relative angular positions of the user 140 and the speakers 100a, 100b.
- the output of speaker 100a is adjusted using a factor (α − θ) and the output of speaker 100b is adjusted using a factor (β + θ).
- the balance of speakers 100a, 100b is thus automatically adjusted based upon the position of the user 140 with respect to the speakers 100a, 100b.
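Normalizing the two correction factors gives a simple sketch of the balance rule. The symbol names α, β, θ and the normalization are my reading of the garbled factors in the text, not a verbatim implementation:

```python
def balance_gains(theta, alpha, beta):
    """Left/right gains from the correction factors (alpha - theta) and
    (beta + theta), normalized to sum to 1. alpha and beta are the
    speaker angles and theta the user's angle, all measured from the
    camera axis A; theta is taken positive toward speaker 100a."""
    g_a = alpha - theta   # user moving toward 100a lowers its gain
    g_b = beta + theta
    total = g_a + g_b     # equals alpha + beta, independent of theta
    return g_a / total, g_b / total
```

With the user on the axis (θ = 0) and symmetric speakers, the gains are equal; as the user drifts toward one speaker, its gain falls and the other rises, restoring the balance at the user's position.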
- the adjustment of the balance based on the angular position θ of the user is an acceptable first-order adjustment.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
- Television Receiver Circuits (AREA)
Abstract
The invention relates to an audio generating system that outputs sound through one or more speakers. The sound output by each speaker is adjustable based on the position of a user relative to the locations of the speakers. The system comprises at least one image capturing device (such as a video camera) that can be trained on a listening region and coupled to a processing section having image recognition software. The processing section uses the image recognition software to identify the user in an image generated by the image capturing device. The processing section also includes software that generates at least one measurement of the user's location based on the position of the user in the image.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US713898 | 1991-06-12 | ||
US71389800A | 2000-11-16 | 2000-11-16 | |
PCT/EP2001/013304 WO2002041664A2 (fr) | 2000-11-16 | 2001-11-14 | Systeme audio a reglage automatique |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1393591A2 true EP1393591A2 (fr) | 2004-03-03 |
Family
ID=24867986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP01989480A Withdrawn EP1393591A2 (fr) | 2000-11-16 | 2001-11-14 | Systeme audio a reglage automatique |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1393591A2 (fr) |
JP (1) | JP2004514359A (fr) |
WO (1) | WO2002041664A2 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114679661A (zh) * | 2022-04-29 | 2022-06-28 | 歌尔科技有限公司 | 音箱控制方法、装置、音箱设备、立体声音箱及存储介质 |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102004004068A1 (de) * | 2004-01-20 | 2005-08-04 | Deutsche Telekom Ag | Verfahren und Steuereinheit zur ortsbezogenen Einrichtung und Optimierung von Multimedia-Anlagen |
GB0415625D0 (en) * | 2004-07-13 | 2004-08-18 | 1 Ltd | Miniature surround-sound loudspeaker |
KR101118214B1 (ko) | 2004-09-21 | 2012-03-16 | 삼성전자주식회사 | 청취 위치를 고려한 2채널 가상 음향 재생 방법 및 장치 |
FR2877534A1 (fr) * | 2004-11-03 | 2006-05-05 | France Telecom | Configuration dynamique d'un systeme sonore |
US8015590B2 (en) | 2004-12-30 | 2011-09-06 | Mondo Systems, Inc. | Integrated multimedia signal processing system using centralized processing of signals |
US8880205B2 (en) * | 2004-12-30 | 2014-11-04 | Mondo Systems, Inc. | Integrated multimedia signal processing system using centralized processing of signals |
US7653447B2 (en) | 2004-12-30 | 2010-01-26 | Mondo Systems, Inc. | Integrated audio video signal processing system using centralized processing of signals |
WO2006100644A2 (fr) * | 2005-03-24 | 2006-09-28 | Koninklijke Philips Electronics, N.V. | Adaptation de l'orientation et de la position d'un dispositif electronique pour experiences d'immersion |
JP5091857B2 (ja) * | 2005-06-30 | 2012-12-05 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | システム制御方法 |
JP4789145B2 (ja) * | 2006-01-06 | 2011-10-12 | サミー株式会社 | コンテンツ再生装置及びコンテンツ再生プログラム |
JP4535006B2 (ja) * | 2006-03-08 | 2010-09-01 | ソニー株式会社 | テレビジョン装置 |
US8675880B2 (en) | 2006-03-31 | 2014-03-18 | Koninklijke Philips N.V. | Device for and a method of processing data |
KR20090022718A (ko) * | 2007-08-31 | 2009-03-04 | 삼성전자주식회사 | 음향처리장치 및 음향처리방법 |
US9066191B2 (en) * | 2008-04-09 | 2015-06-23 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for generating filter characteristics |
US8269902B2 (en) | 2009-06-03 | 2012-09-18 | Transpacific Image, Llc | Multimedia projection management |
US8976986B2 (en) | 2009-09-21 | 2015-03-10 | Microsoft Technology Licensing, Llc | Volume adjustment based on listener position |
US9196238B2 (en) * | 2009-12-24 | 2015-11-24 | Nokia Technologies Oy | Audio processing based on changed position or orientation of a portable mobile electronic apparatus |
US9100633B2 (en) | 2010-11-18 | 2015-08-04 | Lg Electronics Inc. | Electronic device generating stereo sound synchronized with stereographic moving picture |
TWI510106B (zh) * | 2011-01-28 | 2015-11-21 | Hon Hai Prec Ind Co Ltd | 聲音輸出校正系統及方法 |
KR101874836B1 (ko) | 2012-05-25 | 2018-08-02 | 삼성전자주식회사 | 음향 보정이 가능한 디스플레이 장치, 청각 레벨 제어 장치 및 방법 |
US9591405B2 (en) * | 2012-11-09 | 2017-03-07 | Harman International Industries, Incorporated | Automatic audio enhancement system |
US9544679B2 (en) | 2014-12-08 | 2017-01-10 | Harman International Industries, Inc. | Adjusting speakers using facial recognition |
TWI603626B (zh) * | 2016-04-26 | 2017-10-21 | 音律電子股份有限公司 | 揚聲裝置、其控制方法及播放控制系統 |
US10255032B2 (en) | 2016-12-13 | 2019-04-09 | EVA Automation, Inc. | Wireless coordination of audio sources |
CN106851469A (zh) * | 2017-02-16 | 2017-06-13 | 深圳创维-Rgb电子有限公司 | 一种调整音箱输出音频的方法和装置 |
US10171054B1 (en) | 2017-08-24 | 2019-01-01 | International Business Machines Corporation | Audio adjustment based on dynamic and static rules |
US10484809B1 (en) | 2018-06-22 | 2019-11-19 | EVA Automation, Inc. | Closed-loop adaptation of 3D sound |
US10524053B1 (en) | 2018-06-22 | 2019-12-31 | EVA Automation, Inc. | Dynamically adapting sound based on background sound |
US10531221B1 (en) | 2018-06-22 | 2020-01-07 | EVA Automation, Inc. | Automatic room filling |
US10511906B1 (en) | 2018-06-22 | 2019-12-17 | EVA Automation, Inc. | Dynamically adapting sound based on environmental characterization |
US10708691B2 (en) | 2018-06-22 | 2020-07-07 | EVA Automation, Inc. | Dynamic equalization in a directional speaker array |
US10440473B1 (en) | 2018-06-22 | 2019-10-08 | EVA Automation, Inc. | Automatic de-baffling |
CN111782045A (zh) * | 2020-06-30 | 2020-10-16 | 歌尔科技有限公司 | 一种设备角度调节方法、装置、智能音箱及存储介质 |
CN116736982B (zh) * | 2023-06-21 | 2024-01-26 | 惠州中哲尚蓝柏科技有限公司 | 一种用于家庭影院的多媒体输出参数自动调节系统及方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4027338C2 (de) * | 1990-08-29 | 1996-10-17 | Drescher Ruediger | Balanceregelung für Stereoanlagen mit wenigstens zwei Lautsprechern |
JPH04351197A (ja) * | 1991-05-29 | 1992-12-04 | Matsushita Electric Ind Co Ltd | 指向性制御スピーカシステム |
US6741273B1 (en) * | 1999-08-04 | 2004-05-25 | Mitsubishi Electric Research Laboratories Inc | Video camera controlled surround sound |
-
2001
- 2001-11-14 EP EP01989480A patent/EP1393591A2/fr not_active Withdrawn
- 2001-11-14 WO PCT/EP2001/013304 patent/WO2002041664A2/fr not_active Application Discontinuation
- 2001-11-14 JP JP2002543259A patent/JP2004514359A/ja active Pending
Non-Patent Citations (1)
Title |
---|
See references of WO0241664A3 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114679661A (zh) * | 2022-04-29 | 2022-06-28 | 歌尔科技有限公司 | 音箱控制方法、装置、音箱设备、立体声音箱及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP2004514359A (ja) | 2004-05-13 |
WO2002041664A3 (fr) | 2003-12-18 |
WO2002041664A2 (fr) | 2002-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2002041664A2 (fr) | Systeme audio a reglage automatique | |
US9980040B2 (en) | Active speaker location detection | |
Ribeiro et al. | Using reverberation to improve range and elevation discrimination for small array sound source localization | |
US9906885B2 (en) | Methods and systems for inserting virtual sounds into an environment | |
JP5091857B2 (ja) | システム制御方法 | |
KR101730737B1 (ko) | 안구추적을 기반으로 한 상이한 거리 자동적응 홀로그램 디스플레이 방법 및 장치 | |
US9485556B1 (en) | Speaker array for sound imaging | |
EP0872808B1 (fr) | Méthode et appareil pour la détection visuelle d'humains pour des interfaces publiques actives | |
CN114208209B (zh) | 音频处理系统、方法和介质 | |
WO2014162554A1 (fr) | Système et programme de traitement d'image | |
KR20020094011A (ko) | 시청자의 위치에 의존하는 디스플레이의 자동 포지션닝 | |
JP2023508002A (ja) | オーディオデバイス自動場所選定 | |
JPH1141577A (ja) | 話者位置検出装置 | |
Mulder et al. | An affordable optical head tracking system for desktop VR/AR systems | |
EP2850506A2 (fr) | Système d'entrée | |
US20170123037A1 (en) | Method for calculating angular position of peripheral device with respect to electronic apparatus, and peripheral device with function of the same | |
Liu et al. | Multiple speaker tracking in spatial audio via PHD filtering and depth-audio fusion | |
JP2017512327A (ja) | 制御システム及び制御システムの動作方法 | |
Łopatka et al. | Application of vector sensors to acoustic surveillance of a public interior space | |
JP2009194447A (ja) | リモートコントローラ位置検出装置、リモートコントローラ位置検出システム、リモートコントローラ位置検出方法及びプログラム | |
US20220210588A1 (en) | Methods and systems for determining parameters of audio devices | |
Deldjoo et al. | A low-cost infrared-optical head tracking solution for virtual 3d audio environment using the nintendo wii-remote | |
US7599502B2 (en) | Sound control installation | |
WO2019244315A1 (fr) | Dispositif de commande de sortie, système de commande de sortie et procédé de commande de sortie | |
CN116261095A (zh) | 可动态调整目标聆听点并消除环境物件干扰的音响系统 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20040601 |