WO2018203579A1 - Stereophonic sound generating device and associated computer program - Google Patents

Stereophonic sound generating device and associated computer program

Info

Publication number
WO2018203579A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
sound source
directions
acoustic
positional relationship
Prior art date
Application number
PCT/KR2017/004677
Other languages
English (en)
Korean (ko)
Inventor
하수호
Original Assignee
하수호
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 하수호
Priority to PCT/KR2017/004677
Publication of WO2018203579A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • This embodiment relates to a stereophonic sound generating apparatus and a computer program therefor. More particularly, the present invention relates to a stereophonic sound generating apparatus and a computer program therefor that make it possible to implement 3D sound effects identical to actual sounds.
  • A method of realizing a 3D sound effect has been proposed in which the distance between a speaker and a listener is detected and the volume of the sound is adjusted based on that distance.
  • In that 3D sound implementation method, however, only the acoustic characteristics of the sound that travels straight outward, among all the sound spreading from the speaker in various directions, are considered, and the acoustic characteristics of the sound spreading in the other directions are not considered at all; this is a limitation.
  • According to this embodiment, the positional relationship between the listener object and the sound source object in the virtual space is identified, and based on the identified positional relationship, the acoustic characteristics with which the acoustic signal output from the sound source object is transmitted are determined differently for each of a plurality of reception directions centered on the listener object.
  • Accordingly, the main object of the present invention is to provide a stereoscopic sound generating apparatus and a computer program therefor that can implement 3D sound effects identical to real sounds by applying different acoustic characteristics for each reception direction.
  • The apparatus according to the present embodiment may include: a detector configured to detect the positions of a listener object and a sound source object in a virtual space and determine the positional relationship between the listener object and the sound source object; a parameter determiner configured to determine, for each of a plurality of reception directions divided about the listener object, the acoustic characteristic with which the acoustic signal output from the sound source object is transmitted, based on the positional relationship in the virtual space; and a signal controller configured to generate stereoscopic sound by applying, to the acoustic signal, the acoustic characteristics corresponding to each of the plurality of reception directions in which the acoustic signal is transmitted.
  • a detector configured to detect the positions of a listener object and a sound source object in a virtual space and determine the positional relationship between the listener object and the sound source object;
  • a parameter determiner configured to determine, for each of a plurality of reception directions divided about the listener object, the acoustic characteristic with which the acoustic signal output from the sound source object is transmitted;
  • The method according to the present embodiment includes: detecting the positions of the listener object and the sound source object in the virtual space to determine the positional relationship between them; determining, for each of a plurality of directions divided about the listener object, the acoustic characteristic with which the acoustic signal output from the sound source object is transmitted, based on the positional relationship in the virtual space; and generating stereoscopic sound by applying, to the acoustic signal, the acoustic characteristics corresponding to each of the plurality of directions in which the acoustic signal is transmitted. A computer program stored in a recording medium executes this process.
  • As described above, according to the present embodiment, the positional relationship between the listener object and the sound source object in the virtual space is identified, and based on the identified positional relationship, the acoustic characteristics with which the sound signal output from the sound source object is transmitted are determined differently for each of a plurality of reception directions centered on the listener object, so that a 3D sound effect identical to a real sound can be realized.
  • FIG. 1 is a block diagram schematically showing a three-dimensional sound generating apparatus according to the present embodiment.
  • FIG. 2 is an exemplary view for explaining a method of identifying a positional relationship between a listener object and a sound source object in a virtual space according to the present embodiment.
  • FIG. 3 is an exemplary diagram illustrating a plurality of reception directions divided by a listener object according to the present embodiment.
  • FIGS. 4 and 5 are exemplary views illustrating a method for providing stereo sound according to the present embodiment.
  • FIG. 6 is a flowchart illustrating a stereoscopic sound providing method according to the present embodiment.
  • The stereoscopic sound generating apparatus according to the present embodiment is a device for realizing a stereoscopic sound effect, and is applicable not only to audio systems but also to various fields such as games, artificial intelligence (AI), augmented reality (AR), and virtual reality (VR).
  • the application field of the 3D sound generating device is not limited to a specific field.
  • FIG. 1 is a block diagram schematically showing a three-dimensional sound generating apparatus according to the present embodiment.
  • FIG. 1 is an internal block diagram of a stereophonic sound generating apparatus 100 according to an embodiment of the present invention.
  • the stereophonic sound generating apparatus 100 is implemented as a separate stand-alone device in which hardware of a terminal and software of a stereophonic sound generating application are combined.
  • the components included in the 3D sound generating apparatus 100 may be implemented as software or hardware elements, respectively.
  • The stereoscopic sound generating apparatus 100 includes a detector 110, a parameter determiner 120, a signal controller 130, a storage 140, a user interface 150, a display 160, an audio circuit 170, and a speaker 180.
  • the components included in the stereoscopic sound generating apparatus 100 are not necessarily limited thereto.
  • the detector 110 detects a positional relationship between a listener object and a sound source object used as a reference parameter for stereoscopic sound generation.
  • the detector 110 detects a position of a listener object and at least one sound source object in a virtual space to determine a positional relationship between the listener object and the sound source object.
  • the virtual space may be an arbitrary space generated corresponding to a game space, an AI space, or an actual space in which sound is provided, according to an application field of the 3D sound generating apparatus 100.
  • the method of detecting the virtual space by the detector 110 is not limited to a specific method.
  • the detection unit 110 may receive data related to the virtual space through interworking with a server device (not shown) or a storage medium.
  • The detector 110 divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located.
  • the detector 110 may calculate a separation distance and a separation direction between the objects as a positional relationship between the listener object and the sound source object. To this end, the detector 110 may set a unique coordinate value for each cell divided according to the lattice structure.
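  • The sketch below is a minimal illustration (not part of the original disclosure) of how such a cell-based positional relationship could be computed; the cell size, coordinate scheme, and function names are assumptions made purely for the example.

```python
import math

def cell_of(position, cell_size=1.0):
    """Map a position in the virtual space to integer cell coordinates."""
    return tuple(int(math.floor(p / cell_size)) for p in position)

def positional_relationship(listener_pos, source_pos, cell_size=1.0):
    """Return the separation distance (in cell units) and separation direction
    (azimuth and elevation, in degrees) of the source relative to the listener."""
    lx, ly, lz = cell_of(listener_pos, cell_size)
    sx, sy, sz = cell_of(source_pos, cell_size)
    dx, dy, dz = sx - lx, sy - ly, sz - lz
    distance_cells = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return distance_cells, azimuth, elevation

# Listener at the origin, source three cells to the east and four cells to the north.
print(positional_relationship((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))
```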
  • The parameter determiner 120 determines the acoustic characteristics with which the sound signal output from the sound source object is transmitted to the listener object, in order to generate 3D sound.
  • the sound output from the sound source object in the real space is spread not only in the front direction, but also in various directions such as left, right, and rear according to propagation characteristics.
  • The sound propagating in each direction therefore differs in arrival time and incidence direction by the time it reaches the listener object.
  • In particular, sounds propagating in directions other than the straight-line path have different acoustic characteristics in terms of sound intensity, arrival time difference, incidence direction, and decay time when they reach the listener object, owing to factors such as attenuation or reflection.
  • Accordingly, the parameter determiner 120 differently determines, for each of a plurality of reception directions centered on the listener object, the acoustic characteristics with which the sound signal output from the sound source object is transmitted. That is, the parameter determiner 120 may set the sound intensity, arrival time difference, and decay time of the sound signal transmitted from the sound source object differently for each of the plurality of reception directions centered on the listener object, so that a 3D sound effect identical to the real sound can be realized.
  • For example, among the plurality of directions, the parameter determiner 120 may determine the volume of the sound signal for a first reception direction, in which the sound signal arrives straight from the sound source object, to be louder than the volume of the sound signals arriving in the other reception directions.
  • In addition, for the sound signals transmitted in the reception directions to the left or right of the first reception direction among the plurality of directions, the parameter determiner 120 may determine the acoustic characteristics so that the volume gradually decreases as the distance from the first reception direction increases.
  • The plurality of reception directions may be determined based on the direction in which sound spreading in each direction in real space is perceived by the listener's ear when it reaches the listener, and is preferably divided based on azimuth information.
  • Such a plurality of reception directions may be variously set by the user.
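  • As an illustration only (the number of directions, the falloff rate, and the names below are assumptions, not values disclosed here), azimuth-divided reception directions and a volume that decreases with angular distance from the direct-arrival direction could be sketched as follows.

```python
import math

def reception_directions(n=8):
    """Divide the horizontal plane into n reception directions by azimuth (degrees)."""
    return [i * 360.0 / n for i in range(n)]

def angular_distance(a, b):
    """Smallest absolute angle between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def direction_gains(directions, direct_azimuth, falloff_per_degree=0.004):
    """Assumed linear falloff: the direct-arrival direction gets full gain (1.0),
    and the gain decreases as a reception direction moves away from it."""
    return {
        az: max(0.0, 1.0 - falloff_per_degree * angular_distance(az, direct_azimuth))
        for az in directions
    }

dirs = reception_directions(8)                      # 0°, 45°, ..., 315°
print(direction_gains(dirs, direct_azimuth=45.0))   # loudest at 45°, quieter farther away
```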
  • The parameter determiner 120 may further utilize reference data stored in the storage 140, based on the positional relationship between the listener object and the sound source object identified through the detector 110, to differently determine, for each of the plurality of reception directions, the acoustic characteristics with which the acoustic signal output from the sound source object is transmitted.
  • Here, the storage 140 may store an acoustic parameter variation value for each of the plurality of reception directions according to the positional relationship between the listener object and the sound source object; this is described in detail later in connection with the storage 140.
  • The parameter determiner 120 determines, based on the positional relationship between the listener object and the sound source object detected by the detector 110, in which reception direction the current listener object is spaced apart from the sound source object. That is, based on the separation direction between the listener object and the sound source object identified through the detector 110, the parameter determiner 120 may determine in which of the plurality of reception directions the sound source object lies relative to the current listener object (hereinafter, the reference reception direction).
  • the parameter determiner 120 determines acoustic characteristics of a sound signal corresponding to each of the plurality of reception directions based on reference data in the storage 140 corresponding to the reference reception direction.
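  • A small sketch of how the reference reception direction described above could be selected (the logic and names are assumed for illustration, not taken from the disclosure): it can be chosen as the defined reception direction closest to the separation azimuth identified by the detector.

```python
def angular_distance(a, b):
    """Smallest absolute angle between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def reference_reception_direction(directions, separation_azimuth):
    """Pick the defined reception direction closest to the separation direction
    of the sound source object relative to the listener object."""
    return min(directions, key=lambda az: angular_distance(az, separation_azimuth))

eight_directions = [i * 45.0 for i in range(8)]                 # 0°, 45°, ..., 315°
print(reference_reception_direction(eight_directions, 100.0))   # -> 90.0
```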
  • When movement of either the listener object or the sound source object in the virtual space is detected, the parameter determiner 120 re-detects the positional relationship between the listener object and the sound source object. Thereafter, the parameter determiner 120 re-determines the acoustic characteristics of the acoustic signal for each of the plurality of reception directions based on the re-detected positional relationship and the corresponding reference data in the storage 140.
  • When there are a plurality of sound source objects, the parameter determiner 120 may determine the positional relationship between each sound source object and the listener object, and determine, for each of the plurality of reception directions, the acoustic characteristics of the acoustic signal output from each sound source object based on the identified positional relationship and the corresponding reference data in the storage 140.
  • The storage 140 stores the information necessary for generating stereoscopic sound.
  • In particular, the storage 140 stores, for each virtual space, the acoustic characteristics of the sound signal for each of the plurality of reception directions according to the positional relationship between the listener object and the sound source object. That is, the storage 140 stores an acoustic parameter variation value for each of the plurality of reception directions, corresponding to the listener object being separated from the sound source object by one unit distance in each of the reception directions divided about the listener object.
  • the unit distance is preferably a numerical value corresponding to the length of one cell when the virtual space is divided into a plurality of cells according to the lattice structure, but is not necessarily limited thereto.
  • The acoustic parameter variation value is preferably, but not necessarily limited to, a variation value of the sound intensity, arrival time difference, and decay time of the acoustic signal.
  • the parameter determiner 120 may determine acoustic characteristics of each of the plurality of reception directions based on a change value of the acoustic parameter in the storage 140 corresponding to the reference reception direction.
  • Specifically, the parameter determiner 120 first calculates, based on the separation distance between the listener object and the sound source object identified through the detector 110, how many cells apart the listener object and the sound source object are.
  • The parameter determiner 120 then multiplies the acoustic parameter variation value by a coefficient corresponding to the calculated difference value, and thereby determines the acoustic characteristics for each of the plurality of reception directions corresponding to the positional relationship between the current listener object and the sound source object.
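  • The following sketch illustrates this combination of stored per-unit-distance variation values and the cell-count coefficient; the table contents, the choice of gain in dB as the varied parameter, and the names are assumptions made purely for illustration.

```python
# Hypothetical, simplified reference data: for each reference reception direction,
# the change in gain (dB) per one-cell increase in distance, for every reception
# direction around the listener (only two reference directions shown here).
REFERENCE_DATA = {
    0.0:  {0.0: -1.0, 90.0: -1.4, 180.0: -2.0, 270.0: -1.4},
    90.0: {0.0: -1.4, 90.0: -1.0, 180.0: -1.4, 270.0: -2.0},
}

def per_direction_characteristics(reference_direction, distance_in_cells):
    """Multiply each per-unit-distance variation value by the coefficient
    (here simply the number of cells separating the listener and the source)."""
    variations = REFERENCE_DATA[reference_direction]
    return {direction: change * distance_in_cells for direction, change in variations.items()}

# The source lies in the 90° reference direction, four cells away from the listener.
print(per_direction_characteristics(90.0, 4))
```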
  • By storing only these per-unit-distance variation values rather than values for every possible position, the stereophonic sound generating apparatus 100 has the effect of minimizing the amount of data that must be stored for generating stereoscopic sound.
  • The signal controller 130 applies the acoustic characteristics corresponding to each of the plurality of reception directions to the acoustic signal and outputs the result.
  • That is, the signal controller 130 receives from the parameter determiner 120 a control command including the acoustic characteristics determined for each of the plurality of reception directions, and adjusts and outputs the acoustic signals for the plurality of reception directions based on the control command.
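  • A minimal sketch of the kind of per-direction adjustment this stage performs, assuming (purely for illustration) that each direction's characteristics arrive as a gain in dB and a delay in milliseconds, and that the input is a mono buffer:

```python
import numpy as np

def apply_characteristics(signal, characteristics, sample_rate=48000):
    """Produce one output channel per reception direction by applying that
    direction's gain (dB) and delay (ms) to the mono input signal."""
    outputs = {}
    for direction, params in characteristics.items():
        gain = 10.0 ** (params["gain_db"] / 20.0)
        delay_samples = int(round(params["delay_ms"] * 1e-3 * sample_rate))
        delayed = np.concatenate([np.zeros(delay_samples), signal])[: len(signal)]
        outputs[direction] = gain * delayed
    return outputs

# Example: a short 440 Hz tone rendered for two reception directions.
t = np.arange(0, 0.1, 1.0 / 48000)
tone = np.sin(2 * np.pi * 440 * t)
chars = {0.0: {"gain_db": 0.0, "delay_ms": 0.0}, 90.0: {"gain_db": -4.5, "delay_ms": 1.2}}
outputs = apply_characteristics(tone, chars)
print({direction: channel[:3] for direction, channel in outputs.items()})
```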
  • The user interface 150 provides an interface between the user and the 3D sound generating apparatus 100. That is, the user interface 150 provides a means for the user to enter commands and input information into the stereophonic sound generating apparatus 100, and receives that input information from the user.
  • the user interface unit 150 may receive azimuth information corresponding to a plurality of reception directions as input information.
  • The display 160 provides a graphical user interface screen on which the listener object and the sound source object in the virtual space are arranged, and through this screen may display the paths along which the sound signal transmitted from the sound source object reaches the plurality of reception directions divided about the listener object.
  • the display 160 may display an acoustic characteristic of an acoustic signal transmitted in each receiving direction in the graphical user interface screen.
  • The audio circuit 170 receives the acoustic signal to which the acoustic characteristics corresponding to each of the plurality of reception directions have been applied, converts it into an electrical signal, and transmits the electrical signal to the speaker 180.
  • the speaker 180 converts the received electric signal into a sound wave that can be heard by a human being and outputs it. Meanwhile, in the present exemplary embodiment, the speaker 180 may be implemented as a device separate from the 3D sound generating device.
  • FIG. 2 is an exemplary view for explaining a method of identifying a positional relationship between a listener object and a sound source object in a virtual space according to the present embodiment.
  • The stereoscopic sound generating apparatus 100 identifies the positional relationship between the listener object and the sound source object in the virtual space, and, based on that positional relationship, differently determines the acoustic characteristics with which the sound signal output from the sound source object is transmitted for each of a plurality of reception directions centered on the listener object.
  • To this end, the stereophonic sound generating apparatus 100 divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells.
  • the stereo sound generating apparatus 100 according to the present embodiment preferably divides the virtual space into 46 to 1640 cells, but is not necessarily limited thereto.
  • The 3D sound generating apparatus 100 may divide the virtual space into various numbers of cells according to the user's selection. Meanwhile, when the number of divided cells is large, the positional relationship between the listener object and the sound source object can be determined more accurately.
  • The separation distance and the separation direction of the sound source object relative to the listener object are calculated as the positional relationship between the listener object and the sound source object.
  • The separation distance may be calculated in units of cells, and the separation direction may include azimuth information and elevation information.
  • FIG. 3 is an exemplary diagram illustrating a plurality of reception directions divided by a listener object according to the present embodiment.
  • the sound output from the sound source object in the real space reaches the listener object while spreading in various directions such as left, right, and rear as well as the front direction according to the propagation characteristics.
  • In particular, the sound propagating in the left, right, and rear directions differs in arrival time and incidence direction by the time it reaches the listener object.
  • Accordingly, the 3D sound generating apparatus 100 sets, with respect to the listener object, the reception directions in which the respective sounds are transmitted, and applies different acoustic characteristics to the sound signal output from the sound source object for each set reception direction, thereby realizing a 3D sound effect identical to the real sound.
  • FIG. 3 shows an implementation form of a plurality of reception directions set around the listener object.
  • FIG. 3A illustrates an implementation of a plurality of reception directions when the virtual space is two-dimensional.
  • FIG. 3B illustrates an implementation of a plurality of reception directions when the virtual space is three-dimensional.
  • FIGS. 4 and 5 are exemplary views illustrating a method for providing stereo sound according to the present embodiment.
  • FIG. 4 is an exemplary diagram illustrating a form in which a sound signal output from a sound source object is transmitted in each of a plurality of reception directions according to the positional relationship between the listener object and the sound source object. FIG. 4A illustrates a case where there is one sound source object, and FIG. 4B illustrates a case where there are two sound source objects.
  • FIG. 5 is an exemplary diagram illustrating a case where a positional relationship between a listener object and a sound source object is changed.
  • FIG. 6 is a flowchart illustrating a stereoscopic sound providing method according to the present embodiment.
  • the stereophonic sound generating apparatus 100 detects positions of the listener object and one or more sound source objects in the virtual space, and determines the positional relationship between the listener object and the sound source object (S602).
  • Specifically, the 3D sound generating apparatus 100 divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells.
  • The stereoscopic sound generating apparatus 100 determines, based on the positional relationship identified in step S602, the acoustic characteristic with which the acoustic signal output from the sound source object is transmitted in each of a plurality of reception directions centered on the listener object (S604).
  • Specifically, the 3D sound generating apparatus 100 uses the reference data stored in the storage 140, based on the positional relationship between the listener object and the sound source object determined in step S602, to differently determine for each of the plurality of reception directions the acoustic characteristics with which the acoustic signal output from the sound source object is transmitted.
  • To this end, the stereophonic sound generating apparatus 100 calculates, based on the separation distance between the listener object and the sound source object identified in step S602, how many cells apart the listener object and the sound source object are.
  • The stereophonic sound generating apparatus 100 then multiplies the acoustic parameter variation value corresponding to the reference reception direction in the storage 140 by a coefficient corresponding to the calculated difference value, and thereby determines the acoustic characteristic for each of the plurality of reception directions corresponding to the positional relationship between the current listener object and the sound source object.
  • the stereophonic sound generating apparatus 100 generates stereoscopic sound by applying acoustic characteristics corresponding to each of the plurality of receiving directions to the acoustic signal (S606).
  • When movement of either object in the virtual space is detected (S608), the 3D sound generating apparatus 100 re-detects the positional relationship between the listener object and the sound source object, and re-determines the acoustic characteristics of the acoustic signal for each of the plurality of reception directions based on the re-detected positional relationship and the corresponding reference data in the storage 140 (S610).
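  • Putting the steps of FIG. 6 together, a compact, self-contained sketch of the control flow could read as follows; every name, the eight-direction layout, and the per-cell gain change used here are placeholders for the processing described above, not values or an API defined by the disclosure.

```python
import math

RECEPTION_DIRECTIONS = [i * 45.0 for i in range(8)]   # assumed 8 azimuth-divided directions

def angular_distance(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def run_once(listener_cell, source_cell):
    # S602: positional relationship (distance in cells, separation azimuth).
    dx, dy = source_cell[0] - listener_cell[0], source_cell[1] - listener_cell[1]
    distance = math.hypot(dx, dy)
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0

    # S604: reference reception direction, then per-direction gain change
    # (assumed: -1 dB per cell plus an extra penalty per degree away from the reference).
    reference = min(RECEPTION_DIRECTIONS, key=lambda az: angular_distance(az, azimuth))
    characteristics = {
        az: (-1.0 - 0.02 * angular_distance(az, reference)) * distance
        for az in RECEPTION_DIRECTIONS
    }

    # S606: a signal controller would now apply these per-direction values to the signal.
    # S608/S610: whenever either object moves, this function is simply run again.
    return reference, characteristics

print(run_once((0, 0), (3, 4)))
```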
  • Although FIG. 6 describes each process as being executed sequentially, the method is not necessarily limited thereto. That is, the processes described in FIG. 6 may be modified, or one or more of them may be executed in parallel, so FIG. 6 is not limited to a time-series order.
  • The stereophonic sound generating method described in FIG. 6 may be implemented in a program and recorded on a computer-readable recording medium (CD-ROM, RAM, ROM, memory card, hard disk, magneto-optical disk, storage device, etc.).
  • 100: stereophonic sound generating apparatus, 110: detector

Abstract

According to the present embodiment, a stereophonic sound generating device and a computer program therefor are provided, wherein the stereophonic sound generating device detects the positional relationship between a listener object and sound source objects in a virtual space and, on the basis of the detected positional relationship, differently sets the acoustic characteristics with which sound signals output from the sound source objects are transmitted in each of a plurality of reception directions divided about the listener, thereby making it possible to realize the same 3D sound effect as real sound.
PCT/KR2017/004677 2017-05-02 2017-05-02 Dispositif de génération de son stéréophonique et programme informatique associé WO2018203579A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/004677 WO2018203579A1 (fr) 2017-05-02 2017-05-02 Dispositif de génération de son stéréophonique et programme informatique associé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/004677 WO2018203579A1 (fr) 2017-05-02 2017-05-02 Dispositif de génération de son stéréophonique et programme informatique associé

Publications (1)

Publication Number Publication Date
WO2018203579A1 (fr) 2018-11-08

Family

ID=64016155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/004677 WO2018203579A1 (fr) 2017-05-02 2017-05-02 Dispositif de génération de son stéréophonique et programme informatique associé

Country Status (1)

Country Link
WO (1) WO2018203579A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0991460A (ja) * 1995-09-26 1997-04-04 Nippon Telegr & Teleph Corp <Ntt> 音場制御方法
WO2014036121A1 (fr) * 2012-08-31 2014-03-06 Dolby Laboratories Licensing Corporation Système conçu pour le rendu et la lecture d'un son basé sur un objet dans divers environnements d'écoute
US20160360334A1 (en) * 2014-02-26 2016-12-08 Tencent Technology (Shenzhen) Company Limited Method and apparatus for sound processing in three-dimensional virtual scene
US20170013386A1 (en) * 2015-07-06 2017-01-12 Bose Corporation Simulating Acoustic Output at a Location Corresponding to Source Position Data
US9602946B2 (en) * 2014-12-19 2017-03-21 Nokia Technologies Oy Method and apparatus for providing virtual audio reproduction

Similar Documents

Publication Publication Date Title
WO2015060660A1 (fr) Procédé de génération de signal audio multiplex, et appareil correspondant
CN1658709B (zh) 声音再现设备和声音再现方法
WO2014178479A1 (fr) Lunettes intégrales et procédé de fourniture de contenus au moyen de celles-ci
WO2016027930A1 (fr) Dispositif portatif et son procédé de commande
WO2013103256A1 (fr) Procédé et dispositif de localisation d&#39;un signal audio multicanal
WO2011139090A2 (fr) Procédé et appareil de reproduction de son stéréophonique
US9769585B1 (en) Positioning surround sound for virtual acoustic presence
JP2000152397A (ja) 複数の聴取者用3次元音響再生装置及びその方法
BR112012023504B1 (pt) Método de reproduzir som estereofônico, equipamento para reproduzir som estereofônico, e meio de gravação legível por computador
TW201215179A (en) Virtual spatial sound scape
EP2737727A2 (fr) Procédé et appareil conçus pour le traitement d&#39;un signal audio
US10979809B2 (en) Combination of immersive and binaural sound
WO2015152661A1 (fr) Procédé et appareil pour restituer un objet audio
US9843883B1 (en) Source independent sound field rotation for virtual and augmented reality applications
US20120109645A1 (en) Dsp-based device for auditory segregation of multiple sound inputs
Kyriakakis et al. Signal processing, acoustics, and psychoacoustics for high quality desktop audio
WO2018203579A1 (fr) Dispositif de génération de son stéréophonique et programme informatique associé
EP3499917A1 (fr) Activation du rendu d&#39;un contenu spatial audio pour consommation par un utilisateur
Malham Toward reality equivalence in spatial sound diffusion
WO2018194320A1 (fr) Dispositif de commande audio spatial selon le suivi du regard et procédé associé
JP2011234177A (ja) 立体音響再生装置及び再生方法
KR101038574B1 (ko) 3차원 오디오 음상 정위 방법과 장치 및 이와 같은 방법을 구현하는 프로그램이 기록되는 기록매체
CN108668215A (zh) 全景音域系统
WO2014171791A1 (fr) Appareil et procédé de traitement de signal audio multicanal
WO2018070564A1 (fr) Procédé et dispositif de sortie sonore d&#39;affichages multiples

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17908188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17908188

Country of ref document: EP

Kind code of ref document: A1