WO2018203579A1 - Stereophonic sound generating device and computer program therefor - Google Patents

Stereophonic sound generating device and computer program therefor

Info

Publication number
WO2018203579A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
plurality
object
sound source
directions
Prior art date
Application number
PCT/KR2017/004677
Other languages
French (fr)
Korean (ko)
Inventor
하수호
Original Assignee
하수호
Priority date
Filing date
Publication date
Application filed by 하수호 filed Critical 하수호
Priority to PCT/KR2017/004677 priority Critical patent/WO2018203579A1/en
Publication of WO2018203579A1 publication Critical patent/WO2018203579A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control

Abstract

The present embodiment relates to a stereophonic sound generating device and a computer program therefor. The stereophonic sound generating device detects the positional relationship between a listener object and sound source objects in a virtual space and, on the basis of the detected positional relationship, sets different acoustic characteristics for the sound signals output from the sound source objects in each of a plurality of reception directions divided about the listener, thereby realizing a 3D sound effect identical to actual sound.

Description

Stereophonic sound generating device and computer program therefor

The present embodiment relates to a stereophonic sound generating apparatus and a computer program therefor and, more particularly, to a stereophonic sound generating apparatus and a computer program therefor capable of implementing 3D sound effects identical to actual sounds.

The contents described below merely provide background information related to the present embodiment and do not constitute prior art.

Recently, as audio system technology has developed, stereophonic sound technologies that make sound reproduced by a sound system seem as if it were heard in the field have been researched and developed. In addition, with the release of 3D TVs, 3D stereoscopic images that provide realism and immersion, such as 3D movies, have become widespread. However, existing acoustic systems require two or more speakers to realize 3D sound and, unless binaural three-dimensional audio using a head-related transfer function (HRTF) is provided, fail to properly convey height, distance, and a sense of space. To address this, a method has been proposed that realizes a 3D sound effect by detecting the distance between a speaker and a listener and adjusting the volume of the sound based on that distance. However, this method considers only the acoustic characteristics of the straight-line sound that travels directly outward from the speaker, among the sounds spreading from the speaker in various directions, and does not consider the acoustic characteristics of sound spreading in the other directions at all.

Therefore, a new technology is needed that realizes a 3D sound effect identical to actual sound by considering not only the straight-line sound but also the acoustic characteristics of the other sounds that spread to the left, right, and rear and reach the listener later than the straight-line sound.

A main object of the present embodiment is to provide a stereophonic sound generating apparatus, and a computer program therefor, that identify the positional relationship between a listener object and a sound source object in a virtual space and, based on the identified positional relationship, differently implement the acoustic characteristics with which the sound signal output from the sound source object is transmitted in each of a plurality of reception directions centered on the listener object, thereby realizing a 3D sound effect identical to actual sound.

According to one aspect, the present embodiment provides a stereophonic sound generating apparatus comprising: a detector configured to detect the positions of a listener object and a sound source object in a virtual space and determine the positional relationship between the listener object and the sound source object; a parameter determiner configured to determine, based on the positional relationship in the virtual space, the acoustic characteristics with which a sound signal output from the sound source object is transmitted in each of a plurality of reception directions divided about the listener object; and a signal controller configured to generate a stereophonic sound by applying, to the sound signal, the acoustic characteristics corresponding to each of the plurality of reception directions in which the sound signal is transmitted.

According to another aspect, the present embodiment provides a computer program stored in a recording medium which, in combination with hardware, executes the processes of: detecting the positions of a listener object and a sound source object in a virtual space and determining the positional relationship between the listener object and the sound source object; determining, based on the positional relationship in the virtual space, the acoustic characteristics with which a sound signal output from the sound source object is transmitted in each of a plurality of directions divided about the listener object; and generating a stereophonic sound by applying, to the sound signal, the acoustic characteristics corresponding to each of the plurality of directions in which the sound signal is transmitted.

According to the present embodiment, the positional relationship between the listener object and the sound source object in the virtual space is identified and, based on the identified positional relationship, the acoustic characteristics with which the sound signal output from the sound source object is transmitted are implemented differently for each of a plurality of reception directions centered on the listener object, so that 3D sound identical to actual sound can be realized.

FIG. 1 is a block diagram schematically showing a stereophonic sound generating apparatus according to the present embodiment.

FIG. 2 is an exemplary view for explaining a method of identifying a positional relationship between a listener object and a sound source object in a virtual space according to the present embodiment.

FIG. 3 is an exemplary diagram illustrating a plurality of reception directions divided about a listener object according to the present embodiment.

FIGS. 4 and 5 are exemplary diagrams illustrating a method for providing stereophonic sound according to the present embodiment.

FIG. 6 is a flowchart illustrating a stereophonic sound providing method according to the present embodiment.

Hereinafter, the present embodiment will be described in detail with reference to the accompanying drawings.

The stereophonic sound generating apparatus according to the present embodiment is a device for realizing a stereophonic sound effect, and is applicable not only to audio systems but also to various fields such as games, artificial intelligence (AI), augmented reality (AR), and virtual reality (VR). In the present embodiment, the application field of the stereophonic sound generating apparatus is not limited to any specific field.

FIG. 1 is a block diagram schematically showing a stereophonic sound generating apparatus according to the present embodiment.

FIG. 1 is an internal block diagram of a stereophonic sound generating apparatus 100 according to an embodiment of the present invention. The stereophonic sound generating apparatus 100 is implemented as a stand-alone device in which the hardware of a terminal and the software of a stereophonic sound generating application are combined. The components included in the stereophonic sound generating apparatus 100 may each be implemented as software or hardware elements.

The stereophonic sound generating apparatus 100 according to the present embodiment includes a detector 110, a parameter determiner 120, a storage 130, a signal controller 140, a user interface 150, a display 160, an audio circuit 170, and a speaker 180. The components included in the stereophonic sound generating apparatus 100 are not necessarily limited thereto.

The detector 110 detects a positional relationship between a listener object and a sound source object used as a reference parameter for stereoscopic sound generation.

The detector 110 detects a position of a listener object and at least one sound source object in a virtual space to determine a positional relationship between the listener object and the sound source object. Here, the virtual space may be an arbitrary space generated corresponding to a game space, an AI space, or an actual space in which sound is provided, according to an application field of the 3D sound generating apparatus 100.

In the present embodiment, the method of detecting the virtual space by the detector 110 is not limited to a specific method. For example, the detection unit 110 may receive data related to the virtual space through interworking with a server device (not shown) or a storage medium.

The detector 110 divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located.

The detector 110 may calculate, as the positional relationship between the listener object and the sound source object, the separation distance and the separation direction between the two objects. To this end, the detector 110 may assign a unique coordinate value to each cell divided according to the lattice structure.
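As a concrete illustration, the cell-based detection described above can be sketched in a few lines of Python; the (column, row) coordinate convention, the function name, and the choice of Euclidean distance are assumptions made for illustration, not details fixed by the patent:

```python
import math

def positional_relationship(listener_cell, source_cell):
    """Compute the separation distance (in cell units) and separation
    direction (azimuth in degrees, 0 = straight ahead along +y) between
    a listener cell and a sound-source cell on a 2-D lattice.

    The (col, row) coordinate convention is an illustrative assumption."""
    dx = source_cell[0] - listener_cell[0]
    dy = source_cell[1] - listener_cell[1]
    distance = math.hypot(dx, dy)                      # cell-unit distance
    azimuth = math.degrees(math.atan2(dx, dy)) % 360   # clockwise from front
    return distance, azimuth
```

For example, a source three cells to the right and four cells ahead of the listener lies five cell units away, at roughly 37 degrees to the right of straight ahead.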

The parameter determiner 120 determines a sound characteristic when the sound signal output from the sound source object is transmitted to the listener object in order to generate 3D sound.

Meanwhile, sound output from a sound source in real space spreads not only in the forward direction but also in various directions such as left, right, and rear according to its propagation characteristics. The sound propagating in each direction differs in the time it takes to reach the listener object and in its direction of incidence. In other words, sounds propagating in directions other than the straight-line direction have different acoustic characteristics when reaching the listener object, such as sound intensity, arrival time difference, incidence direction, and decay time, due to factors such as attenuation and reflection.

In consideration of this, the parameter determiner 120 according to the present embodiment differently determines the acoustic characteristics with which sound signals output from the sound source object are transmitted in each of a plurality of reception directions centered on the listener object. That is, the parameter determiner 120 can realize a 3D sound effect identical to actual sound by setting the sound intensity, arrival time difference, and decay time of the sound signal transmitted from the sound source object differently for each of the plurality of reception directions centered on the listener object.

For example, among the plurality of directions, the parameter determiner 120 may determine that the loudness of the sound signal arriving in the first reception direction, in which the sound signal travels straight from the sound source object, is greater than the loudness of the sound signal arriving in the other reception directions.

The parameter determiner 120 may determine the acoustic characteristics such that, for sound signals transmitted in the reception directions to the left or right of the first reception direction, the loudness gradually decreases with increasing angular distance from the first reception direction.
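The loudness falloff described in the two preceding paragraphs can be sketched as follows; the function name and the 3 dB-per-step attenuation are illustrative assumptions rather than values taken from the patent:

```python
def direction_gains(reference_dir, directions, falloff_db_per_step=3.0):
    """Per-direction loudness sketch: the reception direction aligned with
    the straight-line (reference) direction gets full level; directions
    farther to the left or right are attenuated progressively.

    The 3 dB-per-step falloff is an assumed illustrative value."""
    n = len(directions)
    gains = {}
    for i, name in enumerate(directions):
        # angular steps between this direction and the reference direction,
        # measured the short way around the listener-centred circle
        steps = min((i - reference_dir) % n, (reference_dir - i) % n)
        gains[name] = 10 ** (-falloff_db_per_step * steps / 20)  # dB -> linear
    return gains
```

With four directions and the reference at "front", "left" and "right" receive equal intermediate gains and "rear" the smallest, matching the symmetric left/right falloff described above.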

Meanwhile, the plurality of reception directions may be determined based on the directions in which sound spreading in each direction in real space is perceived by the listener's ears when it reaches the listener, and may preferably be divided based on azimuth information. The plurality of reception directions may be set variously by the user.

According to the present embodiment, the parameter determiner 120 may differently determine the acoustic characteristics with which the sound signal output from the sound source object is transmitted in each of the plurality of reception directions by further utilizing reference data stored in the storage 130, on the basis of the positional relationship between the listener object and the sound source object identified by the detector 110. To this end, the storage 130 may store acoustic parameter variation values for each of the plurality of reception directions based on the positional relationship between the listener object and the sound source object; a detailed description is given later in connection with the storage 130.

Based on the positional relationship between the listener object and the sound source object detected by the detector 110, the parameter determiner 120 determines in which reception direction the current listener object is spaced apart from the sound source object. That is, based on the separation direction between the listener object and the sound source object identified by the detector 110, the parameter determiner 120 determines in which of the plurality of reception directions (hereinafter, the reference reception direction) the current listener object lies relative to the sound source object.
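One way the reference reception direction could be derived from the separation direction is to quantize the azimuth into equal sectors around the listener; the eight-direction split and the function name below are assumed for illustration:

```python
def reference_direction(azimuth_deg, num_directions=8):
    """Sketch of picking the reference reception direction: the separation
    azimuth is quantized into one of `num_directions` equal sectors around
    the listener, with sector 0 centred on straight ahead (0 degrees).

    The eight-way split is an assumed example value."""
    sector = 360.0 / num_directions
    # shift by half a sector so each sector is centred on its axis
    return int(((azimuth_deg % 360) + sector / 2) // sector) % num_directions
```

An azimuth of 0 degrees maps to sector 0 (front), 90 degrees to sector 2 (right), and 359 degrees wraps back to sector 0.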

The parameter determiner 120 determines the acoustic characteristics of the sound signal for each of the plurality of reception directions based on the reference data in the storage 130 corresponding to the reference reception direction.

When movement of either the listener object or the sound source object in the virtual space is detected, the parameter determiner 120 re-detects the positional relationship between the listener object and the sound source object. Thereafter, the parameter determiner 120 re-determines the acoustic characteristics of the sound signal for each of the plurality of reception directions based on the re-detected positional relationship and the corresponding reference data in the storage 130.

When there are a plurality of sound source objects in the virtual space, the parameter determiner 120 may identify the positional relationship between each sound source object and the listener object, and determine, based on each identified positional relationship and the corresponding reference data in the storage 130, the acoustic characteristics of the sound signal output from each sound source object for each of the plurality of reception directions.
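For the multiple-sound-source case, the per-source contributions would ultimately be combined in each reception direction; a simple additive mix, sketched below with hypothetical names, is one plausible (assumed) realization:

```python
def mix_sources(signals_by_source, gains_by_source):
    """Sketch for the multiple-sound-source case: each source's signal is
    weighted by its own gain for a given reception direction (determined
    from that source's positional relationship) and the contributions are
    summed sample by sample.

    The function name and the plain additive mix are assumptions."""
    length = max(len(s) for s in signals_by_source.values())
    mixed = [0.0] * length
    for name, signal in signals_by_source.items():
        gain = gains_by_source[name]
        for i, sample in enumerate(signal):
            mixed[i] += sample * gain
    return mixed
```

Signals of unequal length are padded implicitly: a shorter source simply stops contributing once its samples run out.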

The storage 130 stores the information necessary for generating stereophonic sound.

The storage 130 stores, for each virtual space, the acoustic characteristics of the sound signal in each of the plurality of reception directions according to the positional relationship between the listener object and the sound source object. That is, the storage 130 stores the acoustic parameter variation value for each of the plurality of reception directions corresponding to the listener object being separated from the sound source object by one unit distance in each of the plurality of reception directions divided about the listener object. The unit distance is preferably a value corresponding to the length of one cell when the virtual space is divided into a plurality of cells according to the lattice structure, but is not necessarily limited thereto.

The acoustic parameter variation value preferably comprises variation values for some or all of the sound intensity, arrival time difference, and decay time of the sound signal, but is not necessarily limited thereto.

Meanwhile, the parameter determiner 120 according to the present embodiment may determine the acoustic characteristics for each of the plurality of reception directions based on the acoustic parameter variation values in the storage 130 corresponding to the reference reception direction.

To this end, the parameter determiner 120 first calculates, based on the separation distance between the listener object and the sound source object identified by the detector 110, how many cells apart the listener object and the sound source object are.

Thereafter, the parameter determiner 120 multiplies the acoustic parameter variation value by a coefficient corresponding to the calculated cell-distance value, and can thereby determine the acoustic characteristics for each of the plurality of reception directions corresponding to the current positional relationship between the listener object and the sound source object.
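A minimal sketch of this coefficient-times-variation computation follows; the parameter names (`gain_db`, `delay_ms`) are hypothetical placeholders, since the patent does not fix a parameter schema:

```python
def direction_parameters(base_params, variations, cell_distance):
    """Parameter-determiner sketch: for each reception direction, the stored
    per-unit-distance variation value is scaled by a coefficient equal to
    the listener-source distance in cells, then applied to a base set.

    Field names (gain_db, delay_ms) are illustrative assumptions."""
    result = {}
    for direction, var in variations.items():
        result[direction] = {
            key: base_params[key] + var.get(key, 0.0) * cell_distance
            for key in base_params
        }
    return result
```

Because only per-unit variation values are stored and distances are applied as multipliers at lookup time, no table of precomputed characteristics per distance is needed, which matches the storage-minimizing rationale given below.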

Owing to the form in which the storage 130 provides the acoustic parameter variation values and the way the parameter determiner 120 uses them to determine acoustic characteristics, the stereophonic sound generating apparatus 100 according to the present embodiment has the effect of minimizing the data that must be stored to generate stereophonic sound.

The signal controller 140 applies the acoustic characteristics corresponding to each of the plurality of reception directions to the sound signal and outputs the result.

The signal controller 140 receives from the parameter determiner 120 a control command including the acoustic characteristics determined for each of the plurality of reception directions, and adjusts and outputs the sound signals in the plurality of reception directions based on the control command.
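A minimal sketch of what such output adjustment could look like for one reception direction, assuming the acoustic characteristics reduce to a linear gain and an arrival delay in samples (decay time is omitted for brevity; all names are illustrative):

```python
def apply_characteristics(signal, gain, delay_samples):
    """Signal-controller sketch: apply a per-direction linear gain and an
    arrival-time delay (in samples) to a mono signal, yielding the output
    for one reception direction.

    A real implementation would also model decay time; this minimal
    version handles level and delay only."""
    # prepend silence for the delay, then scale every sample by the gain
    return [0.0] * delay_samples + [s * gain for s in signal]
```

Running this once per reception direction, with the parameters determined above, yields one adjusted signal per direction.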

The user interface 150 provides an interface between the user and the stereophonic sound generating apparatus 100. That is, the user interface 150 provides a means by which the user can input commands and information to the stereophonic sound generating apparatus 100, and receives such input information from the user.

The user interface 150 according to the present embodiment may receive, as input information, azimuth information corresponding to the plurality of reception directions.

The display 160 provides a graphical user interface screen on which the listener object and the sound source object in the virtual space are arranged, and through this screen can display the paths along which the sound signal transmitted from a sound source object travels to the plurality of reception directions divided about the listener object.

The display 160 may display, on the graphical user interface screen, the acoustic characteristics of the sound signal transmitted in each reception direction.

The audio circuit 170 receives the sound signal to which the acoustic characteristics corresponding to each of the plurality of reception directions have been applied, converts it into an electrical signal, and transmits the electrical signal to the speaker 180.

The speaker 180 converts the received electrical signal into sound waves audible to humans and outputs them. Meanwhile, in the present embodiment, the speaker 180 may be implemented as a device separate from the stereophonic sound generating apparatus.

FIG. 2 is an exemplary view for explaining a method of identifying a positional relationship between a listener object and a sound source object in a virtual space according to the present embodiment.

The stereophonic sound generating apparatus 100 according to the present embodiment identifies the positional relationship between a listener object and a sound source object in the virtual space and, based on this positional relationship, differently determines the acoustic characteristics with which the sound signal output from the sound source object is transmitted in each of a plurality of reception directions centered on the listener object.

As shown in FIG. 2, the stereophonic sound generating apparatus 100 according to the present embodiment divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells. In this case, the stereophonic sound generating apparatus 100 preferably divides the virtual space into 46 to 1640 cells, but is not necessarily limited thereto. For example, the stereophonic sound generating apparatus 100 may divide the virtual space into various numbers of cells according to the user's selection. The larger the number of cells, the more accurately the positional relationship between the listener object and the sound source object can be determined.

Referring to FIG. 2, it can be seen that the separation distance and separation direction of the sound source object relative to the listener object are calculated as the positional relationship between the listener object and the sound source object. In this case, the separation distance may be calculated in units of cells, and the separation direction may include azimuth information.

3 is an exemplary diagram illustrating a plurality of reception directions divided by a listener object according to the present embodiment.

Sound output from a sound source object in real space reaches the listener object while spreading not only in the forward direction but also in various directions such as left, right, and rear according to its propagation characteristics. In this case, apart from the sound propagating in the forward direction, the sounds propagating in the left, right, and rear directions differ in the time it takes them to reach the listener object and in their directions of incidence.

In consideration of this, the stereophonic sound generating apparatus 100 according to the present embodiment sets the reception directions in which the respective sounds are transmitted relative to the listener object, and realizes a 3D sound effect identical to actual sound by differently determining the acoustic characteristics with which the sound signal output from the sound source object is transmitted in each set reception direction.

Referring to FIG. 3, implementation forms of the plurality of reception directions set around the listener object can be seen. FIG. 3A illustrates the plurality of reception directions when the virtual space is two-dimensional, and FIG. 3B illustrates the plurality of reception directions when the virtual space is three-dimensional.

FIGS. 4 and 5 are exemplary diagrams illustrating a method for providing stereophonic sound according to the present embodiment.

FIG. 4 is an exemplary diagram illustrating the form in which a sound signal output from a sound source object is transmitted to each of a plurality of reception directions according to the positional relationship between a listener object and the sound source object. FIG. 4A illustrates the case of one sound source object, and FIG. 4B illustrates the case of two sound source objects.

Referring to FIG. 4, it can be seen that the arrival time of the sound signal output from the sound source object in each reception direction of the listener object, and its direction of incidence, are set differently.

FIG. 5 is an exemplary diagram illustrating a case where the positional relationship between the listener object and the sound source object changes.

Referring to FIG. 5, it can be seen that the form in which the sound signal output from the sound source object is transmitted to each of the plurality of reception directions is reset according to the change in the positional relationship between the listener object and the sound source object.

FIG. 6 is a flowchart illustrating a stereophonic sound providing method according to the present embodiment.

The stereophonic sound generating apparatus 100 detects the positions of the listener object and one or more sound source objects in the virtual space and determines the positional relationship between the listener object and the sound source object (S602). In step S602, the stereophonic sound generating apparatus 100 divides the virtual space into a plurality of cells according to a lattice structure and identifies the positional relationship between the listener object and the sound source object by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells.

The stereophonic sound generating apparatus 100 determines the acoustic characteristics with which the sound signal output from the sound source object is transmitted in each of a plurality of reception directions centered on the listener object, based on the positional relationship identified in step S602 (S604). In step S604, the stereophonic sound generating apparatus 100 differently determines, for each of the plurality of reception directions, the acoustic characteristics with which the sound signal output from the sound source object is transmitted, using the reference data stored in the storage 130 on the basis of the positional relationship between the listener object and the sound source object determined in step S602.

Based on the separation direction between the listener object and the sound source object identified in step S602, the stereophonic sound generating apparatus 100 determines in which reception direction (the reference reception direction) the current listener object is spaced apart from the sound source object.

Based on the separation distance between the listener object and the sound source object identified in step S602, the stereophonic sound generating apparatus 100 calculates how many cells apart the listener object and the sound source object are.

The stereophonic sound generating apparatus 100 determines the acoustic characteristics for each of the plurality of reception directions corresponding to the current positional relationship between the listener object and the sound source object by multiplying the acoustic parameter variation value corresponding to the reference reception direction in the storage 130 by a coefficient corresponding to the calculated cell-distance value.

The stereophonic sound generating apparatus 100 generates stereoscopic sound by applying acoustic characteristics corresponding to each of the plurality of receiving directions to the acoustic signal (S606).

When movement of an object in the virtual space is detected (S608), the stereophonic sound generating apparatus 100 re-detects the positional relationship between the listener object and the sound source object, and re-determines the acoustic characteristics of the sound signal for each of the plurality of reception directions based on the re-detected positional relationship and the corresponding reference data in the storage 130 (S610).
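Steps S602 through S610 can be condensed into a small sketch: detect the cell distance, scale the stored variation values by it, and hand the result to the signal controller; re-invoking the function after an object moves corresponds to S608 and S610. All names and the scalar parameter model are illustrative assumptions:

```python
import math

def stereo_frame(listener, source, variations, base=0.0):
    """One pass of the FIG. 6 flow: S602 detect positions and cell distance,
    S604 scale each reception direction's stored variation value by that
    distance, S606 return the per-direction parameters for the signal
    controller. Scalar parameters per direction are a simplifying
    assumption."""
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    cells = round(math.hypot(dx, dy))                                  # S602
    params = {d: base + var * cells for d, var in variations.items()}  # S604
    return cells, params                                               # S606
```

Calling `stereo_frame` again with the new object positions after a detected movement reproduces the re-determination of steps S608 and S610.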

Although each process in FIG. 6 is described as being executed sequentially, this is not necessarily a limitation. That is, the processes described in FIG. 6 may be performed in a different order or executed in parallel, so FIG. 6 is not limited to a time-series order.

Meanwhile, the stereophonic sound generating method described in FIG. 6 may be implemented as a program and recorded on a computer-readable recording medium (a CD-ROM, RAM, ROM, memory card, hard disk, magneto-optical disk, storage device, or the like).

The above description merely illustrates the technical idea of the present embodiment, and those skilled in the art to which the present embodiment belongs may make various modifications and changes without departing from its essential characteristics. Therefore, the present embodiment is intended not to limit but to describe the technical idea, and the scope of the technical idea is not limited by this embodiment. The scope of protection of the present embodiment should be interpreted according to the following claims, and all technical ideas within a scope equivalent thereto should be construed as falling within the scope of the present embodiment.

(Explanation of the sign)

100: stereo sound generating device 110: detection unit

120: parameter determination unit 130: storage unit

140: signal controller 150: user interface unit

160: display unit 170: audio circuit

180: speaker

Claims (13)

  1. A stereophonic sound generating device comprising:
    a detector configured to detect positions of a listener object and a sound source object in a virtual space and to determine a positional relationship between the listener object and the sound source object;
    a parameter determiner configured to determine, based on the positional relationship in the virtual space, an acoustic characteristic with which a sound signal output from the sound source object is transmitted in each of a plurality of reception directions divided about the listener object; and
    a signal controller configured to generate a stereophonic sound by applying, to the sound signal, the acoustic characteristics corresponding to each of the plurality of reception directions in which the sound signal is transmitted.
  2. The device of claim 1, wherein the detector divides the virtual space into a plurality of cells according to a lattice structure, and identifies the positional relationship, including a separation distance and a separation direction between the objects, by analyzing a relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells.
  3. The device of claim 2, further comprising a storage configured to store an acoustic parameter variation value for each of the plurality of reception directions corresponding to the listener object being separated from the sound source object by a unit distance in each of the plurality of reception directions.
  4. The device of claim 3, wherein the acoustic parameter variation value comprises variation values for some or all of a sound intensity, an arrival time difference, and a decay time of the sound signal.
  5. The device of claim 3, wherein the parameter determiner identifies, based on the positional relationship determined using the detector, in which of the plurality of reception directions the current listener object lies relative to the sound source object, and determines the acoustic characteristics for each of the plurality of reception directions based on the acoustic parameter variation value in the storage corresponding to the identified reception direction.
  6. The device of claim 5, wherein the parameter determiner determines the acoustic characteristics such that a loudness of the sound signal arriving in a first reception direction, in which the sound signal travels straight from the sound source object among the plurality of directions, is greater than a loudness of the sound signal arriving in the other reception directions.
  7. The stereophonic sound generating device of claim 6, wherein, for acoustic signals transmitted in reception directions to the left or right of the first reception direction among the plurality of reception directions, the parameter determiner determines the acoustic characteristics such that the loudness value gradually decreases with increasing distance from the first reception direction.
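The loudness shaping of claims 6–7 (maximal gain in the straight-arrival direction, gradual falloff to either side) can be sketched as follows; the linear falloff rate and the function name are hypothetical choices, not taken from the patent:

```python
def loudness_for_directions(num_directions, straight_index,
                            base_gain=1.0, falloff=0.2):
    """Assign a gain to each reception direction: maximal in the first
    reception direction (straight arrival), gradually decreasing for
    directions farther to the left or right on the circle."""
    gains = []
    for i in range(num_directions):
        # Shortest angular step count between direction i and the
        # straight-arrival direction on a circular arrangement.
        step = min(abs(i - straight_index),
                   num_directions - abs(i - straight_index))
        gains.append(max(base_gain - falloff * step, 0.0))
    return gains

gains = loudness_for_directions(8, straight_index=0)
```

With eight directions, the gain is symmetric to the left and right of the straight-arrival direction and lowest directly opposite it, matching the "gradually decreases" requirement of claim 7.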
  8. The stereophonic sound generating device of claim 3, wherein, when movement of either the listener object or the sound source object in the virtual space is detected, the parameter determiner re-detects the positional relationship between the listener object and the sound source object and re-determines the acoustic characteristic for each of the plurality of reception directions based on the re-detected positional relationship and the acoustic parameter variation values.
  9. The stereophonic sound generating device of claim 1, further comprising a user interface unit configured to receive azimuth information corresponding to the plurality of reception directions.
  10. A computer program stored in a recording medium to execute, in combination with hardware:
    a process of detecting positions of a listener object and a sound source object in a virtual space and determining a positional relationship between the listener object and the sound source object;
    a process of determining, based on the positional relationship in the virtual space, an acoustic characteristic with which an acoustic signal output from the sound source object is transmitted in each of a plurality of reception directions divided around the listener object; and
    a process of generating a three-dimensional sound by applying, to the acoustic signal, the acoustic characteristic corresponding to each of the plurality of reception directions in which the acoustic signal is transmitted.
  11. The computer program of claim 10, wherein the process of determining the positional relationship divides the virtual space into a plurality of cells according to a lattice structure and determines the positional relationship, including a separation distance and a separation direction between the objects, by analyzing the relationship between the cell in which the listener object is located and the cell in which the sound source object is located among the plurality of cells.
  12. The computer program of claim 11, wherein the process of determining the acoustic characteristic identifies, based on the determined positional relationship, in which of the plurality of reception directions the listener object is currently separated from the sound source object, and determines the acoustic characteristic for each of the plurality of reception directions based on the identification result.
  13. The computer program of claim 12, wherein the process of determining the acoustic characteristic determines the acoustic characteristic for each of the plurality of reception directions based on the identification result and on previously stored acoustic parameter variation values for the plurality of reception directions, each value corresponding to the listener object being separated from the sound source object by a unit distance in the respective reception direction.
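The three processes of claims 10–13 (detect the positional relationship, determine per-direction characteristics, apply them to the signal) can be chained end to end. This is a minimal sketch under the same assumptions as above; the cell size, direction count, and falloff model are all hypothetical:

```python
import math

def detect(listener_pos, source_pos, cell=1.0):
    """Process 1: cell-based positional relationship (claim 11)."""
    dx = int(source_pos[0] // cell) - int(listener_pos[0] // cell)
    dy = int(source_pos[1] // cell) - int(listener_pos[1] // cell)
    return math.hypot(dx, dy) * cell, math.degrees(math.atan2(dy, dx)) % 360

def characteristics(distance, direction, num_dirs=8, falloff=0.15):
    """Process 2: one gain per reception direction, attenuated with
    distance and with angular offset from the arrival direction
    (claims 12-13)."""
    arrival = int(round(direction / (360 / num_dirs))) % num_dirs
    gains = []
    for i in range(num_dirs):
        step = min(abs(i - arrival), num_dirs - abs(i - arrival))
        gains.append(max((1.0 - falloff * step) / (1.0 + distance), 0.0))
    return gains

def render(samples, gains):
    """Process 3: apply each direction's characteristic to the signal,
    yielding one attenuated copy per reception direction."""
    return [[s * g for s in samples] for g in gains]

# Source four unit distances east of the listener, two-sample signal:
channels = render([0.5, -0.5], characteristics(*detect((0, 0), (4, 0))))
```

Each element of `channels` is the acoustic signal as heard from one reception direction; the straight-arrival direction carries the loudest copy, consistent with claim 6.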
PCT/KR2017/004677 2017-05-02 2017-05-02 Stereophonic sound generating device and computer program therefor WO2018203579A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2017/004677 WO2018203579A1 (en) 2017-05-02 2017-05-02 Stereophonic sound generating device and computer program therefor


Publications (1)

Publication Number Publication Date
WO2018203579A1 true WO2018203579A1 (en) 2018-11-08

Family ID: 64016155

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/004677 WO2018203579A1 (en) 2017-05-02 2017-05-02 Stereophonic sound generating device and computer program therefor

Country Status (1)

Country Link
WO (1) WO2018203579A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0991460A (en) * 1995-09-26 1997-04-04 Nippon Telegr & Teleph Corp <Ntt> Sound field control method
WO2014036121A1 (en) * 2012-08-31 2014-03-06 Dolby Laboratories Licensing Corporation System for rendering and playback of object based audio in various listening environments
US20160360334A1 (en) * 2014-02-26 2016-12-08 Tencent Technology (Shenzhen) Company Limited Method and apparatus for sound processing in three-dimensional virtual scene
US20170013386A1 (en) * 2015-07-06 2017-01-12 Bose Corporation Simulating Acoustic Output at a Location Corresponding to Source Position Data
US9602946B2 (en) * 2014-12-19 2017-03-21 Nokia Technologies Oy Method and apparatus for providing virtual audio reproduction



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17908188

Country of ref document: EP

Kind code of ref document: A1