CN106331977B - Virtual reality panoramic sound processing method for online karaoke (network K-song) - Google Patents

Virtual reality panoramic sound processing method for online karaoke (network K-song)

Info

Publication number
CN106331977B
CN106331977B (application CN201610704412.2A)
Authority
CN
China
Prior art keywords
audio
network
earphone
transmitting terminal
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610704412.2A
Other languages
Chinese (zh)
Other versions
CN106331977A (en)
Inventor
张晨
孙学京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tuoling Inc
Original Assignee
Beijing Tuoling Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tuoling Inc filed Critical Beijing Tuoling Inc
Priority to CN201610704412.2A priority Critical patent/CN106331977B/en
Publication of CN106331977A publication Critical patent/CN106331977A/en
Application granted granted Critical
Publication of CN106331977B publication Critical patent/CN106331977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S5/00: Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H04S5/005: Pseudo-stereo systems of the pseudo five- or more-channel type, e.g. virtual surround
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Abstract

The invention discloses a virtual reality panoramic sound processing method for online karaoke (network K-song). The method comprises the following steps: the audio capture device at the sending end captures audio data, and a sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone; a second processing unit performs superposition; a first processing unit performs the rendering computation; and the processed signal is transmitted to the playback device of the headphone for playback. By using sensing units to monitor in real time the change of the singer's position relative to the listener's ears, the singer's audio data are processed according to the singer's spatial position relative to the listener's ears, achieving a lifelike online karaoke effect.

Description

Virtual reality panoramic sound processing method for online karaoke (network K-song)
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a virtual reality panoramic sound processing method for online karaoke.
Background technology
Online karaoke (network K-song) refers to a form of entertainment that moves KTV onto the Internet. The song accompaniment is played and the lyrics are displayed over the network, and the user sings and records along with the accompaniment and the lyrics. The recorded vocal is finally mixed with the song accompaniment to form a personalized song sung by the user. Besides solo singing, there are also duet and multi-user chorus modes. Online karaoke not only satisfies personal karaoke needs but is also an important Internet social application. At present, however, the vocals recorded by one or more users in online karaoke applications carry no sense of direction, which reduces the distinguishability of the audio data and weakens the social-entertainment orientation.
When content is presented to the user through a virtual reality headset (head-mounted display, HMD), the audio content is played to the user through stereo headphones, and the problem of improving the virtual surround-sound effect arises. In virtual reality applications, when audio content is played through stereo headphones, the goal of virtual 3D audio is to make the user feel as if listening inside a loudspeaker array, and moreover as if listening to real sound in the real world.
When producing virtual reality audio content, sound elements from many different directions are usually needed. In general, the sense of presence is improved by tracking the user's head movement and processing the sound accordingly. If a sound was originally perceived as coming from the front, then after the user turns the head 90 degrees to the left, the sound should be processed so that the user perceives it as coming from 90 degrees to the right. Many kinds of virtual reality devices apply this processing, including display devices with head tracking and stereo headphones with head-tracking sensors. Head tracking can be implemented in several ways; a common one is to use multiple sensors. A motion-sensor suite generally includes an accelerometer, a gyroscope and a magnetometer. Each sensor has its own inherent strengths and weaknesses in motion tracking and absolute orientation, so good practice is to apply sensor fusion, combining the signals from the individual sensors to produce a more accurate motion-detection result (a minimal sketch of such fusion is given after this paragraph). After the head-rotation angle has been obtained, the sound needs to be modified accordingly. Online karaoke is characterized by interactive participation of multiple users distributed across different geographic locations, and participants also make various limb movements and positional movements while singing; the singer's sound therefore needs to be changed in real time at the spatial position of the headphone according to the change of the singer's position relative to the listener's ears, and only then can a lifelike online karaoke effect be achieved.
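For illustration only (not part of the patent text), the following Python sketch shows one common sensor-fusion approach, a complementary filter that blends integrated gyroscope yaw rate with an absolute magnetometer heading; the filter coefficient, sample rate and toy data are assumptions made for the sketch.

```python
import numpy as np

def fuse_yaw(gyro_z, mag_heading, dt=0.01, alpha=0.98):
    """Complementary-filter sketch: blend the fast but drifting gyroscope
    yaw rate (rad/s) with the slow but absolute magnetometer heading (rad)."""
    yaw = mag_heading[0]                     # initialise from the absolute sensor
    estimates = []
    for w, m in zip(gyro_z, mag_heading):
        yaw = alpha * (yaw + w * dt) + (1.0 - alpha) * m
        estimates.append(yaw)
    return np.array(estimates)

# toy data: a head turning at a constant 10 deg/s with noisy sensor readings
t = np.arange(0.0, 1.0, 0.01)
true_yaw = np.deg2rad(10.0) * t
gyro = np.full_like(t, np.deg2rad(10.0)) + np.random.normal(0.0, 0.01, t.size)
mag = true_yaw + np.random.normal(0.0, 0.05, t.size)
print(np.rad2deg(fuse_yaw(gyro, mag)[-1]))   # roughly 10 degrees after 1 s
```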
It can be seen that virtual reality panoramic sound is very important for the spatial impression and the social-entertainment quality of online karaoke audio, but no suitable technical solution exists in this field at present. In view of this, an effective virtual reality panoramic sound solution for online karaoke is needed.
Summary of the invention
The purpose of the present invention is to provide a virtual reality panoramic sound processing method for online karaoke, so as to solve the problem that the prior art cannot provide virtual reality panoramic sound for online karaoke.
To achieve the above object, the present invention provides a virtual reality panoramic sound processing method for online karaoke, the method comprising:
the audio capture device at the sending end captures the sung audio;
a sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
a first processing unit performs the rendering computation;
a second processing unit performs superposition;
the processed signal is transmitted to the playback device of the headphone for playback.
The capture of the sung audio by the audio capture device at the sending end comprises:
the sending end is provided on the microphone device used by the online karaoke singer;
the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
The capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises the following (a sketch of the coordinate computation is given after this list):
both the sending end and the headphone are provided with sensing units;
data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
a polar coordinate system is set up with the midpoint of the line between the two ears at the headphone as the pole and the direction from the pole towards the right ear as the positive polar axis, and the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system;
the sensing units determine the polar coordinates on the basis of GPS positioning data;
the sensing units send the polar coordinates to the first processing unit in real time.
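As an illustration of how such polar coordinates could be derived from GPS-style planar positions, here is a minimal Python sketch; the helper name, the 2-D simplification and the convention of measuring angles counter-clockwise from the right-ear axis are assumptions, not specified by the patent.

```python
import numpy as np

def sender_polar(listener_xy, right_ear_dir_deg, sender_xy):
    """Polar coordinates (rho, theta) of a sending end relative to a listener.
    Pole: midpoint of the inter-ear line.  Polar axis: direction from the pole
    towards the right ear.  Angles grow counter-clockwise (an assumption)."""
    d = np.asarray(sender_xy, float) - np.asarray(listener_xy, float)
    rho = np.hypot(d[0], d[1])                       # radial distance
    bearing = np.arctan2(d[1], d[0])                 # world-frame direction
    axis = np.deg2rad(right_ear_dir_deg)             # polar-axis direction
    theta = np.rad2deg((bearing - axis) % (2 * np.pi))
    return rho, theta

# a singer 10 m due east of a listener whose right ear points due north
print(sender_polar((0.0, 0.0), 90.0, (10.0, 0.0)))   # approximately (10.0, 270.0)
```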
When the background music is panoramic sound, the second processing unit first superimposes the sung audio on the background music, and the first processing unit then performs the rendering computation that converts the superimposed audio data into a binaural signal;
when the background music is a stereo signal, the first processing unit first performs the rendering computation that converts the sung audio into a binaural signal, and the second processing unit then superimposes the binaural signal and the background music signal (both processing orders are sketched below).
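A minimal Python sketch of the two processing orders described above; the constant-power panner stands in for the first processing unit's rendering (which the patent implements with HRTF filtering or an Ambisonic transformation), and the signal shapes are assumptions.

```python
import numpy as np

def binauralize(mono, theta_deg):
    """Toy stand-in for the first processing unit: place a mono signal in a
    direction with constant-power gains (a real system would use HRTF
    filtering or an Ambisonic rendering chain)."""
    p = np.deg2rad(theta_deg) / 2.0
    return np.stack([mono * np.cos(p), mono * np.sin(p)])        # (2, n)

def render(vocal, background, theta_deg, background_is_panoramic):
    if background_is_panoramic:
        # panoramic background: superimpose first, then binauralize the mix
        return binauralize(vocal + background, theta_deg)
    # stereo background: binauralize the vocal first, then superimpose
    return binauralize(vocal, theta_deg) + background

vocal = np.random.randn(48000)
bg_pan = np.random.randn(48000)        # stand-in for a panoramic bed (mono here)
bg_stereo = np.random.randn(2, 48000)  # stereo accompaniment
print(render(vocal, bg_pan, 30.0, True).shape)      # (2, 48000)
print(render(vocal, bg_stereo, 30.0, False).shape)  # (2, 48000)
```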
The rendering computation performed by the first processing unit comprises:
according to the polar coordinates, the first processing unit places the sung audio, or the superimposed audio data, in a certain direction in 3D space and converts the sung audio or the superimposed audio data into a binaural signal;
the sung audio or the superimposed audio data are converted into a binaural signal either by filtering with HRTF (Head Related Transfer Function) filters or by an Ambisonic sound-field transformation.
The superposition performed by the second processing unit comprises:
superimposing the sung audio from the specified sending ends on the panoramic-sound background music to obtain the final karaoke song content; or
superimposing the binaural signal of the sung audio from the specified sending ends and the background music signal to obtain the final karaoke song content;
there may be one or more specified sending ends;
the sung audio of the multiple specified sending ends may be captured synchronously or asynchronously.
The transmission of the processed signal to the playback device of the headphone for playback comprises:
the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
the headphone is provided at the playback device used by the online karaoke singer;
the playback device is a headphone.
The first processing unit may be arranged at the sending end, at the server, or at the headphone; the second processing unit may likewise be arranged at the sending end, at the server, or at the headphone.
Placing the sung audio or the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting the sung audio or the superimposed audio data into a binaural signal further comprises:
the polar coordinates may be manually set or modified by an online karaoke participant on a handheld mobile device, and the manually set or modified polar coordinates are transmitted to the first processing unit through the server.
Each online karaoke participant is provided with both a sending end and a headphone.
The method of the present invention has the following advantages: online karaoke participants are distributed across different geographic locations and take part through multi-user interaction, and they also make various limb movements and positional movements while singing; by using sensing units to monitor in real time the change of the singer's position relative to the listener's ears, the singer's audio data are processed according to the singer's spatial position relative to the listener's ears, achieving a lifelike online karaoke effect.
Description of the drawings
Fig. 1 is a flow diagram of the virtual reality panoramic sound processing method for online karaoke according to the invention.
Fig. 2 is a schematic diagram of the polar coordinate system of the invention, set up with the midpoint of the line between the two ears at the headphone as the pole.
Specific embodiment
The following embodiments are used to illustrate the present invention, but are not intended to limit its scope.
Embodiment 1
Referring to Fig. 1, a virtual reality panoramic sound processing method for online karaoke comprises the following steps:
Step S101: the audio capture device at the sending end captures the sung audio;
Step S102: the sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
Step S103: the second processing unit performs superposition;
Step S104: the first processing unit performs the rendering computation;
Step S105: the processed signal is transmitted to the playback device of the headphone for playback.
The capture of the sung audio by the audio capture device at the sending end comprises:
the sending end is provided on the microphone device used by the online karaoke singer;
the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
The capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises:
both the sending end and the headphone are provided with sensing units;
data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
referring to Fig. 2, a polar coordinate system is set up with the midpoint of the line between the left ear 1 and the right ear 2 of the headphone as the pole and the direction from the pole towards the right ear 2 as the positive polar axis; the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system and determine the polar coordinates on the basis of GPS positioning data; for example, at a certain point in time, online karaoke participant 3 has coordinates (ρ1, θ1) in this polar coordinate system and online karaoke participant 4 has coordinates (ρ2, θ2);
the sensing units send the polar coordinates to the first processing unit in real time.
In this embodiment the background music is panoramic sound: the second processing unit first superimposes the sung audio on the background music, and the first processing unit then performs the rendering computation that converts the superimposed audio data into a binaural signal;
the rendering computation performed by the first processing unit comprises:
according to the polar coordinates, the first processing unit places the superimposed audio data in a certain direction in 3D space and converts the superimposed audio data into a binaural signal;
the superimposed audio data are converted into a binaural signal by filtering with HRTF (Head Related Transfer Function) filters; the HRTF filters are selected according to the real-time position of the sending end relative to the direction of the ears at the headphone, and the resulting binaural signal is denoted B,
B = H·S
where H denotes the HRTF filtering matrix and S denotes the captured audio data; here S may denote the superimposed audio data corresponding to multiple singers (a sketch of this filtering follows).
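For illustration, a minimal Python sketch of B = H·S as time-domain convolution with a direction-dependent pair of head-related impulse responses; the random placeholder HRIRs and signal lengths are assumptions, since a real system would look the filters up in a measured HRTF set indexed by the reported polar angle.

```python
import numpy as np

def hrtf_binauralize(audio, hrir_left, hrir_right):
    """B = H * S: filter the (possibly pre-mixed) audio data with the left
    and right head-related impulse responses selected for the sending end's
    current direction relative to the listener's ears."""
    left = np.convolve(audio, hrir_left)
    right = np.convolve(audio, hrir_right)
    return np.stack([left, right])            # binaural signal B, shape (2, n)

rng = np.random.default_rng(0)
s = rng.standard_normal(48000)                # superimposed audio data S
hrir_l = rng.standard_normal(256) * 0.01      # placeholder HRIRs (assumption)
hrir_r = rng.standard_normal(256) * 0.01
print(hrtf_binauralize(s, hrir_l, hrir_r).shape)   # (2, 48255)
```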
The superposition performed by the second processing unit comprises:
superimposing the sung audio from the specified sending ends on the panoramic-sound background music to obtain the final karaoke song content;
there may be one or more specified sending ends;
the sung audio of the multiple specified sending ends may be captured synchronously or asynchronously.
The transmission of the processed signal to the playback device of the headphone for playback comprises:
the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
the headphone is provided at the playback device used by the online karaoke singer;
the playback device is a headphone.
The first processing unit may be arranged at the sending end, at the server, or at the headphone, and the second processing unit may likewise be arranged at the sending end, at the server, or at the headphone; any of the nine combinations may be used.
Placing the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting the superimposed audio data into a binaural signal further comprises:
the polar coordinates may be manually set or modified by an online karaoke participant on a handheld mobile device, and the manual settings or modifications are transmitted to the first processing unit through the server.
Each online karaoke participant is provided with both a sending end and a headphone.
Embodiment 2
Referring to Fig. 1, a virtual reality panoramic sound processing method for online karaoke comprises the following steps:
Step S101: the audio capture device at the sending end captures the sung audio;
Step S102: the sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
Step S103: the second processing unit performs superposition;
Step S104: the first processing unit performs the rendering computation;
Step S105: the processed signal is transmitted to the playback device of the headphone for playback.
The capture of the sung audio by the audio capture device at the sending end comprises:
the sending end is provided on the microphone device used by the online karaoke singer;
the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
The capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises:
both the sending end and the headphone are provided with sensing units;
data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
referring to Fig. 2, a polar coordinate system is set up with the midpoint of the line between the left ear 1 and the right ear 2 of the headphone as the pole and the direction from the pole towards the right ear 2 as the positive polar axis; the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system and determine the polar coordinates on the basis of GPS positioning data; for example, at a certain point in time, online karaoke participant 3 has coordinates (ρ1, θ1) in this polar coordinate system and online karaoke participant 4 has coordinates (ρ2, θ2);
the sensing units send the polar coordinates to the first processing unit in real time.
In this embodiment the background music is panoramic sound: the second processing unit first superimposes the sung audio on the background music, and the first processing unit then performs the rendering computation that converts the superimposed audio data into a binaural signal.
The rendering computation performed by the first processing unit comprises:
according to the polar coordinates, the first processing unit places the superimposed audio data in a certain direction in 3D space and converts the superimposed audio data into a binaural signal;
the superimposed audio data are converted into a binaural signal by an Ambisonic sound-field transformation: the superimposed audio data are converted into a sound-field signal, the sound-field signal is converted into virtual-loudspeaker array signals, and the virtual-loudspeaker array signals are filtered with HRTF filters to obtain the binaural signal, denoted B,
B = H·D·T·S
where H denotes the HRTF filtering matrix,
D denotes the sound-field decoding matrix,
T denotes the sound-field transformation matrix,
and S denotes the superimposed audio data; here S may denote audio data from multiple different singers.
The advantage of this processing mode is that its efficiency is higher when the number of chorus singers is larger, because the cost of the rendering chain depends on the number of virtual loudspeakers rather than on the number of singers (a sketch of the chain follows).
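The following Python sketch illustrates an Ambisonic rendering chain of the B = H·D·T·S kind for several simultaneous singers; the first-order horizontal encoding, the square of four virtual loudspeakers, the simple projection decoder and the random placeholder HRIRs are all assumptions made for the sketch.

```python
import numpy as np

def ambisonic_binauralize(sources, azimuths_deg, hrirs_left, hrirs_right):
    """Encode every source into one first-order horizontal B-format field (T),
    decode it to four virtual loudspeakers (D), then HRTF-filter each
    loudspeaker feed and sum (H).  Cost after encoding is independent of the
    number of singers."""
    az = np.deg2rad(azimuths_deg)
    # T: encode all sources into one sound field (components W, Y, X)
    W = sources.sum(axis=0)
    Y = (sources * np.sin(az)[:, None]).sum(axis=0)
    X = (sources * np.cos(az)[:, None]).sum(axis=0)
    bfmt = np.stack([W, Y, X])
    # D: simple projection decoder to virtual loudspeakers every 90 degrees
    spk_az = np.deg2rad([0.0, 90.0, 180.0, 270.0])
    D = np.stack([np.full(4, 0.5), 0.5 * np.sin(spk_az), 0.5 * np.cos(spk_az)], axis=1)
    feeds = D @ bfmt                          # (4, n) virtual loudspeaker signals
    # H: binauralize each feed with its own HRIR pair and sum the results
    left = sum(np.convolve(f, h) for f, h in zip(feeds, hrirs_left))
    right = sum(np.convolve(f, h) for f, h in zip(feeds, hrirs_right))
    return np.stack([left, right])            # binaural signal B

rng = np.random.default_rng(1)
singers = rng.standard_normal((3, 48000))     # three simultaneous singers
hl = rng.standard_normal((4, 128)) * 0.01     # placeholder HRIRs (assumption)
hr = rng.standard_normal((4, 128)) * 0.01
print(ambisonic_binauralize(singers, [0.0, 45.0, 300.0], hl, hr).shape)  # (2, 48127)
```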
The superposition performed by the second processing unit comprises:
superimposing the sung audio from the specified sending ends on the panoramic-sound background music to obtain the final karaoke song content;
there may be one or more specified sending ends;
the sung audio of the multiple specified sending ends may be captured synchronously or asynchronously.
The transmission of the processed signal to the playback device of the headphone for playback comprises:
the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
the headphone is provided at the playback device used by the online karaoke singer;
the playback device is a headphone.
The first processing unit may be arranged at the sending end, at the server, or at the headphone, and the second processing unit may likewise be arranged at the sending end, at the server, or at the headphone; any of the nine combinations may be used.
Placing the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting the superimposed audio data into a binaural signal further comprises:
the polar coordinates may be manually set or modified by an online karaoke participant on a handheld mobile device, and the manual settings or modifications are transmitted to the first processing unit through the server.
Each online karaoke participant is provided with both a sending end and a headphone.
Embodiment 3
Referring to Fig. 1, a virtual reality panoramic sound processing method for online karaoke comprises the following steps:
Step S101: the audio capture device at the sending end captures the sung audio;
Step S102: the sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
Step S104: the first processing unit performs the rendering computation;
Step S103: the second processing unit performs superposition;
Step S105: the processed signal is transmitted to the playback device of the headphone for playback.
The capture of the sung audio by the audio capture device at the sending end comprises:
the sending end is provided on the microphone device used by the online karaoke singer;
the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
The capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises:
both the sending end and the headphone are provided with sensing units;
data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
referring to Fig. 2, a polar coordinate system is set up with the midpoint of the line between the left ear 1 and the right ear 2 of the headphone as the pole and the direction from the pole towards the right ear 2 as the positive polar axis; the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system and determine the polar coordinates on the basis of GPS positioning data; for example, at a certain point in time, online karaoke participant 3 has coordinates (ρ1, θ1) in this polar coordinate system and online karaoke participant 4 has coordinates (ρ2, θ2);
the sensing units send the polar coordinates to the first processing unit in real time.
In this embodiment the background music is a stereo signal: the first processing unit first performs the rendering computation that converts the sung audio into a binaural signal, and the second processing unit then superimposes the binaural signal on the background music.
The rendering computation performed by the first processing unit comprises:
according to the polar coordinates, the first processing unit places the sung audio in a certain direction in 3D space and converts the sung audio into a binaural signal;
the sung audio is converted into a binaural signal by filtering with HRTF (Head Related Transfer Function) filters into a four-channel binaural (quad-binaural) signal; because binaural signals for 4 directions are to be produced, the sung audio is filtered with the HRTF filters corresponding to each of the 4 directions, and the resulting binaural signals are denoted Bi,
Bi = Hi·S
where i = 1 to N, Hi denotes the HRTF filtering matrix of the i-th channel, and S denotes the input sung audio (a sketch of this quad-binaural rendering follows).
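A minimal Python sketch of the quad-binaural rendering Bi = Hi·S: the same vocal is filtered once per reference head orientation; the four random placeholder HRIR pairs are assumptions standing in for measured filters.

```python
import numpy as np

def quad_binaural(vocal, hrir_pairs):
    """B_i = H_i * S: render the sung audio once for each of the N = 4
    reference head orientations, producing four binaural signals that can
    later be cross-faded as the listener's head turns."""
    rendered = []
    for hrir_l, hrir_r in hrir_pairs:         # HRIR pair of the i-th orientation
        rendered.append(np.stack([np.convolve(vocal, hrir_l),
                                  np.convolve(vocal, hrir_r)]))
    return np.stack(rendered)                  # shape (4, 2, n)

rng = np.random.default_rng(2)
vocal = rng.standard_normal(48000)             # input sung audio S
pairs = [(rng.standard_normal(128) * 0.01, rng.standard_normal(128) * 0.01)
         for _ in range(4)]                    # placeholder HRIRs (assumption)
print(quad_binaural(vocal, pairs).shape)       # (4, 2, 48127)
```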
The superposition performed by the second processing unit comprises:
superimposing the binaural signal of the sung audio from the specified sending ends and the background music signal to obtain the final karaoke song content, the superimposed signal being denoted B',
B' = B + M
where B denotes the binaural signal of the sung audio and M denotes the background music signal.
There may be one or more specified sending ends;
the sung audio of the multiple specified sending ends may be captured synchronously or asynchronously.
The transmission of the processed signal to the playback device of the headphone for playback comprises:
the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
the headphone is provided at the playback device used by the online karaoke singer;
the playback device is a headphone.
The first processing unit may be arranged at the sending end, at the server, or at the headphone, and the second processing unit may likewise be arranged at the sending end, at the server, or at the headphone; any of the nine combinations may be used.
Placing the sung audio or the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting it into a binaural signal further comprises:
the polar coordinates may be manually set or modified by an online karaoke participant on a handheld mobile device, and the manual settings or modifications are transmitted to the first processing unit through the server.
Each online karaoke participant is provided with both a sending end and a headphone.
Embodiment 4
Referring to Fig. 1, a virtual reality panoramic sound processing method for online karaoke comprises the following steps:
Step S101: the audio capture device at the sending end captures the sung audio;
Step S102: the sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
Step S104: the first processing unit performs the rendering computation;
Step S103: the second processing unit performs superposition;
Step S105: the processed signal is transmitted to the playback device of the headphone for playback.
The capture of the sung audio by the audio capture device at the sending end comprises:
the sending end is provided on the microphone device used by the online karaoke singer;
the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
The capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises:
both the sending end and the headphone are provided with sensing units;
data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
referring to Fig. 2, a polar coordinate system is set up with the midpoint of the line between the left ear 1 and the right ear 2 of the headphone as the pole and the direction from the pole towards the right ear 2 as the positive polar axis; the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system and determine the polar coordinates on the basis of GPS positioning data; for example, at a certain point in time, online karaoke participant 3 has coordinates (ρ1, θ1) in this polar coordinate system and online karaoke participant 4 has coordinates (ρ2, θ2);
the sensing units send the polar coordinates to the first processing unit in real time.
In this embodiment the background music is a stereo signal: the first processing unit first performs the rendering computation that converts the sung audio into a binaural signal, and the second processing unit then superimposes the binaural signal on the background music.
The rendering computation performed by the first processing unit comprises:
according to the polar coordinates, the first processing unit places the sung audio in a certain direction in 3D space and converts the sung audio into a binaural signal;
the sung audio is converted into a binaural signal by an Ambisonic sound-field transformation: the captured sung audio is converted into a sound-field signal for N = 4 directions, the sound-field signal is converted into virtual-loudspeaker array signals, and the virtual-loudspeaker array signals are filtered with HRTF filters to obtain the binaural signals Bi for the N = 4 directions,
Bi = Hi·Di·Ri·SAmb
where i = 1 to N,
Hi denotes the HRTF filtering matrix of the i-th channel,
Di denotes the decoding matrix of the i-th channel,
Ri denotes the rotation matrix of the i-th channel,
and SAmb denotes the input audio sound field (a sketch of the rotation step follows).
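To illustrate the rotation matrices Ri, here is a minimal Python sketch that rotates a first-order horizontal B-format field about the vertical axis once per reference orientation, before the per-orientation decoding Di and HRTF filtering Hi; the (W, Y, X) component ordering and the 90-degree spacing of the reference orientations are assumptions consistent with the earlier sketch.

```python
import numpy as np

def rotate_bformat_yaw(bfmt, yaw_deg):
    """R_i sketch: rotate a first-order horizontal B-format field (W, Y, X)
    about the vertical axis by yaw_deg, so the same encoded sound field
    S_Amb can be decoded for each reference head orientation."""
    c, s = np.cos(np.deg2rad(yaw_deg)), np.sin(np.deg2rad(yaw_deg))
    R = np.array([[1.0, 0.0, 0.0],     # W is omnidirectional, unchanged
                  [0.0,   c,   s],     # Y' =  c*Y + s*X
                  [0.0,  -s,   c]])    # X' = -s*Y + c*X
    return R @ bfmt

rng = np.random.default_rng(3)
s_amb = rng.standard_normal((3, 48000))        # encoded sound field S_Amb
for i, yaw in enumerate([0.0, 90.0, 180.0, 270.0]):
    rotated = rotate_bformat_yaw(s_amb, yaw)   # next: apply D_i, then H_i
    print(i, rotated.shape)                    # (3, 48000) for each orientation
```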
The superposition performed by the second processing unit comprises:
according to the real-time position of the sending end relative to the direction of the ears at the headphone, the four binaural signals are interpolated to restore the panoramic binaural signal B,
B = G1·B1 + G2·B2 + ... + GN·BN
where i = 1 to N and Gi denotes the interpolation coefficient of the i-th channel.
In order to preserve signal energy, a cosine function may be used as the interpolation coefficient: Gi is a cosine function of the horizontal head-rotation angle θ, and if Gi < 0 then Gi = 0, where i = 1 to N (a sketch of this interpolation follows).
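A minimal Python sketch of this cosine interpolation; the exact expression for Gi is not reproduced in the text above, so the form cos(θ - θi) with reference orientations θi spaced 90 degrees apart, together with the normalisation, is an assumption made for the sketch, keeping only the stated clamping Gi = 0 when Gi < 0.

```python
import numpy as np

def interpolate_quad(binaurals, head_yaw_deg):
    """Cross-fade four pre-rendered binaural signals B_i into the output
    binaural signal using cosine coefficients G_i of the head-rotation angle."""
    refs = np.array([0.0, 90.0, 180.0, 270.0])          # assumed orientations
    g = np.cos(np.deg2rad(head_yaw_deg - refs))          # assumed cosine form
    g = np.clip(g, 0.0, None)                            # if G_i < 0 then G_i = 0
    g = g / g.sum()                                      # normalisation (assumption)
    return np.tensordot(g, binaurals, axes=1)            # B = sum_i G_i * B_i

rng = np.random.default_rng(4)
quad = rng.standard_normal((4, 2, 48000))   # B_i from the quad-binaural stage
print(interpolate_quad(quad, 30.0).shape)   # (2, 48000)
```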
The binaural signal of the sung audio from the specified sending ends and the background music signal are superimposed to obtain the final karaoke song content, the superimposed signal being denoted B',
B' = B + M
where B denotes the binaural signal of the sung audio and M denotes the background music signal.
There may be one or more specified sending ends;
the sung audio of the multiple specified sending ends may be captured synchronously or asynchronously.
The transmission of the processed signal to the playback device of the headphone for playback comprises:
the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
the headphone is provided at the playback device used by the online karaoke singer;
the playback device is a headphone.
The first processing unit may be arranged at the sending end, at the server, or at the headphone, and the second processing unit may likewise be arranged at the sending end, at the server, or at the headphone; any of the nine combinations may be used.
Placing the sung audio or the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting it into a binaural signal further comprises:
the polar coordinates may be manually set or modified by an online karaoke participant on a handheld mobile device, and the manual settings or modifications are transmitted to the first processing unit through the server.
Each online karaoke participant is provided with both a sending end and a headphone.
Although the present invention has been described in detail above by way of general explanation and specific embodiments, it will be apparent to those skilled in the art that modifications or improvements can be made on the basis of the invention. Such modifications or improvements, made without departing from the spirit of the invention, all fall within the scope of protection claimed by the present invention.

Claims (9)

  1. A virtual reality panoramic sound processing method for online karaoke, characterized in that the virtual reality panoramic sound processing method comprises:
    the audio capture device at the sending end captures the sung audio;
    a sensing unit captures the real-time position of the sending end relative to the direction of the ears at the headphone;
    a first processing unit performs the rendering computation;
    a second processing unit performs superposition;
    the processed signal is transmitted to the playback device of the headphone for playback;
    wherein the capture by the sensing unit of the real-time position of the sending end relative to the direction of the ears at the headphone comprises:
    both the sending end and the headphone are provided with sensing units;
    data are exchanged through a server between the sensing units, and between the sensing units and the first processing unit;
    a polar coordinate system is set up with the midpoint of the line between the two ears at the headphone as the pole and the direction from the pole towards the right ear as the positive polar axis, and the sensing units capture in real time the change of the sending end's polar coordinates in this polar coordinate system;
    the sensing units determine the polar coordinates on the basis of GPS positioning data;
    the sensing units send the polar coordinates to the first processing unit in real time.
  2. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that the capture of the sung audio by the audio capture device at the sending end comprises:
    the sending end is provided on the microphone device used by the online karaoke singer;
    the audio capture device at the sending end records the singer's voice and converts it into sung audio in digital format.
  3. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that
    when the background music is panoramic sound, the second processing unit first superimposes the sung audio on the background music, and the first processing unit then performs the rendering computation that converts the superimposed audio data into a binaural signal;
    when the background music is a stereo signal, the first processing unit first performs the rendering computation that converts the sung audio into a binaural signal, and the second processing unit then superimposes the binaural signal and the background music signal.
  4. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that the rendering computation performed by the first processing unit comprises:
    according to the polar coordinates, the first processing unit places the sung audio, or the superimposed audio data, in a certain direction in 3D space and converts the sung audio or the superimposed audio data into a binaural signal;
    the sung audio or the superimposed audio data are converted into a binaural signal either by filtering with HRTF (Head Related Transfer Function) filters or by an Ambisonic sound-field transformation.
  5. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that the superposition performed by the second processing unit comprises:
    superimposing the sung audio from the specified sending ends on the panoramic-sound background music to obtain the final karaoke song content; or
    superimposing the binaural signal of the sung audio from the specified sending ends and the background music signal to obtain the final karaoke song content;
    there is at least one specified sending end;
    the sung audio of the specified sending ends is captured synchronously or asynchronously.
  6. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that the transmission of the processed signal to the playback device of the headphone for playback comprises:
    the processed signal is sent to the server by the first processing unit or the second processing unit, and the server distributes and transmits it;
    the headphone is provided at the playback device used by the online karaoke singer;
    the playback device is a headphone.
  7. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that the first processing unit may be arranged at the sending end, at the server, or at the headphone; and the second processing unit may be arranged at the sending end, at the server, or at the headphone.
  8. The virtual reality panoramic sound processing method for online karaoke according to claim 4, characterized in that placing the sung audio or the superimposed audio data in a certain direction in 3D space according to the polar coordinates and converting the sung audio or the superimposed audio data into a binaural signal further comprises:
    the polar coordinates are manually set or modified by an online karaoke participant on a handheld mobile device, and the manually set or modified polar coordinates are transmitted to the first processing unit through the server.
  9. The virtual reality panoramic sound processing method for online karaoke according to claim 1, characterized in that each online karaoke participant is provided with both a sending end and a headphone.
CN201610704412.2A 2016-08-22 2016-08-22 Virtual reality panoramic sound processing method for online karaoke Active CN106331977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610704412.2A CN106331977B (en) 2016-08-22 2016-08-22 Virtual reality panoramic sound processing method for online karaoke

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610704412.2A CN106331977B (en) 2016-08-22 2016-08-22 Virtual reality panoramic sound processing method for online karaoke

Publications (2)

Publication Number Publication Date
CN106331977A CN106331977A (en) 2017-01-11
CN106331977B true CN106331977B (en) 2018-06-12

Family

ID=57742711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610704412.2A Active CN106331977B (en) 2016-08-22 2016-08-22 Virtual reality panoramic sound processing method for online karaoke

Country Status (1)

Country Link
CN (1) CN106331977B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016990B (en) * 2017-03-21 2018-06-05 腾讯科技(深圳)有限公司 Audio signal generation method and device
CN106851482A (en) * 2017-03-24 2017-06-13 北京时代拓灵科技有限公司 A kind of panorama sound loudspeaker body-sensing real-time interaction system and exchange method
EP3651480A4 (en) * 2017-07-05 2020-06-24 Sony Corporation Signal processing device and method, and program
US10705790B2 (en) * 2018-11-07 2020-07-07 Nvidia Corporation Application of geometric acoustics for immersive virtual reality (VR)
CN111475022A (en) * 2020-04-03 2020-07-31 上海唯二网络科技有限公司 Method for processing interactive voice data in multi-person VR scene
CN113192486B (en) * 2021-04-27 2024-01-09 腾讯音乐娱乐科技(深圳)有限公司 Chorus audio processing method, chorus audio processing equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009046909A1 (en) * 2007-10-09 2009-04-16 Koninklijke Philips Electronics N.V. Method and apparatus for generating a binaural audio signal
CN101384105B (en) * 2008-10-27 2011-11-23 华为终端有限公司 Three dimensional sound reproducing method, device and system
CN103607550B (en) * 2013-11-27 2016-08-24 北京海尔集成电路设计有限公司 A kind of method according to beholder's position adjustment Television Virtual sound channel and TV
EP2942980A1 (en) * 2014-05-08 2015-11-11 GN Store Nord A/S Real-time control of an acoustic environment
CN105376690A (en) * 2015-11-04 2016-03-02 北京时代拓灵科技有限公司 Method and device of generating virtual surround sound
CN105611481B (en) * 2015-12-30 2018-04-17 北京时代拓灵科技有限公司 A kind of man-machine interaction method and system based on spatial sound
CN105808710A (en) * 2016-03-05 2016-07-27 上海斐讯数据通信技术有限公司 Remote karaoke terminal, remote karaoke system and remote karaoke method
CN105797366A (en) * 2016-03-25 2016-07-27 中国传媒大学 Head-wearing type interactive audio game terminal based on sound source location

Also Published As

Publication number Publication date
CN106331977A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN106331977B (en) Virtual reality panoramic sound processing method for online karaoke
US20150326963A1 (en) Real-time Control Of An Acoustic Environment
CN105120421B (en) A kind of method and apparatus for generating virtual surround sound
CN106134223A (en) Reappear audio signal processing apparatus and the method for binaural signal
CN105163242B (en) A kind of multi-angle 3D sound back method and device
CN108353244A (en) Difference head-tracking device
CN106210990B (en) A kind of panorama sound audio processing method
JP6246922B2 (en) Acoustic signal processing method
CN107241672B (en) Method, device and equipment for obtaining spatial audio directional vector
JP6596896B2 (en) Head-related transfer function selection device, head-related transfer function selection method, head-related transfer function selection program, sound reproduction device
JP2021535632A (en) Methods and equipment for processing audio signals
TW201640921A (en) Virtual reality audio system and the player thereof, and method for generation of virtual reality audio
CN107105384A (en) The synthetic method of near field virtual sound image on a kind of middle vertical plane
Bujacz et al. Sound of Vision-Spatial audio output and sonification approaches
US20190170533A1 (en) Navigation by spatial placement of sound
CN105509691B (en) The detection method of multisensor group fusion and the circular method for acoustic for supporting head tracking
US11032660B2 (en) System and method for realistic rotation of stereo or binaural audio
WO2019199536A1 (en) Applying audio technologies for the interactive gaming environment
Yuan et al. Sound image externalization for headphone based real-time 3D audio
Lipshitz Stereo microphone techniques: Are the purists wrong?
CN105307086A (en) Method and system for simulating surround sound for two-channel headset
Jenny et al. Can I trust my ears in VR? Literature review of head-related transfer functions and valuation methods with descriptive attributes in virtual reality
Cohen et al. Applications of Audio Augmented Reality: Wearware, Everyware, Anyware, and Awareware
Hoose Creating Immersive Listening Experiences with Binaural Recording Techniques
CN109168125A (en) A kind of 3D sound effect system

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant