CN110401898A - Method, apparatus, device, and storage medium for outputting audio data - Google Patents

Method, apparatus, device, and storage medium for outputting audio data

Info

Publication number
CN110401898A
CN110401898A (application CN201910652191.2A)
Authority
CN
China
Prior art keywords
audio data
target
target scene
model
transformation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910652191.2A
Other languages
Chinese (zh)
Other versions
CN110401898B (en)
Inventor
刘佳泽
王宇飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201910652191.2A
Publication of CN110401898A
Application granted
Publication of CN110401898B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones

Abstract

This application discloses a method, apparatus, device, and storage medium for outputting audio data, belonging to the field of computer technology. The method includes: obtaining a model of a target scene, first audio data, a first position, and a second position; based on the model of the target scene, the first audio data, the first position, and the second position, determining second audio data obtained when the first audio data, propagating in the target scene from the first position toward the second position, reaches a point at a preset distance from the second position; determining a corresponding target acoustic transformation model based on a pre-stored correspondence between virtual sound source directions and acoustic transformation models and on the direction of the first position relative to the second position; and inputting the second audio data into the target acoustic transformation model to obtain third audio data, which is then output. With this application, there is no need to store acoustic transformation models for a large number of different virtual sound source distances, which reduces the consumption of device storage resources.

Description

Method, apparatus, device, and storage medium for outputting audio data
Technical field
This application relates to the field of computer technology, and in particular to a method, apparatus, device, and storage medium for outputting audio data.
Background technique
With the gradual popularization of technologies such as VR (Virtual Reality) and AR (Augmented Reality), users' requirements for audio experience are becoming higher and higher. In some technologies, audio data is processed so that, after the processed audio data is output through an audio output device, the user can perceive the sound source direction and the sound source distance from the output sound. HRTF (Head Related Transfer Function) technology is one example.
In HRTF technology, a model database is established that contains a large number of acoustic transformation models. Each acoustic transformation model corresponds to one virtual sound source direction and one virtual sound source distance, which are the sound source direction and distance that the user will perceive when listening to audio data output by the model. When a device needs to generate first audio data carrying direction and distance attributes, it first obtains second audio data without direction or distance attributes. Here, audio data with direction and distance attributes means audio data that lets the user perceive a sound source direction and distance when output through an audio output device, while audio data without these attributes does not. The device also obtains the sound source direction and sound source distance to be assigned to the second audio data, looks up the corresponding acoustic transformation model in the model database based on that direction and distance, and then inputs the second audio data into the acoustic transformation model to obtain the first audio data with direction and distance attributes.
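To make this background concrete, here is a minimal, hypothetical sketch in Python of how such a database lookup and model application might work. The keys, the two-tap impulse responses, and the `apply_hrtf` helper are all illustrative assumptions, not the actual HRTF database described in this document:

```python
import numpy as np

# Hypothetical model database: each acoustic transformation model is a pair of
# impulse responses (left ear, right ear), keyed by (azimuth, elevation, distance).
# Real measured HRTF databases are far larger and denser than this toy.
model_db = {
    (90, 0, 1.0): (np.array([0.9, 0.1]), np.array([0.3, 0.2])),
    (270, 0, 1.0): (np.array([0.3, 0.2]), np.array([0.9, 0.1])),
}

def apply_hrtf(mono_audio, direction, distance):
    """Look up the model for (direction, distance) and convolve the mono
    input with each ear's impulse response, yielding audio data that
    carries direction and distance attributes."""
    h_left, h_right = model_db[(direction[0], direction[1], distance)]
    return np.convolve(mono_audio, h_left), np.convolve(mono_audio, h_right)

mono = np.array([1.0, 0.0, 0.0])  # audio with no direction/distance attributes
left, right = apply_hrtf(mono, (90, 0), 1.0)
```

The toy impulse responses simply make one ear louder than the other; a measured model would also encode interaural delay and spectral filtering.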
In the course of implementing the present application, the inventors found that the prior art has at least the following problem:
A large number of acoustic transformation models need to be stored in the model database, which occupies a large amount of device storage resources.
Summary of the invention
The embodiments of the present application provide a method, apparatus, device, and storage medium for outputting audio data, which can solve the problem of a large amount of device storage resources being occupied. The technical solution is as follows:
In one aspect, a method for outputting audio data is provided, the method comprising:
obtaining a model of a target scene, first audio data, a first position, and a second position, wherein the first position is the sounding position of the first audio data in the target scene, and the second position is the position of a target listener in the target scene;
based on the model of the target scene, the first audio data, the first position, and the second position, determining second audio data obtained when the first audio data, propagating in the target scene from the first position toward the second position, reaches a third position at a preset distance from the second position;
based on a pre-stored correspondence between virtual sound source directions and acoustic transformation models, and on the direction of the third position relative to the second position, determining a corresponding target acoustic transformation model, wherein the virtual sound source distance corresponding to each acoustic transformation model in the correspondence is the preset distance;
based on the second audio data and the corresponding target acoustic transformation model, determining third audio data, and outputting the third audio data.
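As a minimal sketch of the geometry implied by the steps above, assuming straight-line propagation and an arbitrary preset distance of 1 m (the value and the helper name are illustrative, not from the original), the third position can be computed from the first and second positions like this:

```python
import numpy as np

PRESET_DISTANCE = 1.0  # the single virtual sound source distance (assumed value)

def third_position(first_pos, second_pos, preset=PRESET_DISTANCE):
    """Point on the segment from the sounding position (first_pos) to the
    listener (second_pos) lying at `preset` distance from the listener."""
    first_pos = np.asarray(first_pos, dtype=float)
    second_pos = np.asarray(second_pos, dtype=float)
    direction = first_pos - second_pos
    return second_pos + preset * direction / np.linalg.norm(direction)

# Listener at the origin, source 5 m away along +x:
# the third position is 1 m from the listener along +x.
p3 = third_position([5.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

The second audio data is then whatever the propagation simulation yields at `p3`, and the direction of `p3` relative to the listener selects the acoustic transformation model.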
Optionally, the determining, in the target scene, of the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position comprises:
if the sound transmission distance between the first position and the second position is greater than the preset distance, determining, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained when the first audio data, propagating from the first position toward the second position in the target scene, reaches the third position at the preset distance from the second position.
Optionally, the method further comprises:
if the sound transmission distance between the first position and the second position is not greater than the preset distance, determining a corresponding target acoustic transformation model based on the pre-stored correspondence between virtual sound source directions and acoustic transformation models and on the direction of the first position relative to the second position;
inputting the first audio data into the target acoustic transformation model to obtain fourth audio data, and outputting the fourth audio data.
Optionally, the determining of the second audio data comprises:
determining the second audio data based on an acoustic propagation path tracing algorithm, the moisture content in the target scene, the acoustic property information of objects in the target scene, the model of the target scene, the first audio data, the first position, and the second position.
Optionally, the determining of the second audio data comprises:
based on the model of the target scene, the first audio data, the first position, and the second position, determining the second audio data obtained when the first audio data, propagating from the first position toward the second position along at least one propagation path, reaches the third position at the preset distance from the second position, wherein the at least one propagation path comprises a direct propagation path and/or a reflected propagation path.
Optionally, the target acoustic transformation model in the correspondence is a convolution kernel, and the determining of the third audio data based on the second audio data and the corresponding target acoustic transformation model comprises:
performing convolution processing on the second audio data based on the target acoustic transformation model to obtain the third audio data.
In another aspect, an apparatus for outputting audio data is provided, the apparatus comprising:
an obtaining module, configured to obtain a model of a target scene, first audio data, a first position, and a second position, wherein the first position is the sounding position of the first audio data in the target scene, and the second position is the position of a target listener in the target scene;
a determining module, configured to: based on the model of the target scene, the first audio data, the first position, and the second position, determine second audio data obtained when the first audio data, propagating in the target scene from the first position toward the second position, reaches a third position at a preset distance from the second position; based on a pre-stored correspondence between virtual sound source directions and acoustic transformation models, and on the direction of the third position relative to the second position, determine a corresponding target acoustic transformation model, wherein the virtual sound source distance corresponding to each acoustic transformation model in the correspondence is the preset distance; and determine third audio data based on the second audio data and the corresponding target acoustic transformation model; and
an output module, configured to output the third audio data.
Optionally, the determining module is configured to:
if the sound transmission distance between the first position and the second position is greater than the preset distance, determine, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained when the first audio data, propagating from the first position toward the second position in the target scene, reaches the third position at the preset distance from the second position.
Optionally, the determining module is further configured to:
if the sound transmission distance between the first position and the second position is not greater than the preset distance, determine a corresponding target acoustic transformation model based on the pre-stored correspondence between virtual sound source directions and acoustic transformation models and on the direction of the first position relative to the second position, and input the first audio data into the target acoustic transformation model to obtain fourth audio data;
and the output module is further configured to output the fourth audio data.
Optionally, the determining module is configured to:
determine the second audio data based on an acoustic propagation path tracing algorithm, the moisture content in the target scene, the acoustic property information of objects in the target scene, the model of the target scene, the first audio data, the first position, and the second position.
Optionally, the determining module is configured to:
based on the model of the target scene, the first audio data, the first position, and the second position, determine the second audio data obtained when the first audio data, propagating from the first position toward the second position along at least one propagation path, reaches the third position at the preset distance from the second position, wherein the at least one propagation path comprises a direct propagation path and/or a reflected propagation path.
Optionally, the target acoustic transformation model in the correspondence is a convolution kernel, and the determining module is configured to:
perform convolution processing on the second audio data based on the target acoustic transformation model to obtain the third audio data.
In another aspect, a computer device is provided, comprising a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the method for outputting audio data described above.
In yet another aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the method for outputting audio data described above.
The technical solution provided by the embodiments of the present application has the following beneficial effect:
In the embodiments of the present application, acoustic transformation models are established and stored only for different virtual sound source directions at a single preset virtual sound source distance. Because every stored acoustic transformation model shares the same virtual sound source distance, there is no need to store acoustic transformation models for a large number of different virtual sound source distances, which reduces the consumption of device storage resources.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a scene for a method for outputting audio data provided by an embodiment of the present application;
Fig. 2 is a flowchart of a method for outputting audio data provided by an embodiment of the present application;
Fig. 3 is a flowchart of a method for outputting audio data provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an apparatus for outputting audio data provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
The method for outputting audio data provided by the embodiments of the present application may be implemented by a terminal alone, jointly by a terminal and a network-side device, or jointly by a user-side terminal and a standalone computing device. The embodiments of the present application are described by taking independent implementation by a terminal as an example; other cases are similar and are not repeated. The terminal may be a VR or AR terminal, such as VR glasses or AR glasses; it may also be a mobile terminal such as a mobile phone, tablet computer, or laptop; or it may be a fixed terminal such as a desktop computer. The terminal has computing capability as well as video output and audio output functions, and application programs, such as games, can be installed on it.
Take a game application on a VR terminal as an example. A virtual scene is established in the game application, such as a room, a street block, a factory, or a department store. The user can play the game in the virtual scene and perform the various behaviors configured for the game: in a shooting game the user can walk, crouch, jump, shoot, and dodge; in a virtual-life game the user can eat, wash, exercise, sweep the floor, and so on. Various objects can be placed in the virtual scene, such as walls, tables, stools, steps, trenches, vehicles, and animals. The user is initially assigned a position in the virtual scene, and this position changes as the user moves. As the game proceeds, various sounds may be produced, and such a sound may originate at any position in the virtual scene. The game application then computes what the user should hear at his or her current position, calculates the corresponding audio data, and plays it. To give the user a more realistic auditory experience in the virtual environment, this computation considers that the sound is emitted by a sound source, propagates through the virtual scene, and eventually reaches the user's position in the virtual scene, and determines what the sound becomes after this whole process. When playing a game with a VR terminal, the user only needs to wear VR glasses: the screen of the VR glasses outputs images and the earphones of the VR glasses output sound, so the user experiences the virtual scene as if personally placed inside it, as if the images seen were real images appearing nearby and the sounds heard were real sounds produced nearby.
An AR application differs from VR in that the AR scene is a real scene in which some objects are real and some are virtual, but the principle is similar. In addition, the embodiments of the present application are applicable to application programs on various mobile or fixed terminals, such as a game application on a desktop computer. The embodiments of the present application are described by taking VR as an example; other cases are similar and are not repeated.
Fig. 1 is a schematic diagram of a scene for the method for outputting audio data provided by the embodiments of the present application. As shown in Fig. 2, the execution process of the method may include the following steps:
Step 201: obtain a model of a target scene, first audio data, a first position, and a second position.
The first position is the sounding position of the first audio data in the target scene, and the second position is the position of the target listener in the target scene.
The target scene is the scene in which the user is located in the destination application currently running on the terminal; it may be a virtual scene, a real scene, or a mixed real-virtual scene. For a VR terminal, the target scene is generally a virtual scene established in the destination application. For an AR terminal, the target scene may be a real scene, or a mixed scene formed by placing virtual objects in a real scene. For mobile and fixed terminals, depending on the application, the target scene may be a virtual scene, a real scene, or a mixed scene. The model of the target scene may be composed of various objects, such as walls, floors, ceilings, pillars, cabinets, tables, cars, and trees. The types of objects in the model may differ across scene types and may be virtual objects or real objects.
In implementation, when the user wants to use a destination application on a VR terminal, the user can start the VR terminal, put it on, and then operate it to launch the destination application, for example a shooting game. While using the destination application, the user acts within the virtual scene (i.e., the target scene). Various sounds may be produced in the virtual scene, and the possibilities are varied: a sound triggered by a user operation, such as the user firing a gun in the game; a sound triggered by a scheduled event in the virtual scene, such as the explosion of a time bomb; or a sound triggered by an action of a computer-controlled virtual character, such as that character firing a gun. When an event that triggers sound occurs, the destination application obtains the audio data of the current sound (i.e., the first audio data), which is generally pre-stored in the destination application and can be regarded as the audio data of the sound just emitted by the source, before any propagation. The destination application also obtains the sounding position in the target scene, also called the sound source position (i.e., the first position above), as well as the position of the target listener in the target scene, i.e., the user's position.
For example, when the user is playing a shooting game with a VR terminal and a computer-controlled virtual character fires, the destination application can retrieve pre-stored audio data for the firearm type held by the virtual character, and determine the position of the gun and the position of the user in the virtual scene.
Step 202: based on the model of the target scene, the first audio data, the first position, and the second position, determine the second audio data obtained when the first audio data, propagating in the target scene from the first position toward the second position, reaches a third position at a preset distance from the second position.
In implementation, after obtaining the model of the target scene, the first audio data, the sounding position of the first audio data in the target scene (the first position), and the user's position in the target scene (the second position), the destination application can simulate the propagation of the sound corresponding to the first audio data through the target scene. If the audio data were computed directly after propagating all the way to the second position, the result would have no direction. Therefore, in this solution, acoustic transformation models in the style of HRTF technology are established for multiple different directions, but not for different distances: every acoustic transformation model established in this solution corresponds to the same preset virtual sound source distance. When simulating the propagation of the sound corresponding to the first audio data through the target scene, the audio data obtained when the sound reaches the position at the preset distance from the user (i.e., the second audio data above) can therefore be calculated. The second audio data obtained at this point has a distance attribute but no direction attribute. The second audio data is then further processed based on an acoustic transformation model to obtain the third audio data at the user's position, which has both distance and direction attributes. The processing related to the acoustic transformation model is described in detail in the content below.
Optionally, an acoustic propagation path tracing algorithm can be used to simulate the propagation of the sound corresponding to the first audio data through the target scene. Accordingly, the processing of step 202 may be as follows: determine the second audio data based on the acoustic propagation path tracing algorithm, the moisture content in the target scene, the acoustic property information of objects in the target scene, the model of the target scene, the first audio data, the first position, and the second position.
The acoustic propagation path tracing algorithm is an algorithm that, starting from initial audio data, simulates and calculates the audio data obtained after the sound propagates to a certain position. The acoustic property information of an object may include its reflectivity to sound waves, absorptivity to sound waves, deflection ratio to sound waves, resonant frequency, and so on. Objects of different materials have different acoustic property information and affect the sound differently when reflecting it.
In an example, the acoustic propagation path tracing algorithm can first determine the path traversed by the sound during propagation from the first position toward the second position until it reaches the third position at the preset distance from the second position. The moisture content in the target scene is then obtained; it may be preset in the destination application, or calculated in real time by the destination application based on the weather conditions, environmental factors, and so on of the target scene. Meanwhile, the acoustic property information of each object on the acoustic propagation path is obtained. The material of each object can be set in advance when the target scene is established and recorded in the destination application, so the objects on the path can be determined first, the material of each object obtained, and then the pre-stored acoustic property information corresponding to each material retrieved. After the moisture content of the target scene and the acoustic property information of each object on the path are determined, the second audio data, reflecting the loss, absorption, and delay the sound undergoes during propagation, can be obtained.
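A minimal sketch, under loudly stated assumptions, of how humidity-dependent air absorption and surface reflectivity might attenuate one path's audio: the coefficients and the dB-per-metre model below are illustrative toys, not measured acoustics or the algorithm actually used in this application.

```python
import numpy as np

def propagate(audio, path_length_m, humidity_abs_db_per_m, reflections):
    """Toy attenuation for one propagation path: air absorption proportional
    to path length (the coefficient standing in for humidity effects), plus
    one reflectivity factor per surface the path bounces off."""
    air_gain = 10 ** (-humidity_abs_db_per_m * path_length_m / 20.0)
    surface_gain = float(np.prod(reflections)) if reflections else 1.0
    return audio * air_gain * surface_gain

audio = np.ones(4)
# Direct path, 10 m of air; reflected path, 14 m plus one wall at 0.7 reflectivity.
out_direct = propagate(audio, path_length_m=10.0, humidity_abs_db_per_m=0.1, reflections=[])
out_reflected = propagate(audio, 14.0, 0.1, reflections=[0.7])
```

A real implementation would apply frequency-dependent absorption and material filtering rather than a single broadband gain.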
Optionally, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained when the first audio data, propagating from the first position toward the second position along at least one propagation path, reaches the third position at the preset distance from the second position is determined.
The at least one propagation path includes a direct propagation path and/or a reflected propagation path. In the target scene, if there is no object between the sounding position and the user's position that can obstruct sound propagation, the at least one propagation path includes both direct and reflected propagation paths. For example, in a spacious virtual room containing a virtual bomb and the user with nothing in between, the sound of the bomb explosion propagates to the user both by reflection and directly. If there is an object between the sounding position and the user's position that can obstruct sound propagation, the at least one propagation path includes only reflected propagation paths. For example, if the user hides behind a cabinet in the virtual room to avoid explosion damage, after the virtual bomb explodes the sound cannot reach the user directly and can only propagate to the user by reflection. In addition, when performing the acoustic propagation path tracing algorithm, the number of propagation paths can be preset, such as 1, 2, 3, 4, or 5; the more paths are set, the more realistic the sound, and conversely, the fewer the paths, the less realistic the sound. For example, if the number of propagation paths is 1 and there is no obstacle between the sounding position and the user's position, that one path is the direct propagation path and there is no reflected path.
When there are multiple propagation paths, each propagation path corresponds to its own third position and, accordingly, its own second audio data.
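When several paths each yield their own second audio data, the per-path contributions must eventually be combined into what the listener hears. A toy sketch, with an assumed sample rate, speed of sound, and simple sample-aligned delays (none of these specifics come from the original):

```python
import numpy as np

def mix_paths(path_signals, sample_rate=48000, speed_of_sound=343.0):
    """Combine per-path audio into one signal: delay each path by its extra
    travel distance relative to the shortest path, then sum the contributions.
    path_signals is a list of (audio_array, path_length_m) pairs."""
    shortest = min(length for _, length in path_signals)
    delays = [int(round((length - shortest) / speed_of_sound * sample_rate))
              for _, length in path_signals]
    total_len = max(len(a) + d for (a, _), d in zip(path_signals, delays))
    out = np.zeros(total_len)
    for (audio, _), d in zip(path_signals, delays):
        out[d:d + len(audio)] += audio
    return out

direct = (np.array([1.0, 0.5]), 10.0)
# Reflected path exactly 2 samples of travel time longer than the direct path.
reflected = (np.array([0.4, 0.2]), 10.0 + 343.0 / 48000 * 2)
mixed = mix_paths([direct, reflected])
```

In this application the combination would happen after each path's second audio data has been passed through its own target acoustic transformation model.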
Step 203: based on the pre-stored correspondence between virtual sound source directions and acoustic transformation models, and on the direction of the third position relative to the second position, determine the corresponding target acoustic transformation model.
The virtual sound source distance corresponding to each acoustic transformation model in the correspondence is the preset distance. Each acoustic transformation model has two attributes, a virtual sound source distance and a virtual sound source direction, which represent the distance and direction attributes the model can assign to audio data: when audio data processed by the model is output through an audio output device, the user hears the sound as if it had propagated from a source in that direction and at that distance. In this solution, all acoustic transformation models that the destination application needs to store correspond to the same virtual sound source distance, namely the preset distance, while their virtual sound source directions differ and may be uniformly distributed. A direction in this solution (such as a virtual sound source direction) can be represented by a horizontal angle and a vertical angle, i.e., the direction's horizontal and vertical angles relative to some reference direction. Thus each stored acoustic transformation model corresponds to a virtual sound source at the same distance from the listener's position but in a different direction: taking the listener's position as the center and the preset distance as the radius, these virtual sound sources are distributed on a sphere, as shown in Fig. 1.
In an implementation, technical staff can establish each acoustical transformation model in advance through experiments based on dummy-head recording technology, and, based on the virtual sound source direction corresponding to each acoustical transformation model, establish and store the corresponding relationship between virtual sound source directions and acoustical transformation models.
After the third position and the second audio data are determined in step 202 above, the direction of the third position relative to the second position can be determined. Then, among the virtual sound source directions included in the above corresponding relationship between virtual sound source directions and acoustical transformation models, the target virtual sound source direction closest to that direction is found, and the target acoustical transformation model corresponding to the target virtual sound source direction is further determined.
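The closest-direction lookup just described can be sketched as below. The table layout (direction pairs mapping to models) and the angular-distance metric are assumptions for illustration; the patent only requires finding the stored direction nearest to the query direction.

```python
def nearest_model(direction, table):
    """Pick the acoustical transformation model whose virtual sound source
    direction is closest to `direction`.  Directions are
    (horizontal, vertical) angle pairs in degrees; `table` maps direction
    pairs to models (hypothetical layout for illustration)."""
    def angular_gap(a, b):
        # Sum of the smallest absolute per-component angle differences,
        # wrapping around 360 degrees.
        return sum(min(abs(x - y), 360 - abs(x - y)) for x, y in zip(a, b))
    best = min(table, key=lambda d: angular_gap(d, direction))
    return table[best]
```

If the stored directions are uniformly distributed on the sphere, as the solution allows, this lookup bounds the angular error by roughly half the spacing between neighboring stored directions.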
For the case described above where sound has a plurality of propagation paths, each propagation path corresponds to a third position, and based on each third position a target acoustical transformation model can be determined for use in the subsequent steps.
Step 204: based on the second audio data and the corresponding target acoustical transformation model, determine third audio data, and output the third audio data.
In an implementation, the mathematical form of the acoustical transformation models established in this solution can vary; for example, an acoustical transformation model can be a convolution kernel. After the target acoustical transformation model is determined in step 203 above, the second audio data can be processed based on the target acoustical transformation model to obtain the third audio data. For example, where the target acoustical transformation model is a convolution kernel, a convolution operation can be performed on the second audio data based on that kernel to obtain the third audio data. The second audio data is audio data with a distance attribute but no direction attribute, while the third audio data obtained after processing carries both distance and direction attributes. After the third audio data is computed, the terminal can output it through an audio output device (such as an earphone).
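A minimal sketch of the convolution-kernel case, assuming mono audio held as a NumPy array; the sample values and the 2-tap kernel are made up for illustration and are not a measured model from the patent.

```python
import numpy as np

def apply_model(second_audio, kernel):
    """Convolve mono audio samples with a convolution-kernel acoustical
    transformation model (e.g. a measured impulse response)."""
    return np.convolve(second_audio, kernel, mode="full")

second_audio = np.array([1.0, 0.0, 0.0, 0.5])
kernel = np.array([0.8, 0.2])  # hypothetical 2-tap kernel
third_audio = apply_model(second_audio, kernel)
```

The output is one sample longer than the input plus the kernel length minus two, as is usual for full convolution; a real implementation would typically use a measured impulse-response kernel per stored direction.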
For the case described above where sound has a plurality of propagation paths, each propagation path corresponds to a third position and a second audio data. Step 203 can determine a target acoustical transformation model based on each third position. In this case, each target acoustical transformation model can be used to process the corresponding second audio data, yielding a plurality of third audio data, which are superposed and then output.
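The superposition of per-path results can be sketched as below. Zero-padding the shorter signals to a common length is an assumption for illustration; the patent only specifies that the multiple third audio data are superposed before output.

```python
import numpy as np

def mix_paths(per_path_audio):
    """Superpose the third audio data obtained for each propagation path
    into one output signal.  Shorter signals are zero-padded to the
    longest length (illustrative choice)."""
    length = max(len(a) for a in per_path_audio)
    out = np.zeros(length)
    for a in per_path_audio:
        out[: len(a)] += a
    return out
```

In practice each per-path signal would be the output of the corresponding target acoustical transformation model applied to that path's second audio data.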
In the embodiments of the present application, only the acoustical transformation models for different virtual sound source directions at a virtual sound source distance equal to the preset distance are established and stored. It can be seen that the virtual sound source distances of the stored acoustical transformation models are all identical, so there is no need to store a large number of acoustical transformation models for different virtual sound source distances, and the occupancy of device storage resources can thereby be reduced.
As shown in Fig. 3, the execution process of the method for outputting audio data provided by the embodiments of the present application may include the following steps:
Step 301: obtain a model of a target scene, first audio data, a first position and a second position, and determine whether the sound transmission distance between the first position and the second position is greater than a preset distance.
The first position is the sounding position of the first audio data in the target scene, and the second position is the position of a target listener in the target scene.
Step 302: if the sound transmission distance between the first position and the second position is greater than the preset distance, determine, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches a third position at the preset distance from the second position.
Step 303: based on a pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models, and on the direction of the third position relative to the second position, determine a corresponding target acoustical transformation model.
The virtual sound source distance corresponding to each acoustical transformation model in the corresponding relationship is the preset distance.
Step 304: based on the second audio data and the corresponding target acoustical transformation model, determine third audio data, and output the third audio data.
For the processing of steps 301-304 above, reference may be made to the content of the foregoing embodiment, which is not repeated here. The difference is that step 302 is executed only when it is determined that the sound transmission distance between the first position and the second position is greater than the preset distance.
Step 305: if the sound transmission distance between the first position and the second position is not greater than the preset distance, determine a corresponding target acoustical transformation model based on the pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models and on the direction of the first position relative to the second position.
In an implementation, if the sound transmission distance between the first position and the second position is not greater than the preset distance, the distance between the first position and the second position can be approximately regarded as the preset distance. In this case, the direction of the first position relative to the second position can be determined; then, among the virtual sound source directions included in the above corresponding relationship between virtual sound source directions and acoustical transformation models, the target virtual sound source direction closest to that direction is found, and the target acoustical transformation model corresponding to the target virtual sound source direction is further determined.
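The branch across steps 302 and 305 can be sketched as follows. The callables `propagate`, `lookup_model` and `direction_of` are hypothetical stand-ins for the scene propagation and table lookup described in the text, passed in as parameters so the sketch stays self-contained; straight-line distance via `math.dist` stands in for the sound transmission distance.

```python
import math

def choose_model_and_input(first_pos, second_pos, first_audio, preset,
                           propagate, lookup_model, direction_of):
    """Select the acoustical transformation model and its input audio,
    following the distance check of steps 302/305 (illustrative sketch)."""
    if math.dist(first_pos, second_pos) > preset:
        # Step 302: source is beyond the preset distance; propagate the
        # first audio data to the third position to get second audio data.
        third_pos, second_audio = propagate(first_pos, second_pos, preset)
        return lookup_model(direction_of(third_pos, second_pos)), second_audio
    # Step 305: source is within the preset distance; approximate its
    # distance as the preset distance and feed the first audio data
    # directly into the model chosen by the first position's direction.
    return lookup_model(direction_of(first_pos, second_pos)), first_audio
```

Either branch ends with one model and one input signal, which the following step convolves and outputs.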
Step 306: input the first audio data into the target acoustical transformation model to obtain fourth audio data, and output the fourth audio data.
In an implementation, after the target acoustical transformation model is determined in step 305 above, the first audio data can be processed based on the target acoustical transformation model to obtain the fourth audio data. For example, where the target acoustical transformation model is a convolution kernel, a convolution operation can be performed on the first audio data based on that kernel to obtain the fourth audio data. The first audio data is audio data with neither a distance attribute nor a direction attribute, while the fourth audio data obtained after processing carries both distance and direction attributes. After the fourth audio data is computed, the terminal can output it through an audio output device (such as an earphone).
In the embodiments of the present application, only the acoustical transformation models for different virtual sound source directions at a virtual sound source distance equal to the preset distance are established and stored. It can be seen that the virtual sound source distances of the stored acoustical transformation models are all identical, so there is no need to store a large number of acoustical transformation models for different virtual sound source distances, and the occupancy of device storage resources can thereby be reduced.
All the above optional technical solutions can be combined in any manner to form optional embodiments of the present invention, which are not described one by one here.
An embodiment of the present application provides an apparatus for outputting audio data, which may be the terminal in the foregoing embodiments. As shown in Fig. 4, the apparatus includes:
an obtaining module 410, configured to obtain a model of a target scene, first audio data, a first position and a second position, wherein the first position is the sounding position of the first audio data in the target scene and the second position is the position of a target listener in the target scene;
a determining module 420, configured to: based on the model of the target scene, the first audio data, the first position and the second position, determine, in the target scene, second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches a third position at a preset distance from the second position; based on a pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models, and on the direction of the third position relative to the second position, determine a corresponding target acoustical transformation model, wherein the virtual sound source distance corresponding to each acoustical transformation model in the corresponding relationship is the preset distance; and determine third audio data based on the second audio data and the corresponding target acoustical transformation model; and
an output module 430, configured to output the third audio data.
Optionally, the determining module 420 is configured to:
if the sound transmission distance between the first position and the second position is greater than the preset distance, determine, based on the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position.
Optionally, the determining module 420 is further configured to:
if the sound transmission distance between the first position and the second position is not greater than the preset distance, determine a corresponding target acoustical transformation model based on the pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models and on the direction of the first position relative to the second position; and input the first audio data into the target acoustical transformation model to obtain fourth audio data.
The output module is further configured to output the fourth audio data.
Optionally, the determining module 420 is configured to:
determine, based on an acoustic transmission path tracing algorithm, the moisture content and the acoustic property information of objects in the target scene, as well as the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position.
Optionally, the determining module 420 is configured to:
determine, based on the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating through at least one propagation path from the first position toward the second position, reaches the third position at the preset distance from the second position, wherein the at least one propagation path includes a direct propagation path and/or a reflected propagation path.
Optionally, the target acoustical transformation model in the corresponding relationship is a convolution kernel, and the determining module 420 is configured to:
perform convolution processing on the second audio data based on the target acoustical transformation model to obtain the third audio data.
In the embodiments of the present application, only the acoustical transformation models for different virtual sound source directions at a virtual sound source distance equal to the preset distance are established and stored. It can be seen that the virtual sound source distances of the stored acoustical transformation models are all identical, so there is no need to store a large number of acoustical transformation models for different virtual sound source distances, and the occupancy of device storage resources can thereby be reduced.
It should be understood that, when the apparatus for outputting audio data provided by the above embodiments outputs audio data, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for outputting audio data provided by the above embodiments belongs to the same concept as the method embodiments for outputting audio data; for its specific implementation process, reference may be made to the method embodiments, which is not repeated here.
Fig. 5 shows a structural block diagram of a terminal 500 provided by an exemplary embodiment of the present application. The terminal 500 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal or desktop terminal.
In general, terminal 500 includes: processor 501 and memory 502.
The processor 501 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 502 may include one or more computer-readable storage media, which may be non-transitory. The memory 502 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 502 is used to store at least one instruction, and the at least one instruction is executed by the processor 501 to implement the method for outputting audio data provided by the method embodiments of the present application.
In some embodiments, the terminal 500 optionally further includes a peripheral device interface 503 and at least one peripheral device. The processor 501, the memory 502 and the peripheral device interface 503 may be connected by a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 503 by a bus, a signal line or a circuit board. Specifically, the peripheral devices include at least one of: a radio frequency circuit 504, a touch display screen 505, a camera 506, an audio circuit 507, a positioning component 508 and a power supply 509.
The peripheral device interface 503 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 501 and the memory 502. In some embodiments, the processor 501, the memory 502 and the peripheral device interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502 and the peripheral device interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 504 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 504 communicates with a communication network and other communication devices through electromagnetic signals, converting an electric signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 504 can communicate with other terminals through at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of each generation (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may also include an NFC (Near Field Communication)-related circuit, which is not limited in the present application.
The display screen 505 is used to display a UI (User Interface), which may include graphics, text, icons, video and any combination thereof. When the display screen 505 is a touch display screen, it also has the ability to acquire touch signals on or above its surface; a touch signal may be input to the processor 501 as a control signal for processing. In this case, the display screen 505 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, arranged on the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, respectively arranged on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display screen 505 may be a flexible display screen arranged on a curved surface or a folding surface of the terminal 500. The display screen 505 may even be arranged in a non-rectangular irregular shape, i.e. a shaped screen. The display screen 505 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so as to realize background blurring by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 506 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 507 may include a microphone and a loudspeaker. The microphone is used to collect sound waves of the user and the environment and convert them into electric signals, which are input to the processor 501 for processing or input to the radio frequency circuit 504 for voice communication. For stereo collection or noise reduction, there may be a plurality of microphones arranged at different parts of the terminal 500. The microphone may also be an array microphone or an omnidirectional microphone. The loudspeaker is used to convert electric signals from the processor 501 or the radio frequency circuit 504 into sound waves. The loudspeaker may be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. A piezoelectric ceramic loudspeaker can not only convert electric signals into sound waves audible to humans, but also convert electric signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic position of the terminal 500 to implement navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
The power supply 509 is used to supply power to the various components in the terminal 500. The power supply 509 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 500 further includes one or more sensors 510, including but not limited to: an acceleration sensor 511, a gyroscope sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515 and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 501 can control the touch display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 can also be used to collect motion data of a game or of the user.
The gyroscope sensor 512 can detect the body direction and rotation angle of the terminal 500 and can cooperate with the acceleration sensor 511 to collect the user's 3D actions on the terminal 500. Based on the data collected by the gyroscope sensor 512, the processor 501 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 513 may be arranged on the side frame of the terminal 500 and/or on the lower layer of the touch display screen 505. When the pressure sensor 513 is arranged on the side frame of the terminal 500, the user's grip signal on the terminal 500 can be detected, and the processor 501 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is arranged on the lower layer of the touch display screen 505, the processor 501 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 505. The operability controls include at least one of a button control, a scroll-bar control, an icon control and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint; the processor 501 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity according to the collected fingerprint. When the identified identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings and the like. The fingerprint sensor 514 may be arranged on the front, back or side of the terminal 500. When a physical button or a manufacturer logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical button or the manufacturer logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 can control the display brightness of the touch display screen 505 according to the ambient light intensity collected by the optical sensor 515: when the ambient light intensity is high, the display brightness of the touch display screen 505 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 505 is turned down. In another embodiment, the processor 501 can also dynamically adjust the shooting parameters of the camera assembly 506 according to the ambient light intensity collected by the optical sensor 515.
The proximity sensor 516, also called a distance sensor, is generally arranged on the front panel of the terminal 500 and is used to collect the distance between the user and the front of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 gradually decreases, the processor 501 controls the touch display screen 505 to switch from the screen-on state to the screen-off state; when the proximity sensor 516 detects that the distance gradually increases, the processor 501 controls the touch display screen 505 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 5 does not constitute a limitation on the terminal 500, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, and the instructions can be executed by the processor in the terminal to complete the method for outputting audio data in the above embodiments. For example, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely preferred embodiments of the present application and is not intended to limit the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (14)

1. A method for outputting audio data, characterized in that the method comprises:
obtaining a model of a target scene, first audio data, a first position and a second position, wherein the first position is the sounding position of the first audio data in the target scene and the second position is the position of a target listener in the target scene;
based on the model of the target scene, the first audio data, the first position and the second position, determining, in the target scene, second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches a third position at a preset distance from the second position;
based on a pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models, and on the direction of the third position relative to the second position, determining a corresponding target acoustical transformation model, wherein the virtual sound source distance corresponding to each acoustical transformation model in the corresponding relationship is the preset distance; and
based on the second audio data and the corresponding target acoustical transformation model, determining third audio data, and outputting the third audio data.
2. The method according to claim 1, characterized in that determining, based on the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position comprises:
if the sound transmission distance between the first position and the second position is greater than the preset distance, determining, based on the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position.
3. The method according to claim 2, characterized in that the method further comprises:
if the sound transmission distance between the first position and the second position is not greater than the preset distance, determining a corresponding target acoustical transformation model based on the pre-stored corresponding relationship between virtual sound source directions and acoustical transformation models and on the direction of the first position relative to the second position; and
inputting the first audio data into the target acoustical transformation model to obtain fourth audio data, and outputting the fourth audio data.
4. The method according to claim 1, characterized in that determining, based on the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position comprises:
determining, based on an acoustic transmission path tracing algorithm, the moisture content and the acoustic property information of objects in the target scene, as well as the model of the target scene, the first audio data, the first position and the second position, in the target scene, the second audio data obtained when the first audio data, propagating from the first position toward the second position, reaches the third position at the preset distance from the second position.
5. The method according to claim 1, characterized in that determining, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained at the third position located the predetermined distance from the second position while the first audio data propagates from the first position to the second position in the target scene comprises:
determining, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained at the third position located the predetermined distance from the second position while the first audio data propagates from the first position to the second position through at least one propagation path, wherein the at least one propagation path comprises a direct propagation path and/or a reflected propagation path.
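Claim 5 combines sound arriving over a direct path and/or reflected paths. A toy sketch (not the patent's algorithm) of the usual way such contributions are mixed: each traced path delays and attenuates a copy of the source signal, and the copies are summed at the receiving point. Path lengths and gains here would come from a path tracer like the one in claim 4:

```python
import numpy as np

def render_at_point(signal, paths, sample_rate=44100, speed_of_sound=343.0):
    """Sum delayed, attenuated copies of `signal`, one per propagation path.

    `paths` is a list of (length_in_metres, amplitude_gain) tuples; the
    direct path is the shortest entry, reflections are longer entries with
    smaller gains (extra distance plus surface losses).
    """
    delays = [int(round(l / speed_of_sound * sample_rate)) for l, _ in paths]
    out = np.zeros(len(signal) + max(delays))
    for (length, gain), d in zip(paths, delays):
        out[d:d + len(signal)] += gain * signal  # delayed, scaled copy
    return out
```

With `sample_rate=1` and `speed_of_sound=1.0` the delays become path lengths in samples, which makes the behaviour easy to inspect by hand.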
6. The method according to any one of claims 1 to 5, characterized in that the target acoustic transformation model in the correspondence is a convolution kernel, and determining the third audio data based on the second audio data and the corresponding target acoustic transformation model comprises:
performing convolution processing on the second audio data with the target acoustic transformation model to obtain the third audio data.
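Claim 6 realizes the target acoustic transformation model as a convolution kernel, as in HRTF-style binaural rendering where the kernel is a measured head-related impulse response. A minimal sketch with an illustrative two-tap kernel (an impulse plus a quieter copy one sample later; real kernels are much longer measured responses):

```python
import numpy as np

def apply_transformation_model(second_audio, kernel):
    """Convolve the audio arriving at the virtual source position with the
    selected convolution kernel to produce the third audio data that is
    finally output to the listener."""
    return np.convolve(second_audio, kernel)

# Illustrative kernel only; the patent's kernels are the stored models.
kernel = np.array([1.0, 0.25])
third = apply_transformation_model(np.array([1.0, 0.0, -1.0]), kernel)
```

For binaural output, one such convolution would be run per ear with the left- and right-ear kernels of the selected direction.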
7. An apparatus for outputting audio data, characterized in that the apparatus comprises:
an obtaining module configured to obtain a model of a target scene, first audio data, a first position, and a second position, wherein the first position is a sounding position of the first audio data in the target scene and the second position is a position of a target listener in the target scene;
a determining module configured to: determine, based on the model of the target scene, the first audio data, the first position, and the second position, second audio data obtained at a third position located a predetermined distance from the second position while the first audio data propagates from the first position to the second position in the target scene; determine a corresponding target acoustic transformation model based on a pre-stored correspondence between virtual sound source directions and acoustic transformation models and on the direction of the third position relative to the second position, wherein the virtual sound source distance corresponding to each acoustic transformation model in the correspondence is the predetermined distance; and determine third audio data based on the second audio data and the corresponding target acoustic transformation model; and
an output module configured to output the third audio data.
8. The apparatus according to claim 7, characterized in that the determining module is configured to:
if the sound propagation distance between the first position and the second position is greater than the predetermined distance, determine, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained at the third position located the predetermined distance from the second position while the first audio data propagates from the first position to the second position in the target scene.
9. The apparatus according to claim 8, characterized in that the determining module is further configured to:
if the sound propagation distance between the first position and the second position is not greater than the predetermined distance, determine a corresponding target acoustic transformation model based on the pre-stored correspondence between virtual sound source directions and acoustic transformation models and on the direction of the first position relative to the second position, and input the first audio data into the target acoustic transformation model to obtain fourth audio data;
and the output module is further configured to output the fourth audio data.
10. The apparatus according to claim 7, characterized in that the determining module is configured to:
determine the second audio data based on an acoustic propagation path tracing algorithm, on the humidity and the acoustic property information of objects in the target scene, and on the model of the target scene, the first audio data, the first position, and the second position.
11. The apparatus according to claim 7, characterized in that the determining module is configured to:
determine, based on the model of the target scene, the first audio data, the first position, and the second position, the second audio data obtained at the third position located the predetermined distance from the second position while the first audio data propagates from the first position to the second position through at least one propagation path, wherein the at least one propagation path comprises a direct propagation path and/or a reflected propagation path.
12. The apparatus according to any one of claims 7 to 11, characterized in that the target acoustic transformation model in the correspondence is a convolution kernel, and the determining module is configured to:
perform convolution processing on the second audio data with the target acoustic transformation model to obtain the third audio data.
13. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the method for outputting audio data according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction that is loaded and executed by a processor to implement the operations performed by the method for outputting audio data according to any one of claims 1 to 6.
CN201910652191.2A 2019-07-18 2019-07-18 Method, apparatus, device and storage medium for outputting audio data Active CN110401898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910652191.2A CN110401898B (en) 2019-07-18 2019-07-18 Method, apparatus, device and storage medium for outputting audio data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910652191.2A CN110401898B (en) 2019-07-18 2019-07-18 Method, apparatus, device and storage medium for outputting audio data

Publications (2)

Publication Number Publication Date
CN110401898A true CN110401898A (en) 2019-11-01
CN110401898B CN110401898B (en) 2021-05-07

Family

ID=68324562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910652191.2A Active CN110401898B (en) 2019-07-18 2019-07-18 Method, apparatus, device and storage medium for outputting audio data

Country Status (1)

Country Link
CN (1) CN110401898B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8374365B2 (en) * 2006-05-17 2013-02-12 Creative Technology Ltd Spatial audio analysis and synthesis for binaural reproduction and format conversion
US9226090B1 (en) * 2014-06-23 2015-12-29 Glen A. Norris Sound localization for an electronic call
CN107172566A * 2017-05-11 2017-09-15 广州酷狗计算机科技有限公司 Audio processing method and device
CN108597033A * 2018-04-27 2018-09-28 深圳市零度智控科技有限公司 Method for bypassing realistic obstacles in a VR game, VR device and storage medium
CN109618274A * 2018-11-23 2019-04-12 华南理工大学 Virtual sound playback method based on an angle mapping table, electronic device and medium
CN109885945A * 2019-02-26 2019-06-14 哈尔滨工程大学 Near-field acoustic holography transform method based on the boundary element method in a half-space environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CARLOS: "2-D Auditory Mapping of Virtual Environment Using Real-Time Head Related Transfer Functions", Proceedings IEEE SoutheastCon 2002 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112908302A (en) * 2021-01-26 2021-06-04 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method, device and equipment and readable storage medium
CN112908302B (en) * 2021-01-26 2024-03-15 腾讯音乐娱乐科技(深圳)有限公司 Audio processing method, device, equipment and readable storage medium
CN112882568A (en) * 2021-01-27 2021-06-01 深圳市慧鲤科技有限公司 Audio playing method and device, electronic equipment and storage medium
CN113362864A (en) * 2021-06-16 2021-09-07 北京字节跳动网络技术有限公司 Audio signal processing method, device, storage medium and electronic equipment
CN116935824A (en) * 2023-09-18 2023-10-24 腾讯科技(深圳)有限公司 Audio data filtering method, device, equipment and storage medium
CN116935824B (en) * 2023-09-18 2023-12-29 腾讯科技(深圳)有限公司 Audio data filtering method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110401898B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN110401898A (en) Export method, apparatus, equipment and the storage medium of audio data
CN110244998A (en) Page layout background, the setting method of live page background, device and storage medium
CN110139142A (en) Virtual objects display methods, device, terminal and storage medium
CN108401124A (en) The method and apparatus of video record
CN109151593A (en) Main broadcaster's recommended method, device storage medium
CN108848394A (en) Net cast method, apparatus, terminal and storage medium
CN108922506A (en) Song audio generation method, device and computer readable storage medium
CN110061900A (en) Message display method, device, terminal and computer readable storage medium
CN108965757A (en) video recording method, device, terminal and storage medium
CN109068008A (en) The tinkle of bells setting method, device, terminal and storage medium
CN108897597A (en) The method and apparatus of guidance configuration live streaming template
CN109147757A (en) Song synthetic method and device
CN109327608A (en) Method, terminal, server and the system that song is shared
CN109275013A (en) Method, apparatus, equipment and the storage medium that virtual objects are shown
CN110248236A (en) Video broadcasting method, device, terminal and storage medium
EP3618055B1 (en) Audio mixing method and terminal, and storage medium
CN109922356A (en) Video recommendation method, device and computer readable storage medium
CN108900925A (en) The method and apparatus of live streaming template are set
CN109346111A (en) Data processing method, device, terminal and storage medium
CN109192218A (en) The method and apparatus of audio processing
CN110139143A (en) Virtual objects display methods, device, computer equipment and storage medium
CN109192223A (en) The method and apparatus of audio alignment
CN110166275A (en) Information processing method, device and storage medium
CN108922533A (en) Determine whether the method and apparatus sung in the real sense
CN109547847A (en) Add the method, apparatus and computer readable storage medium of video information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant