CN112771893A - 3D sound effect implementation method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN112771893A
Authority
CN
China
Prior art keywords
information, audio signal, adjusting, signal, current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880098267.5A
Other languages
Chinese (zh)
Inventor
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Shenzhen Huantai Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd, Shenzhen Huantai Technology Co Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN112771893A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control

Abstract

The embodiments of the present application provide a 3D sound effect implementation method and apparatus, a storage medium, and an electronic device. The 3D sound effect implementation method comprises: when an audio signal is detected, acquiring current orientation information of a virtual sound source; selecting a target signal adjustment parameter from a sample parameter set according to the current orientation information; adjusting the audio signal based on the target signal adjustment parameter; and playing the adjusted audio signal.

Description

3D sound effect implementation method and device, storage medium and electronic equipment Technical Field
The application relates to the technical field of electronics, in particular to a 3D sound effect implementation method and electronic equipment.
Background
With the rapid development of mobile multimedia technology and the surging popularity of Virtual Reality (VR) technology, there is a growing demand for realizing three-dimensional (3D) sound effects on mobile terminals such as smart phones and tablet computers. In the related art, the rotation of a person's head needs to be sensed by a position sensor arranged in a headset, so that the position of the virtual sound source of the 3D sound effect can be located and a three-dimensional playing effect achieved. This way of locating the sound source through peripheral equipment limits the applicability of 3D sound effects.
Disclosure of Invention
The embodiment of the application provides a method and a device for realizing 3D sound effect, a storage medium and electronic equipment, which can improve the universality of 3D sound effect application.
In a first aspect, an embodiment of the present application provides a method for implementing a 3D sound effect, which is applied to an electronic device, and includes:
when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
adjusting the audio signal based on the target signal adjustment parameter;
and playing the adjusted audio signal.
In a second aspect, an embodiment of the present application provides a 3D sound effect implementation apparatus, which is applied to an electronic device, the 3D sound effect implementation apparatus includes:
the direction acquisition module is used for acquiring the current direction information of the virtual sound source when the audio signal is detected;
the selection module is used for selecting target signal adjustment parameters from a sample parameter set according to the current azimuth information;
an adjustment module to adjust the audio signal based on the target signal adjustment parameter;
and the playing module is used for playing the adjusted audio signal.
In a third aspect, an embodiment of the present application provides a storage medium having a plurality of instructions stored therein, where the instructions are adapted to be loaded by a processor to perform the following steps:
when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
adjusting the audio signal based on the target signal adjustment parameter;
and playing the adjusted audio signal.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a storage medium, where a plurality of instructions are stored in the storage medium, and the processor loads the instructions to perform the following steps:
when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
adjusting the audio signal based on the target signal adjustment parameter;
and playing the adjusted audio signal.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
For a more complete understanding of the present application and its advantages, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like reference numerals represent like parts in the following description.
Fig. 1 is a first flow chart of a 3D sound effect implementation method provided in the embodiment of the present application.
Fig. 2 is a schematic view of a first application scenario of a 3D sound effect implementation method provided in the embodiment of the present application.
Fig. 3 is a second flow chart of the 3D sound effect implementation method according to the embodiment of the present application.
Fig. 4 is a schematic view of a second application scenario of the 3D sound effect implementation method according to the embodiment of the present application.
Fig. 5 is a third flow chart of a 3D sound effect implementation method provided in the embodiment of the present application.
Fig. 6 is a schematic view of a third application scenario of the 3D sound effect implementation method according to the embodiment of the present application.
Fig. 7 is a fourth flowchart illustrating a 3D sound effect implementation method according to an embodiment of the present application.
Fig. 8 is a fifth flowchart illustrating a 3D sound effect implementation method according to an embodiment of the present application.
Fig. 9 is a first structural schematic diagram of a 3D sound effect implementation apparatus according to an embodiment of the present application.
Fig. 10 is a second structural schematic diagram of the 3D sound effect implementing device according to the embodiment of the present application.
Fig. 11 is a third structural schematic diagram of a 3D sound effect implementing device according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a method and a device for realizing 3D sound effect, a storage medium and electronic equipment, which are respectively described in detail below.
As shown in fig. 1, the 3D sound effect implementation method is applied to an electronic device. The electronic equipment can be an intelligent terminal such as a smart phone and a tablet personal computer. The 3D sound effect implementation method can comprise the following steps:
101. When an audio signal is detected, acquire the current orientation information of the virtual sound source.
The audio signal may be obtained by a speaker built in the electronic device through conversion based on a received electrical signal, or may be obtained by an external earphone device through conversion based on a received electrical signal. Specifically, a vibration detection device may be disposed in the electronic device for detecting a vibration condition of the speaker, so as to monitor the audio signal.
In the embodiments of the present application, the current orientation information of the virtual sound source corresponds to the orientation, desired by the user, of the sound source in actual physical space. For example, if the user wants the sound source to be oriented at an angle θ to the user's left, the positional relationship between the virtual sound source and the virtual user is as shown in fig. 2.
In the embodiments of the present application, the current orientation information of the virtual sound source can be determined in various ways. For example, it may be set by the user in software on the electronic device; it may be determined from the placement state of the electronic device itself; or it may be determined from the positional relationship between the user and the electronic device. These are described in detail below.
102. Select a target signal adjustment parameter from the sample parameter set according to the current orientation information.
The electronic equipment can select matched signal adjusting parameters from the sample parameter set based on the currently acquired azimuth information of the virtual sound source.
In the embodiment of the present application, the correspondence between the direction information of the sound source and the adjustment parameter needs to be collected in advance, so as to determine the adjustment parameter of the target signal in the following. That is, in some embodiments, before detecting the audio signal, the method further includes:
acquiring a plurality of sample position information and corresponding signal adjusting parameters under the sample position information;
and establishing a mapping relation between the sample orientation information and the signal adjusting parameters, and adding the mapping relation, the sample orientation information and the signal adjusting parameters into a sample parameter set.
Specifically, the impulse response of a Head Related Transfer Function (HRTF) at the sampled orientation may be obtained. For example, if the sampled orientation information is a left or right deflection θ centered on the user, the impulse responses of the left and right ears can be recorded as h_θ^L and h_θ^R respectively, where the impulse response can be measured by gradually debugging the audio signal, manually or by machine.
Assuming the input signal is a mono signal s, the left output signal and the right output signal are respectively
l_out = s * h_θ^L and r_out = s * h_θ^R,
where * denotes convolution. Assuming the input signals are two-channel signals l and r, the left output signal and the right output signal are respectively
l_out = l * h_θ^L and r_out = r * h_θ^R.
In implementation, l_out and r_out are passed through a reverberator. To eliminate the in-head localization effect, the reverberator can adopt an artificial reverberation algorithm built from 4 comb filters connected in parallel, the system function of each comb filter being
H_i(z) = z^(-D_i) / (1 - a_i z^(-D_i)), i = 1, ..., 4,
where D_1 to D_4 respectively denote the delays of the 4 comb filters, which may specifically be 14.61 ms, 18.83 ms, 20.74 ms and 22.15 ms, and a_1 to a_4 denote the attenuation gains of the 4 comb filters, which may be 0.84, 0.82, 0.8 and 0.78 respectively.
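As a concrete illustration, the HRTF convolution and the parallel comb-filter reverberator can be sketched in Python as follows. Only the four delays and attenuation gains come from the text above; the toy input signal and the impulse responses h_left/h_right are invented placeholder values, and a real implementation would use measured HRTF data.

```python
def convolve(x, h):
    """Direct-form convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

def comb_filter(x, delay, gain):
    """Feedback comb filter H(z) = z^-D / (1 - a * z^-D),
    i.e. y[n] = x[n-D] + a * y[n-D]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        fb = y[n - delay] if n >= delay else 0.0
        xd = x[n - delay] if n >= delay else 0.0
        y[n] = xd + gain * fb
    return y

def reverberate(x, sample_rate=44100):
    """4 parallel comb filters with the delays/gains quoted in the text."""
    delays_ms = [14.61, 18.83, 20.74, 22.15]
    gains = [0.84, 0.82, 0.80, 0.78]
    outs = [comb_filter(x, int(d * sample_rate / 1000), g)
            for d, g in zip(delays_ms, gains)]
    return [sum(vals) / len(outs) for vals in zip(*outs)]

# Mono source s rendered to both ears via per-ear HRTF impulse responses
# (toy values, not measured data).
s = [1.0, 0.5, 0.25, 0.0]
h_left = [0.9, 0.1]
h_right = [0.6, 0.3]
l_out = convolve(s, h_left)
r_out = convolve(s, h_right)
```

In practice the two convolved channels would each be fed through `reverberate` before playback, as the text describes.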
In practical application, the orientation information can be sampled, and then, with the virtual sound source positioned at the current sampled orientation, the playing parameters of the audio signal (such as volume and time delay) are gradually debugged, manually or by machine, until the played audio is perceived as coming from that orientation in physical space. When the required playing effect is finally reached, the debugged playing parameters are taken as the signal adjustment parameters corresponding to the current sample orientation information.
Then, the step "determining target signal adjustment parameters from the sample parameter set according to the current position information" may include the following procedures:
selecting target sample azimuth information matched with the current azimuth information from the sample parameter set;
and acquiring a signal adjusting parameter corresponding to the target sample azimuth information from the sample parameter set based on the mapping relation, and taking the signal adjusting parameter as a target signal adjusting parameter.
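A minimal sketch of the sample parameter set and the matching step: a mapping from sampled orientations to signal adjustment parameters, queried by nearest sampled orientation. The dictionary layout, the sampled angles, and the parameter values are all illustrative assumptions; the patent does not specify the data structure or the matching rule.

```python
# Mapping: sampled azimuth in degrees (negative = left of the user)
# -> per-channel adjustment parameters. All values are invented examples.
sample_parameter_set = {
    -90: {"delay_ms": (0.0, 0.6), "volume": (1.00, 0.40)},
    -45: {"delay_ms": (0.0, 0.4), "volume": (1.00, 0.65)},
      0: {"delay_ms": (0.0, 0.0), "volume": (1.00, 1.00)},
     45: {"delay_ms": (0.4, 0.0), "volume": (0.65, 1.00)},
     90: {"delay_ms": (0.6, 0.0), "volume": (0.40, 1.00)},
}

def select_target_parameters(current_azimuth_deg):
    """Pick the sampled orientation closest to the current orientation,
    then look up its mapped signal adjustment parameters."""
    target = min(sample_parameter_set,
                 key=lambda a: abs(a - current_azimuth_deg))
    return sample_parameter_set[target]
```

For example, a current azimuth of 40 degrees would match the 45-degree sample and return that sample's parameters.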
103. Adjust the audio signal based on the target signal adjustment parameter.
In the embodiment of the present application, there may be a plurality of ways to adjust the audio signal based on different types of signal adjustment parameters. The following were used:
in some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, the signal conditioning parameters comprise a first delay conditioning parameter and a second delay parameter;
the step "adjusting the audio signal based on the signal adjustment parameter" may comprise the following procedure:
adjusting an output time of the first sub audio signal based on the first delay adjustment parameter;
and adjusting the output time of the second sub audio signal based on the second time delay adjusting parameter.
Because sound waves take time to propagate through air, when the sound source is not directly in front of (or behind) the listener, the ear on the same side as the source hears the sound slightly earlier and the other ear slightly later. This small time difference (less than about 0.6 ms) can be distinguished by the human ear, and the brain analyzes it to obtain the position of the sound. The time difference here refers to the difference between the arrival times of the sound at the two ears, so it can serve as orientation information of the sound source.
Specifically, the sound source position may be located based on the time difference between the two sub audio signals. In this embodiment, with the orientation information determined, the signal output time difference between the sub audio signals can be obtained by inverse transformation of the orientation information. Further, a delay parameter for each sub-signal can be determined based on the time difference. The output time of each sub audio signal can then be adjusted based on the determined delay parameters, so that the virtual sound source is localized in actual physical space.
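One way to perform this inverse transformation from orientation to per-channel delay parameters is sketched below using the Woodworth interaural-time-difference approximation. The patent does not name a formula, and the head radius and speed of sound are assumed constants, so this is a hedged illustration only.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # assumed speed of sound in air, m/s

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation: ITD = (r / c) * (theta + sin(theta)),
    with theta the absolute azimuth in radians. Peaks near 0.66 ms at 90
    degrees, consistent with the sub-0.6 ms range discussed above."""
    theta = math.radians(abs(azimuth_deg))
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

def delay_parameters(azimuth_deg):
    """Return (left_delay_s, right_delay_s): delay the ear on the far
    side of the source. Negative azimuth = source on the user's left."""
    itd = interaural_time_difference(azimuth_deg)
    if azimuth_deg < 0:           # source left: right ear hears later
        return 0.0, itd
    return itd, 0.0               # source right (or center): left ear later
```

Each returned delay would then be applied as the output-time adjustment of the corresponding sub audio signal.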
In some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, and the signal adjustment parameter may comprise a first volume adjustment parameter and a second volume adjustment parameter;
the step "adjusting the audio signal based on the signal adjustment parameter" may comprise the following procedure:
adjusting the volume of the first sub audio signal based on the first volume adjustment parameter;
and adjusting the volume of the second sub audio signal based on the second volume adjusting parameter.
104. Play the adjusted audio signal.
Similarly, the sound source position may be located based on the volume difference between the two sub audio signals. Since the human ear is extremely sensitive to the volume of sound, volume-difference localization plays an important role in auditory localization. The volume difference arises because the two ears receive sound from the same source at different levels. If the sound source is biased to the left, the sound waves reach the left ear directly while the right ear is shadowed by the head, so the volume heard by the left ear is greater than that heard by the right ear. The more the source is biased to one side, the larger the volume difference.
In this embodiment, with the orientation information determined, the signal output volume difference between the sub audio signals can be obtained by inverse transformation of the orientation information. Further, a volume adjustment parameter for each sub-signal can be determined based on the volume difference. The output volume of each sub audio signal can then be adjusted based on the determined volume adjustment parameters, so that the virtual sound source is localized in actual physical space.
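A possible concrete form of this orientation-to-volume mapping is a constant-power panning law, sketched below. This is an assumed stand-in for the debugged volume parameters the text describes, not the patent's own formula.

```python
import math

def volume_parameters(azimuth_deg):
    """Map an azimuth in [-90, 90] degrees (negative = left) to
    (left_gain, right_gain). Constant-power panning keeps
    left^2 + right^2 == 1 for every azimuth, so perceived loudness
    stays roughly constant while the source appears to move."""
    # Shift [-90, 90] degrees onto the pan-angle range [0, pi/2].
    pan = (azimuth_deg + 90.0) * math.pi / 360.0
    return math.cos(pan), math.sin(pan)
```

For example, an azimuth of -90 degrees (hard left) yields gains (1.0, 0.0), while 0 degrees yields equal gains of about 0.707 on both channels.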
In addition to the above embodiments, in the embodiments of the present application, there are various ways to determine the current position information of the virtual sound source, as follows:
in some embodiments, referring to fig. 3, the step "obtaining current position information of the virtual sound source" may include the following steps:
1011a, acquiring a deflection angle and a deflection direction of the electronic equipment relative to a horizontal plane;
1012a, determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
Referring to fig. 4, when the electronic device is placed horizontally, the initial position may be automatically calibrated to a rotation angle of 0. When the electronic device is rotated, its deflection angle θ relative to the horizontal plane can be detected by an acceleration sensor built into the device. Accordingly, a corresponding APP can be installed on the electronic device; when the device is deflected left by θ, the virtual sound source displayed on the APP interface is likewise moved left by the azimuth angle θ relative to the virtual user (refer to fig. 2). Conversely, if the electronic device is deflected right by θ, the virtual sound source displayed on the APP interface also moves right by the azimuth angle θ relative to the virtual user.
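The accelerometer-based deflection detection above can be sketched as follows. The axis convention (lateral acceleration `ax`, vertical acceleration `az` when the device lies flat) is an assumption, since the patent does not specify how the sensor reading is converted to an angle.

```python
import math

def deflection_from_accelerometer(ax, az):
    """Estimate the device's deflection angle relative to the horizontal
    plane, in degrees. With the device flat, gravity is all on az, so the
    angle is 0; tilting shifts gravity onto ax. Positive = deflected
    right, negative = deflected left (assumed sign convention)."""
    return math.degrees(math.atan2(ax, az))

def virtual_source_azimuth(ax, az):
    """Move the virtual sound source by the same angle the device turned,
    mirroring the calibrated-zero behavior described above."""
    return deflection_from_accelerometer(ax, az)
```

With the device flat (`ax = 0`, `az ≈ 9.81`), the deflection is 0, matching the automatic calibration to a rotation angle of 0.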
In some embodiments, referring to fig. 5, the step "obtaining current position information of the virtual sound source" may include the following steps:
1011b, starting a camera of the electronic equipment to obtain a head portrait of the current user;
1012b, determining the deflection angle and the deflection direction of the head portrait of the user relative to the preset head portrait;
1013b, determining the current direction information of the virtual sound source according to the deflection angle and the deflection direction.
The camera can be a single camera or a double camera; the preset avatar may be a pre-acquired user frontal avatar.
In some embodiments, referring to fig. 6, after the camera of the electronic device is started to obtain the current user avatar and before the deflection angle and deflection direction of the user avatar relative to the preset avatar are determined, the following process may further be included:
generating an information display interface, wherein the information display interface comprises a first area and a second area, and position information of a virtual user is displayed in the second area;
displaying the user head portrait in a first area in real time;
after the deflection angle and the deflection direction are used as the current orientation information of the virtual sound source, the method further comprises the following steps:
and displaying the virtual sound source in a second area in real time according to the azimuth information and the position information of the virtual user.
In some embodiments, referring to fig. 7, the step of "obtaining current position information of the virtual sound source" may include the following steps:
1011c, acquiring voice information input by a user, wherein the voice information comprises a deflection direction and a deflection angle;
1012c, identifying the voice information to obtain an identification result;
1013c, determining the current orientation information of the virtual sound source based on the identification result.
Specifically, the APP may be provided with an interface for calling the voice assistant of the electronic device's system or a third-party voice recognition application, through which the voice signal input by the user is recognized. For example, the system microphone can be invoked through a voice interface displayed on the APP interface; when the user says "rotate θ left", the electronic device obtains the voice information and recognizes the deflection direction and deflection angle carried in it through a voice recognition algorithm.
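A sketch of turning a recognized transcript into a deflection direction and angle is shown below. The phrase grammar ("rotate N left/right") is an assumed example modeled on the "rotate θ left" utterance above; the patent does not define the command syntax.

```python
import re

def parse_rotation_command(transcript):
    """Parse a transcript like 'rotate 30 left' into a signed angle in
    degrees: left is negative, right is positive. Returns None when the
    transcript is not a rotation command."""
    m = re.search(r"rotate\s+(\d+(?:\.\d+)?)\s+(left|right)",
                  transcript.lower())
    if not m:
        return None
    angle = float(m.group(1))
    return -angle if m.group(2) == "left" else angle
```

The signed angle can then be used directly as the current orientation information of the virtual sound source.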
In some embodiments, referring to fig. 8, the step of "obtaining current position information of the virtual sound source" may include the following steps:
1011d, starting a target application interface;
1012d, acquiring touch position information of a user in a preset display area on the target application interface;
1013d, determining position difference information between the touch position information and preset position information on the display interface;
1014d, determining the orientation information of the sound source based on the position difference information.
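Steps 1011d to 1014d can be sketched as follows, assuming the preset position is the virtual user's location on screen and the azimuth is measured from "straight ahead" (up on the screen); both assumptions are illustrative, as the patent does not fix them.

```python
import math

def orientation_from_touch(touch_xy, preset_xy):
    """Azimuth of the touch point around the preset point, in degrees:
    0 = straight ahead (up on screen), negative = left, positive = right.
    Screen y grows downward, hence the flipped dy."""
    dx = touch_xy[0] - preset_xy[0]
    dy = preset_xy[1] - touch_xy[1]
    return math.degrees(math.atan2(dx, dy))
```

For example, a touch directly above the preset point yields 0 degrees, and a touch directly to its right yields 90 degrees.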
Therefore, in the 3D sound effect implementation method provided by the embodiments of the present application, the orientation information of the 3D sound effect is obtained through the electronic device's own hardware, the audio signal is played with a 3D sound effect, and the universality of 3D sound effect applications is improved. In addition, no peripheral equipment is needed to sense the orientation information of the virtual sound source required by the user, which reduces cost.
The embodiment of the application further provides a 3D sound effect implementation device 300, which can be integrated in an electronic device, and the electronic device can be an intelligent terminal device such as a smart phone and a tablet computer.
As shown in fig. 9, the 3D sound effect implementing apparatus 300 may include: the system comprises a direction acquisition module 31, a selection module 32, an adjustment module 33 and a playing module 34. Wherein:
a direction obtaining module 31, configured to obtain current direction information of the virtual sound source when the audio signal is detected;
a selecting module 32, configured to select a target signal adjustment parameter from a sample parameter set according to the current orientation information;
an adjusting module 33, configured to adjust the audio signal based on the target signal adjusting parameter;
and a playing module 34 for playing the adjusted audio signal.
In some embodiments, the position obtaining module 31 may be specifically configured to:
acquiring a deflection angle and a deflection direction of the electronic equipment relative to a horizontal plane;
and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
In some embodiments, the position obtaining module 31 may be specifically configured to:
starting a camera of the electronic equipment to acquire a head portrait of a current user;
determining the deflection angle and the deflection direction of the user head portrait relative to a preset head portrait;
and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
In some embodiments, referring to fig. 10, the 3D sound effect implementing apparatus 300 may further include:
the interface generating module 35 is configured to determine a deflection angle and a deflection direction of a user avatar relative to a preset avatar after a camera of the electronic device is started to obtain the current user avatar, and generate an information display interface, where the information display interface includes a first area and a second area, and location information of a virtual user is displayed in the second area;
a first display module 36, configured to display the user avatar in the first area in real time;
and a second display module 37, configured to display the virtual sound source in the second area in real time according to the orientation information and the position information of the virtual user after taking the deflection angle and the deflection direction as the current orientation information of the virtual sound source.
In some embodiments, the position obtaining module 31 may be specifically configured to:
acquiring voice information input by a user, wherein the voice information comprises a deflection direction and a deflection angle;
recognizing the voice information to obtain a recognition result;
and determining the current orientation information of the virtual sound source based on the identification result.
In some embodiments, the position obtaining module 31 may be specifically configured to:
starting a target application interface;
acquiring touch position information of a user in a preset display area on the target application interface;
determining position difference information between the touch position information and preset position information on the display interface;
determining orientation information of the audio source based on the location difference information.
In some embodiments, referring to fig. 11, the 3D sound effect implementing apparatus 300 may further include:
a sample obtaining module 38, configured to obtain, before detecting an audio signal, a plurality of sample orientation information and corresponding signal adjustment parameters under the sample orientation information;
a building module 39, configured to build a mapping relationship between the sample orientation information and the signal adjustment parameter, and add the mapping relationship, the sample orientation information, and the signal adjustment parameter to a sample parameter set;
the selecting module 32 may be specifically configured to: selecting target sample azimuth information matched with the current azimuth information from a sample parameter set; and acquiring a signal adjusting parameter corresponding to the target sample position information from the sample parameter set based on the mapping relation, wherein the signal adjusting parameter is used as the target signal adjusting parameter.
In some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, the signal conditioning parameter comprises a first delay conditioning parameter and a second delay parameter; the adjusting module 33 is specifically configured to:
adjusting an output time of a first sub audio signal based on the first delay adjustment parameter;
adjusting an output time of a second sub audio signal based on the second delay adjustment parameter.
In some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, the signal adjustment parameter comprises a first volume adjustment parameter and a second volume adjustment parameter; the adjusting module 33 is specifically configured to:
adjusting a volume level of a first sub audio signal based on the first volume adjustment parameter;
adjusting the volume of a second sub audio signal based on the second volume adjustment parameter.
Thus, the embodiments of the present application provide a 3D sound effect implementation apparatus, which acquires the current orientation information of a virtual sound source, selects a target signal adjustment parameter from a sample parameter set according to the current orientation information, adjusts the audio signal based on the target signal adjustment parameter, and plays the adjusted audio signal. By acquiring the orientation information of the 3D sound effect in this way, the apparatus improves the universality of 3D sound effect applications; in addition, no peripheral equipment is needed to sense the orientation information of the virtual sound source required by the user, which reduces cost.
The embodiment of the application also provides the electronic equipment. Referring to fig. 12, an electronic device 500 includes a processor 501 and a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is a control center of the electronic device 500, connects various parts of the whole electronic device by using various interfaces and lines, executes various functions of the electronic device 500 by running or loading a computer program stored in the memory 502, and calls data stored in the memory 502, and processes the data, thereby performing overall monitoring of the electronic device 500.
The memory 502 may be used to store software programs and modules; the processor 501 executes various functional applications and data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required for at least one function, and the like, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
In this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more computer programs into the memory 502, and runs the computer programs stored in the memory 502, thereby implementing the following functions:
when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
selecting target signal adjusting parameters from the sample parameter set according to the current orientation information;
adjusting the audio signal based on the target signal adjustment parameter;
and playing the adjusted audio signal.
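The four functions above can be illustrated with a minimal control-flow sketch. Everything in it is hypothetical: the function names (`on_audio_detected`, `get_orientation`, and so on) are placeholders introduced for illustration and are not APIs defined by this disclosure.

```python
# Hypothetical control-flow sketch of the claimed method; all names are
# illustrative placeholders, not APIs defined by the disclosure.

def on_audio_detected(audio_signal, get_orientation, select_params, adjust, play):
    orientation = get_orientation()          # current orientation of the virtual sound source
    params = select_params(orientation)      # target signal adjustment parameters
    adjusted = adjust(audio_signal, params)  # e.g. per-channel delay/volume adjustment
    play(adjusted)                           # play the adjusted audio signal
    return adjusted
```

The concrete strategy behind each step — how the orientation is acquired and how the signal is adjusted — varies across the embodiments described below.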
In some embodiments, when obtaining the current orientation information of the virtual sound source, the processor 501 performs the following steps:
acquiring a deflection angle and a deflection direction of the electronic equipment relative to a horizontal plane;
and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
In some embodiments, when obtaining the current orientation information of the virtual sound source, the processor 501 performs the following steps:
starting a camera of the electronic equipment to acquire a head portrait of a current user;
determining the deflection angle and the deflection direction of the head portrait of the user relative to a preset head portrait;
and determining the current azimuth information of the virtual sound source according to the deflection angle and the deflection direction.
In some embodiments, after the camera of the electronic device is started to obtain the current head portrait of the user, and before the deflection angle and the deflection direction of the head portrait of the user with respect to the preset head portrait are determined, the processor 501 performs the following steps:
generating an information display interface, wherein the information display interface comprises a first area and a second area, and position information of a virtual user is displayed in the second area;
displaying a user head portrait in a first area in real time;
after the deflection angle and the deflection direction are used as the current azimuth information of the virtual sound source, the processor 501 further performs the following steps:
and displaying the virtual sound source in the second area in real time according to the azimuth information and the position information of the virtual user.
In some embodiments, when obtaining the current orientation information of the virtual sound source, the processor 501 performs the following steps:
acquiring voice information input by a user, wherein the voice information comprises a deflection direction and a deflection angle;
recognizing the voice information to obtain a recognition result;
current orientation information of the virtual sound source is determined based on the recognition result.
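As a sketch of how a recognition result containing a deflection direction and a deflection angle might be converted into orientation information, consider the following. The phrase format ("left 30 degrees") and the signed-azimuth convention (negative meaning left) are assumptions for illustration, not specified by the disclosure.

```python
import re

# Hypothetical parser: extracts a deflection direction and angle from a
# recognition result such as "left 30 degrees". The phrase format and the
# signed-azimuth convention (negative = left) are assumptions.
def orientation_from_recognition(text):
    m = re.search(r"(left|right)\s+(\d+(?:\.\d+)?)", text.lower())
    if m is None:
        return None                      # no orientation found in the result
    direction, angle = m.group(1), float(m.group(2))
    return -angle if direction == "left" else angle
```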
In some embodiments, when obtaining the current orientation information of the virtual sound source, the processor 501 performs the following steps:
starting a target application interface;
acquiring touch position information of a user in a preset display area on a target application interface;
determining position difference information between the touch position information and preset position information on the display interface;
and determining the current orientation information of the virtual sound source based on the position difference information.
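One possible realization of this touch-based embodiment: the position difference between the touch point and a preset reference point (for example, the centre of the display area) is converted into an azimuth and a distance. The coordinate convention here (screen pixels, with y growing downwards) is an assumption.

```python
import math

# Hypothetical conversion of a touch position into orientation information.
# `preset` is the preset position on the display interface; screen
# coordinates in pixels with y growing downwards are assumed.
def orientation_from_touch(touch, preset):
    dx = touch[0] - preset[0]
    dy = preset[1] - touch[1]            # flip so "up" on screen means "ahead"
    azimuth = math.degrees(math.atan2(dx, dy))   # signed azimuth in degrees
    distance = math.hypot(dx, dy)                # position-difference magnitude
    return azimuth, distance
```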
In some embodiments, prior to detecting the audio signal, the processor 501 further performs the steps of:
acquiring a plurality of sample position information and corresponding signal adjusting parameters under the sample position information;
establishing a mapping relation between the sample orientation information and the signal adjusting parameters, and adding the mapping relation, the sample orientation information and the signal adjusting parameters into a sample parameter set;
when determining the target signal adjustment parameters from the sample parameter set according to the current orientation information, the processor 501 performs the following steps:
selecting target sample azimuth information matched with the current azimuth information from the sample parameter set;
and acquiring a signal adjusting parameter corresponding to the target sample azimuth information from the sample parameter set based on the mapping relation, and taking the signal adjusting parameter as a target signal adjusting parameter.
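The sample parameter set and the matching step can be sketched as a simple lookup table. The nearest-neighbour matching rule and the parameter values below are illustrative assumptions; the disclosure only requires that the target sample orientation "matches" the current orientation.

```python
# Hypothetical sample parameter set: each sample orientation (azimuth in
# degrees) maps to per-channel signal adjustment parameters. The values and
# the nearest-neighbour matching rule are illustrative assumptions.
SAMPLE_PARAMETER_SET = {
    -90: {"delay_ms": (0.0, 0.6), "gain": (1.0, 0.4)},   # source to the left
      0: {"delay_ms": (0.0, 0.0), "gain": (1.0, 1.0)},   # source straight ahead
     90: {"delay_ms": (0.6, 0.0), "gain": (0.4, 1.0)},   # source to the right
}

def target_signal_params(current_orientation_deg):
    # Select the sample orientation that best matches the current one,
    # then return its signal adjustment parameters via the mapping.
    nearest = min(SAMPLE_PARAMETER_SET,
                  key=lambda s: abs(s - current_orientation_deg))
    return SAMPLE_PARAMETER_SET[nearest]
```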
In some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, and the signal adjustment parameters comprise a first delay adjustment parameter and a second delay adjustment parameter;
when adjusting the audio signal based on the signal adjustment parameter, the processor 501 performs the following steps:
adjusting an output time of the first sub audio signal based on the first delay adjustment parameter;
and adjusting the output time of the second sub audio signal based on the second time delay adjusting parameter.
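A minimal sketch of this delay-based adjustment, assuming delays expressed in whole samples: shifting one sub audio signal later than the other produces the interaural time difference that cues direction.

```python
# Hypothetical per-channel delay adjustment: each sub audio signal is
# shifted by its own delay, expressed here in whole samples for simplicity.
def apply_delays(first, second, first_delay, second_delay):
    return ([0.0] * first_delay + first,
            [0.0] * second_delay + second)
```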
In some embodiments, the audio signal comprises a first sub audio signal and a second sub audio signal, the signal adjustment parameter comprises a first volume adjustment parameter and a second volume adjustment parameter;
when adjusting the audio signal based on the signal adjustment parameter, the processor 501 performs the following steps:
adjusting the volume of the first sub audio signal based on the first volume adjustment parameter;
and adjusting the volume of the second sub audio signal based on the second volume adjusting parameter.
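The volume-based adjustment can be sketched the same way, assuming a simple linear gain per channel (producing an interaural level difference):

```python
# Hypothetical per-channel volume adjustment: each sub audio signal is
# scaled by its own gain parameter (linear scale, not dB).
def apply_gains(first, second, first_gain, second_gain):
    return ([x * first_gain for x in first],
            [x * second_gain for x in second])
```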
As can be seen from the above, when an audio signal is detected, the electronic device of the embodiment of the present application acquires the current orientation information of the virtual sound source; selects target signal adjustment parameters from the sample parameter set according to the current orientation information; adjusts the audio signal based on the target signal adjustment parameters; and plays the adjusted audio signal. Because the electronic device acquires the orientation information for the 3D sound effect by itself, the universality of 3D sound effect applications is improved; in addition, no external equipment is needed to sense the orientation of the virtual sound source desired by the user, which reduces cost.
Referring to fig. 13, in some embodiments, the electronic device 500 may further include: a display 503, a radio frequency circuit 504, an audio circuit 505, a sensor 506, a camera 507, a microphone 508, and a power supply 509. The display 503, the radio frequency circuit 504, the audio circuit 505, the sensor 506, the camera 507, the microphone 508, and the power supply 509 are all electrically connected to the processor 501.
The display 503 may be used to display information entered by or provided to the user as well as various graphical user interfaces, which may be made up of graphics, text, icons, video, and any combination thereof.
For example, in the embodiment of the present application, the display 503 may be configured to display the information display interface, display a user head portrait in a first area of the information display interface, and display the virtual sound source in a second area.
For another example, in this embodiment of the application, the display 503 may also be configured to display the target application interface, and may display the touch position of the user in real time in a preset display area of the target application interface.
The radio frequency circuit 504 may be used for transceiving radio frequency signals, so as to establish wireless communication with a network device or other electronic devices and exchange signals with them.
The audio circuit 505 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone.
In the embodiment of the present application, the electronic device may have at least two sound channels (i.e., there may be at least two sound sources), each corresponding to a different speaker. Inside the electronic device, the speaker converts the received electrical signal into an audio signal; after the audio signal is adjusted based on the signal adjustment parameters, the adjusted audio signal is transmitted outward, thereby realizing 3D sound effect playback.
The sensor 506 is used to collect external environment information, and may include an ambient light sensor, an acceleration sensor, a gyroscope, a motion sensor, and other sensors. For example, in the embodiment of the present application, information such as the deflection angle and deflection direction of the electronic device relative to the horizontal plane may be detected by the acceleration sensor.
The camera 507 is used to collect external image information, and may be a single camera or a dual camera. In this embodiment, the camera 507 may be configured to acquire the user head portrait in real time when started and transmit it to the processor 501 for processing, so as to monitor the deflection angle and deflection direction of the user head portrait relative to a preset head portrait.
The microphone 508 is configured to receive an externally input sound signal and convert it into an electrical signal. In this embodiment, the microphone 508 may be configured to detect and receive a voice signal input by the user, convert the voice signal into an electrical signal, and send the electrical signal to the processor for processing, where a voice recognition algorithm identifies the information carried in the voice signal. In this way, the current orientation information of the virtual sound source is obtained from the voice signal input by the user.
The power supply 509 may be used to power various components of the electronic device 500. In some embodiments, power supply 509 may be logically coupled to processor 501 through a power management system to manage charging, discharging, and power consumption management functions through the power management system.
Although not shown in fig. 13, the electronic device 500 may further include a bluetooth module, a speaker, a flash, and the like, which are not described in detail herein.
An embodiment of the present application further provides a storage medium, where the storage medium stores a computer program, and when the computer program runs on a computer, the computer is enabled to execute the 3D sound effect implementation method in any of the embodiments, for example: when an audio signal is detected, acquiring current azimuth information of a virtual sound source; selecting target signal adjusting parameters from the sample parameter set according to the current orientation information; adjusting the audio signal based on the target signal adjustment parameter; and playing the adjusted audio signal.
For another example, when acquiring the current azimuth information of the virtual sound source, specifically acquiring a deflection angle and a deflection direction of the electronic device relative to a horizontal plane; and determining the current azimuth information of the virtual sound source according to the deflection angle and the deflection direction.
For another example, when the current azimuth information of the virtual sound source is obtained, a camera of the electronic device is specifically started to obtain a current user head portrait; determining the deflection angle and the deflection direction of the head portrait of the user relative to a preset head portrait; and determining the current azimuth information of the virtual sound source according to the deflection angle and the deflection direction.
For another example, when the current direction information of the virtual sound source is obtained, the voice information input by the user is obtained, and the voice information comprises a deflection direction and a deflection angle; recognizing the voice information to obtain a recognition result; current orientation information of the virtual sound source is determined based on the recognition result.
For another example, when the current position information of the virtual sound source is obtained, the target application interface is started; acquiring touch position information of a user in a preset display area on a target application interface; according to the position difference information between the touch position information and the preset position information on the display interface; and determining the azimuth information of the sound source based on the position difference information.
In the embodiment of the present application, the storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the 3D sound effect implementation apparatus of the embodiment of the present application, the functional modules may be integrated into one processing chip, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The method, apparatus, storage medium, and electronic device for implementing a 3D sound effect provided by the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (20)

  1. A 3D sound effect implementation method, applied to an electronic device, wherein the 3D sound effect implementation method comprises the following steps:
    when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
    selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
    adjusting the audio signal based on the target signal adjustment parameter;
    and playing the adjusted audio signal.
  2. The 3D sound effect implementation method of claim 1 wherein the obtaining of the current position information of the virtual sound source comprises:
    acquiring a deflection angle and a deflection direction of the electronic equipment relative to a horizontal plane;
    and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
  3. The 3D sound effect implementation method of claim 1 wherein the obtaining of the current position information of the virtual sound source comprises:
    starting a camera of the electronic equipment to acquire a head portrait of a current user;
    determining the deflection angle and the deflection direction of the user head portrait relative to a preset head portrait;
    and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
  4. The 3D sound effect implementation method of claim 3, wherein after the camera of the electronic device is started to obtain the current user head portrait, and before the deflection angle and the deflection direction of the user head portrait relative to the preset head portrait are determined, the method further comprises:
    generating an information display interface, wherein the information display interface comprises a first area and a second area, and position information of a virtual user is displayed in the second area;
    displaying the user avatar in the first area in real time;
    after the deflection angle and the deflection direction are used as the current orientation information of the virtual sound source, the method further comprises the following steps:
    and displaying the virtual sound source in the second area in real time according to the azimuth information and the position information of the virtual user.
  5. The 3D sound effect implementation method of claim 1 wherein the obtaining of the current position information of the virtual sound source comprises:
    acquiring voice information input by a user, wherein the voice information comprises a deflection direction and a deflection angle;
    recognizing the voice information to obtain a recognition result;
    and determining the current orientation information of the virtual sound source based on the identification result.
  6. The 3D sound effect implementation method of claim 1 wherein the obtaining of the current position information of the virtual sound source comprises:
    starting a target application interface;
    acquiring touch position information of a user in a preset display area on the target application interface;
    determining position difference information between the touch position information and preset position information on the display interface;
    and determining the current orientation information of the virtual sound source based on the position difference information.
  7. The 3D sound effect implementation method of claim 1, wherein before detecting the audio signal, further comprising:
    acquiring a plurality of sample orientation information and corresponding signal adjustment parameters under the sample orientation information;
    establishing a mapping relation between the sample orientation information and the signal adjusting parameters, and adding the mapping relation, the sample orientation information and the signal adjusting parameters into a sample parameter set;
    determining target signal adjustment parameters from a sample parameter set according to the current position information, comprising:
    selecting target sample azimuth information matched with the current azimuth information from a sample parameter set;
    and acquiring a signal adjusting parameter corresponding to the target sample position information from the sample parameter set based on the mapping relation, wherein the signal adjusting parameter is used as the target signal adjusting parameter.
  8. The 3D sound effect implementation method of claim 1, wherein the audio signal comprises a first sub audio signal and a second sub audio signal, and the signal adjustment parameters comprise a first delay adjustment parameter and a second delay adjustment parameter;
    the adjusting the audio signal based on the signal adjustment parameter includes:
    adjusting an output time of a first sub audio signal based on the first delay adjustment parameter;
    adjusting an output time of a second sub audio signal based on the second delay adjustment parameter.
  9. The 3D sound effect implementation method of claim 1 wherein the audio signals comprise a first sub audio signal and a second sub audio signal, the signal adjustment parameters comprise a first volume adjustment parameter and a second volume adjustment parameter;
    the adjusting the audio signal based on the signal adjustment parameter includes:
    adjusting a volume level of a first sub audio signal based on the first volume adjustment parameter;
    adjusting the volume of a second sub audio signal based on the second volume adjustment parameter.
  10. A 3D sound effect implementation device, applied to an electronic device, wherein the 3D sound effect implementation device comprises:
    the direction acquisition module is used for acquiring the current direction information of the virtual sound source when the audio signal is detected;
    the selection module is used for selecting target signal adjustment parameters from a sample parameter set according to the current azimuth information;
    an adjustment module to adjust the audio signal based on the target signal adjustment parameter;
    and the playing module is used for playing the adjusted audio signal.
  11. A storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the steps of:
    when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
    selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
    adjusting the audio signal based on the target signal adjustment parameter;
    and playing the adjusted audio signal.
  12. An electronic device comprising a processor and a storage medium having stored therein a plurality of instructions, the processor loading the instructions to perform the steps of:
    when an audio signal is detected, acquiring current azimuth information of a virtual sound source;
    selecting target signal adjusting parameters from a sample parameter set according to the current orientation information;
    adjusting the audio signal based on the target signal adjustment parameter;
    and playing the adjusted audio signal.
  13. The electronic device of claim 12, wherein when obtaining the current orientation information of the virtual sound source, the processor performs the following steps:
    acquiring a deflection angle and a deflection direction of the electronic equipment relative to a horizontal plane;
    and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
  14. The electronic device of claim 12, wherein when obtaining the current orientation information of the virtual sound source, the processor performs the following steps:
    starting a camera of the electronic equipment to acquire a head portrait of a current user;
    determining the deflection angle and the deflection direction of the user head portrait relative to a preset head portrait;
    and determining the current orientation information of the virtual sound source according to the deflection angle and the deflection direction.
  15. The electronic device of claim 14, wherein after the camera of the electronic device is started to obtain the current user head portrait, and before the deflection angle and the deflection direction of the user head portrait relative to the preset head portrait are determined, the processor performs the following steps:
    generating an information display interface, wherein the information display interface comprises a first area and a second area, and position information of a virtual user is displayed in the second area;
    displaying the user avatar in the first area in real time;
    after the deflection angle and the deflection direction are taken as the current orientation information of the virtual sound source, the processor further executes the following steps:
    and displaying the virtual sound source in the second area in real time according to the azimuth information and the position information of the virtual user.
  16. The electronic device of claim 12, wherein when obtaining the current orientation information of the virtual sound source, the processor performs the following steps:
    acquiring voice information input by a user, wherein the voice information comprises a deflection direction and a deflection angle;
    recognizing the voice information to obtain a recognition result;
    and determining the current orientation information of the virtual sound source based on the identification result.
  17. The electronic device of claim 12, wherein when obtaining the current orientation information of the virtual sound source, the processor performs the following steps:
    starting a target application interface;
    acquiring touch position information of a user in a preset display area on the target application interface;
    determining position difference information between the touch position information and preset position information on the display interface;
    and determining the current orientation information of the virtual sound source based on the position difference information.
  18. The electronic device of claim 12, wherein prior to detecting the audio signal, the processor further performs the steps of:
    acquiring a plurality of sample orientation information and corresponding signal adjustment parameters under the sample orientation information;
    establishing a mapping relation between the sample orientation information and the signal adjusting parameters, and adding the mapping relation, the sample orientation information and the signal adjusting parameters into a sample parameter set;
    in determining target signal adjustment parameters from a sample parameter set in accordance with the current position information, the processor performs the steps of:
    selecting target sample azimuth information matched with the current azimuth information from a sample parameter set;
    and acquiring a signal adjusting parameter corresponding to the target sample position information from the sample parameter set based on the mapping relation, wherein the signal adjusting parameter is used as the target signal adjusting parameter.
  19. The electronic device of claim 12, wherein the audio signal comprises a first sub audio signal and a second sub audio signal, and the signal adjustment parameters comprise a first delay adjustment parameter and a second delay adjustment parameter;
    upon adjusting the audio signal based on the signal adjustment parameter, the processor performs the steps of:
    adjusting an output time of a first sub audio signal based on the first delay adjustment parameter;
    adjusting an output time of a second sub audio signal based on the second delay adjustment parameter.
  20. The electronic device of claim 12, wherein the audio signal comprises a first sub-audio signal and a second sub-audio signal, the signal adjustment parameter comprising a first volume adjustment parameter and a second volume adjustment parameter;
    upon adjusting the audio signal based on the signal adjustment parameter, the processor performs the steps of:
    adjusting a volume level of a first sub audio signal based on the first volume adjustment parameter;
    adjusting the volume of a second sub audio signal based on the second volume adjustment parameter.
CN201880098267.5A 2018-11-20 2018-11-20 3D sound effect implementation method and device, storage medium and electronic equipment Pending CN112771893A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116506 WO2020102994A1 (en) 2018-11-20 2018-11-20 3d sound effect realization method and apparatus, and storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN112771893A true CN112771893A (en) 2021-05-07

Family

ID=70773737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880098267.5A Pending CN112771893A (en) 2018-11-20 2018-11-20 3D sound effect implementation method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN112771893A (en)
WO (1) WO2020102994A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070931B (en) * 2021-11-25 2023-08-15 咪咕音乐有限公司 Sound effect adjusting method, device, equipment and computer readable storage medium
WO2023173285A1 (en) * 2022-03-15 2023-09-21 深圳市大疆创新科技有限公司 Audio processing method and apparatus, electronic device, and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205880A (en) * 2012-03-29 2014-12-10 英特尔公司 Audio control based on orientation
CN105183421A (en) * 2015-08-11 2015-12-23 中山大学 Method and system for realizing virtual reality three-dimensional sound effect
CN107249166A (en) * 2017-06-19 2017-10-13 依偎科技(南昌)有限公司 A kind of earphone stereo realization method and system of complete immersion
CN108156561A (en) * 2017-12-26 2018-06-12 广州酷狗计算机科技有限公司 Processing method, device and the terminal of audio signal
US20180324539A1 (en) * 2017-05-08 2018-11-08 Microsoft Technology Licensing, Llc Method and system of improving detection of environmental sounds in an immersive environment
CN108810794A (en) * 2017-04-27 2018-11-13 蒂雅克股份有限公司 Target position setting device and acoustic image locating device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767618B2 (en) * 2015-01-28 2017-09-19 Samsung Electronics Co., Ltd. Adaptive ambisonic binaural rendering
US9881647B2 (en) * 2016-06-28 2018-01-30 VideoStitch Inc. Method to align an immersive video and an immersive sound field


Also Published As

Publication number Publication date
WO2020102994A1 (en) 2020-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507