WO2020102994A1 - 3d音效实现方法、装置、存储介质及电子设备 - Google Patents

3D sound effect implementation method and apparatus, storage medium, and electronic device

Info

Publication number
WO2020102994A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio signal
position information
adjustment parameter
information
sound source
Prior art date
Application number
PCT/CN2018/116506
Other languages
English (en)
French (fr)
Inventor
陈岩
Original Assignee
深圳市欢太科技有限公司
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市欢太科技有限公司, Oppo广东移动通信有限公司
Priority to CN201880098267.5A priority Critical patent/CN112771893A/zh
Priority to PCT/CN2018/116506 priority patent/WO2020102994A1/zh
Publication of WO2020102994A1 publication Critical patent/WO2020102994A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S7/00 - Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • This application relates to the field of electronic technology, and in particular, to a 3D sound effect realization method and an electronic device.
  • the embodiments of the present application provide a 3D sound effect implementation method, device, storage medium, and electronic equipment, which can improve the versatility of the 3D sound effect.
  • an embodiment of the present application provides a 3D sound effect implementation method, which is applied to an electronic device and includes:
  • an embodiment of the present application provides a 3D sound effect implementation device, which is applied to an electronic device, and the 3D sound effect implementation device includes:
  • the orientation acquisition module is used to acquire the current orientation information of the virtual sound source when an audio signal is detected
  • a selection module for selecting a target signal adjustment parameter from the sample parameter set according to the current position information
  • An adjustment module for adjusting the audio signal based on the target signal adjustment parameter
  • the playback module is used to play the adjusted audio signal.
  • an embodiment of the present application provides a storage medium in which multiple instructions are stored, and the instructions are adapted to be loaded by a processor to perform the following steps:
  • an embodiment of the present application provides an electronic device, including a processor and a storage medium, where multiple instructions are stored in the storage medium, and the processor loads the instructions to perform the following steps:
  • FIG. 1 is a first schematic flowchart of a method for implementing a 3D sound effect provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a first application scenario of a method for implementing a 3D sound effect provided by an embodiment of the present application.
  • FIG. 3 is a second schematic flowchart of a method for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second application scenario of a method for implementing a 3D sound effect provided by an embodiment of the present application.
  • FIG. 5 is a third schematic flowchart of a 3D sound effect implementation method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a third application scenario of a method for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 7 is a fourth schematic flowchart of a method for implementing a 3D sound effect provided by an embodiment of the present application.
  • FIG. 8 is a fifth schematic flowchart of a method for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 9 is a first schematic structural diagram of a device for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a second structure of a device for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 11 is a third schematic structural diagram of a device for implementing 3D sound effects provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 13 is another schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Embodiments of the present application provide a method, device, storage medium, and electronic device for implementing 3D sound effects, which will be described in detail below.
  • the 3D sound effect implementation method is applied to electronic devices.
  • the electronic device may be a smart phone, a tablet computer and other smart terminals.
  • the 3D sound effect realization method may include the following steps:
  • the audio signal may be produced by the built-in speaker of the electronic device converting a received electrical signal, or by an external earphone device converting the received electrical signal.
  • a vibration detection device may be provided in the electronic device to detect the vibration of the speaker, thereby implementing monitoring of the audio signal.
  • the current position information of the virtual sound source corresponds to the position, in actual physical space, at which the user wants the sound source to be perceived. For example, if the user wants the sound source to be deflected by an angle θ to his left, the positional relationship between the virtual sound source and the virtual user can be as shown in FIG. 2.
  • there may be multiple ways to determine the current position information of the virtual sound source. For example, it can be set by the user in software on the electronic device; it can be determined from the placement state of the electronic device itself; or it can be determined from the positional relationship between the electronic device and the user. These options are described in detail below.
  • the sample parameter set includes a plurality of preset signal adjustment parameters, and the electronic device may select a matching signal adjustment parameter from the sample parameter set based on the currently acquired position information of the virtual sound source.
  • in some embodiments, before detecting the audio signal, the method further includes: acquiring multiple pieces of sample orientation information and the signal adjustment parameters corresponding to each sample orientation;
  • establishing a mapping relationship between the sample orientation information and the signal adjustment parameters, and adding the mapping relationship, the sample orientation information, and the signal adjustment parameters to the sample parameter set.
  • specifically, the impulse response of the head related transfer function (Head Related Transfer Function, HRTF) under the sampled orientation can be obtained. For example, if the sampled orientation is a deflection of θ to the left or right of the user, the impulse responses of the left ear and the right ear can be denoted as h_L(θ) and h_R(θ), respectively; the impulse responses can be measured manually or by machine by debugging the audio signal step by step.
  • assuming the input is a mono signal s, the left and right output signals are l_out = s * h_L(θ) and r_out = s * h_R(θ), respectively, where * denotes convolution.
  • assuming the input is a two-channel signal with channels l and r, the left and right output signals are l_out = l * h_L(θ) and r_out = r * h_R(θ), respectively.
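  • as a concrete illustration of the convolution step above, the following is a minimal Python/NumPy sketch; the function name render_binaural and the HRIR array names are illustrative and not part of the patent:

```python
from __future__ import annotations

import numpy as np

def render_binaural(signal: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Convolve the input with the left/right impulse responses h_L(theta), h_R(theta)
    measured for one sampled orientation, producing the two output channels."""
    if signal.ndim == 1:                        # mono source s
        left_out = np.convolve(signal, hrir_left)
        right_out = np.convolve(signal, hrir_right)
    else:                                       # two-channel source (l, r), shape (2, n)
        left_out = np.convolve(signal[0], hrir_left)
        right_out = np.convolve(signal[1], hrir_right)
    return left_out, right_out
```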
  • l_out and r_out are then passed through a reverberator to eliminate the in-head localization effect.
  • the reverberator can be designed with an artificial reverberation algorithm that uses four parallel comb filters.
  • the system function of each comb filter is H(z) = z^(-D) / (1 - a·z^(-D)).
  • D_1 to D_4 denote the delays of the four comb filters, which can be 14.61 ms, 18.83 ms, 20.74 ms, and 22.15 ms, respectively; a_1 to a_4 denote the attenuation gains of the four comb filters, which can be 0.84, 0.82, 0.8, and 0.78, respectively.
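  • the following is a minimal sketch of the four-parallel-comb-filter reverberator with the delays and gains quoted above; the 44.1 kHz sample rate and the feedback-comb implementation are assumptions made for illustration:

```python
from __future__ import annotations

import numpy as np

FS = 44100                                  # assumed sample rate (Hz), not given in the text
DELAYS_MS = (14.61, 18.83, 20.74, 22.15)    # D1..D4 from the description
GAINS = (0.84, 0.82, 0.80, 0.78)            # a1..a4 from the description

def comb_filter(x: np.ndarray, delay: int, gain: float) -> np.ndarray:
    """Feedback comb filter y[n] = x[n-D] + a*y[n-D], i.e. H(z) = z^-D / (1 - a*z^-D)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        if n >= delay:
            y[n] = x[n - delay] + gain * y[n - delay]
    return y

def reverberate(x: np.ndarray) -> np.ndarray:
    """Sum of the four parallel comb filters, applied to l_out / r_out to weaken
    the in-head localization effect."""
    out = sum(comb_filter(x, int(round(d * FS / 1000)), a)
              for d, a in zip(DELAYS_MS, GAINS))
    return out / len(DELAYS_MS)             # simple normalization of the parallel sum
```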
  • in practice, the orientation can be sampled; with the virtual sound source fixed at the current sample orientation, the playback parameters of the audio signal (such as volume and delay) are then tuned step by step, manually or by machine, until playback is perceived as coming from that orientation in physical space. The playback parameters obtained from this tuning are used as the signal adjustment parameters corresponding to the current sample orientation information.
  • the step "determine the target signal adjustment parameter from the sample parameter set according to the current position information” may include the following process:
  • target sample orientation information matching the current position information is selected from the sample parameter set; based on the mapping relationship, the signal adjustment parameter corresponding to the target sample orientation information is then obtained from the sample parameter set as the target signal adjustment parameter, as in the lookup sketch below.
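  • the sample parameter set can be viewed as a lookup table from sampled orientations to tuned playback parameters. The sketch below uses hypothetical field names and placeholder values, and assumes a nearest-azimuth matching rule, since the text only states that the target sample orientation "matches" the current orientation:

```python
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class SignalAdjustParams:
    """Tuned playback parameters for one sampled orientation (illustrative fields)."""
    left_delay_ms: float
    right_delay_ms: float
    left_gain: float
    right_gain: float

# Sample parameter set: sampled azimuth in degrees (positive = left) -> tuned parameters.
# The numeric values here are placeholders, not measured data.
SAMPLE_PARAM_SET = {
    -30.0: SignalAdjustParams(0.25, 0.0, 0.8, 1.0),   # source to the right: left ear lags
    0.0: SignalAdjustParams(0.0, 0.0, 1.0, 1.0),      # straight ahead
    30.0: SignalAdjustParams(0.0, 0.25, 1.0, 0.8),    # source to the left: right ear lags
}

def select_target_params(current_azimuth_deg: float) -> SignalAdjustParams:
    """Pick the entry whose sample orientation best matches the current orientation."""
    nearest = min(SAMPLE_PARAM_SET, key=lambda az: abs(az - current_azimuth_deg))
    return SAMPLE_PARAM_SET[nearest]
```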
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter includes a first delay adjustment parameter and a second delay parameter
  • the step "adjusting the audio signal based on the signal adjustment parameters” may include the following process:
  • the output time of the first sub-audio signal is adjusted based on the first delay adjustment parameter, and the output time of the second sub-audio signal is adjusted based on the second delay adjustment parameter.
  • because sound waves take time to travel through the air, when the sound source is not directly in front of or behind the listener, the ear on the same side as the source hears the sound slightly earlier than the other ear. This small time difference (less than about 0.6 ms) can still be resolved by the human ear, and the brain analyzes it to obtain the position of the sound. The time difference is useful for determining the direction of sounds of various frequencies; it mainly refers to the difference between the instants at which the sound first reaches each ear, so it can be used as orientation information for localizing the sound source.
  • the position of the sound source may be located based on the time difference between the two sub-audio signals.
  • with the orientation information determined, the output time difference between the sub-audio signals can be obtained by inverting the orientation information.
  • the delay parameter of each sub-signal can then be determined from the time difference, so the output time of each corresponding sub-audio signal can be adjusted based on its determined delay parameter, thereby positioning the virtual sound source in actual physical space; a sketch follows below.
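  • a minimal sketch of the delay adjustment, assuming a 44.1 kHz sample rate; the helper names are illustrative:

```python
from __future__ import annotations

import numpy as np

def apply_delay(channel: np.ndarray, delay_ms: float, fs: int = 44100) -> np.ndarray:
    """Delay one sub-audio signal by delay_ms milliseconds (zero-padded at the start)."""
    lag = int(round(delay_ms * fs / 1000))
    return np.concatenate([np.zeros(lag), channel])

def adjust_output_times(first_sub: np.ndarray, second_sub: np.ndarray,
                        first_delay_ms: float, second_delay_ms: float,
                        fs: int = 44100) -> tuple[np.ndarray, np.ndarray]:
    """Shift each sub-audio signal by its own delay parameter so that the channel
    nearer the virtual source leads by the interaural time difference."""
    first = apply_delay(first_sub, first_delay_ms, fs)
    second = apply_delay(second_sub, second_delay_ms, fs)
    n = max(len(first), len(second))                 # pad to a common length
    first = np.pad(first, (0, n - len(first)))
    second = np.pad(second, (0, n - len(second)))
    return first, second
```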
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter may include a first volume adjustment parameter and a second volume adjustment parameter
  • the step "adjusting the audio signal based on the signal adjustment parameter” may include the following process:
  • the volume of the first sub-audio signal is adjusted based on the first volume adjustment parameter, and the volume of the second sub-audio signal is adjusted based on the second volume adjustment parameter.
  • in some embodiments, obtaining the current position information of the virtual sound source includes: obtaining the deflection angle and deflection direction of the electronic device relative to the horizontal plane; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • the position of the sound source may likewise be conveyed by the level difference between the two sub-audio signals. Since the human ear is extremely sensitive to sound volume, volume-difference localization plays a very important role in auditory localization.
  • volume-difference localization arises because the two ears receive different sound levels from the same source. For example, when the sound source is off to the left, the sound wave reaches the left ear directly while the right ear is shadowed by the head, so the volume heard by the left ear is greater than that heard by the right ear. The farther the source is off to one side, the larger the volume difference.
  • with the orientation information determined, the output volume difference between the sub-audio signals can be obtained by inverting the orientation information. The volume adjustment parameter of each sub-signal can then be determined from the volume difference, so the output volume of each corresponding sub-audio signal can be adjusted based on its determined volume adjustment parameter, thereby positioning the virtual sound source in actual physical space; see the sketch below.
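  • a minimal sketch of the volume adjustment; the constant-power panning rule used to derive the two gains from an azimuth is an assumption for illustration only, since in the embodiments the gains come from the pre-tuned sample parameter set:

```python
from __future__ import annotations

import math

import numpy as np

def azimuth_to_gains(azimuth_deg: float) -> tuple[float, float]:
    """Illustrative level-difference rule: constant-power panning over +/-90 degrees,
    with positive azimuth meaning the source is toward the left ear."""
    theta = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    left_gain = math.cos(math.pi / 4 - theta / 2)
    right_gain = math.sin(math.pi / 4 - theta / 2)
    return left_gain, right_gain

def adjust_volumes(first_sub: np.ndarray, second_sub: np.ndarray,
                   first_gain: float, second_gain: float) -> tuple[np.ndarray, np.ndarray]:
    """Scale each sub-audio signal by its own volume adjustment parameter."""
    return first_sub * first_gain, second_sub * second_gain
```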
  • in some embodiments (refer to FIG. 3), the step of "obtaining the current position information of the virtual sound source" may include the following process: obtaining the deflection angle and deflection direction of the electronic device relative to the horizontal plane; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • when the electronic device is placed horizontally, the initial rotation angle can be automatically calibrated to 0 (refer to FIG. 4).
  • when the electronic device is rotated, the acceleration sensor built into the electronic device can detect that the deflection angle of the device relative to the horizontal plane is θ.
  • correspondingly, a companion APP can be provided on the electronic device, and the virtual sound source displayed on the APP interface is moved to the left by the azimuth angle θ relative to the virtual user (refer to FIG. 2).
  • conversely, if the electronic device is deflected to the right by θ, the virtual sound source displayed on the APP interface is moved to the right by the azimuth angle θ relative to the virtual user. A sketch of deriving the angle from the accelerometer follows.
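  • a sketch of estimating the deflection angle and direction from raw accelerometer readings; the axis convention and the use of atan2 are assumptions, as the text only states that the built-in acceleration sensor supplies the angle θ:

```python
from __future__ import annotations

import math

def deflection_from_accelerometer(ax: float, ay: float, az: float) -> tuple[float, str]:
    """Estimate the deflection angle of the device relative to the horizontal plane
    from the gravity vector reported by the acceleration sensor (in m/s^2).

    Axis convention assumed here: x points to the device's right, y points up,
    so tilting the device to the left makes gravity project onto +x."""
    angle_deg = math.degrees(math.atan2(ax, ay))
    direction = "left" if angle_deg > 0 else "right"
    return abs(angle_deg), direction

# Example: a reading with part of gravity on +x reads as a tilt to the left.
theta, direction = deflection_from_accelerometer(ax=3.4, ay=9.2, az=0.1)
print(f"virtual sound source deflected {direction} by {theta:.1f} degrees")
```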
  • in some embodiments (refer to FIG. 5), the step of "obtaining the current position information of the virtual sound source" may include the following process: starting the camera of the electronic device to obtain the current user avatar; determining the deflection angle and deflection direction of the user avatar relative to a preset avatar; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • the camera may be a single camera or a dual camera; the preset avatar may be a frontal image of the user's head obtained in advance.
  • in some embodiments (refer to FIG. 6), after the camera of the electronic device is started to obtain the current user avatar and before the deflection angle and deflection direction of the user avatar relative to the preset avatar are determined, the method further includes: generating an information display interface, where the information display interface includes a first area and a second area, and the position information of the virtual user is displayed in the second area; and displaying the user avatar in the first area in real time.
  • after the deflection angle and deflection direction are taken as the current position information of the virtual sound source, the method further includes: displaying the virtual sound source in the second area in real time according to the position information and the position information of the virtual user.
  • in some embodiments (refer to FIG. 7), the step of "obtaining the current position information of the virtual sound source" may include the following process: obtaining voice information input by the user, where the voice information includes a deflection direction and a deflection angle; recognizing the voice information to obtain a recognition result; and determining the current position information of the virtual sound source based on the recognition result.
  • the APP may provide an interface for calling the system voice assistant of the electronic device or a third-party speech recognition application to recognize the voice signal spoken by the user.
  • for example, the system microphone can be invoked through the voice entry displayed on the APP interface; when the user says "rotate left by θ", the electronic device obtains the voice information and uses a speech recognition algorithm to recognize the deflection direction and deflection angle carried in it, as in the parsing sketch below.
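  • once the recognizer returns text, extracting the deflection direction and angle is a simple parsing step. A minimal sketch, assuming the recognized text is an English phrase of the form "rotate left by 30 degrees" (the actual recognizer and command phrasing are outside this sketch):

```python
from __future__ import annotations

import re

def parse_rotation_command(text: str) -> tuple[str, float] | None:
    """Extract (deflection_direction, deflection_angle_deg) from recognized text
    such as 'rotate left by 30 degrees'; return None if the text does not match."""
    match = re.search(r"rotate\s+(left|right)\s+by\s+(\d+(?:\.\d+)?)", text.lower())
    if match is None:
        return None
    return match.group(1), float(match.group(2))

print(parse_rotation_command("Rotate left by 30 degrees"))   # ('left', 30.0)
```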
  • in some embodiments (refer to FIG. 8), the step of "obtaining the current position information of the virtual sound source" may include the following process: starting a target application interface; obtaining touch position information of the user within a preset display area on the target application interface; obtaining position difference information between the touch position information and preset position information on the display interface; and determining the position information of the sound source based on the position difference information. A geometric sketch follows.
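  • a sketch of turning a touch point into orientation information: the vector from the preset reference position (assumed to mark the virtual user within the display area) to the touch point yields a deflection angle and direction; the coordinate convention is an assumption for illustration:

```python
from __future__ import annotations

import math

def orientation_from_touch(touch_x: float, touch_y: float,
                           center_x: float, center_y: float) -> tuple[float, str]:
    """Map the position difference between the touch point and the preset reference
    point to a deflection angle and direction. 'Straight ahead' is taken as up on
    the screen; screen y grows downward."""
    dx = touch_x - center_x
    dy = center_y - touch_y
    angle_deg = math.degrees(math.atan2(dx, dy))
    direction = "right" if angle_deg > 0 else "left"
    return abs(angle_deg), direction

# A touch up and to the left of a virtual user drawn at (540, 960):
print(orientation_from_touch(340, 700, 540, 960))   # about (37.6, 'left')
```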
  • as can be seen from the above, the 3D sound effect implementation method provided by the embodiments of the present application obtains the position information for the 3D sound effect through hardware already present on the electronic device, thereby realizing 3D playback of the audio signal and improving the versatility of 3D sound effects; in addition, no peripheral is needed to sense the position information of the virtual sound source desired by the user, which reduces cost.
  • the embodiment of the present application also provides a device 300 for realizing 3D sound effects.
  • the device may be integrated in an electronic device, and the electronic device may be a smart terminal device such as a smart phone or a tablet computer.
  • the device 300 for realizing 3D sound effects may include: a position obtaining module 31, a selection module 32, an adjustment module 33, and a playing module 34, wherein:
  • the position obtaining module 31 is used to obtain the current position information of the virtual sound source when an audio signal is detected;
  • the selection module 32 is configured to select a target signal adjustment parameter from the sample parameter set according to the current position information
  • the adjustment module 33 is configured to adjust the audio signal based on the target signal adjustment parameter
  • the playing module 34 is used to play the adjusted audio signal.
  • the position acquisition module 31 may be specifically used for: obtaining the deflection angle and deflection direction of the electronic device relative to the horizontal plane; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • the position acquisition module 31 may also be specifically used for: starting the camera of the electronic device to obtain the current user avatar; determining the deflection angle and deflection direction of the user avatar relative to the preset avatar; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • the 3D sound effect implementing apparatus 300 may further include:
  • the interface generation module 35 is used to generate an information display interface after the camera of the electronic device is started to obtain the current user avatar and before the deflection angle and deflection direction of the user avatar relative to the preset avatar are determined, where the information display interface includes a first area and a second area, and the position information of the virtual user is displayed in the second area;
  • the first display module 36 is configured to display the user avatar in the first area in real time
  • the second display module 37 is configured to, after the deflection angle and deflection direction are taken as the current position information of the virtual sound source, display the virtual sound source in the second area in real time based on the position information and the position information of the virtual user.
  • the position acquisition module 31 may be specifically used for: obtaining voice information input by the user, where the voice information includes a deflection direction and a deflection angle; recognizing the voice information to obtain a recognition result; and determining the current position information of the virtual sound source based on the recognition result.
  • the position acquisition module 31 may be specifically used for: starting a target application interface; obtaining touch position information of the user within a preset display area on the target application interface; obtaining position difference information between the touch position information and preset position information on the display interface; and determining the position information of the sound source based on the position difference information.
  • the 3D sound effect implementing apparatus 300 may further include:
  • the sample acquisition module 38 is configured to acquire multiple sample orientation information and corresponding signal adjustment parameters under the sample orientation information before detecting the audio signal;
  • the building module 39 is used to establish a mapping relationship between the sample orientation information and signal adjustment parameters, and add the mapping relationship, sample orientation information and signal adjustment parameters to the sample parameter set;
  • the selection module 32 may be specifically used to: select, from the sample parameter set, target sample orientation information matching the current position information; and obtain, based on the mapping relationship, the signal adjustment parameter corresponding to the target sample orientation information from the sample parameter set as the target signal adjustment parameter.
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter includes a first delay adjustment parameter and a second delay parameter
  • the adjustment module 33 is specifically configured to:
  • adjust the output time of the first sub-audio signal based on the first delay adjustment parameter, and adjust the output time of the second sub-audio signal based on the second delay adjustment parameter.
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter includes a first volume adjustment parameter and a second volume adjustment parameter
  • the adjustment module 33 is specifically configured to:
  • adjust the volume of the first sub-audio signal based on the first volume adjustment parameter, and adjust the volume of the second sub-audio signal based on the second volume adjustment parameter.
  • an embodiment of the present application provides a 3D sound effect implementation device, which, when an audio signal is detected, obtains the current position information of the virtual sound source; selects a target signal adjustment parameter from the sample parameter set according to the current position information; adjusts the audio signal based on the target signal adjustment parameter; and plays the adjusted audio signal.
  • the 3D sound effect implementation device improves the versatility of 3D sound effects by acquiring the position information for the 3D sound effect on the device itself; in addition, no peripheral device is needed to sense the position information of the virtual sound source desired by the user, which reduces cost.
  • the electronic device 500 includes a processor 501 and a memory 502.
  • the processor 501 and the memory 502 are electrically connected.
  • the processor 501 is the control center of the electronic device 500. It uses various interfaces and lines to connect the various parts of the electronic device and, by running or loading the computer program stored in the memory 502 and calling the data stored in the memory 502, executes the various functions of the electronic device 500 and processes data, thereby monitoring the electronic device 500 as a whole.
  • the memory 502 can be used to store software programs and modules.
  • the processor 501 runs computer programs and modules stored in the memory 502 to execute various functional applications and data processing.
  • the memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a computer program required for at least one function, and the like; the storage data area may store data created according to the use of electronic devices and the like.
  • the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices. Accordingly, the memory 502 may further include a memory controller to provide the processor 501 with access to the memory 502.
  • in the embodiments of the present application, the processor 501 in the electronic device 500 loads instructions corresponding to the processes of one or more computer programs into the memory 502, and runs the computer programs stored in the memory 502, so as to implement the following functions: when an audio signal is detected, obtaining the current position information of the virtual sound source; selecting a target signal adjustment parameter from the sample parameter set according to the current position information; adjusting the audio signal based on the target signal adjustment parameter; and playing the adjusted audio signal.
  • when acquiring the current position information of the virtual sound source, the processor 501 performs the following steps: obtaining the deflection angle and deflection direction of the electronic device relative to the horizontal plane; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • alternatively, when acquiring the current position information of the virtual sound source, the processor 501 performs the following steps: starting the camera of the electronic device to obtain the current user avatar; determining the deflection angle and deflection direction of the user avatar relative to the preset avatar; and determining the current position information of the virtual sound source according to the deflection angle and the deflection direction.
  • after starting the camera of the electronic device to obtain the current user avatar, and before determining the deflection angle and deflection direction of the user avatar relative to the preset avatar, the processor 501 performs the following steps:
  • generating an information display interface, where the information display interface includes a first area and a second area, and the position information of the virtual user is displayed in the second area; and displaying the user avatar in the first area in real time;
  • after the deflection angle and deflection direction are taken as the current position information of the virtual sound source, the processor 501 further performs the following steps:
  • displaying the virtual sound source in the second area in real time according to the position information and the position information of the virtual user.
  • the processor 501 when acquiring the current position information of the virtual sound source, performs the following steps:
  • obtaining voice information input by the user, where the voice information includes a deflection direction and a deflection angle;
  • recognizing the voice information to obtain a recognition result; and determining the current position information of the virtual sound source based on the recognition result.
  • the processor 501 when acquiring the current position information of the virtual sound source, performs the following steps:
  • starting a target application interface; obtaining touch position information of the user within a preset display area on the target application interface; obtaining position difference information between the touch position information and preset position information on the display interface; and determining the position information of the sound source based on the position difference information.
  • before detecting the audio signal, the processor 501 further performs the following steps: acquiring multiple pieces of sample orientation information and the signal adjustment parameters corresponding to each sample orientation; and establishing a mapping relationship between the sample orientation information and the signal adjustment parameters, and adding the mapping relationship, the sample orientation information, and the signal adjustment parameters to the sample parameter set.
  • when determining the target signal adjustment parameter from the sample parameter set according to the current position information, the processor 501 performs the following steps: selecting, from the sample parameter set, target sample orientation information matching the current position information; and obtaining, based on the mapping relationship, the signal adjustment parameter corresponding to the target sample orientation information from the sample parameter set as the target signal adjustment parameter.
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter includes a first delay adjustment parameter and a second delay parameter
  • when adjusting the audio signal based on the signal adjustment parameters, the processor 501 performs the following steps: adjusting the output time of the first sub-audio signal based on the first delay adjustment parameter; and adjusting the output time of the second sub-audio signal based on the second delay adjustment parameter.
  • the audio signal includes a first sub-audio signal and a second sub-audio signal
  • the signal adjustment parameter includes a first volume adjustment parameter and a second volume adjustment parameter
  • when adjusting the audio signal based on the signal adjustment parameters, the processor 501 performs the following steps: adjusting the volume of the first sub-audio signal based on the first volume adjustment parameter; and adjusting the volume of the second sub-audio signal based on the second volume adjustment parameter.
  • the electronic device of the embodiments of the present application, when detecting an audio signal, obtains the current position information of the virtual sound source; selects the target signal adjustment parameter from the sample parameter set according to the current position information; adjusts the audio signal based on the target signal adjustment parameter; and plays the adjusted audio signal.
  • the electronic device obtains the position information for the 3D sound effect on its own, which improves the versatility of 3D sound effects. In addition, no peripheral equipment is needed to sense the position information of the virtual sound source desired by the user, which reduces cost.
  • the electronic device 500 may further include: a display 503, a radio frequency circuit 504, an audio circuit 505, and a power supply 509.
  • the display 503, the radio frequency circuit 504, the audio circuit 505, the sensor 506, the camera 507, the microphone 508, and the power supply 509 are electrically connected to the processor 501 respectively.
  • the display 503 can be used to display information input by the user or provided to the user and various graphical user interfaces, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display 503 may be used to display the above information display interface, and display the user's avatar in the first area of the information display interface, and display the virtual sound source in the second area.
  • the display 503 may also be used to display the target application interface described above, and may display the user's touch position in real time in a preset display area of the target application interface.
  • the radio frequency circuit 504 may be used to transmit and receive radio frequency signals to establish wireless communication with network devices or other electronic devices through wireless communication, and to transmit and receive signals with network devices or other electronic devices.
  • the audio circuit 505 can be used to provide an audio interface between a user and an electronic device through speakers and microphones.
  • the electronic device may have at least two channels (that is, at least two audio sources), respectively corresponding to different speakers.
  • the speaker in the electronic device converts the received electrical signal into an audio signal, adjusts the audio signal based on the signal adjustment parameters, and then transmits the adjusted audio signal to the outside world, thereby achieving 3D sound effect playback.
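  • tying the above together, a minimal end-to-end sketch of the playback path: an interaural time and level difference derived from the requested azimuth is applied to two copies of the signal, yielding a two-channel buffer for the two speakers. The simple sine/cosine rules below stand in for the pre-tuned sample parameters and are assumptions for illustration only:

```python
from __future__ import annotations

import numpy as np

def play_3d_effect(mono: np.ndarray, azimuth_deg: float, fs: int = 44100) -> np.ndarray:
    """Derive an interaural time and level difference from the requested azimuth,
    apply them to two copies of the signal, and return a stereo buffer
    (row 0 = left channel, row 1 = right channel)."""
    # Interaural time difference, capped near the 0.6 ms figure quoted above.
    itd_ms = 0.6 * abs(np.sin(np.radians(azimuth_deg)))
    lag = int(round(itd_ms * fs / 1000))
    near_gain = 1.0                                                 # ear on the source side
    far_gain = 0.7 + 0.3 * float(np.cos(np.radians(azimuth_deg)))   # shadowed ear is quieter
    near = np.pad(mono, (0, lag)) * near_gain                       # leads in time
    far = np.pad(mono, (lag, 0)) * far_gain                         # lags and is attenuated
    left, right = (near, far) if azimuth_deg >= 0 else (far, near)  # positive azimuth = left
    return np.stack([left, right])

stereo = play_3d_effect(np.random.randn(44100), azimuth_deg=30.0)   # 1 s of noise, 30 deg left
```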
  • the sensor 506 is used to collect external environment information.
  • the sensor 506 may include an ambient brightness sensor, an acceleration sensor, a gyroscope, a motion sensor, and other sensors.
  • the acceleration sensor may detect information such as the deflection angle and deflection direction of the electronic device relative to the horizontal plane.
  • the camera 507 is used for collecting image information of the outside world.
  • the camera 507 may be a single camera or a dual camera.
  • the camera 507 can be used to acquire the user's avatar in real time and transmit it to the processor 501 for processing to monitor the deflection angle and deflection direction of the user's avatar relative to the preset avatar.
  • the microphone 508 is used to receive sound signals input from the outside and convert the sound signals into electrical signals.
  • the microphone 508 may be used to detect and receive the voice signal input by the user, convert the voice signal into an electrical signal, and send it to the processor for processing; a speech recognition algorithm then recognizes the information carried in the voice signal. In this way, the current position information of the virtual sound source is obtained from the voice signal input by the user.
  • the power supply 509 can be used to power various components of the electronic device 500.
  • the power supply 509 can be logically connected to the processor 501 through a power management system, so as to realize functions such as managing charging, discharging, and power consumption management through the power management system.
  • the electronic device 500 may further include devices such as a Bluetooth module, a speaker, and a flashlight, which will not be repeated here.
  • An embodiment of the present application further provides a storage medium that stores a computer program; when the computer program is run on a computer, the computer is caused to execute the 3D sound effect implementation method in any of the foregoing embodiments, for example: when an audio signal is detected, obtaining the current position information of the virtual sound source; selecting a target signal adjustment parameter from the sample parameter set according to the current position information; adjusting the audio signal based on the target signal adjustment parameter; and playing the adjusted audio signal.
  • for another example, when acquiring the current position information of the virtual sound source, the deflection angle and deflection direction of the electronic device relative to the horizontal plane are obtained, and the current position information of the virtual sound source is determined according to the deflection angle and the deflection direction.
  • for another example, when acquiring the current position information of the virtual sound source, the camera of the electronic device is started to obtain the current user avatar; the deflection angle and deflection direction of the user avatar relative to the preset avatar are determined; and the current position information of the virtual sound source is determined according to the deflection angle and the deflection direction.
  • for another example, when acquiring the current position information of the virtual sound source, voice information input by the user is acquired, where the voice information includes a deflection direction and a deflection angle; the voice information is recognized to obtain a recognition result; and the current position information of the virtual sound source is determined based on the recognition result.
  • for another example, when acquiring the current position information of the virtual sound source, a target application interface is started; touch position information of the user within a preset display area on the target application interface is obtained; position difference information between the touch position information and preset position information on the display interface is obtained; and the position information of the sound source is determined based on the position difference information.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
  • each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above integrated modules may be implemented in the form of hardware or software function modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, magnetic disk, or optical disk.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

Embodiments of the present application provide a 3D sound effect implementation method and apparatus, a storage medium, and an electronic device. In the 3D sound effect implementation method, when an audio signal is detected, the current position information of a virtual sound source is obtained; a target signal adjustment parameter is selected from a sample parameter set according to the current position information; the audio signal is adjusted based on the target signal adjustment parameter; and the adjusted audio signal is played.

Description

3D音效实现方法、装置、存储介质及电子设备 技术领域
本申请涉及电子技术领域,尤其涉及一种3D音效实现方法及电子设备。
背景技术
随着多手机媒体技术的快速发展和虚拟现实(Virtual Reality,简称VR)技术的火热,带动了在智能手机、平板电脑等移动终端上实现三维(Three,Dimensions,简称3D)音效的要求。相关技术中,需通过头戴式耳机内置的方位传感器感应人头部的转动,来定位3D音效的虚拟音源位置,从而实现三维播放效果。可知,这种通过外设定位音源的方式,使得3D音效的应用存在一定局限性。
发明内容
本申请实施例提供一种3D音效实现方法、装置、存储介质及电子设备,可以提升3D音效应用的通用性。
第一方面,本申请实施例提供一种3D音效实现方法,应用于电子设备,包括:
当检测到音频信号时,获取虚拟音源的当前方位信息;
根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
基于所述目标信号调节参数对所述音频信号进行调节;
播放调节后的音频信号。
第二方面,本申请实施例提供一种3D音效实现装置,应用于电子设备,所述3D音效实现装置包括:
方位获取模块,用于当检测到音频信号时,获取虚拟音源的当前方位信息;
选取模块,用于根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
调节模块,用于基于所述目标信号调节参数对所述音频信号进行调节;
播放模块,用于播放调节后的音频信号。
第三方面,本申请实施例提供一种存储介质,所述存储介质中存储有多条指令,所述指令适于由处理器加载以执行以下步骤:
当检测到音频信号时,获取虚拟音源的当前方位信息;
根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
基于所述目标信号调节参数对所述音频信号进行调节;
播放调节后的音频信号。
第四方面,本申请实施例提供一种电子设备,包括处理器以及存储介质,所述存储介质中存储有多条指令,所述处理器加载所述指令以执行以下步骤:
当检测到音频信号时,获取虚拟音源的当前方位信息;
根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
基于所述目标信号调节参数对所述音频信号进行调节;
播放调节后的音频信号。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
为了更完整地理解本申请及其有益效果,下面将结合附图来进行以下说明,其中在下 面的描述中相同的附图标号表示相同部分。
图1是本申请实施例提供的3D音效实现方法的第一流程示意图。
图2是本申请实施例提供的3D音效实现方法的第一应用场景示意图。
图3是本申请实施例提供的3D音效实现方法的第二流程示意图。
图4是本申请实施例提供的3D音效实现方法的第二应用场景示意图。
图5是本申请实施例提供的3D音效实现方法的第三流程示意图。
图6是本申请实施例提供的3D音效实现方法的第三应用场景示意图。
图7是本申请实施例提供的3D音效实现方法的第四流程示意图。
图8是本申请实施例提供的3D音效实现方法的第五流程示意图。
图9是本申请实施例提供的3D音效实现装置的第一结构示意图。
图10是本申请实施例提供的3D音效实现装置的第二结构示意图。
图11是本申请实施例提供的3D音效实现装置的第三结构示意图。
图12是本申请实施例提供的电子设备的结构示意图。
图13是本申请实施例提供的电子设备的又一结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种3D音效实现方法、装置、存储介质及电子设备,以下将分别进行详细说明。
如图1所示,3D音效实现方法,应用于电子设备。该电子设备可以是智能手机、平板电脑等智能终端。该3D音效实现方法可以包括以下步骤:
101、当检测到音频信号时,获取虚拟音源的当前方位信息。
其中,音频信号可以由电子设备中内置的扬声器基于接收到的电信号转换得到,也可以是基于外接的耳机设备基于接收到的电信号转换得到。具体的,可以在电子设备中设置振动检测装置,用于检测扬声器的振动情况,从而实现对音频信号的监控。
在本申请实施例中,虚拟音源的当前方位信息对应满足用户需求的音源在实际物理空间中的方位信息。例如,若用户希望音源在自己左侧偏转θ角度的方位,则虚拟音源与虚拟用户的位置关系可以参考图2。
在本申请实施例中,确定虚拟音源的当前方位信息的方式可以有多种。例如,可以通过软件的方式由用户通过电子设备进行设置;可以通过电子设备的自身放置状态确定;还可以通过用于与电子设备与用户之间的位置关系进行确定。下面将会进行相关内容的详细描述。
102、根据当前方位信息从样本参数集合中选取目标信号调节参数。
其中,样本参数集合中包括有多个预设录入的信号调节参数,电子设备可基于当前获取的虚拟音源的方位信息从该样本参数集合中选取匹配的信号调节参数。
在本申请实施例中,需要预先采集音源的方位信息与调节参数之间的对应关系,以便后续目标信号调节参数的确定。也即,在一些实施例中,在检测音频信号之前,还包括:
获取多个样本方位信息、及在样本方位信息下对应的信号调节参数;
建立样本方位信息与信号调节参数之间的映射关系,并将映射关系、样本方位信息及信号调节参数添加到样本参数集合中。
具体的,可以获取该采样方位下的头相关变换函数(Head Related Transfer Function, 简称HRTF)的冲击响应。例如,采样方位信息为以用户为中心向左或向右偏转θ,则左耳和右耳的冲击响应分别可以记作
h_L(θ) 和 h_R(θ)
其中,冲击响应可由人工或机器通过逐步调试音频信号测得。
假设输入的信号为单声道信号s,则左输出信号和右侧输出信号分分别为
l_out = s * h_L(θ)
r_out = s * h_R(θ)（其中 * 表示卷积）
假设输入的信号为双声道信号l和r,则左侧输出信号和右侧输出信号分别为
l_out = l * h_L(θ)
r_out = r * h_R(θ)
具体实施时,将l out和r out经过混响器,消除头内定位效应混响器设计可以采用人工混响算法,采用4个并联的梳状滤波器,梳状滤波器系统函数为:
H(z) = z^(-D) / (1 - a·z^(-D))
其中,D 1~D 4分别表示4个梳状滤波器的延迟,具体数分别可以为14.61ms,18.83ms,20.74ms和22.15ms;a 1~a 4表示4个梳状滤波器的衰减增益,分别可以为0.84,0.82,0.8和0.78。
实际应用中,可以对方位信息进行采样,然后在将虚拟音源的位置定位为当前采样方位信息的前提下,通过人工或机器逐步调试音频信号的播放参数(如音量、时延等),以使当前音频信号播放时,在听觉上达到从物理空间中的该方位信息发出声音的播放效果。并将最终符合所需播放效果时,所调试到的播放参数作为当前样本方位信息下对应的信号调节参数。
则,步骤“根据当前方位信息从样本参数集合中确定目标信号调节参数”,可以包括以下流程:
从样本参数集合中选取与当前方位信息匹配的目标样本方位信息;
基于映射关系,从样本参数集合中获取目标样本方位信息对应的信号调节参数,作为目标信号调节参数。
103、基于目标信号调节参数对音频信号进行调节。
本申请实施例中,基于信号调节参数的不同类型,对音频信号进行调节的方式也可以有多种。如下:
在一些实施例中,音频信号包括第一子音频信号和第二子音频信号,信号调节参数包括第一时延调节参数和第二时延参数;
则步骤“基于信号调节参数对所述音频信号进行调节”,可以包括以下流程:
基于第一时延调节参数调节第一子音频信号的输出时间;
基于第二时延调节参数调节第二子音频信号的输出时间。
由于声波在空气中传播需要时间,所以当音源不在正前(后)方时,与音源同侧的那一只耳朵将早一点听到声音,而另一只耳朵将迟一点听到声音,这种微小的时间差(小于0.6ms)也可以被人耳分辨出来,最终传入大脑并分析得到声音的位置信息。时间差对各个频率的声音确定方位都有用;时间差主要指声音刚到人耳瞬间先后的时间差别,因此可用时间差来作定音源定向信息。
具体的,可基于两个子音频信号的时间差定位音源位置。则在本实施例中,在方位信 息确定的前提下,可以基于方位信息的反变换得到子音频信号间的信号输出时间差。进一步的可基于时间差确定各子信号的时延参数。因此,可基于确定的时延参数调节各子对应的子音频信号的输出时间,从而实现虚拟音源在实际物理空间的定位。
在一些实施例中,音频信号包括第一子音频信号和第二子音频信号,信号调节参数可以包括第一音量调节参数和第二音量调节参数;
则步骤“基于所述信号调节参数对所述音频信号进行调节”,可以包括以下流程:
基于第一音量调节参数调节第一子音频信号的音量大小;
基于第二音量调节参数调节第二子音频信号的音量大小。
104、播放调节后的音频信号。
在一些实施例中,获取虚拟音源的当前方位信息,包括:
获取电子设备相对于水平面的偏转角度、及偏转方向;
根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
具体的,可基于两个子音频信号的时间差定位音源位置。由于人耳对声音的音量大小的感受异常灵敏,这使音量差定位在听觉定位中起着十分重要的作用。音量差定位是同一音源在两耳接受到不同声级的声音而产生的。如当音源偏向左方时,声波可以直接到达左耳,而右耳则受到头部的遮蔽,结果左耳听到的音量差将大于右耳。音源越偏,音量差越大。
则在本实施例中,在方位信息确定的前提下,可以基于方位信息的反变换得到子音频信号间的信号输出的音量差。进一步的可基于音量差确定各子信号的音量调节参数。因此,可基于确定的音量调节参数调节各子对应的子音频信号的输出音量,从而实现虚拟音源在实际物理空间的定位。
在上述各实施方式的基础上,本申请实施例中确定虚拟音源的当前方位信息的方式可以有多种,如下:
在一些实施例中,参考图3,步骤“获取虚拟音源的当前方位信息”,可以包括以下流程:
1011a、获取电子设备相对于水平面的偏转角度、及偏转方向;
1012a、根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
参考图4,电子设备水平放置时,可自动校准初始位置旋转角度为0。当旋转电子设备时,可通过电子设备内置的加速度传感器检测到电子设备相对于水平面的偏转角度为θ。则相应的,可以在电子设备在设置对应的APP,在APP显示界面上显示虚拟音源也相对于虚拟用户向左移动了方位角θ(参考图2)。反之,若电子设备向右偏转θ,在APP显示界面上显示虚拟音源也相对于虚拟用户向右移动了方位角θ。
在一些实施例中,参考图5,步骤“获取虚拟音源的当前方位信息”,可以包括以下流程:
1011b、启动电子设备的摄像头获取当前用户头像;
1012b、确定用户头像相对于预设头像的偏转角度、及偏转方向;
1013b、根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
其中,该摄像头可以是单摄像头,也可以是双摄像头;预设头像可以是预先获取的用户正面头像。
在一些实施方式,参考图6,在启动电子设备的摄像头获取当前用户头像后,确定用户头像相对于预设头像的偏转角度、及偏转方向前,还可以包括以下流程:
生成信息显示界面,所述信息显示界面包括第一区域和第二区域,第二区域内显示有虚拟用户的位置信息;
实时在第一区域内显示所述用户头像;
在将偏转角度、及偏转方向作为所述虚拟音源的当前方位信息后,还包括:
实时根据方位信息、及所述虚拟用户的位置信息,在第二区域内显示所述虚拟音源。
在一些实施例中,参考图7,步骤“获取虚拟音源的当前方位信息”,可以包括以下流程:
1011c、获取用户输入的语音信息,所述语音信息包括偏转方向和偏转角度;
1012c、对所述语音信息进行识别,得到识别结果;
1013c、基于所述识别结果确定虚拟音源的当前方位信息。
具体的,APP可设置有调用电子设备系统语音助手或第三方语音识别应用的接口,通过语音识别出用户输入的语音信号。例如,可通过APP界面显示的语音接口调用系统麦克风,当用户说出“向左旋转θ”,电子设备获取该语音信息,并通过语音识别算法识别其中携带的偏转方向和偏转角度。
在一些实施例中,参考图8,步骤“获取虚拟音源的当前方位信息”,可以包括以下流程:
1011d、启动目标应用界面;
1012d、获取用户在目标应用界面上预设显示区域内的触摸位置信息;
1013d、根据触摸位置信息、和所述显示界面上的预设位置信息之间的位置差异信息;
1014d、基于位置差异信息确定所述音源的方位信息。
由上可知,本申请实施例提供的3D音效实现方法,通过电子设备自带的硬件设施来确认获取3D音效的方位信息,以实现音频信号3D音效的播放,提升3D音效应用的通用性;另外,无需采用外设感应用户所需的虚拟音源的方位信息,降低了成本。
本申请实施例还提供一种3D音效实现装置300,该装置可以集成在电子设备中,该电子设备可以是智能手机、平板电脑等智能终端设备。
如图9所示,3D音效实现装置300可包括:方位获取模块31、选取模块32、调节模块33、播放模块34。其中:
方位获取模块31,用于当检测到音频信号时,获取虚拟音源的当前方位信息;
选取模块32,用于根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
调节模块33,用于基于所述目标信号调节参数对所述音频信号进行调节;
播放模块34,用于播放调节后的音频信号。
在一些实施例中,方位获取模块31,具体可以用于:
获取电子设备相对于水平面的偏转角度、及偏转方向;
根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息。
在一些实施例中,方位获取模块31,具体可以用于:
启动电子设备的摄像头获取当前用户头像;
确定所述用户头像相对于预设头像的偏转角度、及偏转方向;
根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息。
在一些实施例中,参考图10,3D音效实现装置300还可以包括:
界面生成模块35,用于在启动电子设备的摄像头获取当前用户头像后,确定所述用户头像相对于预设头像的偏转角度、及偏转方向前,生成信息显示界面,所述信息显示界面包括第一区域和第二区域,所述第二区域内显示有虚拟用户的位置信息;
第一显示模块36,用于实时在所述第一区域内显示所述用户头像;
第二显示模块37,用于在将所述偏转角度、及偏转方向作为所述虚拟音源的当前方位信息后,实时根据所述方位信息、及所述虚拟用户的位置信息,在所述第二区域内显示所述虚拟音源。
在一些实施例中,方位获取模块31,具体可以用于:
获取用户输入的语音信息,所述语音信息包括偏转方向和偏转角度;
对所述语音信息进行识别,得到识别结果;
基于所述识别结果确定虚拟音源的当前方位信息。
在一些实施例中,方位获取模块31,具体可以用于:
启动目标应用界面;
获取用户在所述目标应用界面上预设显示区域内的触摸位置信息;
根据所述触摸位置信息、和所述显示界面上的预设位置信息之间的位置差异信息;
基于所述位置差异信息确定所述音源的方位信息。
在一些实施例中,参考图11,3D音效实现装置300还可以包括:
样本获取模块38,用于在检测音频信号之前,获取多个样本方位信息、及在所述样本方位信息下对应的信号调节参数;
构建模块39,用于建立所述样本方位信息与信号调节参数之间的映射关系,并将所述映射关系、样本方位信息及信号调节参数添加到样本参数集合中;
选取模块32,具体可以用于:从样本参数集合中选取与所述当前方位信息匹配的目标样本方位信息;基于所述映射关系,从所述样本参数集合中获取所述目标样本方位信息对应的信号调节参数,作为所述目标信号调节参数。
在一些实施例中,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一时延调节参数和第二时延参数;调节模块33具体用于:
基于所述第一时延调节参数调节第一子音频信号的输出时间;
基于所述第二时延调节参数调节第二子音频信号的输出时间。
在一些实施例中,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一音量调节参数和第二音量调节参数;调节模块33具体用于:
基于所述第一音量调节参数调节第一子音频信号的音量大小;
基于所述第二音量调节参数调节第二子音频信号的音量大小。
由上可知,本申请实施例提供了一种3D音效实现装置,通过获取虚拟音源的当前方位信息;根据所述当前方位信息从样本参数集合中选取目标信号调节参数;基于所述目标信号调节参数对所述音频信号进行调节;播放调节后的音频信号。3D音效实现装置通过获取3D音效的方位信息,提升3D音效应用的通用性;另外,无需采用外设感应用户所需的虚拟音源的方位信息,降低了成本。
本申请实施例还提供一种电子设备。请参阅图12,电子设备500包括处理器501以及存储器502。其中,处理器501与存储器502电性连接。
该处理器501是电子设备500的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或加载存储在存储器502内的计算机程序,以及调用存储在存储器502内的数据,执行电子设备500的各种功能并处理数据,从而对电子设备500进行整体监控。
该存储器502可用于存储软件程序以及模块,处理器501通过运行存储在存储器502的计算机程序以及模块,从而执行各种功能应用以及数据处理。存储器502可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的计算机程序等;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器502可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器502还可以包括存储器控制器,以提供处理器501对存储器502的访问。
在本申请实施例中,电子设备500中的处理器501会按照如下的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器502中,并由处理器501运行存储在存储器502中的计算机程序,从而实现如下功能:
当检测到音频信号时,获取虚拟音源的当前方位信息;
根据当前方位信息从样本参数集合中选取目标信号调节参数;
基于目标信号调节参数对音频信号进行调节;
播放调节后的音频信号。
在一些实施例中,在获取虚拟音源的当前方位信息时,处理器501执行以下步骤:
获取电子设备相对于水平面的偏转角度、及偏转方向;
根据偏转角度、及偏转方向确定虚拟音源的当前方位信息
在一些实施例中,在获取虚拟音源的当前方位信息时,处理器501执行以下步骤:
启动电子设备的摄像头获取当前用户头像;
确定用户头像相对于预设头像的偏转角度、及偏转方向;
根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
在一些实施例中,在启动电子设备的摄像头获取当前用户头像后,确定用户头像相对于预设头像的偏转角度、及偏转方向前,处理器501执行以下步骤:
生成信息显示界面,信息显示界面包括第一区域和第二区域,第二区域内显示有虚拟用户的位置信息;
实时在第一区域内显示用户头像;
在将偏转角度、及偏转方向作为虚拟音源的当前方位信息后,处理器501还执行以下步骤:
实时根据方位信息、及虚拟用户的位置信息,在第二区域内显示虚拟音源。
在一些实施例中,在获取虚拟音源的当前方位信息时,处理器501执行以下步骤:
获取用户输入的语音信息,语音信息包括偏转方向和偏转角度;
对语音信息进行识别,得到识别结果;
基于识别结果确定虚拟音源的当前方位信息。
在一些实施例中,在获取虚拟音源的当前方位信息时,处理器501执行以下步骤:
启动目标应用界面;
获取用户在目标应用界面上预设显示区域内的触摸位置信息;
根据触摸位置信息、和显示界面上的预设位置信息之间的位置差异信息;
基于位置差异信息确定音源的方位信息。
在一些实施例中,在检测音频信号之前,处理器501还执行以下步骤:
获取多个样本方位信息、及在样本方位信息下对应的信号调节参数;
建立样本方位信息与信号调节参数之间的映射关系,并将映射关系、样本方位信息及信号调节参数添加到样本参数集合中;
在根据当前方位信息从样本参数集合中确定目标信号调节参数时,处理器501执行以下步骤:
从样本参数集合中选取与当前方位信息匹配的目标样本方位信息;
基于映射关系,从样本参数集合中获取目标样本方位信息对应的信号调节参数,作为目标信号调节参数。
在一些实施例中,音频信号包括第一子音频信号和第二子音频信号,信号调节参数包括第一时延调节参数和第二时延参数;
在基于信号调节参数对音频信号进行调节时,处理器501执行以下步骤:
基于第一时延调节参数调节第一子音频信号的输出时间;
基于第二时延调节参数调节第二子音频信号的输出时间。
在一些实施例中,音频信号包括第一子音频信号和第二子音频信号,信号调节参数包括第一音量调节参数和第二音量调节参数;
在基于信号调节参数对音频信号进行调节时,处理器501执行以下步骤:
基于第一音量调节参数调节第一子音频信号的音量大小;
基于第二音量调节参数调节第二子音频信号的音量大小。
由上述可知,本申请实施例的电子设备,当检测到音频信号时,获取虚拟音源的当前方位信息;根据当前方位信息从样本参数集合中选取目标信号调节参数;基于目标信号调节参数对音频信号进行调节,并播放调节后的音频信号。该电子设备基于自身来获取3D音效的方位信息,提升3D音效应用的通用性;另外,无需采用外设感应用户所需的虚拟音源的方位信息,降低了成本。
请一并参阅图13,在某些实施方式中,电子设备500还可以包括:显示器503、射频电路504、音频电路505以及电源509。其中,其中,显示器503、射频电路504、音频电路505、传感器506、摄像头507、麦克风508以及电源509分别与处理器501电性连接。
该显示器503可以用于显示由用户输入的信息或提供给用户的信息以及各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。
例如,在本申请实施例中,显示器503可以用于显示上述信息显示界面,并在信息显示界面的第一区域内显示用户头像,在第二区域内显示所述虚拟音源。
又例如,在本申请实施例中,显示器503还可用于显示上述目标应用界面,并可以在目标应用界面的预设显示区域内实时显示用户的触摸位置。
该射频电路504可以用于收发射频信号,以通过无线通信与网络设备或其他电子设备建立无线通讯,与网络设备或其他电子设备之间收发信号。
该音频电路505可以用于通过扬声器、传声器提供用户与电子设备之间的音频接口。
在本申请实施例中,电子设备可以具有至少两个声道(即会少两个音源),分别对应不同的扬声器。电子设备中扬声器将接收到的电信号转换成音频信号,并基于信号调节参数对音频信号进行调节后,将调节后的音频信号传输至外界,从而实现3D音效的播放。
传感器506,传感器506用于采集外部环境信息。传感器506可以包括环境亮度传感器、加速度传感器、陀螺仪、运动传感器、以及其他传感器。例如,在本申请实施例中,可通过加速度传感器检测到电子设备相对于水平面的偏转角度、及偏转方向等信息。
摄像头507,摄像头507用于采集外界的图像信息。其中,摄像头507可以是单摄像头,也可以是双摄像头。在本申请实施例中,摄像头507可以用于在被启动时,实时获取用户头像并传输给处理器501进行处理,以监控用户头像相对于预设头像的偏转角度及偏转方向。
麦克风508,用于接收外界输入的声音信号,并将声音信号转换成电信号。在本申请实施例中,麦克风508可以用于检测并接收用户输入的语音信号,并将语音信号转换成电信号发送给处理器进行处理,采用语音识别算法,识别该语音信号中所携带的信息。从而实现,从用户输入的语音信号中获取到虚拟音源的当前方位信息。
该电源509可以用于给电子设备500的各个部件供电。在一些实施例中,电源509可以通过电源管理系统与处理器501逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管图13中未示出,电子设备500还可以包括蓝牙模块、扬声器、闪光灯等器件,在此不再赘述。
本申请实施例还提供一种存储介质,该存储介质存储有计算机程序,当该计算机程序在计算机上运行时,使得该计算机执行上述任一实施例中的3D音效实现方法,例如:当检测到音频信号时,获取虚拟音源的当前方位信息;根据当前方位信息从样本参数集合中选取目标信号调节参数;基于目标信号调节参数对音频信号进行调节;播放调节后的音频信号。
又例如,在获取虚拟音源的当前方位信息时,具体获取电子设备相对于水平面的偏转角度、及偏转方向;根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
又例如,在获取虚拟音源的当前方位信息时,具体启动电子设备的摄像头获取当前用户头像;确定用户头像相对于预设头像的偏转角度、及偏转方向;根据偏转角度、及偏转方向确定虚拟音源的当前方位信息。
又例如,在获取虚拟音源的当前方位信息时,获取用户输入的语音信息,语音信息包括偏转方向和偏转角度;对语音信息进行识别,得到识别结果;基于识别结果确定虚拟音源的当前方位信息。
又例如,在获取虚拟音源的当前方位信息时,启动目标应用界面;获取用户在目标应用界面上预设显示区域内的触摸位置信息;根据触摸位置信息、和显示界面上的预设位置信息之间的位置差异信息;基于位置差异信息确定音源的方位信息。
在本申请实施例中,存储介质可以是磁碟、光盘、只读存储器(Read Only Memory,ROM,)、或者随机存取记忆体(Random Access Memory,RAM)等。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
对本申请实施例的3D音效实现装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。该集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,该存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种3D音效实现方法、装置、存储介质及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种3D音效实现方法,应用于电子设备,其中,所述3D音效实现方法包括:
    当检测到音频信号时,获取虚拟音源的当前方位信息;
    根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
    基于所述目标信号调节参数对所述音频信号进行调节;
    播放调节后的音频信号。
  2. 如权利要求1所述的3D音效实现方法,其中,所述获取虚拟音源的当前方位信息,包括:
    获取电子设备相对于水平面的偏转角度、及偏转方向;
    根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息。
  3. 如权利要求1所述的3D音效实现方法,其中,所述获取虚拟音源的当前方位信息,包括:
    启动电子设备的摄像头获取当前用户头像;
    确定所述用户头像相对于预设头像的偏转角度、及偏转方向;
    根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息。
  4. 根据权利要求3所述的3D音效实现方法,其中,在启动电子设备的摄像头获取当前用户头像后,确定所述用户头像相对于预设头像的偏转角度、及偏转方向前,还包括:
    生成信息显示界面,所述信息显示界面包括第一区域和第二区域,所述第二区域内显示有虚拟用户的位置信息;
    实时在所述第一区域内显示所述用户头像;
    在将所述偏转角度、及偏转方向作为所述虚拟音源的当前方位信息后,还包括:
    实时根据所述方位信息、及所述虚拟用户的位置信息,在所述第二区域内显示所述虚拟音源。
  5. 如权利要求1所述的3D音效实现方法,其中,所述获取虚拟音源的当前方位信息,包括:
    获取用户输入的语音信息,所述语音信息包括偏转方向和偏转角度;
    对所述语音信息进行识别,得到识别结果;
    基于所述识别结果确定虚拟音源的当前方位信息。
  6. 如权利要求1所述的3D音效实现方法,其中,所述获取虚拟音源的当前方位信息,包括:
    启动目标应用界面;
    获取用户在所述目标应用界面上预设显示区域内的触摸位置信息;
    根据所述触摸位置信息、和所述显示界面上的预设位置信息之间的位置差异信息;
    基于所述位置差异信息确定所述音源的方位信息。
  7. 根据权利要求1所述的3D音效实现方法,其中,在检测音频信号之前,还包括:
    获取多个样本方位信息、及在所述样本方位信息下对应的信号调节参数;
    建立所述样本方位信息与信号调节参数之间的映射关系,并将所述映射关系、样本方位信息及信号调节参数添加到样本参数集合中;
    所述根据所述当前方位信息从样本参数集合中确定目标信号调节参数,包括:
    从样本参数集合中选取与所述当前方位信息匹配的目标样本方位信息;
    基于所述映射关系,从所述样本参数集合中获取所述目标样本方位信息对应的信号调节参数,作为所述目标信号调节参数。
  8. 如权利要求1所述的3D音效实现方法,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一时延调节参数和第二时延参数;
    所述基于所述信号调节参数对所述音频信号进行调节,包括:
    基于所述第一时延调节参数调节第一子音频信号的输出时间;
    基于所述第二时延调节参数调节第二子音频信号的输出时间。
  9. 如权利要求1所述的3D音效实现方法,其中,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一音量调节参数和第二音量调节参数;
    所述基于所述信号调节参数对所述音频信号进行调节,包括:
    基于所述第一音量调节参数调节第一子音频信号的音量大小;
    基于所述第二音量调节参数调节第二子音频信号的音量大小。
  10. 一种3D音效实现装置,应用于电子设备,其中,所述3D音效实现装置包括:
    方位获取模块,用于当检测到音频信号时,获取虚拟音源的当前方位信息;
    选取模块,用于根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
    调节模块,用于基于所述目标信号调节参数对所述音频信号进行调节;
    播放模块,用于播放调节后的音频信号。
  11. 一种存储介质,其中,所述存储介质中存储有多条指令,所述指令适于由处理器加载以执行以下步骤:
    当检测到音频信号时,获取虚拟音源的当前方位信息;
    根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
    基于所述目标信号调节参数对所述音频信号进行调节;
    播放调节后的音频信号。
  12. 一种电子设备,其中,包括处理器以及存储介质,所述存储介质中存储有多条指令,所述处理器加载所述指令以执行以下步骤:
    当检测到音频信号时,获取虚拟音源的当前方位信息;
    根据所述当前方位信息从样本参数集合中选取目标信号调节参数;
    基于所述目标信号调节参数对所述音频信号进行调节;
    播放调节后的音频信号。
  13. 如权利要求12所述的电子设备,其中,在获取虚拟音源的当前方位信息时,所述处理器执行以下步骤:
    获取电子设备相对于水平面的偏转角度、及偏转方向;
    根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息
  14. 如权利要求12所述的电子设备,其中,在获取虚拟音源的当前方位信息时,所述处理器执行以下步骤:
    启动电子设备的摄像头获取当前用户头像;
    确定所述用户头像相对于预设头像的偏转角度、及偏转方向;
    根据所述偏转角度、及偏转方向确定所述虚拟音源的当前方位信息。
  15. 如权利要求14所述的电子设备,其中,在启动电子设备的摄像头获取当前用户头像后,确定所述用户头像相对于预设头像的偏转角度、及偏转方向前,所述处理器执行以下步骤:
    生成信息显示界面,所述信息显示界面包括第一区域和第二区域,所述第二区域内显示有虚拟用户的位置信息;
    实时在所述第一区域内显示所述用户头像;
    在将所述偏转角度、及偏转方向作为所述虚拟音源的当前方位信息后,所述处理器还执行以下步骤:
    实时根据所述方位信息、及所述虚拟用户的位置信息,在所述第二区域内显示所述虚拟音源。
  16. 如权利要求12所述的电子设备,其中,在获取虚拟音源的当前方位信息时,所述处理器执行以下步骤:
    获取用户输入的语音信息,所述语音信息包括偏转方向和偏转角度;
    对所述语音信息进行识别,得到识别结果;
    基于所述识别结果确定虚拟音源的当前方位信息。
  17. 如权利要求12所述的电子设备,其中,在获取虚拟音源的当前方位信息时,所述处理器执行以下步骤:
    启动目标应用界面;
    获取用户在所述目标应用界面上预设显示区域内的触摸位置信息;
    根据所述触摸位置信息、和所述显示界面上的预设位置信息之间的位置差异信息;
    基于所述位置差异信息确定所述音源的方位信息。
  18. 如权利要求12所述的电子设备,其中,在检测音频信号之前,所述处理器还执行以下步骤:
    获取多个样本方位信息、及在所述样本方位信息下对应的信号调节参数;
    建立所述样本方位信息与信号调节参数之间的映射关系,并将所述映射关系、样本方位信息及信号调节参数添加到样本参数集合中;
    在根据所述当前方位信息从样本参数集合中确定目标信号调节参数时,所述处理器执行以下步骤:
    从样本参数集合中选取与所述当前方位信息匹配的目标样本方位信息;
    基于所述映射关系,从所述样本参数集合中获取所述目标样本方位信息对应的信号调节参数,作为所述目标信号调节参数。
  19. 如权利要求12所述的电子设备,其中,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一时延调节参数和第二时延参数;
    在基于所述信号调节参数对所述音频信号进行调节时,所述处理器执行以下步骤:
    基于所述第一时延调节参数调节第一子音频信号的输出时间;
    基于所述第二时延调节参数调节第二子音频信号的输出时间。
  20. 如权利要求12所述的电子设备,其中,所述音频信号包括第一子音频信号和第二子音频信号,所述信号调节参数包括第一音量调节参数和第二音量调节参数;
    在基于所述信号调节参数对所述音频信号进行调节时,所述处理器执行以下步骤:
    基于所述第一音量调节参数调节第一子音频信号的音量大小;
    基于所述第二音量调节参数调节第二子音频信号的音量大小。
PCT/CN2018/116506 2018-11-20 2018-11-20 3d音效实现方法、装置、存储介质及电子设备 WO2020102994A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880098267.5A CN112771893A (zh) 2018-11-20 2018-11-20 3d音效实现方法、装置、存储介质及电子设备
PCT/CN2018/116506 WO2020102994A1 (zh) 2018-11-20 2018-11-20 3d音效实现方法、装置、存储介质及电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/116506 WO2020102994A1 (zh) 2018-11-20 2018-11-20 3d音效实现方法、装置、存储介质及电子设备

Publications (1)

Publication Number Publication Date
WO2020102994A1 true WO2020102994A1 (zh) 2020-05-28

Family

ID=70773737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116506 WO2020102994A1 (zh) 2018-11-20 2018-11-20 3d音效实现方法、装置、存储介质及电子设备

Country Status (2)

Country Link
CN (1) CN112771893A (zh)
WO (1) WO2020102994A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070931A (zh) * 2021-11-25 2022-02-18 咪咕音乐有限公司 音效调整方法、装置、设备及计算机可读存储介质
WO2023173285A1 (zh) * 2022-03-15 2023-09-21 深圳市大疆创新科技有限公司 音频处理方法、装置、电子设备及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160241980A1 (en) * 2015-01-28 2016-08-18 Samsung Electronics Co., Ltd Adaptive ambisonic binaural rendering
CN107249166A (zh) * 2017-06-19 2017-10-13 依偎科技(南昌)有限公司 一种完全沉浸式的耳机立体声实现方法及系统
US20170372748A1 (en) * 2016-06-28 2017-12-28 VideoStitch Inc. Method to align an immersive video and an immersive sound field
CN108810794A (zh) * 2017-04-27 2018-11-13 蒂雅克股份有限公司 目标位置设定装置及声像定位装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013147791A1 (en) * 2012-03-29 2013-10-03 Intel Corporation Audio control based on orientation
CN105183421B (zh) * 2015-08-11 2018-09-28 中山大学 一种虚拟现实三维音效的实现方法及系统
US10154360B2 (en) * 2017-05-08 2018-12-11 Microsoft Technology Licensing, Llc Method and system of improving detection of environmental sounds in an immersive environment
CN108156561B (zh) * 2017-12-26 2020-08-04 广州酷狗计算机科技有限公司 音频信号的处理方法、装置及终端

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160241980A1 (en) * 2015-01-28 2016-08-18 Samsung Electronics Co., Ltd Adaptive ambisonic binaural rendering
US20170372748A1 (en) * 2016-06-28 2017-12-28 VideoStitch Inc. Method to align an immersive video and an immersive sound field
CN108810794A (zh) * 2017-04-27 2018-11-13 蒂雅克股份有限公司 目标位置设定装置及声像定位装置
CN107249166A (zh) * 2017-06-19 2017-10-13 依偎科技(南昌)有限公司 一种完全沉浸式的耳机立体声实现方法及系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114070931A (zh) * 2021-11-25 2022-02-18 咪咕音乐有限公司 音效调整方法、装置、设备及计算机可读存储介质
CN114070931B (zh) * 2021-11-25 2023-08-15 咪咕音乐有限公司 音效调整方法、装置、设备及计算机可读存储介质
WO2023173285A1 (zh) * 2022-03-15 2023-09-21 深圳市大疆创新科技有限公司 音频处理方法、装置、电子设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN112771893A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
US11375329B2 (en) Systems and methods for equalizing audio for playback on an electronic device
WO2018149275A1 (zh) 调整音箱输出的音频的方法和装置
US20130279724A1 (en) Auto detection of headphone orientation
CN111050250B (zh) 降噪方法、装置、设备和存储介质
CN108668009B (zh) 输入操作控制方法、装置、终端、耳机及可读存储介质
CN110166890B (zh) 音频的播放采集方法、设备及存储介质
CN108319445B (zh) 一种音频播放方法及移动终端
WO2017197867A1 (zh) 一种采集声音信号的方法和装置
WO2017020422A1 (zh) 一种音频电路选择方法、装置和电路以及手持终端
CN109547848B (zh) 响度调整方法、装置、电子设备以及存储介质
CN110996305B (zh) 连接蓝牙设备的方法、装置、电子设备及介质
WO2021139535A1 (zh) 播放音频的方法、装置、系统、设备及存储介质
CN110708630B (zh) 控制耳机的方法、装置、设备及存储介质
WO2022033176A1 (zh) 音频播放控制方法、装置、电子设备及存储介质
CN110931053A (zh) 检测录音时延、录制音频的方法、装置、终端及存储介质
WO2019237667A1 (zh) 播放音频数据的方法和装置
WO2020102994A1 (zh) 3d音效实现方法、装置、存储介质及电子设备
EP4203447A1 (en) Sound processing method and apparatus thereof
WO2022057365A1 (zh) 降噪方法、终端设备及计算机可读存储介质
CN109360582B (zh) 音频处理方法、装置及存储介质
CN113099373B (zh) 声场宽度扩展的方法、装置、终端及存储介质
CN108055633A (zh) 一种音频播放方法及移动终端
CN111147982B (zh) 一种音频播放方法、装置、存储介质及终端
CN110708582B (zh) 同步播放的方法、装置、电子设备及介质
CN109218920B (zh) 一种信号处理方法、装置及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940605

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 30.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18940605

Country of ref document: EP

Kind code of ref document: A1