WO2020073565A1 - Audio processing method and apparatus - Google Patents

Audio processing method and apparatus

Info

Publication number
WO2020073565A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
reverberation
algorithm
processed
category
Prior art date
Application number
PCT/CN2019/073126
Other languages
French (fr)
Chinese (zh)
Inventor
黄传增 (Huang Chuanzeng)
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Publication of WO2020073565A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 - Sound input; Sound output
    • G06F 3/165 - Management of the audio stream, e.g. setting of volume, audio stream path

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to audio processing methods and devices.
  • the embodiments of the present disclosure propose an audio processing method and device.
  • An embodiment of the present disclosure provides an audio processing method including: acquiring audio to be processed; determining whether the current device supports adding reverb to the audio to be processed; in response to determining that the current device supports adding reverb to the audio to be processed, determining a reverb algorithm according to the reverb category selected by the user; and processing the audio to be processed according to the determined reverb algorithm to obtain the processed audio.
  • the method further includes: playing the processed audio.
  • determining whether the current device supports adding reverb to the audio to be processed includes: obtaining a device information set; and determining whether the current device supports adding reverb to the audio to be processed according to the device information set.
  • Determining a reverb algorithm according to the reverb category selected by the user may include: in response to determining that the current device supports adding reverb to the audio to be processed, acquiring an algorithm category list that characterizes the correspondence between device information and reverb algorithm categories; determining, based on the algorithm category list, the reverb algorithm category corresponding to the device information of the current device, where the reverb algorithm category characterizes the category to which a reverb algorithm belongs; and determining the reverb algorithm under the determined reverb algorithm category according to the reverb category selected by the user.
  • Determining the reverb algorithm according to the reverb category selected by the user may include: in response to determining that the current device supports adding reverb to the audio to be processed, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determining the reverb algorithm according to the audio characteristic data and the reverb category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • An embodiment of the present disclosure provides an audio processing apparatus including: a first acquiring unit configured to acquire audio to be processed; a first determining unit configured to determine whether the current device supports adding reverb to the audio to be processed; a second determining unit configured to determine a reverb algorithm according to the reverb category selected by the user in response to determining that the current device supports adding reverb to the audio to be processed; and a processing unit configured to process the audio to be processed according to the determined reverb algorithm to obtain the processed audio.
  • the device further includes a playback unit configured to play the processed audio.
  • the first determining unit is further configured to: obtain the device information set; and determine whether the current device supports adding reverberation to the audio to be processed according to the device information set.
  • The second determining unit is further configured to: in response to determining that the current device supports adding reverb to the audio to be processed, acquire an algorithm category list that characterizes the correspondence between device information and reverb algorithm categories; determine, based on the algorithm category list, the reverb algorithm category corresponding to the device information of the current device, where the reverb algorithm category characterizes the category to which a reverb algorithm belongs; and determine the reverb algorithm under the determined reverb algorithm category according to the reverb category selected by the user.
  • the second determination unit is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and the second determination unit is It is further configured to determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • An embodiment of the present disclosure provides a terminal device, including: one or more processors; and a storage apparatus storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any one of the implementations of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored.
  • When the program is executed by a processor, the method described in any one of the implementations of the first aspect is implemented.
  • The method and apparatus provided by the embodiments of the present disclosure can first determine whether the current device supports adding reverb to the audio to be processed, so that the reverb effect can be configured on or off for different devices.
  • In response to determining that the current device supports adding reverb, the reverb algorithm is determined according to the reverb category selected by the user to achieve different environment simulation effects, and the audio to be processed is processed according to the determined reverb algorithm to obtain the processed audio. By configuring the reverb effect on or off in this process, the noticeable delay caused by adding a reverb effect on a device with poor performance can be avoided.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
  • FIG. 2 is a flowchart of an embodiment of an audio processing method according to the present disclosure;
  • FIG. 3 is a schematic diagram of an application scenario of an audio processing method according to an embodiment of the present disclosure;
  • FIG. 4 is a flowchart of yet another embodiment of an audio processing method according to the present disclosure;
  • FIG. 5 is a schematic structural diagram of an embodiment of an audio processing apparatus according to the present disclosure;
  • FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • FIG. 1 shows an exemplary system architecture 100 to which an audio processing method or apparatus of an embodiment of the present disclosure can be applied.
  • the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • the network 104 is a medium used to provide a communication link between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, and so on.
  • Various communication client applications such as singing applications, video recording and sharing applications, and audio processing applications, can be installed on the terminal devices 101, 102, and 103.
  • The terminal devices 101, 102, and 103 may be hardware or software.
  • When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices that have a display screen and support audio processing.
  • When the terminal devices 101, 102, and 103 are software, they can be installed in the above-mentioned electronic devices, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, for example, a back-end server that supports applications installed on the terminal devices 101, 102, and 103.
  • the audio processing method provided by the embodiments of the present disclosure is generally executed by the terminal devices 101, 102, and 103.
  • the audio processing device is generally provided in the terminal devices 101, 102, 103.
  • The server may be hardware or software.
  • When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server.
  • When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
  • It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
  • With continued reference to FIG. 2, a flow 200 of an embodiment of an audio processing method according to the present disclosure is shown. The audio processing method includes the following steps:
  • Step 201 Acquire audio to be processed.
  • the execution subject of the audio processing method can acquire the audio to be processed in various ways.
  • the above-mentioned execution subject can record the voice of the user singing through the recording device to obtain the audio to be processed.
  • The recording device may be integrated in the above-mentioned execution subject or communicatively connected to it, which is not limited in the present disclosure.
  • The above-mentioned execution subject may also obtain pre-stored audio, from local storage or from another communicatively connected storage device, as the audio to be processed.
  • the audio to be processed may be any audio.
  • The audio to be processed may be specified by a technician or selected according to certain conditions.
  • the audio to be processed may be a complete audio sung by the user, or an audio segment sung by the user.
  • the audio to be processed may also be an audio segment with a short singing time (for example, 30 milliseconds) by the user.
  • Step 202 Determine whether the current device supports adding reverberation to the audio to be processed.
  • the above-mentioned execution subject may determine in various ways whether the current device supports adding reverberation to the audio to be processed.
  • the current device may be the above-mentioned execution subject.
  • different devices have different performance. When adding reverb to audio, a device with poor processing performance needs a longer processing time when adding reverb to the audio to be processed. In a scenario that requires real-time monitoring, it will cause a significant delay and cannot meet the needs of real-time monitoring. Therefore, it can be considered that these devices do not support adding reverb.
  • As an example, the above-mentioned execution subject may obtain performance parameters of the current device, for example, the number of computing cores in the CPU (Central Processing Unit), the size of the memory, and so on. Then, according to preset processing logic or by querying a preset performance parameter table, it can be determined from the performance parameters of the current device whether the current device supports adding reverb to the audio to be processed.
  • The performance parameter table may store the correspondence between the performance parameters of a device and whether it supports adding reverb.
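  • As a purely illustrative, non-limiting sketch of such a check, the following Python snippet compares hypothetical performance parameters of the current device against preset thresholds; the parameter names and threshold values are assumptions made for illustration and are not specified by the present disclosure.

```python
import os

# Hypothetical performance thresholds; the disclosure does not specify concrete
# values, so these numbers are placeholders a product would tune.
MIN_CPU_CORES = 2
MIN_MEMORY_BYTES = 2 * 1024 ** 3  # 2 GiB

def supports_reverb_by_performance(cpu_cores: int, memory_bytes: int) -> bool:
    """Return True if the device's performance parameters meet the preset
    thresholds for adding reverb without a noticeable monitoring delay."""
    return cpu_cores >= MIN_CPU_CORES and memory_bytes >= MIN_MEMORY_BYTES

# Example: the core count can be queried from the OS; the memory size would
# come from a platform-specific API on a real terminal device.
cores = os.cpu_count() or 1
assumed_memory = 4 * 1024 ** 3  # placeholder value for illustration
print(supports_reverb_by_performance(cores, assumed_memory))
```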
  • determining whether the current device supports adding reverb to the audio to be processed may also include: acquiring a device information set; according to the device information set, determining whether the current device supports adding reverb to the audio to be processed .
  • the device information may be any information that can identify the device.
  • the device information may be the device model, device name, and so on.
  • the device information in the device information set may be device information of devices that do not support adding reverberation.
  • the execution subject may determine whether the device information of the current device is in the device information set. If it is, it can be determined that the current device does not support adding reverb to the audio to be processed. Otherwise, it can be determined that the current device supports adding reverb to the audio to be processed. It can be understood that, in practice, the device information in the foregoing device information set may also be device information of a device that supports adding reverberation.
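  • A minimal sketch of the device-information-set check described above might look as follows, assuming the set lists device models that do not support adding reverb; the model strings are invented placeholders.

```python
# Hypothetical set of device models known not to support adding reverb;
# the model strings are invented for illustration.
UNSUPPORTED_MODELS = {"PhoneModel-A1", "PhoneModel-B2"}

def supports_reverb(device_model: str) -> bool:
    """A device supports adding reverb if its model is not in the set of
    unsupported models. With the inverse convention (a set of supported
    models), the test would simply be membership instead of non-membership."""
    return device_model not in UNSUPPORTED_MODELS

print(supports_reverb("PhoneModel-C3"))  # True
```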
  • In this embodiment, if it is determined that the current device supports adding reverb to the audio to be processed, step 203 may be performed.
  • If it is determined that the current device does not support adding reverb to the audio to be processed, the audio to be processed may be played directly.
  • Step 203 In response to determining that the current device supports adding reverberation to the audio to be processed, determine the reverberation algorithm according to the reverberation category selected by the user.
  • the above-mentioned execution subject may determine the reverberation algorithm according to the reverberation category selected by the user.
  • the reverberation algorithm can be divided into different reverberation categories according to the simulated environmental effects.
  • For example, reverb categories may include a hall effect, a studio effect, a valley effect, and so on.
  • For each reverb category, category information (such as a name or a picture) can be displayed on the above-mentioned execution subject, and each piece of category information is associated with the reverb category it indicates. Therefore, the user can perform an operation (such as a click operation) on the category information to select a reverb category.
  • the reverberation algorithm corresponding to each reverberation category can be preset, so that the above-mentioned execution subject can determine the reverberation algorithm according to the reverberation category selected by the user.
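  • For illustration only, the preset correspondence between reverb categories and reverb algorithms could be represented as a simple lookup table, as sketched below in Python; the category names and algorithm parameters are assumed values, not ones given in the disclosure.

```python
# Hypothetical preset correspondence from user-selectable reverb categories to
# reverb algorithm configurations; names and parameters are placeholders.
REVERB_ALGORITHMS = {
    "hall":   {"decay_seconds": 2.5, "wet_mix": 0.35},
    "studio": {"decay_seconds": 0.8, "wet_mix": 0.20},
    "valley": {"decay_seconds": 4.0, "wet_mix": 0.45},
}

def select_algorithm(user_category: str) -> dict:
    """Look up the preset reverb algorithm for the category the user selected."""
    return REVERB_ALGORITHMS[user_category]

print(select_algorithm("valley"))
```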
  • Determining the reverb algorithm according to the reverb category selected by the user may include: in response to determining that the current device supports adding reverb to the audio to be processed, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determining the reverb algorithm according to the audio characteristic data and the reverb category selected by the user.
  • the above-mentioned execution subject may perform characteristic analysis on the audio to be processed in various ways to obtain audio characteristic data.
  • the audio to be processed can be analyzed through some existing audio analysis applications or open source toolkits.
  • the audio characteristics include audio frequency, bandwidth, amplitude and so on.
  • On this basis, the reverb algorithm can be determined according to the audio characteristic data and the reverb category selected by the user. Specifically, a pre-established correspondence table relating audio characteristic data and reverb categories to reverb algorithms can be queried for the reverb algorithm that matches the obtained audio characteristic data and the reverb category selected by the user.
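  • The following non-limiting sketch shows one way such an audio characteristic analysis and the subsequent table-style selection might be implemented; the choice of characteristics (peak amplitude and a rough bandwidth estimate) and the matching rule are assumptions made for illustration.

```python
import numpy as np

def analyze_audio(samples: np.ndarray, sample_rate: int) -> dict:
    """Compute simple audio characteristics: peak amplitude and a rough
    estimate of occupied bandwidth from the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Bandwidth estimate: highest frequency whose bin holds at least 1% of the peak bin.
    significant = freqs[spectrum >= 0.01 * spectrum.max()]
    return {
        "amplitude": float(np.max(np.abs(samples))),
        "bandwidth_hz": float(significant.max()) if significant.size else 0.0,
    }

def select_algorithm_by_features(features: dict, user_category: str) -> str:
    """Illustrative correspondence rule: quieter audio gets a lighter variant
    of the algorithm for the chosen reverb category."""
    variant = "light" if features["amplitude"] < 0.3 else "full"
    return f"{user_category}-{variant}"

# Example with a synthetic 30 ms segment at 44.1 kHz.
sr = 44100
t = np.linspace(0.0, 0.03, int(sr * 0.03), endpoint=False)
segment = 0.2 * np.sin(2 * np.pi * 440.0 * t)
print(select_algorithm_by_features(analyze_audio(segment, sr), "hall"))
```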
  • Step 204 Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • the above-mentioned execution subject may process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • the audio to be processed may be input to at least one filter set according to the determined reverberation algorithm, so as to obtain the processed audio.
  • the number of filters and the like corresponding to each reverberation algorithm can be preset.
  • each reverberation algorithm can be obtained by combining at least one filter. For example, comb filters and all-pass filters can be selected for combination.
  • the filter here may be a hardware module in the current device or a software module in the current device according to implementation needs.
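  • To make the filter-combination idea concrete, here is a minimal, non-limiting sketch of a Schroeder-style reverberator built from parallel comb filters followed by an all-pass filter; the delay lengths and gains are common textbook-style values rather than parameters taken from the disclosure, and a production implementation would process audio in streaming blocks rather than whole arrays.

```python
import numpy as np

def comb_filter(x: np.ndarray, delay: int, feedback: float) -> np.ndarray:
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.copy(x)
    for n in range(delay, len(x)):
        y[n] += feedback * y[n - delay]
    return y

def allpass_filter(x: np.ndarray, delay: int, gain: float) -> np.ndarray:
    """All-pass filter: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + xd + gain * yd
    return y

def add_reverb(dry: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Sum several parallel comb filters, smooth the result with an all-pass
    filter, and mix it back with the dry signal."""
    comb_delays = [1557, 1617, 1491, 1422]   # delays in samples
    wet = sum(comb_filter(dry, d, 0.77) for d in comb_delays) / len(comb_delays)
    wet = allpass_filter(wet, 225, 0.7)
    return (1.0 - wet_mix) * dry + wet_mix * wet

# Example: apply reverb to a one-second impulse at 44.1 kHz.
impulse = np.zeros(44100, dtype=np.float64)
impulse[0] = 1.0
processed = add_reverb(impulse)
```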
  • FIG. 3 is a schematic diagram of an application scenario of the audio processing method according to this embodiment.
  • the execution subject of the audio processing method may be the smartphone 301.
  • The smartphone 301 first acquires the audio 3011 to be processed, and then determines, according to the performance parameters of the smartphone 301, whether reverb can be added. Taking the number of computing cores in the CPU as the performance parameter, the smartphone 301 as a dual-core device, and preset processing logic under which dual-core devices support adding reverb as an example, the smartphone 301 can determine that the current device 301 supports adding reverb to the audio 3011 to be processed.
  • In response to determining that the current device 301 supports adding reverb to the audio to be processed, and taking a valley effect as the reverb category 3012 selected by the user as an example, the reverb algorithm 3013 corresponding to the valley-effect reverb category can be determined by querying a preset correspondence table. According to the determined reverb algorithm 3013, the audio to be processed is processed to obtain the processed audio 3011'.
  • The method provided by the above embodiment of the present disclosure first determines whether the current device supports adding reverb to the audio to be processed, so that the reverb effect can be configured on or off for different devices.
  • In response to determining that the current device supports adding reverb, the reverb algorithm is determined according to the reverb category selected by the user to achieve different environment simulation effects, and the audio to be processed is then processed according to the determined reverb algorithm to obtain the processed audio. By configuring the reverb effect on or off in this way, the noticeable delay caused by adding a reverb effect on a device with poor performance can be avoided.
  • FIG. 4 shows a flow 400 of yet another embodiment of an audio processing method.
  • the process 400 of the audio processing method includes the following steps:
  • Step 401 Acquire audio to be processed.
  • Step 402 Determine whether the current device supports adding reverb to the audio to be processed.
  • For the specific processing of steps 401 and 402 and the resulting technical effects, reference may be made to steps 201 and 202 in the embodiment corresponding to FIG. 2; details are not described here again.
  • Step 403 In response to determining that the current device supports adding reverberation to the audio to be processed, obtain a list of algorithm categories.
  • the above-mentioned execution subject may obtain the algorithm category list in various ways.
  • a list of algorithm categories issued by a server connected by communication may be received.
  • the algorithm category list may be stored locally in advance, so that the algorithm category list can be directly obtained locally.
  • The algorithm category list is used to characterize the correspondence between device information and reverb algorithm categories. In practice, the reverb algorithms can be divided according to certain indicators, such as the system overhead required by the algorithm architecture or the complexity of the algorithm, to obtain different categories of reverb algorithms.
  • the reverberation algorithm category is used to characterize the category to which the reverberation algorithm belongs. As an example, it can be divided into three categories according to the system overhead required by the algorithm.
  • The first category has low system overhead and can be implemented with comb filters, all-pass filters, Schroeder filters, or a combination of these filters.
  • The second category has high system overhead and can be implemented by combining high- and low-pass filtering with delay filters, or by using an existing filter system such as a Moorer reverberator.
  • The third category has medium system overhead and can be implemented with a combination of a feedback network and an all-pass filter.
  • Step 404 based on the algorithm category list, determine the reverberation algorithm category corresponding to the device information of the current device.
  • the execution subject may determine the reverberation algorithm category corresponding to the device information of the current device based on the algorithm category list. Specifically, the device information of the current device can be matched in the algorithm category list, so as to obtain the reverberation algorithm category corresponding to the device information of the current device.
  • Step 405 Determine the reverberation algorithm under the above reverberation algorithm category according to the reverberation category selected by the user.
  • the execution subject may determine the reverberation algorithm under the reverberation algorithm category according to the reverberation category selected by the user.
  • In practice, each reverb algorithm category may include multiple algorithms, which differ in the environment effect they simulate. Therefore, on the basis of the reverb algorithm category determined in step 404, the above-mentioned execution subject can determine the applicable reverb algorithm among the algorithms under that category according to the reverb category selected by the user. For the specific implementation of determining the applicable reverb algorithm among the algorithms under this reverb algorithm category, and the technical effects it brings, reference may be made to step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.
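  • A compact sketch of how the algorithm category list of steps 403-405 might be represented and queried is given below; the device models, category labels, and algorithm names are placeholders invented for illustration.

```python
# Hypothetical algorithm category list: device model -> reverb algorithm category.
ALGORITHM_CATEGORY_LIST = {
    "LowEndPhone-1":   "low_overhead",
    "MidRangePhone-2": "medium_overhead",
    "FlagshipPhone-3": "high_overhead",
}

# Hypothetical algorithms available under each category, keyed by the
# environment-simulating reverb category the user can select.
ALGORITHMS_BY_CATEGORY = {
    "low_overhead":    {"hall": "comb_allpass_hall", "valley": "comb_allpass_valley"},
    "medium_overhead": {"hall": "fdn_hall",          "valley": "fdn_valley"},
    "high_overhead":   {"hall": "moorer_hall",       "valley": "moorer_valley"},
}

def determine_algorithm(device_model: str, user_reverb_category: str) -> str:
    """Step 404: map device info to a reverb algorithm category; step 405:
    pick the algorithm under that category matching the user's reverb category."""
    algorithm_category = ALGORITHM_CATEGORY_LIST[device_model]
    return ALGORITHMS_BY_CATEGORY[algorithm_category][user_reverb_category]

print(determine_algorithm("MidRangePhone-2", "valley"))  # fdn_valley
```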
  • Step 406 Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • For the specific processing of step 406 and the technical effect brought by it, reference may be made to step 204 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • Step 407 Play the processed audio.
  • the above-mentioned execution subject may play the processed audio through a playback device integrated or communicatively connected in the above-mentioned execution subject.
  • the audio playback device can play the processed audio, so that the user can monitor the audio with added reverberation in real time.
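  • For the real-time monitoring scenario, the overall flow could be organized as a loop over short audio segments, roughly as sketched below; the 30 ms segment length follows the earlier example, while the capture and playback functions are stand-ins for whatever recording and audio-output APIs the terminal device provides.

```python
import numpy as np

SEGMENT_SECONDS = 0.03  # 30 ms segments, as in the earlier example
SAMPLE_RATE = 44100

def capture_segment() -> np.ndarray:
    """Placeholder for the device recording API; returns one 30 ms segment."""
    return np.zeros(int(SAMPLE_RATE * SEGMENT_SECONDS), dtype=np.float64)

def play_segment(samples: np.ndarray) -> None:
    """Placeholder for the device playback API."""
    pass

def monitor_loop(device_supports_reverb: bool, reverb, num_segments: int) -> None:
    """Capture, optionally add reverb, and immediately play back each segment."""
    for _ in range(num_segments):
        segment = capture_segment()
        if device_supports_reverb:
            segment = reverb(segment)  # e.g. the add_reverb sketch above
        play_segment(segment)

monitor_loop(device_supports_reverb=True, reverb=lambda s: s, num_segments=3)
```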
  • Compared with the embodiment corresponding to FIG. 2, the audio processing method in this embodiment adds the steps of determining the reverb algorithm category based on the algorithm category list and determining the reverb algorithm under the determined reverb algorithm category.
  • In this way, reverb algorithms under different reverb algorithm categories are applied to different devices: a device with better performance can run a higher-overhead algorithm and make full use of its capability, while for a device with poorer performance the system overhead can be reduced.
  • As an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an audio processing apparatus.
  • The apparatus embodiment corresponds to the method embodiment shown in FIG. 2.
  • The apparatus can be specifically applied to various electronic devices.
  • the audio processing device 500 of this embodiment includes: a first acquisition unit 501, a first determination unit 502, a second determination unit 503, and a processing unit 504.
  • the first acquisition unit 501 is configured to acquire audio to be processed.
  • the first determination unit 502 is configured to determine whether the current device supports adding reverberation to the audio to be processed.
  • the second determination unit 503 is configured to determine the reverberation algorithm according to the reverberation category selected by the user in response to determining that the current device supports adding reverberation to the audio to be processed.
  • the processing unit 504 is configured to process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • For the specific processing of the first acquiring unit 501, the first determining unit 502, the second determining unit 503, and the processing unit 504 in the audio processing apparatus 500, and the technical effects brought about by them, reference may be made to steps 201-204 in the embodiment corresponding to FIG. 2; details are not repeated here.
  • the device 500 may further include: a playback unit (not shown in the figure).
  • the playback unit is configured to play the processed audio.
  • the first determining unit 502 may be further configured to: obtain a device information set; and determine whether the current device supports adding reverberation to the audio to be processed according to the device information set.
  • The second determining unit 503 is further configured to: in response to determining that the current device supports adding reverb to the audio to be processed, acquire an algorithm category list for characterizing the correspondence between device information and reverb algorithm categories; determine, based on the algorithm category list, the reverb algorithm category corresponding to the device information of the current device, where the reverb algorithm category characterizes the category to which a reverb algorithm belongs; and determine the reverb algorithm under the determined reverb algorithm category according to the reverb category selected by the user.
  • the second determining unit 503 is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristics Data; determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • the first determining unit 502 may determine whether the current device supports adding reverberation to the audio to be processed, so as to configure the reverberation effect to be turned on or off for different devices.
  • the second determining unit 503 determines a reverberation algorithm according to the reverberation category selected by the user, so as to achieve different environmental simulation effects.
  • The processing unit 504 may process the audio to be processed according to the determined reverb algorithm to obtain the processed audio. In this process, by configuring the reverb effect on or off, the noticeable delay caused by adding a reverb effect on a device with poor performance can be avoided.
  • FIG. 6 shows a schematic structural diagram of an electronic device (for example, the terminal device in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (for example, car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is just an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • As shown in FIG. 6, the electronic device 600 may include a processing device (for example, a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate operations and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other via a bus 604.
  • An input / output (I / O) interface 605 is also connected to the bus 604.
  • The following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or from the storage device 608, or from the ROM 602.
  • When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
  • the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device .
  • the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire audio to be processed; determine whether the current device supports adding reverb to the audio to be processed; in response to determining that the current device supports adding reverb to the audio to be processed, determine a reverb algorithm according to the reverb category selected by the user; and process the audio to be processed according to the determined reverb algorithm to obtain the processed audio.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, connected via the Internet using an Internet service provider).
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented in software or hardware.
  • the name of the unit does not constitute a limitation on the unit itself.
  • the first acquiring unit may also be described as a “unit acquiring audio to be processed”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Abstract

Disclosed in embodiments of the present disclosure are an audio processing method and apparatus. An implementation of the method comprises: obtaining an audio to be processed; determining whether the current device supports addition of a reverb to said audio; in response to determining that the current device supports addition of a reverb to said audio, determining a reverb algorithm according to a reverb type selected by a user; and processing said audio according to the determined reverb algorithm to obtain a processed audio. The implementation implements configuration of on/off of a reverb effect, thereby avoiding an obvious delay caused by addition of the reverb effect to the device having poor performance.

Description

Audio processing method and apparatus
This patent application claims priority to Chinese Patent Application No. 201811190930.2, filed on October 12, 2018 by 北京微播视界科技有限公司 and entitled "Audio processing method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to an audio processing method and apparatus.
Background
With the continuous update and iteration of electronic devices, the performance of different electronic devices currently varies, and so does their audio processing capability. On a device with poor performance, processing audio takes a long time; in a real-time monitoring scenario, this causes a noticeable delay in the processed audio.
Summary
Embodiments of the present disclosure propose an audio processing method and apparatus.
In a first aspect, an embodiment of the present disclosure provides an audio processing method, including: acquiring audio to be processed; determining whether the current device supports adding reverb to the audio to be processed; in response to determining that the current device supports adding reverb to the audio to be processed, determining a reverb algorithm according to a reverb category selected by a user; and processing the audio to be processed according to the determined reverb algorithm to obtain processed audio.
In some embodiments, the method further includes: playing the processed audio.
In some embodiments, determining whether the current device supports adding reverb to the audio to be processed includes: acquiring a device information set; and determining, according to the device information set, whether the current device supports adding reverb to the audio to be processed.
In some embodiments, in response to determining that the current device supports adding reverb to the audio to be processed, determining the reverb algorithm according to the reverb category selected by the user includes: in response to determining that the current device supports adding reverb to the audio to be processed, acquiring an algorithm category list characterizing the correspondence between device information and reverb algorithm categories; determining, based on the algorithm category list, the reverb algorithm category corresponding to the device information of the current device, where the reverb algorithm category characterizes the category to which a reverb algorithm belongs; and determining the reverb algorithm under the determined reverb algorithm category according to the reverb category selected by the user.
In some embodiments, in response to determining that the current device supports adding reverb to the audio to be processed, determining the reverb algorithm according to the reverb category selected by the user includes: in response to determining that the current device supports adding reverb to the audio to be processed, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determining the reverb algorithm according to the audio characteristic data and the reverb category selected by the user.
In some embodiments, the category to which a reverb algorithm belongs is divided according to one of the following: the system overhead required by the reverb algorithm architecture; or the complexity of the reverb algorithm.
In a second aspect, an embodiment of the present disclosure provides an audio processing apparatus, including: a first acquiring unit configured to acquire audio to be processed; a first determining unit configured to determine whether the current device supports adding reverb to the audio to be processed; a second determining unit configured to, in response to determining that the current device supports adding reverb to the audio to be processed, determine a reverb algorithm according to a reverb category selected by a user; and a processing unit configured to process the audio to be processed according to the determined reverb algorithm to obtain processed audio.
In some embodiments, the apparatus further includes: a playback unit configured to play the processed audio.
In some embodiments, the first determining unit is further configured to: acquire a device information set; and determine, according to the device information set, whether the current device supports adding reverb to the audio to be processed.
In some embodiments, the second determining unit is further configured to: in response to determining that the current device supports adding reverb to the audio to be processed, acquire an algorithm category list for characterizing the correspondence between device information and reverb algorithm categories; determine, based on the algorithm category list, the reverb algorithm category corresponding to the device information of the current device, where the reverb algorithm category characterizes the category to which a reverb algorithm belongs; and determine the reverb algorithm under the determined reverb algorithm category according to the reverb category selected by the user.
In some embodiments, the second determining unit is further configured to: in response to determining that the current device supports adding reverb to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determine the reverb algorithm according to the audio characteristic data and the reverb category selected by the user.
In some embodiments, the category to which a reverb algorithm belongs is divided according to one of the following: the system overhead required by the reverb algorithm architecture; or the complexity of the reverb algorithm.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including: one or more processors; and a storage apparatus storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus provided by the embodiments of the present disclosure first determine whether the current device supports adding reverb to the audio to be processed, so that the reverb effect can be configured on or off for different devices. In response to determining that the current device supports adding reverb to the audio to be processed, a reverb algorithm is determined according to the reverb category selected by the user, so that different environment simulation effects can be achieved. The audio to be processed is then processed according to the determined reverb algorithm to obtain processed audio. By configuring the reverb effect on or off in this way, the noticeable delay caused by adding a reverb effect on a device with poor performance can be avoided.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
FIG. 2 is a flowchart of an embodiment of an audio processing method according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of an audio processing method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of yet another embodiment of an audio processing method according to the present disclosure;
FIG. 5 is a schematic structural diagram of an embodiment of an audio processing apparatus according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure will be further described in detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant disclosure, not to limit it. It should also be noted that, for ease of description, only the parts related to the relevant disclosure are shown in the drawings.
It should be noted that the embodiments of the present disclosure and the features in the embodiments may be combined with each other where no conflict arises. The present disclosure will be described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which an audio processing method or apparatus of an embodiment of the present disclosure can be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is a medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables, and so on.
Users can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, and so on. Various communication client applications, such as singing applications, video recording and sharing applications, and audio processing applications, can be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a display screen and support audio processing. When they are software, they can be installed in the above-mentioned electronic devices, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, for example, a back-end server that supports the applications installed on the terminal devices 101, 102, and 103.
It should be noted that the audio processing method provided by the embodiments of the present disclosure is generally executed by the terminal devices 101, 102, and 103. Correspondingly, the audio processing apparatus is generally provided in the terminal devices 101, 102, and 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to FIG. 2, a flow 200 of an embodiment of an audio processing method according to the present disclosure is shown. The audio processing method includes:
Step 201: Acquire audio to be processed.
In this embodiment, the execution subject of the audio processing method (for example, the terminal devices 101, 102, and 103 shown in FIG. 1) may acquire the audio to be processed in various ways. For example, the execution subject may record the voice of the user singing through a recording device to obtain the audio to be processed. The recording device may be integrated in the execution subject or communicatively connected to it, which is not limited in the present disclosure. As another example, the execution subject may obtain pre-stored audio, from local storage or from another communicatively connected storage device, as the audio to be processed.
In this embodiment, the audio to be processed may be any audio, specified by a technician or selected according to certain conditions. For example, when a user sings through a terminal device (for example, a smartphone), the audio to be processed may be the complete audio sung by the user or an audio segment sung by the user. In a real-time monitoring scenario, the audio to be processed may also be a short audio segment (for example, 30 milliseconds) sung by the user.
Step 202: Determine whether the current device supports adding reverb to the audio to be processed.
In this embodiment, the execution subject may determine in various ways whether the current device supports adding reverb to the audio to be processed, where the current device may be the execution subject itself. In practice, different devices have different performance. When adding reverb to the audio to be processed, a device with poor processing performance needs a longer processing time; in a scenario requiring real-time monitoring, this causes a noticeable delay and cannot meet the monitoring requirement, so such devices can be regarded as not supporting the addition of reverb. As an example, the execution subject may obtain performance parameters of the current device, such as the number of computing cores in the CPU (Central Processing Unit) and the size of the memory, and then determine, according to preset processing logic or by querying a preset performance parameter table, whether the current device supports adding reverb to the audio to be processed. The performance parameter table may store the correspondence between device performance parameters and whether adding reverb is supported.
在本实施例的一些可选的实现方式中,确定当前设备是否支持对待处理音频添加混响,也可以包括:获取设备信息集合;根据设备信息集合,确定当前设备是否支持对待处理音频添加混响。In some optional implementation manners of this embodiment, determining whether the current device supports adding reverb to the audio to be processed may also include: acquiring a device information set; according to the device information set, determining whether the current device supports adding reverb to the audio to be processed .
在这些实现方式中,设备信息可以是能够标识设备的任何信息。作为示例,设备信息可以是设备型号、设备名称等等。作为示例,设备信息集合中的设备信息可以是不支持添加混响的设备的设备信息。此时,上述执行主体可以确定当前设备的设备信息是否在上述设备信息集合中。若在,可以确定当前设备不支持对待处理音频添加混响。反之,则可以确定当前设备支持对待处理音频添加混响。可以理解,实践中,上述设备信息集合中的设备信息也可以是支持添加混响的设备的设备信息。In these implementations, the device information may be any information that can identify the device. As an example, the device information may be the device model, device name, and so on. As an example, the device information in the device information set may be device information of devices that do not support adding reverberation. At this time, the execution subject may determine whether the device information of the current device is in the device information set. If it is, it can be determined that the current device does not support adding reverb to the audio to be processed. Otherwise, it can be determined that the current device supports adding reverb to the audio to be processed. It can be understood that, in practice, the device information in the foregoing device information set may also be device information of a device that supports adding reverberation.
In this embodiment, if it is determined that the current device supports adding reverberation to the audio to be processed, step 203 may be performed.
In some optional implementations of this embodiment, if it is determined that the current device does not support adding reverberation to the audio to be processed, the audio to be processed may be played directly.
Step 203: In response to determining that the current device supports adding reverberation to the audio to be processed, determine the reverberation algorithm according to the reverberation category selected by the user.
In this embodiment, in response to determining that the current device supports adding reverberation to the audio to be processed, the execution body may determine the reverberation algorithm according to the reverberation category selected by the user. In practice, reverberation algorithms can be divided into different reverberation categories according to the environmental effects they simulate. For example, the reverberation categories may include a hall effect, a recording-studio effect, a valley effect, and so on. For the various reverberation categories, the category information of each category (for example, its name and picture) can be displayed on the execution body. Each piece of category information is associated with the reverberation category it indicates, so the user can perform operations (for example, click operations) on the category information to select a reverberation category.
In this embodiment, the reverberation algorithm corresponding to each reverberation category can be preset, so that the execution body can determine the reverberation algorithm according to the reverberation category selected by the user.
In some optional implementations of this embodiment, in response to determining that the current device supports adding reverberation to the audio to be processed, determining the reverberation algorithm according to the reverberation category selected by the user includes: in response to that determination, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determining the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
In these implementations, the execution body may analyze the characteristics of the audio to be processed in various ways to obtain the audio characteristic data. As an example, the audio to be processed can be analyzed with existing audio analysis applications or open-source toolkits. The audio characteristics include the frequency, bandwidth, amplitude, and so on of the audio. On this basis, the reverberation algorithm can be determined according to the audio characteristic data and the reverberation category selected by the user. Specifically, a pre-established correspondence table between audio characteristic data plus reverberation category on the one hand and reverberation algorithms on the other can be queried for the reverberation algorithm that matches the obtained audio characteristic data and the reverberation category selected by the user.
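The following sketch illustrates one way such an analysis and table lookup could be organized. The feature extraction is deliberately rough, and the bucketing rule, table contents, and algorithm identifiers are assumptions for illustration, not part of the embodiment.

```python
import numpy as np

def analyze_audio(samples: np.ndarray, sample_rate: int) -> dict:
    """Very rough audio characteristic analysis: peak amplitude and dominant frequency."""
    amplitude = float(np.max(np.abs(samples))) if samples.size else 0.0
    spectrum = np.abs(np.fft.rfft(samples))
    dominant_hz = float(np.argmax(spectrum) * sample_rate / max(len(samples), 1))
    return {"amplitude": amplitude, "dominant_hz": dominant_hz}

# Hypothetical correspondence table: (frequency bucket, reverb category) -> algorithm id.
ALGORITHM_TABLE = {
    ("low", "valley"): "valley_low_freq_algo",
    ("high", "valley"): "valley_high_freq_algo",
    ("low", "hall"): "hall_low_freq_algo",
    ("high", "hall"): "hall_high_freq_algo",
}

def choose_algorithm(features: dict, reverb_category: str) -> str:
    """Bucket the dominant frequency, then look up the matching algorithm."""
    bucket = "low" if features["dominant_hz"] < 1000.0 else "high"
    return ALGORITHM_TABLE[(bucket, reverb_category)]

rng = np.random.default_rng(0)
chunk = rng.standard_normal(48000 * 30 // 1000)  # a 30 ms chunk at 48 kHz
print(choose_algorithm(analyze_audio(chunk, 48000), "valley"))
```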
Step 204: Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
In this embodiment, the execution body may process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio. Specifically, the audio to be processed may be fed into at least one filter configured according to the determined reverberation algorithm, so as to obtain the processed audio. It should be noted that the types and number of filters corresponding to each reverberation algorithm can be preset, and that each reverberation algorithm can be obtained by combining at least one filter. For example, a comb filter and an all-pass filter may be selected and combined. Depending on implementation needs, a filter here may be a hardware module in the current device or a software module in the current device.
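To make the comb plus all-pass combination concrete, the following is a minimal software sketch of parallel comb filters feeding series all-pass filters. The delay lengths, feedback gains, and dry/wet mix are illustrative assumptions and do not reproduce any particular reverberation algorithm of the embodiment.

```python
import numpy as np

def comb_filter(x: np.ndarray, delay: int, feedback: float) -> np.ndarray:
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.copy(x)
    for n in range(delay, len(x)):
        y[n] += feedback * y[n - delay]
    return y

def allpass_filter(x: np.ndarray, delay: int, gain: float) -> np.ndarray:
    """All-pass filter: y[n] = -gain * x[n] + x[n - delay] + gain * y[n - delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + xd + gain * yd
    return y

def add_reverb(x: np.ndarray) -> np.ndarray:
    """Parallel comb filters followed by series all-pass filters, then dry/wet mix."""
    combs = sum(comb_filter(x, d, 0.8) for d in (1557, 1617, 1491, 1422))
    wet = combs / 4.0
    for d in (225, 556):
        wet = allpass_filter(wet, d, 0.5)
    return 0.7 * x + 0.3 * wet

rng = np.random.default_rng(0)
dry = rng.standard_normal(48000).astype(np.float64)  # 1 s of noise at 48 kHz
processed = add_reverb(dry)
```

In a real terminal implementation the same topology would typically run on streaming buffers with persistent filter state rather than on a whole array at once.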
Continuing to refer to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the audio processing method according to this embodiment. In the application scenario of FIG. 3, the execution body of the audio processing method may be a smartphone 301. The smartphone 301 first obtains the audio 3011 to be processed. Then, based on the performance parameters of the smartphone 301 (taking the number of CPU computing cores as the performance parameter, a dual-core device as the example, and preset processing logic under which a dual-core device supports adding reverberation), the smartphone 301 can determine that the current device 301 supports adding reverberation to the audio 3011 to be processed. In response to this determination, according to the reverberation category 3012 selected by the user (for example, a valley effect selected by clicking), the reverberation algorithm 3013 corresponding to the valley-effect category can be determined by querying a preset correspondence table. The audio to be processed is then processed according to the determined reverberation algorithm 3013 to obtain the processed audio 3011'.
The method provided by the above embodiment of the present disclosure can determine whether the current device supports adding reverberation to the audio to be processed, so that the reverberation effect can be configured to be turned on or off for different devices. In response to determining that the current device supports adding reverberation to the audio to be processed, the reverberation algorithm is determined according to the reverberation category selected by the user, so as to achieve different environment simulation effects. The audio to be processed is then processed according to the determined reverberation algorithm to obtain the processed audio. In this process, by configuring the reverberation effect to be on or off, the noticeable delay caused by adding a reverberation effect on a device with poor performance can be avoided.
With further reference to FIG. 4, a flow 400 of yet another embodiment of the audio processing method is shown. The flow 400 of the audio processing method includes the following steps:
Step 401: Acquire the audio to be processed.
Step 402: Determine whether the current device supports adding reverberation to the audio to be processed.
In this embodiment, for the specific processing of steps 401 and 402 and the technical effects they bring, reference may be made to steps 201 and 202 in the embodiment corresponding to FIG. 2, which will not be repeated here.
Step 403: In response to determining that the current device supports adding reverberation to the audio to be processed, obtain an algorithm category list.
In this embodiment, in response to determining that the current device supports adding reverberation to the audio to be processed, the execution body may obtain the algorithm category list in various ways. As an example, it may receive an algorithm category list issued by a communicatively connected server; as another example, the algorithm category list may be stored locally in advance so that it can be obtained directly from local storage. The algorithm category list is used to characterize the correspondence between device information and reverberation algorithm categories. In practice, reverberation algorithms can be divided according to certain indicators, for example the system overhead required by the algorithm architecture or the complexity of the algorithm, to obtain different reverberation algorithm categories. A reverberation algorithm category characterizes the category to which a reverberation algorithm belongs. As an example, the algorithms can be divided into three categories according to the system overhead they require. The first category has low system overhead and can be implemented with comb filters, all-pass filters, Schroeder filters, or combinations of these filters. The second category has high system overhead and can be implemented by combining high-pass/low-pass filtering with delay filters, or by an existing filter system such as a Moorer reverberator. The third category has medium system overhead and can be implemented by combining a feedback network with all-pass filters.
On this basis, by running the different categories of algorithms on various devices, the reverberation algorithm category that each device supports can be determined, thereby establishing the correspondence between device information and reverberation algorithm categories.
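One possible representation of such an algorithm category list is sketched below. The device model names and category labels are assumptions for illustration; in practice the list could be issued by a server or stored locally, as described above.

```python
from typing import Optional

# Hypothetical algorithm category list: device model -> reverberation algorithm category.
ALGORITHM_CATEGORY_LIST = {
    "LowEndPhone": "low_overhead",      # e.g. Schroeder comb/all-pass structures
    "MidRangePhone": "medium_overhead",  # e.g. feedback network plus all-pass filters
    "FlagshipPhone": "high_overhead",    # e.g. Moorer-style reverberator
}

def get_algorithm_category_list(server_url: Optional[str] = None) -> dict:
    """Return the algorithm category list. A real implementation might download it
    from a communicatively connected server when a URL is given; this sketch
    always returns the locally stored copy."""
    return dict(ALGORITHM_CATEGORY_LIST)
```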
Step 404: Determine, based on the algorithm category list, the reverberation algorithm category corresponding to the device information of the current device.
In this embodiment, the execution body may determine, based on the algorithm category list, the reverberation algorithm category corresponding to the device information of the current device. Specifically, the device information of the current device can be matched against the algorithm category list to obtain the corresponding reverberation algorithm category.
Step 405: Determine the reverberation algorithm under the determined reverberation algorithm category according to the reverberation category selected by the user.
In this embodiment, the execution body may determine the reverberation algorithm under the reverberation algorithm category according to the reverberation category selected by the user.
It should be noted that each reverberation algorithm category may include multiple algorithms, which differ according to the environmental effects they simulate. Therefore, on the basis of the reverberation algorithm category determined in step 404, the execution body can determine, according to the reverberation category selected by the user, the applicable reverberation algorithm among the algorithms under that category. For the specific implementation of determining the applicable reverberation algorithm among the algorithms under that category and the technical effects it brings, reference may be made to step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.
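A compact sketch of steps 404 and 405 together is given below: the device information is first mapped to an algorithm category, and the user's reverberation category then selects an algorithm within that category. All names and table entries are illustrative assumptions.

```python
# Hypothetical algorithm category list (step 404 lookup).
ALGORITHM_CATEGORY_LIST = {
    "LowEndPhone": "low_overhead",
    "FlagshipPhone": "high_overhead",
}

# Hypothetical algorithms grouped by category and keyed by the user-selected
# reverberation category (step 405 lookup).
ALGORITHMS_BY_CATEGORY = {
    "low_overhead": {"hall": "schroeder_hall", "valley": "schroeder_valley"},
    "high_overhead": {"hall": "moorer_hall", "valley": "moorer_valley"},
}

def determine_algorithm(device_model: str, user_reverb_category: str) -> str:
    algo_category = ALGORITHM_CATEGORY_LIST[device_model]               # step 404
    return ALGORITHMS_BY_CATEGORY[algo_category][user_reverb_category]  # step 405

print(determine_algorithm("LowEndPhone", "valley"))  # -> "schroeder_valley"
```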
Step 406: Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
In this embodiment, for the specific processing of step 406 and the technical effects it brings, reference may be made to step 204 in the embodiment corresponding to FIG. 2, which will not be repeated here.
Step 407: Play the processed audio.
In this embodiment, the execution body may play the processed audio through a playback device integrated in or communicatively connected to the execution body. As an example, in a karaoke scenario, the audio playback device can play the processed audio so that the user can monitor the audio with the added reverberation in real time.
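A possible shape of the real-time monitoring loop is sketched below. The functions capture_chunk and play_chunk stand in for platform-specific audio input and output and are hypothetical helpers; the chunk length mirrors the 30 ms example mentioned earlier.

```python
import numpy as np

CHUNK_MS = 30
SAMPLE_RATE = 48000
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000

def capture_chunk() -> np.ndarray:
    """Placeholder for microphone capture; returns silence in this sketch."""
    return np.zeros(CHUNK_SAMPLES)

def play_chunk(samples: np.ndarray) -> None:
    """Placeholder for audio output on the playback device."""
    pass

def monitor(process, supports_reverb: bool, num_chunks: int = 10) -> None:
    """Capture short chunks, optionally add reverberation, and play them back."""
    for _ in range(num_chunks):
        chunk = capture_chunk()
        # If reverberation is unsupported, play the unprocessed audio directly.
        play_chunk(process(chunk) if supports_reverb else chunk)

monitor(process=lambda x: x * 0.9, supports_reverb=True)
```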
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the audio processing method in this embodiment adds the steps of determining the reverberation algorithm category based on the algorithm category list and determining the reverberation algorithm under the determined reverberation algorithm category. This makes it possible to apply, for different devices, reverberation algorithms under different reverberation algorithm categories. By configuring the reverberation algorithms under different algorithm categories, a device with better performance can make full use of its capability to obtain a better processing effect, while for a device with poorer performance the system overhead can be reduced.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an audio processing apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 5, the audio processing apparatus 500 of this embodiment includes: a first acquisition unit 501, a first determination unit 502, a second determination unit 503, and a processing unit 504. The first acquisition unit 501 is configured to acquire the audio to be processed. The first determination unit 502 is configured to determine whether the current device supports adding reverberation to the audio to be processed. The second determination unit 503 is configured to determine, in response to determining that the current device supports adding reverberation to the audio to be processed, the reverberation algorithm according to the reverberation category selected by the user. The processing unit 504 is configured to process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
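As one possible software sketch of how these units could be composed, the following class wires together callables that play the roles of the four units; the class and parameter names are illustrative only and do not define the apparatus.

```python
class AudioProcessingApparatus:
    """Illustrative composition of the units described above."""

    def __init__(self, acquire, check_support, choose_algorithm, process):
        self.acquire = acquire                    # first acquisition unit
        self.check_support = check_support        # first determination unit
        self.choose_algorithm = choose_algorithm  # second determination unit
        self.process = process                    # processing unit

    def run(self, user_reverb_category: str):
        audio = self.acquire()
        if not self.check_support():
            return audio  # optionally, the unprocessed audio can be played directly
        algorithm = self.choose_algorithm(user_reverb_category)
        return self.process(audio, algorithm)

# Example wiring with trivial stand-ins for the unit functions.
apparatus = AudioProcessingApparatus(
    acquire=lambda: [0.0] * 480,
    check_support=lambda: True,
    choose_algorithm=lambda category: f"{category}_algo",
    process=lambda audio, algo: audio,
)
print(apparatus.run("valley"))
```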
In this embodiment, for the specific processing of the first acquisition unit 501, the first determination unit 502, the second determination unit 503, and the processing unit 504 in the audio processing apparatus 500 and the technical effects they bring, reference may be made to steps 201-204 in the embodiment corresponding to FIG. 2, which will not be repeated here.
In some optional implementations of this embodiment, the apparatus 500 may further include a playback unit (not shown in the figure). The playback unit is configured to play the processed audio.
In some optional implementations of this embodiment, the first determination unit 502 may be further configured to: acquire a device information set; and determine, according to the device information set, whether the current device supports adding reverberation to the audio to be processed.
In some optional implementations of this embodiment, the second determination unit 503 is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, acquire an algorithm category list characterizing the correspondence between device information and reverberation algorithm categories; determine, based on the algorithm category list, the reverberation algorithm category corresponding to the device information of the current device, wherein the reverberation algorithm category characterizes the category to which a reverberation algorithm belongs; and determine, according to the reverberation category selected by the user, the reverberation algorithm under that reverberation algorithm category.
In some optional implementations of this embodiment, the second determination unit 503 is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
In some optional implementations of this embodiment, the category to which a reverberation algorithm belongs is divided according to one of the following: the system overhead required by the reverberation algorithm architecture; and the complexity of the reverberation algorithm.
In this embodiment, the first determination unit 502 can determine whether the current device supports adding reverberation to the audio to be processed, so that the reverberation effect can be configured to be turned on or off for different devices. In response to determining that the current device supports adding reverberation to the audio to be processed, the second determination unit 503 determines the reverberation algorithm according to the reverberation category selected by the user, so as to achieve different environment simulation effects. The processing unit 504 can process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio. In this process, by configuring the reverberation effect to be on or off, the noticeable delay caused by adding a reverberation effect on a device with poor performance can be avoided.
Reference is now made to FIG. 6, which shows a schematic structural diagram of an electronic device 600 (for example, the terminal device in FIG. 1) suitable for implementing embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (for example, in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses may be connected to the I/O interface 605: input apparatuses 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted over any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquire audio to be processed; determine whether the current device supports adding reverberation to the audio to be processed; in response to determining that the current device supports adding reverberation to the audio to be processed, determine a reverberation algorithm according to a reverberation category selected by a user; and process the audio to be processed according to the determined reverberation algorithm to obtain processed audio.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring the audio to be processed".
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.

Claims (14)

1. An audio processing method, comprising:
acquiring audio to be processed;
determining whether a current device supports adding reverberation to the audio to be processed;
in response to determining that the current device supports adding reverberation to the audio to be processed, determining a reverberation algorithm according to a reverberation category selected by a user;
processing the audio to be processed according to the determined reverberation algorithm to obtain processed audio.
2. The method according to claim 1, wherein the method further comprises:
playing the processed audio.
3. The method according to claim 2, wherein the determining whether the current device supports adding reverberation to the audio to be processed comprises:
acquiring a device information set;
determining, according to the device information set, whether the current device supports adding reverberation to the audio to be processed.
4. The method according to claim 3, wherein the determining, in response to determining that the current device supports adding reverberation to the audio to be processed, the reverberation algorithm according to the reverberation category selected by the user comprises:
in response to determining that the current device supports adding reverberation to the audio to be processed, acquiring an algorithm category list characterizing a correspondence between device information and reverberation algorithm categories;
determining, based on the algorithm category list, a reverberation algorithm category corresponding to device information of the current device, wherein the reverberation algorithm category is used to characterize a category to which a reverberation algorithm belongs;
determining, according to the reverberation category selected by the user, the reverberation algorithm under the determined reverberation algorithm category.
5. The method according to claim 3, wherein the determining, in response to determining that the current device supports adding reverberation to the audio to be processed, the reverberation algorithm according to the reverberation category selected by the user comprises:
in response to determining that the current device supports adding reverberation to the audio to be processed, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data;
determining the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
6. The method according to claim 4, wherein the category to which a reverberation algorithm belongs is divided according to one of the following:
a system overhead required by the reverberation algorithm architecture; and
a complexity of the reverberation algorithm.
7. An audio processing apparatus, comprising:
a first acquisition unit configured to acquire audio to be processed;
a first determination unit configured to determine whether a current device supports adding reverberation to the audio to be processed;
a second determination unit configured to determine, in response to determining that the current device supports adding reverberation to the audio to be processed, a reverberation algorithm according to a reverberation category selected by a user;
a processing unit configured to process the audio to be processed according to the determined reverberation algorithm to obtain processed audio.
8. The apparatus according to claim 7, wherein the apparatus further comprises:
a playback unit configured to play the processed audio.
9. The apparatus according to claim 8, wherein the first determination unit is further configured to:
acquire a device information set;
determine, according to the device information set, whether the current device supports adding reverberation to the audio to be processed.
10. The apparatus according to claim 9, wherein the second determination unit is further configured to:
in response to determining that the current device supports adding reverberation to the audio to be processed, acquire an algorithm category list characterizing a correspondence between device information and reverberation algorithm categories;
determine, based on the algorithm category list, a reverberation algorithm category corresponding to device information of the current device, wherein the reverberation algorithm category is used to characterize a category to which a reverberation algorithm belongs;
determine, according to the reverberation category selected by the user, the reverberation algorithm under the determined reverberation algorithm category.
11. The apparatus according to claim 9, wherein the second determination unit is further configured to:
in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristic data;
determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
12. The apparatus according to claim 10, wherein the category to which a reverberation algorithm belongs is divided according to one of the following:
a system overhead required by the reverberation algorithm architecture; and
a complexity of the reverberation algorithm.
13. A terminal device, comprising:
one or more processors;
a storage apparatus having one or more programs stored thereon,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-6.
14. A computer-readable medium having a computer program stored thereon, wherein, when the program is executed by a processor, the method according to any one of claims 1-6 is implemented.
PCT/CN2019/073126 2018-10-12 2019-01-25 Audio processing method and apparatus WO2020073565A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811190930.2 2018-10-12
CN201811190930.2A CN111045635B (en) 2018-10-12 2018-10-12 Audio processing method and device

Publications (1)

Publication Number Publication Date
WO2020073565A1 true WO2020073565A1 (en) 2020-04-16

Family

ID=70163654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073126 WO2020073565A1 (en) 2018-10-12 2019-01-25 Audio processing method and apparatus

Country Status (2)

Country Link
CN (1) CN111045635B (en)
WO (1) WO2020073565A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09244641A (en) * 1996-03-12 1997-09-19 Kawai Musical Instr Mfg Co Ltd Acoustic effect device
CN107402738A (en) * 2017-06-02 2017-11-28 捷开通讯(深圳)有限公司 The method of exterior terminal and its configuration virtual audio scene, storage device
CN107733848A (en) * 2017-08-16 2018-02-23 北京中兴高达通信技术有限公司 The phone system and method for terminal audio mixing
CN108305603A (en) * 2017-10-20 2018-07-20 腾讯科技(深圳)有限公司 Sound effect treatment method and its equipment, storage medium, server, sound terminal
CN207706384U (en) * 2017-12-10 2018-08-07 张德明 It is a kind of that there is the wireless K song earphones for going voice function

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654932B (en) * 2014-11-10 2020-12-15 乐融致新电子科技(天津)有限公司 System and method for realizing karaoke application
CN106612482B (en) * 2015-10-23 2020-06-19 中兴通讯股份有限公司 Method for adjusting audio parameters and mobile terminal
JP7047383B2 (en) * 2016-02-01 2022-04-05 ソニーグループ株式会社 Sound output device, sound output method, program


Also Published As

Publication number Publication date
CN111045635A (en) 2020-04-21
CN111045635B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2020151599A1 (en) Method and apparatus for publishing video synchronously, electronic device, and readable storage medium
US11270690B2 (en) Method and apparatus for waking up device
US9386123B2 (en) Distributed audio playback and recording
WO2020228383A1 (en) Mouth shape generation method and apparatus, and electronic device
WO2023051293A1 (en) Audio processing method and apparatus, and electronic device and storage medium
WO2020224294A1 (en) Method, system, and apparatus for processing information
CN108829370B (en) Audio resource playing method and device, computer equipment and storage medium
US20240103802A1 (en) Method, apparatus, device and medium for multimedia processing
CN112291121B (en) Data processing method and related equipment
CN111045634B (en) Audio processing method and device
WO2020073565A1 (en) Audio processing method and apparatus
WO2022228067A1 (en) Speech processing method and apparatus, and electronic device
CN115756258A (en) Method, device and equipment for editing audio special effect and storage medium
CN109614137B (en) Software version control method, device, equipment and medium
CN116450256A (en) Editing method, device, equipment and storage medium for audio special effects
CN112688793B (en) Data packet obtaining method and device and electronic equipment
CN114121050A (en) Audio playing method and device, electronic equipment and storage medium
CN109375892B (en) Method and apparatus for playing audio
WO2020087788A1 (en) Audio processing method and device
CN109445873B (en) Method and device for displaying setting interface
WO2020073562A1 (en) Audio processing method and device
CN111291254A (en) Information processing method and device
WO2020073566A1 (en) Audio processing method and device
CN111145792B (en) Audio processing method and device
CN111145776B (en) Audio processing method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 02/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19870173

Country of ref document: EP

Kind code of ref document: A1