WO2020073565A1 - Audio processing method and apparatus - Google Patents

Audio processing method and apparatus

Info

Publication number
WO2020073565A1
WO2020073565A1 (PCT/CN2019/073126)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
reverberation
algorithm
processed
category
Prior art date
Application number
PCT/CN2019/073126
Other languages
English (en)
Chinese (zh)
Inventor
黄传增
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2020073565A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to audio processing methods and devices.
  • the embodiments of the present disclosure propose an audio processing method and device.
  • an embodiment of the present disclosure provides an audio processing method including: acquiring audio to be processed; determining whether the current device supports adding reverb to the audio to be processed; in response to determining that the current device supports adding reverb to the audio to be processed, determining a reverb algorithm according to the reverb category selected by the user; and processing the audio to be processed according to the determined reverb algorithm to obtain the processed audio.
  • the method further includes: playing the processed audio.
  • determining whether the current device supports adding reverb to the audio to be processed includes: obtaining a device information set; and determining whether the current device supports adding reverb to the audio to be processed according to the device information set.
  • determining a reverb algorithm according to the reverb category selected by the user includes: in response to determining that the current device supports adding reverb to the audio to be processed, obtaining an algorithm category list characterizing the correspondence between device information and reverberation algorithm categories; determining, based on the algorithm category list, the reverberation algorithm category corresponding to the device information of the current device, where the reverberation algorithm category characterizes the category to which a reverberation algorithm belongs; and determining the reverberation algorithm under the determined reverberation algorithm category according to the reverberation category selected by the user.
  • determining the reverb algorithm according to the reverb category selected by the user includes: in response to determining that the current device supports adding reverb to the audio to be processed, performing audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determining the reverberation algorithm according to the audio characteristic data and the reverb category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • an embodiment of the present disclosure provides an audio processing apparatus including: a first acquiring unit configured to acquire audio to be processed; a first determining unit configured to determine whether the current device supports adding reverberation to the audio to be processed; a second determining unit configured to determine, in response to determining that the current device supports adding reverberation to the audio to be processed, the reverberation algorithm according to the reverberation category selected by the user; and a processing unit configured to process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • the device further includes a playback unit configured to play the processed audio.
  • the first determining unit is further configured to: obtain the device information set; and determine whether the current device supports adding reverberation to the audio to be processed according to the device information set.
  • the second determining unit is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, acquire an algorithm category list characterizing the correspondence between device information and reverberation algorithm categories; determine, based on the algorithm category list, the reverberation algorithm category corresponding to the device information of the current device, where the reverberation algorithm category characterizes the category to which a reverberation algorithm belongs; and determine the reverberation algorithm under the determined reverberation algorithm category according to the reverberation category selected by the user.
  • the second determining unit is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristic data; and determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • an embodiment of the present disclosure provides a terminal device including: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any one of the implementations of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored.
  • when the program is executed by a processor, the method described in any one of the implementations of the first aspect is implemented.
  • the method and apparatus provided by the embodiments of the present disclosure can first determine whether the current device supports adding reverberation to the audio to be processed, so that the reverberation effect can be turned on or off for different devices.
  • the reverberation algorithm is then determined according to the reverberation category selected by the user to achieve different environmental simulation effects, and the audio to be processed is processed according to the determined reverberation algorithm to obtain the processed audio. By configuring the reverberation effect to be on or off, the noticeable delay that adding reverberation would cause on devices with poor performance can be avoided.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
  • FIG. 2 is a flowchart of an embodiment of an audio processing method according to the present disclosure
  • FIG. 3 is a schematic diagram of an application scenario of an audio processing method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an embodiment of an audio processing device according to the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
  • FIG. 1 shows an exemplary system architecture 100 to which an audio processing method or apparatus of an embodiment of the present disclosure can be applied.
  • the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • the network 104 is a medium used to provide a communication link between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, and so on.
  • Various communication client applications such as singing applications, video recording and sharing applications, and audio processing applications, can be installed on the terminal devices 101, 102, and 103.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices that have a display screen and support audio processing.
  • when the terminal devices 101, 102, and 103 are software, they can be installed in the above electronic devices and implemented as multiple software or software modules, or as a single software or software module. There is no specific limitation here.
  • the server 105 may be a server that provides various services, for example, a back-end server that supports applications installed on the terminal devices 101, 102, and 103.
  • the audio processing method provided by the embodiments of the present disclosure is generally executed by the terminal devices 101, 102, and 103.
  • the audio processing device is generally provided in the terminal devices 101, 102, 103.
  • the server can be hardware or software.
  • the server can be implemented as a distributed server cluster composed of multiple servers or as a single server.
  • when the server is software, it can be implemented as multiple software or software modules (for example, to provide distributed services), or as a single software or software module. There is no specific limitation here.
  • terminal devices, networks, and servers in FIG. 1 are only schematic. According to the implementation needs, there can be any number of terminal devices, networks and servers.
  • the audio processing method includes:
  • Step 201 Acquire audio to be processed.
  • the execution subject of the audio processing method can acquire the audio to be processed in various ways.
  • the above-mentioned execution subject can record the voice of the user singing through the recording device to obtain the audio to be processed.
  • the recording device may be integrated on the above-mentioned execution subject, or may be communicatively connected with the execution subject, which is not limited in this disclosure.
  • the above-mentioned execution subject may also obtain pre-stored audio from the local or other storage device connected as the audio to be processed.
  • the audio to be processed may be any audio.
  • the determination of the audio to be processed can be specified by a technician, or can be screened according to certain conditions.
  • the audio to be processed may be a complete audio sung by the user, or an audio segment sung by the user.
  • the audio to be processed may also be an audio segment with a short singing time (for example, 30 milliseconds) by the user.
  • Step 202 Determine whether the current device supports adding reverberation to the audio to be processed.
  • the above-mentioned execution subject may determine in various ways whether the current device supports adding reverberation to the audio to be processed.
  • the current device may be the above-mentioned execution subject.
  • different devices have different performance. A device with poor processing performance needs a longer processing time when adding reverb to the audio to be processed. In a scenario that requires real-time monitoring, this causes a significant delay and cannot meet the needs of real-time monitoring. Therefore, such devices can be considered not to support adding reverb.
  • the above-mentioned execution subject can obtain the performance parameters of the current device, for example, the number of computing cores in the CPU (Central Processing Unit), the size of the memory, and so on, and then query a preset performance parameter table.
  • the performance parameter table may store the correspondence between the performance parameters of a device and whether adding reverb is supported.
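As an illustrative sketch of such a performance parameter table lookup (the thresholds, table rows, and function name below are assumptions for illustration, not values taken from the disclosure):

```python
# Hypothetical performance-parameter check: the table rows and thresholds
# are invented for illustration; a real implementation would use values
# measured for the target devices.

def supports_reverb(cpu_cores: int, memory_gb: float) -> bool:
    """Return True if a device with these performance parameters is
    assumed able to add reverb without noticeable monitoring delay."""
    # Each row: (minimum computing cores, minimum memory in GB).
    performance_table = [
        (4, 3.0),  # e.g. quad-core with at least 3 GB RAM
        (2, 2.0),  # e.g. dual-core with at least 2 GB RAM
    ]
    return any(cpu_cores >= cores and memory_gb >= mem
               for cores, mem in performance_table)
```

Under these assumed thresholds, a single-core device with 1 GB of memory would be treated as not supporting reverb, while a dual-core device with 2 GB would be treated as supporting it.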
  • determining whether the current device supports adding reverb to the audio to be processed may also include: acquiring a device information set; and determining, according to the device information set, whether the current device supports adding reverb to the audio to be processed.
  • the device information may be any information that can identify the device.
  • the device information may be the device model, device name, and so on.
  • the device information in the device information set may be device information of devices that do not support adding reverberation.
  • the execution subject may determine whether the device information of the current device is in the device information set. If it is, it can be determined that the current device does not support adding reverb to the audio to be processed. Otherwise, it can be determined that the current device supports adding reverb to the audio to be processed. It can be understood that, in practice, the device information in the foregoing device information set may also be device information of a device that supports adding reverberation.
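The device information set check described above can be sketched as follows; the set contents and function name are hypothetical:

```python
# Hypothetical device-information-set check. Here the set holds device
# models that do NOT support adding reverb, matching the first variant
# described above; a set of supported devices would simply invert the test.
UNSUPPORTED_DEVICES = {"PhoneModelA", "PhoneModelB"}  # illustrative models

def device_supports_reverb(device_model: str,
                           unsupported=UNSUPPORTED_DEVICES) -> bool:
    # Membership in the set means the device does not support reverb.
    return device_model not in unsupported
```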
  • if it is determined that the current device supports adding reverb to the audio to be processed, step 203 may be continued.
  • if it is determined that the current device does not support adding reverb, the audio to be processed may be directly played.
  • Step 203 In response to determining that the current device supports adding reverberation to the audio to be processed, determine the reverberation algorithm according to the reverberation category selected by the user.
  • the above-mentioned execution subject may determine the reverberation algorithm according to the reverberation category selected by the user.
  • the reverberation algorithm can be divided into different reverberation categories according to the simulated environmental effects.
  • the reverb categories can include a hall effect, a studio effect, a valley effect, and so on.
  • category information (such as a name, a picture, etc.) of each category can be displayed on the above-mentioned execution subject. Each piece of category information is associated with the reverb category it indicates. Therefore, the user can perform an operation (such as a click operation) on the category information to select the reverb category.
  • the reverberation algorithm corresponding to each reverberation category can be preset, so that the above-mentioned execution subject can determine the reverberation algorithm according to the reverberation category selected by the user.
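A minimal sketch of such a preset correspondence, with invented category and algorithm identifiers:

```python
# Hypothetical preset mapping from user-selectable reverb categories
# to reverb algorithm identifiers; names are illustrative only.
REVERB_ALGORITHMS = {
    "hall": "schroeder_hall",
    "studio": "moorer_studio",
    "valley": "fdn_valley",
}

def pick_algorithm(user_category: str) -> str:
    """Look up the reverb algorithm for the category the user selected."""
    try:
        return REVERB_ALGORITHMS[user_category]
    except KeyError:
        raise ValueError(f"unknown reverb category: {user_category}")
```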
  • determining the reverb algorithm according to the reverb category selected by the user includes: in response to determining that the current device supports adding reverberation to the audio to be processed, analyzing the audio characteristics of the audio to be processed to obtain audio characteristic data; and determining the reverberation algorithm according to the audio characteristic data and the reverb category selected by the user.
  • the above-mentioned execution subject may perform characteristic analysis on the audio to be processed in various ways to obtain audio characteristic data.
  • the audio to be processed can be analyzed through some existing audio analysis applications or open source toolkits.
  • the audio characteristics include audio frequency, bandwidth, amplitude and so on.
  • the reverberation algorithm can be determined according to the audio characteristic data and the reverberation category selected by the user. Specifically, a pre-established correspondence table between audio characteristic data, reverb categories, and reverb algorithms can be queried for the reverb algorithm matching the obtained audio characteristic data and the reverb category selected by the user.
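The two-key lookup described above (audio characteristic data plus the user-selected reverb category) can be sketched as follows; the feature bucketing, table entries, and names are assumptions for illustration:

```python
# Hypothetical correspondence table keyed by (amplitude bucket, category).
ALGORITHM_TABLE = {
    ("loud", "hall"): "hall_low_gain",
    ("quiet", "hall"): "hall_high_gain",
    ("loud", "valley"): "valley_low_gain",
    ("quiet", "valley"): "valley_high_gain",
}

def bucket_amplitude(peak: float) -> str:
    """Crude audio characteristic: bucket the peak sample amplitude."""
    return "loud" if peak >= 0.5 else "quiet"

def select_algorithm(samples, category):
    peak = max(abs(s) for s in samples)  # simple characteristic analysis
    return ALGORITHM_TABLE[(bucket_amplitude(peak), category)]
```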
  • Step 204 Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • the above-mentioned execution subject may process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • the audio to be processed may be input to at least one filter set according to the determined reverberation algorithm, so as to obtain the processed audio.
  • the number of filters and the like corresponding to each reverberation algorithm can be preset.
  • each reverberation algorithm can be obtained by combining at least one filter. For example, comb filters and all-pass filters can be selected for combination.
  • the filter here may be a hardware module in the current device or a software module in the current device according to implementation needs.
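As a minimal sketch of the comb and all-pass building blocks named above (the delays and gains here are toy values, far shorter than real reverb settings):

```python
# Feedback comb filter: y[n] = x[n] + gain * y[n - delay].
def comb(samples, delay, gain):
    out, buf = [], [0.0] * delay  # circular buffer of delayed outputs
    for i, x in enumerate(samples):
        y = x + gain * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out

# Schroeder all-pass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay].
def allpass(samples, delay, gain):
    out, buf = [], [0.0] * delay  # buffer holds x + gain*y terms
    for i, x in enumerate(samples):
        y = -gain * x + buf[i % delay]
        buf[i % delay] = x + gain * y
        out.append(y)
    return out

def tiny_reverb(samples):
    """Toy combination: one comb stage followed by one all-pass stage."""
    return allpass(comb(samples, delay=4, gain=0.5), delay=3, gain=0.5)
```

A practical reverberator would run several comb filters in parallel with much longer delays before the all-pass chain; this sketch only shows how the named filters compose.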
  • FIG. 3 is a schematic diagram of an application scenario of the audio processing method according to this embodiment.
  • the execution subject of the audio processing method may be the smartphone 301.
  • the smartphone 301 first obtains the audio 3011 to be processed, and then proceeds according to the performance parameters of the smartphone 301.
  • taking the number of computing cores in the CPU as the performance parameter, assuming the smartphone 301 is dual-core, and assuming the preset processing logic is that dual-core devices support adding reverb, the smartphone 301 can determine that the current device supports adding reverb to the audio 3011 to be processed.
  • in response to determining that the current device 301 supports adding reverberation to the audio to be processed, and taking the reverberation category selected by the user 3012 to be a valley effect as an example, the reverb algorithm 3013 corresponding to the valley-effect reverb category can be determined by querying a preset correspondence table. According to the determined reverberation algorithm 3013, the audio to be processed is processed to obtain the processed audio 3011'.
  • the method provided by the above embodiments of the present disclosure may determine whether the current device supports adding reverberation to the audio to be processed, so as to configure the reverberation effect to be turned on or off for different devices.
  • the reverberation algorithm is determined according to the reverberation category selected by the user to achieve different environmental simulation effects. According to the determined reverberation algorithm, the audio to be processed is processed to obtain the processed audio. In this process, by configuring the reverb effect on or off, you can avoid the obvious delay caused by adding the reverb effect to devices with poor performance.
  • FIG. 4 shows a flow 400 of yet another embodiment of an audio processing method.
  • the process 400 of the audio processing method includes the following steps:
  • Step 401 Acquire audio to be processed.
  • Step 402 Determine whether the current device supports adding reverb to the audio to be processed.
  • for the specific processing of steps 401 and 402 and the resulting technical effects, reference may be made to steps 201 and 202 in the embodiment corresponding to FIG. 2; details are not described herein again.
  • Step 403 In response to determining that the current device supports adding reverberation to the audio to be processed, obtain a list of algorithm categories.
  • the above-mentioned execution subject may obtain the algorithm category list in various ways.
  • for example, the algorithm category list issued by a communicatively connected server may be received.
  • the algorithm category list may be stored locally in advance, so that the algorithm category list can be directly obtained locally.
  • the algorithm category list is used to characterize the correspondence between device information and reverberation algorithm categories. In practice, the reverberation algorithms can be divided according to certain indicators, such as the system overhead required by the algorithm architecture or the complexity of the algorithm, to obtain different categories of reverberation algorithms.
  • the reverberation algorithm category is used to characterize the category to which the reverberation algorithm belongs. As an example, it can be divided into three categories according to the system overhead required by the algorithm.
  • the first category has low system overhead, and can be implemented with comb filters, all-pass filters, Schroeder filters, or a combination of these filters.
  • the second category has a large system overhead, and can be implemented by combining high-pass and low-pass filtering with delay filters, or by an existing filter system such as the Moorer reverberator.
  • the third category has medium system overhead, and can be realized by a combination of a feedback delay network and an all-pass filter.
  • Step 404 based on the algorithm category list, determine the reverberation algorithm category corresponding to the device information of the current device.
  • the execution subject may determine the reverberation algorithm category corresponding to the device information of the current device based on the algorithm category list. Specifically, the device information of the current device can be matched in the algorithm category list, so as to obtain the reverberation algorithm category corresponding to the device information of the current device.
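A sketch of that matching step, with hypothetical device names and category labels:

```python
# Hypothetical algorithm category list: device information mapped to a
# reverberation algorithm category, mirroring the three categories above.
ALGORITHM_CATEGORY_LIST = {
    "budget_phone": "low_overhead",     # comb / all-pass combinations
    "mid_phone": "medium_overhead",     # feedback network plus all-pass
    "flagship_phone": "high_overhead",  # Moorer-style reverberator
}

def category_for_device(device_info: str,
                        default: str = "low_overhead") -> str:
    """Match the current device's information in the category list,
    falling back to the lowest-overhead category if unmatched."""
    return ALGORITHM_CATEGORY_LIST.get(device_info, default)
```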
  • Step 405 Determine the reverberation algorithm under the above reverberation algorithm category according to the reverberation category selected by the user.
  • the execution subject may determine the reverberation algorithm under the reverberation algorithm category according to the reverberation category selected by the user.
  • each reverb algorithm category may include multiple algorithms, which differ according to the environmental effects they simulate. Therefore, on the basis of the reverberation algorithm category determined in step 404, the above-mentioned execution subject can determine the applicable reverberation algorithm among the various algorithms under that category according to the reverberation category selected by the user. For the specific implementation of determining the applicable reverberation algorithm under this category and the technical effects it brings, refer to step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • Step 406 Process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • for the specific processing of step 406 and the technical effects it brings, refer to step 204 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • Step 407 Play the processed audio.
  • the above-mentioned execution subject may play the processed audio through a playback device integrated or communicatively connected in the above-mentioned execution subject.
  • the audio playback device can play the processed audio, so that the user can monitor the audio with added reverberation in real time.
  • compared with the embodiment corresponding to FIG. 2, the audio processing method in this embodiment adds the steps of determining the reverberation algorithm category based on the algorithm category list, and determining the reverberation algorithm under the determined reverberation algorithm category.
  • Reverberation algorithms under different reverberation algorithm categories are thus applied to different devices. By setting reverberation algorithms under different categories, devices with better performance can run algorithms that make full use of that performance, while system overhead can be reduced for devices with poorer performance.
  • the present disclosure provides an audio processing device.
  • the device embodiment corresponds to the method embodiment shown in FIG. 2.
  • the device can be specifically applied to various electronic devices.
  • the audio processing device 500 of this embodiment includes: a first acquisition unit 501, a first determination unit 502, a second determination unit 503, and a processing unit 504.
  • the first acquisition unit 501 is configured to acquire audio to be processed.
  • the first determination unit 502 is configured to determine whether the current device supports adding reverberation to the audio to be processed.
  • the second determination unit 503 is configured to determine the reverberation algorithm according to the reverberation category selected by the user in response to determining that the current device supports adding reverberation to the audio to be processed.
  • the processing unit 504 is configured to process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio.
  • for the specific processing of the first acquiring unit 501, the first determining unit 502, the second determining unit 503, and the processing unit 504 in the audio processing device 500 and the technical effects they bring, reference may be made to steps 201-204 in the corresponding embodiment, which will not be repeated here.
  • the device 500 may further include: a playback unit (not shown in the figure).
  • the playback unit is configured to play the processed audio.
  • the first determining unit 502 may be further configured to: obtain a device information set; and determine whether the current device supports adding reverberation to the audio to be processed according to the device information set.
  • the second determining unit 503 is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, obtain information for characterizing the device and the type of reverberation algorithm The algorithm category list of the correspondence between them; based on the algorithm category list, determine the reverberation algorithm category corresponding to the device information of the current device, where the reverberation algorithm category is used to characterize the category to which the reverberation algorithm belongs; The reverb algorithm is determined under the reverb algorithm category.
  • the second determining unit 503 is further configured to: in response to determining that the current device supports adding reverberation to the audio to be processed, perform audio characteristic analysis on the audio to be processed to obtain audio characteristics Data; determine the reverberation algorithm according to the audio characteristic data and the reverberation category selected by the user.
  • the category to which the reverb algorithm belongs is divided according to one of the following items: the system overhead required by the reverb algorithm architecture; and the complexity of the reverb algorithm.
  • the first determining unit 502 may determine whether the current device supports adding reverberation to the audio to be processed, so as to configure the reverberation effect to be turned on or off for different devices.
  • the second determining unit 503 determines a reverberation algorithm according to the reverberation category selected by the user, so as to achieve different environmental simulation effects.
  • the processing unit 504 may process the audio to be processed according to the determined reverberation algorithm to obtain the processed audio. In this process, by configuring the reverberation effect to be on or off, the noticeable delay that adding the reverberation effect would cause on devices with poor performance can be avoided.
  • FIG. 6 shows a schematic structural diagram of an electronic device (for example, the terminal device in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (for example, car navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is just an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate operations and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from the storage device 608 into a random access memory (RAM) 603.
  • in the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other via a bus 604.
  • An input / output (I / O) interface 605 is also connected to the bus 604.
  • the following devices can be connected to the I/O interface 605: input devices 606 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 607 such as a liquid crystal display (LCD), speaker, and vibrator; storage devices 608 such as a magnetic tape and a hard disk; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or from the storage device 608, or from the ROM 602.
  • when the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device .
  • the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • The computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain audio to be processed; determine whether the current device supports adding reverberation to the audio to be processed; in response to determining that the current device supports adding reverberation to the audio to be processed, determine a reverberation algorithm according to a reverberation category selected by a user; and process the audio to be processed according to the determined reverberation algorithm to obtain processed audio.
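The flow described above (obtain the audio, check device support, pick a reverberation algorithm by the user-selected category, then process) can be sketched as follows. This is an illustrative sketch only: the category names, parameter values, the capability test, and the single comb-filter "reverb" are assumptions for demonstration, not the algorithms actually claimed by the patent.

```python
# Hypothetical per-category reverberation parameters; names and values are
# illustrative only -- the patent does not specify them.
REVERB_CATEGORIES = {
    "room":   {"delay_ms": 30,  "decay": 0.3},
    "hall":   {"delay_ms": 80,  "decay": 0.6},
    "church": {"delay_ms": 150, "decay": 0.8},
}

def device_supports_reverb(cpu_cores, ram_mb):
    # Toy stand-in for "determine whether the current device supports adding
    # reverb"; a real implementation would probe actual device capabilities.
    return cpu_cores >= 2 and ram_mb >= 1024

def add_reverb(samples, sample_rate, category):
    # A single feedback comb filter: a minimal placeholder for the
    # reverberation algorithm selected according to the user's category.
    params = REVERB_CATEGORIES[category]
    delay = int(sample_rate * params["delay_ms"] / 1000)
    decay = params["decay"]
    out = list(samples)
    for n in range(delay, len(out)):
        out[n] += decay * out[n - delay]
    return out

def process(samples, sample_rate, category, cpu_cores=4, ram_mb=2048):
    # Mirror the claimed flow: add reverb only when the device supports it,
    # so low-performance devices avoid the added processing delay.
    if not device_supports_reverb(cpu_cores, ram_mb):
        return list(samples)
    return add_reverb(samples, sample_rate, category)
```

For example, processing a unit impulse at 8 kHz with the hypothetical "room" category leaves the direct sound at sample 0 and adds a decayed echo one delay period later, while a device that fails the capability check gets its audio back unchanged.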
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logic functions.
  • The functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present disclosure may be implemented in software or in hardware.
  • The name of a unit does not constitute a limitation on the unit itself.
  • For example, the first acquiring unit may also be described as "a unit for acquiring audio to be processed".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Abstract

According to embodiments, the present invention relates to an audio processing method and apparatus. One implementation of the method comprises: obtaining audio to be processed; determining whether the current device supports adding reverberation to said audio; in response to determining that the current device supports adding reverberation to said audio, determining a reverberation algorithm according to a reverberation category selected by a user; and processing said audio according to the determined reverberation algorithm to obtain processed audio. The implementation provides an on/off configuration for the reverberation effect, thereby avoiding the noticeable delay caused by adding the reverberation effect on a device with poor performance.
PCT/CN2019/073126 2018-10-12 2019-01-25 Method and apparatus for audio processing WO2020073565A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811190930.2A CN111045635B (zh) 2018-10-12 2018-10-12 Audio processing method and apparatus
CN201811190930.2 2018-10-12

Publications (1)

Publication Number Publication Date
WO2020073565A1 true WO2020073565A1 (fr) 2020-04-16

Family

ID=70163654

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073126 WO2020073565A1 (fr) 2018-10-12 2019-01-25 Procédé et appareil de traitement audio

Country Status (2)

Country Link
CN (1) CN111045635B (fr)
WO (1) WO2020073565A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09244641A (ja) * 1996-03-12 1997-09-19 Kawai Musical Instr Mfg Co Ltd Sound effect device
CN107402738A (zh) * 2017-06-02 2017-11-28 捷开通讯(深圳)有限公司 External terminal and method for configuring a virtual audio scene thereof, and storage device
CN107733848A (zh) * 2017-08-16 2018-02-23 北京中兴高达通信技术有限公司 Call system and method with terminal-side audio mixing
CN108305603A (zh) * 2017-10-20 2018-07-20 腾讯科技(深圳)有限公司 Sound effect processing method and device, storage medium, server, and audio terminal
CN207706384U (zh) * 2017-12-10 2018-08-07 张德明 Wireless karaoke headset with vocal-removal function

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654932B (zh) * 2014-11-10 2020-12-15 乐融致新电子科技(天津)有限公司 System and method for implementing a karaoke application
CN106612482B (zh) * 2015-10-23 2020-06-19 中兴通讯股份有限公司 Method for adjusting audio parameters and mobile terminal
WO2017134973A1 (fr) * 2016-02-01 2017-08-10 ソニー株式会社 Audio output device, audio output method, program, and audio system

Also Published As

Publication number Publication date
CN111045635A (zh) 2020-04-21
CN111045635B (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2020151599A1 Method and apparatus for synchronously publishing videos, electronic device, and readable storage medium
WO2020228383A1 Mouth shape generation method and apparatus, and electronic device
US20200294491A1 Method and apparatus for waking up device
WO2023051293A1 Audio processing method and apparatus, electronic device, and storage medium
WO2020168878A1 Data caching method and apparatus, terminal, and storage medium
WO2020147522A1 Audio processing method and device
WO2022228067A1 Speech processing method and apparatus, and electronic device
WO2020224294A1 Information processing method, system, and apparatus
CN108829370B Audio resource playing method, apparatus, computer device, and storage medium
US20240103802A1 Method, apparatus, device and medium for multimedia processing
WO2021227953A1 Image special effect configuration method, image recognition method, apparatuses, and electronic device
CN112291121B Data processing method and related device
CN111045634B Audio processing method and apparatus
WO2020073565A1 Method and apparatus for audio processing
CN115756258A Audio effect editing method, apparatus, device, and storage medium
CN109614137B Software version control method, apparatus, device, and medium
CN116450256A Audio effect editing method, apparatus, device, and storage medium
CN114121050A Audio playing method, apparatus, electronic device, and storage medium
WO2020124679A1 Method and apparatus for preconfiguring video processing parameter information, and electronic device
CN109375892B Method and apparatus for playing audio
WO2020087788A1 Audio processing method and device
CN109445873B Method and apparatus for displaying settings interface
WO2020073562A1 Audio processing method and device
CN111367592A Information processing method and apparatus
CN111291254A Information processing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 02/08/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19870173

Country of ref document: EP

Kind code of ref document: A1