CN111145792B - Audio processing method and device - Google Patents


Info

Publication number
CN111145792B
Authority
CN
China
Prior art keywords: data, played, adjusting, range, over
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811302480.1A
Other languages
Chinese (zh)
Other versions
CN111145792A (en)
Inventor
黄传增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd
Priority application: CN201811302480.1A
Publication of CN111145792A
Application granted
Publication of CN111145792B
Legal status: Active

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/10009 Improvement or modification of read or write signals
    • G11B 20/10018 Improvement or modification of read or write signals analog processing for digital recording or reproduction
    • G11B 20/10027 Improvement or modification of read or write signals analog processing for digital recording or reproduction adjusting the signal strength during recording or reproduction, e.g. variable gain amplifiers

Abstract

Embodiments of the present disclosure disclose an audio processing method and apparatus. One specific implementation of the method includes: acquiring recording data; processing the recording data to obtain data to be played; determining whether to adjust the data to be played according to the playback device's representation range information for audio data; and, in response to determining to adjust the data to be played, adjusting it. This embodiment provides a new way of processing audio.

Description

Audio processing method and device
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an audio processing method and device.
Background
Recording, also referred to as sound pickup, is the process of collecting sound. An electronic device (e.g., a terminal) can record sound, producing recording data, which can be used directly as playback data. The playback data can be played by the electronic device that collected the recording data, or by another electronic device.
In the prior art, speech signal processing may be performed on the recording data, and the processed recording data is then used as playback data.
Disclosure of Invention
The embodiment of the disclosure provides an audio processing method and device.
In a first aspect, an embodiment of the present disclosure provides an audio processing method, including: acquiring recording data; processing the recording data to obtain data to be played; determining whether to adjust the data to be played according to the playback device's representation range information for audio data; and adjusting the data to be played in response to determining to adjust it.
In a second aspect, an embodiment of the present disclosure provides an audio processing apparatus, including: an acquisition unit configured to acquire recording data; a processing unit configured to process the recording data to obtain data to be played; a determining unit configured to determine whether to adjust the data to be played according to the playback device's representation range information for audio data; and an adjusting unit configured to adjust the data to be played in response to determining to adjust it.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
The audio processing method and apparatus provided by the embodiments of the present disclosure determine, before playback, whether to adjust the data to be played according to the playback device's representation range for audio data, and adjust the data to be played if adjustment is determined to be necessary. The technical effects at least include providing a new way of processing audio.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of an audio processing method according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of an audio processing method according to the present disclosure;
fig. 4 is a schematic diagram of another application scenario of an audio processing method according to the present disclosure;
FIG. 5 is a flow diagram of one implementation of step 204;
FIG. 6 is a flow diagram of another embodiment of an audio processing method according to the present disclosure;
FIG. 7 is a schematic block diagram of one embodiment of an audio processing device according to the present disclosure;
FIG. 8 is a schematic block diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the relevant invention and do not limit it. It should also be noted that, for ease of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the audio processing method or audio processing device of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. Network 104 may be the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a recording application, a call application, a live application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices with communication functions, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (MPEG Audio Layer III), MP4 players (MPEG Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, implemented either as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, such as a background server that supports the sound pickup function on the terminal apparatuses 101, 102, 103. The terminal device can package the original audio data obtained by pickup to obtain an audio processing request, and then sends the audio processing request to the background server. The background server can analyze and process the received data such as the audio processing request and feed back the processing result (such as playback data) to the terminal equipment.
It should be noted that the audio processing method provided by the embodiment of the present disclosure is generally executed by the terminal devices 101, 102, and 103, and accordingly, the audio processing apparatus is generally disposed in the terminal devices 101, 102, and 103. Optionally, the audio processing method provided in the embodiment of the present disclosure may also be executed by a server, where the server may receive the recording data sent by the terminal device, then execute the method disclosed in the present disclosure, and finally send the playback data generated based on the recording data to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, a flow 200 of one embodiment of an audio processing method is shown. This embodiment is mainly illustrated with the method applied to an electronic device with some computing capability, which may be the terminal device shown in fig. 1. The audio processing method includes the following steps:
step 201, acquiring the recording data.
In this embodiment, the execution body of the audio processing method (e.g., the terminal device shown in fig. 1) may acquire the recording data.
In this embodiment, the recording data may be audio data collected by the execution body or by another electronic device. The execution body can obtain the recording data either by collecting it directly or by receiving it from another electronic device.
Step 202, processing the recording data to obtain data to be played.
In this embodiment, the execution body may process the recording data to obtain the data to be played.
In this embodiment, the data to be played back may be audio data in a time domain.
In the present embodiment, the recording data may be processed in various ways.
As an example, the processing may include, but is not limited to: performing a time-frequency transform on the time-domain recording data to obtain spectral data; applying at least one of the following speech signal processing operations to the spectral data: noise removal, automatic gain control, and echo cancellation; and transforming the processed spectral data back to the time domain as the data to be played.
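The pipeline above can be sketched as follows. The patent does not specify the transforms or the speech-signal algorithms, so the FFT round trip and the naive spectral gate below are illustrative stand-ins for the noise removal / AGC / echo cancellation stage; `noise_floor` is our assumed parameter, not the patent's.

```python
import numpy as np

def process_recording(recording, noise_floor=0.01):
    """Sketch of step 202: time -> frequency, crude spectral processing,
    frequency -> time. The gating rule is an illustrative assumption,
    not the patent's (unspecified) speech signal processing."""
    spectrum = np.fft.rfft(recording)                # time-frequency transform
    magnitude = np.abs(spectrum)
    # Naive spectral gate standing in for noise removal / AGC / echo cancellation.
    spectrum[magnitude < noise_floor * magnitude.max()] = 0
    return np.fft.irfft(spectrum, n=len(recording))  # back to the time domain
```

A pure tone passes the gate unchanged, so the round trip is lossless for clean periodic input.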
Step 203, according to the representation range information of the audio data by the playback device, determining whether to adjust the data to be played.
In this embodiment, the execution body may determine whether to adjust the data to be played according to the playback device's representation range information for audio data.
In this embodiment, the playback device may be an electronic device that plays back data to be played back later.
As an example, suppose the execution body is terminal device A and the playback device is terminal device B. In an application scenario of a call between terminal device A and terminal device B, the playback device is terminal device B, which will later play the data to be played.
It will be appreciated that the playback device can represent audio data only within a certain range, which can be indicated by the representation range information described above. As an example, the representation range may be the range of values representable at the sample width of the audio data, e.g., 8 bits, 16 bits, or 32 bits.
In this embodiment, the execution body may first obtain the playback device's representation range information for audio data, and then determine in various ways whether to adjust the data to be played.
Alternatively, a reference threshold may be determined based on the full-scale value indicated by the representation range information. For example, the full-scale value itself may be used as the reference threshold, or 90% of the full-scale value may be used as the reference threshold.
As an example, step 203 may be implemented as follows: if the data to be played contains audio data points that overflow (i.e., exceed) the range indicated by the representation range information, determine that the data to be played should be adjusted.
As another example, step 203 may be implemented as follows: if the data to be played contains audio data points exceeding the reference threshold (for example, 90% of the full-scale value), determine that the data to be played should be adjusted. It should be noted that by setting a reference threshold below full scale, audio points exceeding the threshold can be smoothed in advance, avoiding clipping at full scale and yielding consistent perceived loudness.
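Both checks of step 203 can be sketched in one function. The 16-bit sample width and the 90% headroom factor are illustrative assumptions (setting `headroom=1.0` reduces the check to the plain overflow variant):

```python
def needs_adjustment(samples, bit_depth=16, headroom=0.9):
    """Reference-threshold variant of step 203: flag the data to be played
    if any sample exceeds `headroom` x full scale. `bit_depth` and
    `headroom` are assumed defaults, not values from the patent."""
    full_scale = 2 ** (bit_depth - 1) - 1   # 32767 for 16-bit PCM
    threshold = headroom * full_scale       # ~29490 at 90% headroom
    return any(abs(s) > threshold for s in samples)
```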
And step 204, adjusting the data to be played in response to the determination of the adjustment of the data to be played.
In this embodiment, the execution body may adjust the data to be played in response to determining to adjust it.
In this embodiment, the step 204 can be implemented in various ways.
As an example, the execution body may, in response to determining to adjust the data to be played, multiply the data to be played as a whole by an attenuation coefficient.
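A minimal sketch of this whole-signal adjustment, assuming the attenuation coefficient is chosen to bring the peak down to a given limit (the patent leaves the coefficient's derivation open; `peak_limit` is our name):

```python
def attenuate(samples, peak_limit):
    """Multiply the data to be played, as a whole, by one attenuation
    coefficient so its peak lands at `peak_limit`. Data already within
    the limit is returned unchanged."""
    peak = max(abs(s) for s in samples)
    if peak <= peak_limit:
        return list(samples)
    coeff = peak_limit / peak              # single global coefficient
    return [s * coeff for s in samples]
```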
Before playback, whether to adjust the data to be played is determined according to the playback device's representation range information for audio data, and the data is adjusted if adjustment is determined to be necessary. Thus, if the data to be played is in an unexpected state (for example, it overflows the representation range), it can be adjusted in time to bring it into the expected state.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the audio processing method according to the embodiment shown in fig. 2. In the application scenario of fig. 3:
first, the terminal 301 may collect voice uttered by the user to acquire recorded data.
Then, the terminal 301 may process the recording data to obtain data to be played.
Then, the terminal 301 may determine whether to adjust the data to be played back according to the representation range information of the playback device on the audio data. In the application scenario, the playback device may be the terminal 301.
Then, the terminal 301 may, in response to determining to adjust the data to be played, adjust it to obtain the playback data.
Finally, the terminal 301 can read the playback data for playback.
With continued reference to fig. 4, fig. 4 is a schematic diagram of another application scenario of the audio processing method according to the embodiment shown in fig. 2. In the application scenario of fig. 4:
first, the terminal 401 may collect voice uttered by the user to acquire recorded data.
Then, the terminal 401 can transmit the sound recording data to the server 402.
Then, the server 402 may process the recording data to obtain data to be played.
Then, the server 402 may determine whether to adjust the data to be played according to the playback device's representation range information for audio data. In this application scenario, the playback device may be the terminal 403.
Then, the server 402 may, in response to determining to adjust the data to be played, adjust it to obtain the playback data.
Finally, the server 402 can send the playback data to the terminal 403, and the terminal 403 plays the playback data.
The method provided by the above embodiment of the present disclosure determines, before playback, whether to adjust the data to be played according to the playback device's representation range for audio data, and adjusts the data to be played if adjustment is determined to be necessary. The technical effects at least include providing a new way of processing audio.
In some embodiments, the representation range information includes a representation threshold.
In some embodiments, step 204 may include steps 2041 and 2042 shown in fig. 5, where:
step 2041, in response to determining to-be-played data is adjusted, determining data greater than the representation threshold value in the to-be-played data as out-of-range data.
Here, the execution body may compare data in the data to be played with the representation threshold, thereby finding out data greater than the representation threshold. And determining the data which is larger than the representation threshold value in the data to be played as the over-range data.
And 2042, adjusting the data to be played based on the over-range data.
Here, the execution body may adjust data to be played back in various ways based on the out-of-range data.
In some embodiments, step 2042 may include: adjusting the value of the out-of-range data to the representation threshold. Clamping the out-of-range data to the representation threshold avoids pop sounds during playback.
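This clamping variant can be sketched as a symmetric clamp (the threshold value itself is whatever the playback device's representation range dictates):

```python
def clip_to_threshold(samples, threshold):
    """Adjust only the out-of-range data: clamp each sample into
    [-threshold, threshold], leaving in-range samples untouched."""
    return [max(-threshold, min(threshold, s)) for s in samples]
```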
In some embodiments, step 2042 may include: multiplying the data to be played as a whole by an attenuation coefficient. Here, the attenuation coefficient may be determined from the out-of-range data. It should be noted that multiplying the whole of the data to be played by an attenuation coefficient not only ensures that no out-of-range data remains, avoiding pops during playback, but also makes the adjustment fast, improving adjustment efficiency.
In some embodiments, step 2042 may include: determining the sub-segment of the data to be played in which the out-of-range data is located, according to the position of the out-of-range data in the data to be played and a preset interval threshold; and adjusting that sub-segment.
Here, the two data points located a preset interval threshold away from the position of the out-of-range data may be found, and the portion between them used as the sub-segment to be adjusted. Adjusting only this sub-segment avoids adjusting the data to be played over a large range, improving the accuracy and pertinence of the adjustment while still avoiding pops during playback.
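A sketch of the sub-segment variant, assuming the "preset interval threshold" is a sample count around each out-of-range point and that each sub-segment is attenuated just enough to bring its out-of-range sample to the threshold (both assumptions are ours):

```python
def adjust_subsegments(samples, threshold, interval=2):
    """For each out-of-range sample, attenuate only the window of
    `interval` samples on either side of it. Overlapping windows
    compound, which still keeps the result within range."""
    out = list(samples)
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            coeff = threshold / abs(s)
            lo, hi = max(0, i - interval), min(len(samples), i + interval + 1)
            for j in range(lo, hi):
                out[j] *= coeff
    return out
```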
With further reference to fig. 6, a flow 600 of yet another embodiment of an audio processing method is shown. The flow 600 of the audio processing method comprises the following steps:
step 601, acquiring the recording data.
In this embodiment, the execution body of the audio processing method (e.g., the terminal device shown in fig. 1) may acquire the recording data.
Step 602, processing the recording data to obtain data to be played.
Step 603, determining whether to adjust the data to be played according to the representation range information of the playback equipment on the audio data.
In this embodiment, please refer to the description of step 201, step 202, and step 203 in the embodiment shown in fig. 2 for details and technical effects of step 601, step 602, and step 603, which are not described herein again.
Step 604, in response to determining to adjust the data to be played, determining the data in the data to be played that is greater than the representation threshold as out-of-range data.
In this embodiment, please refer to the description of step 2041 for details and technical effects of step 604, which are not described herein again.
Step 605, determining predefined characteristic information of the out-of-range data.
In this embodiment, the execution body may determine predefined characteristic information of the out-of-range data.
In this embodiment, the characteristic information may be used to indicate a characteristic of the out-of-range data.
In some embodiments, the feature information may include distribution information of the over-range data in the data to be played.
In some embodiments, the characteristic information may include the degree to which the out-of-range data exceeds the representation threshold. This exceeding-degree information may be represented in various forms; for example, the ratio of the value of the out-of-range data to the representation threshold may be used as the exceeding-degree information. The ratio can also be compared against a preset ratio threshold to yield categories such as severe exceeding or ordinary exceeding.
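The ratio-based degree and its coarse classification might look like this; the 1.5 cut-off separating "ordinary" from "severe" is an illustrative assumption, not a value from the patent:

```python
def exceeding_degree(value, threshold, severe_ratio=1.5):
    """Degree to which one out-of-range value exceeds the representation
    threshold, as a ratio plus a coarse label (severe_ratio is assumed)."""
    ratio = abs(value) / threshold
    return ratio, "severe" if ratio >= severe_ratio else "ordinary"
```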
And step 606, adjusting the data to be played according to the characteristic information.
In this embodiment, the execution body adjusts the data to be played according to the characteristic information.
In this embodiment, there may be one or more items of characteristic information, and the data to be played may be adjusted comprehensively according to these one or more items.
In some embodiments, step 606 may include: determining the adjustment mode for the data to be played according to the distribution information, where the adjustment modes include whole adjustment and partial adjustment; and adjusting the data to be played according to the determined mode. Whole adjustment adjusts the data to be played as a whole, for example multiplying all of it by an attenuation coefficient. Partial adjustment adjusts a portion of the data to be played, for example multiplying a portion of it by an attenuation coefficient.
Here, the distribution information may include: the proportion of out-of-range data in the data to be played, the number of consecutive out-of-range data blocks, and the like.
It should be noted that the distribution information may indicate how densely the out-of-range data is distributed in the data to be played. If the out-of-range data is dense, the whole-adjustment mode can be used; if it is dispersed, the partial-adjustment mode can be used. The adjustment mode can thus be selected flexibly, balancing adjustment efficiency and adjustment accuracy.
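One way to sketch the mode selection, assuming "dense" means the out-of-range proportion crosses a preset cut-off (the 20% default is our assumption, not the patent's):

```python
def choose_adjustment_mode(samples, threshold, dense_ratio=0.2):
    """Pick 'whole' adjustment when out-of-range data is dense in the
    data to be played, 'partial' when it is dispersed."""
    n_over = sum(1 for s in samples if abs(s) > threshold)
    return "whole" if n_over / len(samples) >= dense_ratio else "partial"
```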
In some embodiments, step 606 may include: determining an attenuation coefficient according to the exceeding-degree information; and adjusting the data to be played according to the determined attenuation coefficient.
Determining the attenuation coefficient from the exceeding-degree information yields a coefficient suited to the current data to be played, making the adjustment accurate: the adjusted data neither exceeds the range nor is attenuated excessively.
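A minimal reading of this step: derive the coefficient from the worst exceeding degree, so the worst sample lands exactly on the threshold, neither over-attenuating nor leaving any overflow (the formula is our assumption; the patent gives none):

```python
def coefficient_from_degree(samples, threshold):
    """Attenuation coefficient = 1 / (worst exceeding degree): scaling
    by it puts the worst out-of-range sample exactly at the threshold,
    and attenuates nothing if no sample exceeds the range."""
    worst = max(abs(s) for s in samples)
    return 1.0 if worst <= threshold else threshold / worst
```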
As can be seen from fig. 6, compared with the embodiment corresponding to fig. 2, the flow 600 of the audio processing method in this embodiment highlights the steps of determining the characteristic information of the out-of-range data and adjusting the data to be played according to that information. The technical effects of the solution described in this embodiment thus at least include providing a new way of processing audio.
With further reference to fig. 7, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an audio processing apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 7, the audio processing apparatus 700 of this embodiment includes: an acquisition unit 701, a processing unit 702, a determining unit 703, and an adjusting unit 704. The acquisition unit is configured to acquire recording data; the processing unit is configured to process the recording data to obtain data to be played; the determining unit is configured to determine whether to adjust the data to be played according to the playback device's representation range information for audio data; and the adjusting unit is configured to adjust the data to be played in response to determining to adjust it.
In this embodiment, specific processes of the obtaining unit 701, the processing unit 702, the determining unit 703 and the adjusting unit 704 of the audio processing apparatus 700 and technical effects brought by the specific processes can refer to related descriptions of step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the representation range information includes a representation threshold, and the adjusting unit is further configured to: determine the data in the data to be played that is greater than the representation threshold as out-of-range data; and adjust the data to be played based on the out-of-range data.
In some optional implementations of this embodiment, the adjusting unit is further configured to: and adjusting the numerical value of the out-of-range data to the representation threshold.
In some optional implementations of this embodiment, the adjusting unit is further configured to: and multiplying the whole data to be played by an attenuation coefficient.
In some optional implementations of this embodiment, the adjusting unit is further configured to: determine the sub-segment of the data to be played in which the out-of-range data is located, according to the position of the out-of-range data in the data to be played and a preset interval threshold; and adjust that sub-segment.
In some optional implementations of this embodiment, the adjusting unit is further configured to: determining predefined characteristic information of the over-range data; and adjusting the data to be played according to the characteristic information.
In some optional implementations of this embodiment, the characteristic information includes distribution information of the out-of-range data in the data to be played, and the adjusting unit is further configured to: determine the adjustment mode for the data to be played according to the distribution information, where the adjustment modes include whole adjustment and partial adjustment; and adjust the data to be played according to the determined mode.
In some optional implementations of this embodiment, the characteristic information includes the degree to which the out-of-range data exceeds the representation threshold, and the adjusting unit is further configured to: determine an attenuation coefficient according to the exceeding-degree information; and adjust the data to be played according to the determined attenuation coefficient.
It should be noted that details of implementation and technical effects of each unit in the audio processing apparatus provided in the embodiment of the present disclosure may refer to descriptions of other embodiments in the present disclosure, and are not described herein again.
Referring now to fig. 8, a schematic diagram of an electronic device (e.g., a terminal or server of fig. 1) 800 suitable for implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire sound recording data; process the sound recording data to obtain data to be played; determine, according to representation range information of the playback device for the audio data, whether to adjust the data to be played; and, in response to determining to adjust the data to be played, adjust the data to be played.
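The decision step carried by these programs can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: it assumes 16-bit PCM playback (so a representation threshold of 32767) and plain Python sample lists, neither of which the disclosure fixes.

```python
# Hypothetical sketch: after processing (e.g. mixing two recordings by
# summation), sample values may exceed the range the playback device can
# represent, so we check whether adjustment of the data to be played is needed.
REPRESENTATION_THRESHOLD = 32767  # assumed: max magnitude of signed 16-bit PCM

def needs_adjustment(to_be_played):
    """Return True if any sample magnitude exceeds the representation threshold."""
    return any(abs(s) > REPRESENTATION_THRESHOLD for s in to_be_played)

# e.g. two recordings mixed by summation can overflow the representable range
mixed = [12000 + 25000, 500 - 300, -20000 - 15000]  # [37000, 200, -35000]
print(needs_adjustment(mixed))  # True: 37000 and -35000 are over-range
```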
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the acquisition unit may also be described as "a unit that acquires sound recording data".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the features described above may be interchanged with (but are not limited to) features having similar functions disclosed in this disclosure to form alternative technical solutions.
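The adjustment strategies recited in the claims below (choosing full versus partial adjustment from the distribution of the over-range data, deriving an attenuation coefficient from the excess degree, and padding the adjusted sub-segment by an interval threshold) can be sketched as follows. All concrete values here — the 16-bit threshold, the interval threshold of 4 samples, and the span heuristic for choosing the adjustment mode — are illustrative assumptions; the disclosure does not fix them.

```python
# Illustrative sketch of the claimed adjustment, under assumed parameters.
THRESHOLD = 32767  # assumed representation threshold (signed 16-bit PCM)
INTERVAL = 4       # assumed preset interval threshold, in samples

def over_range_positions(data):
    """Positions of samples whose magnitude exceeds the representation threshold."""
    return [i for i, s in enumerate(data) if abs(s) > THRESHOLD]

def attenuation_coefficient(data, positions):
    # Derive the coefficient from the excess degree: scale the largest
    # over-range magnitude back down to the representation threshold.
    peak = max(abs(data[i]) for i in positions)
    return THRESHOLD / peak

def adjust(data):
    positions = over_range_positions(data)
    if not positions:
        return data  # nothing exceeds the representable range
    coeff = attenuation_coefficient(data, positions)
    span = positions[-1] - positions[0]
    if span > len(data) // 2:  # assumed heuristic: widely distributed
        # full adjustment: multiply the entire data by the attenuation coefficient
        return [s * coeff for s in data]
    # partial adjustment: attenuate only the sub-segment containing the
    # over-range data, padded by the interval threshold on each side
    lo = max(0, positions[0] - INTERVAL)
    hi = min(len(data), positions[-1] + INTERVAL + 1)
    return data[:lo] + [s * coeff for s in data[lo:hi]] + data[hi:]
```

A hard-limiting variant (claim 2's approach) would instead clip each over-range sample to the threshold; the attenuation approach above preserves the waveform's shape at the cost of reduced loudness.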

Claims (12)

1. An audio processing method, comprising:
acquiring recording data;
processing the recording data to obtain data to be played;
determining whether to adjust the data to be played according to representation range information of the playback device for the audio data, wherein the representation range information comprises a representation threshold;
in response to determining to adjust the data to be played, adjusting the data to be played, wherein the adjusting comprises: determining data in the data to be played that is greater than the representation threshold as over-range data; and adjusting the data to be played based on the over-range data, which comprises: determining predefined characteristic information of the over-range data; and adjusting the data to be played according to the characteristic information;
wherein the characteristic information comprises distribution information of the over-range data in the data to be played; and
the adjusting the data to be played according to the characteristic information comprises:
determining an adjustment mode for the data to be played according to the distribution information, wherein the adjustment mode comprises full adjustment and partial adjustment; and
adjusting the data to be played according to the determined adjustment mode.
2. The method of claim 1, wherein the adjusting the data to be played based on the over-range data comprises:
adjusting values of the over-range data to the representation threshold.
3. The method of claim 1, wherein the adjusting the data to be played based on the over-range data comprises:
when the adjustment mode is full adjustment, multiplying the entire data to be played by an attenuation coefficient.
4. The method of claim 1, wherein the adjusting the data to be played based on the over-range data comprises:
when the adjustment mode is partial adjustment, determining the sub-segment of the data to be played in which the over-range data is located, according to the position of the over-range data in the data to be played and a preset interval threshold; and
adjusting the sub-segment of the data to be played.
5. The method of claim 1, wherein the characteristic information comprises excess degree information of the over-range data relative to the representation threshold; and
the adjusting the data to be played according to the characteristic information comprises:
determining an attenuation coefficient according to the excess degree information; and
adjusting the data to be played according to the determined attenuation coefficient.
6. An audio processing apparatus comprising:
an acquisition unit configured to acquire sound recording data;
a processing unit configured to process the sound recording data to obtain data to be played;
a determining unit configured to determine whether to adjust the data to be played according to representation range information of the playback device for the audio data, wherein the representation range information comprises a representation threshold;
an adjusting unit configured to, in response to determining to adjust the data to be played, adjust the data to be played, including: determining data in the data to be played that is greater than the representation threshold as over-range data; and adjusting the data to be played based on the over-range data, which comprises: determining predefined characteristic information of the over-range data; and adjusting the data to be played according to the characteristic information;
wherein the characteristic information comprises distribution information of the over-range data in the data to be played; and
the adjusting unit is further configured to:
determine an adjustment mode for the data to be played according to the distribution information, wherein the adjustment mode comprises full adjustment and partial adjustment; and
adjust the data to be played according to the determined adjustment mode.
7. The apparatus of claim 6, wherein the adjusting unit is further configured to:
adjust values of the over-range data to the representation threshold.
8. The apparatus of claim 6, wherein the adjusting unit is further configured to:
when the adjustment mode is full adjustment, multiply the entire data to be played by an attenuation coefficient.
9. The apparatus of claim 6, wherein the adjusting unit is further configured to:
when the adjustment mode is partial adjustment, determine the sub-segment of the data to be played in which the over-range data is located, according to the position of the over-range data in the data to be played and a preset interval threshold; and
adjust the sub-segment of the data to be played.
10. The apparatus of claim 6, wherein the characteristic information comprises excess degree information of the over-range data relative to the representation threshold; and
the adjusting unit is further configured to:
determine an attenuation coefficient according to the excess degree information; and
adjust the data to be played according to the determined attenuation coefficient.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201811302480.1A 2018-11-02 2018-11-02 Audio processing method and device Active CN111145792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811302480.1A CN111145792B (en) 2018-11-02 2018-11-02 Audio processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811302480.1A CN111145792B (en) 2018-11-02 2018-11-02 Audio processing method and device

Publications (2)

Publication Number Publication Date
CN111145792A CN111145792A (en) 2020-05-12
CN111145792B true CN111145792B (en) 2022-06-14

Family

ID=70515460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811302480.1A Active CN111145792B (en) 2018-11-02 2018-11-02 Audio processing method and device

Country Status (1)

Country Link
CN (1) CN111145792B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060106472A1 (en) * 2004-11-16 2006-05-18 Romesburg Eric D Method and apparatus for normalizing sound recording loudness
CN101022269A (en) * 2006-02-14 2007-08-22 逐点半导体(上海)有限公司 Audio frequency control method and system
CN101989430B (en) * 2009-07-30 2012-07-04 比亚迪股份有限公司 Audio mixing processing system and audio mixing processing method
CN101800520B (en) * 2010-02-25 2013-05-22 青岛海信移动通信技术股份有限公司 Realization method and realization system for automatic gain control
KR101873325B1 (en) * 2011-12-08 2018-07-03 삼성전자 주식회사 Method and apparatus for processing audio in mobile terminal
CN103794220A (en) * 2012-10-29 2014-05-14 无敌科技(西安)有限公司 Apparatus and method for processing distorted audio signal
CN102968995B (en) * 2012-11-16 2018-10-02 新奥特(北京)视频技术有限公司 A kind of sound mixing method and device of audio signal
CN204721319U (en) * 2015-07-31 2015-10-21 广州飞达音响股份有限公司 The audio signal automatic amplitude limiting apparatus that a kind of threshold value is adjustable
CN107566950B (en) * 2016-07-01 2020-02-04 北京小米移动软件有限公司 Audio signal processing method and device

Also Published As

Publication number Publication date
CN111145792A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN110062309B (en) Method and device for controlling intelligent loudspeaker box
US20130246061A1 (en) Automatic realtime speech impairment correction
CN109582274B (en) Volume adjusting method and device, electronic equipment and computer readable storage medium
CN111435600B (en) Method and apparatus for processing audio
CN108829370B (en) Audio resource playing method and device, computer equipment and storage medium
CN111045634B (en) Audio processing method and device
CN111145792B (en) Audio processing method and device
CN110096250B (en) Audio data processing method and device, electronic equipment and storage medium
CN112307161B (en) Method and apparatus for playing audio
CN111147655B (en) Model generation method and device
CN114121050A (en) Audio playing method and device, electronic equipment and storage medium
CN112309418B (en) Method and device for inhibiting wind noise
CN111145776B (en) Audio processing method and device
CN111210837B (en) Audio processing method and device
CN109375892B (en) Method and apparatus for playing audio
CN111145769A (en) Audio processing method and device
CN111145770B (en) Audio processing method and device
CN111145793B (en) Audio processing method and device
CN112309352A (en) Audio information processing method, apparatus, device and medium
CN111048108B (en) Audio processing method and device
CN111048107B (en) Audio processing method and device
CN111045635B (en) Audio processing method and device
CN110138991B (en) Echo cancellation method and device
US20230230570A1 (en) Call environment generation method, call environment generation apparatus, and program
CN113593619B (en) Method, apparatus, device and medium for recording audio

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant