CN107357547B - Audio control method, audio control device and audio equipment - Google Patents
- Publication number
- CN107357547B CN107357547B CN201710454578.8A CN201710454578A CN107357547B CN 107357547 B CN107357547 B CN 107357547B CN 201710454578 A CN201710454578 A CN 201710454578A CN 107357547 B CN107357547 B CN 107357547B
- Authority
- CN
- China
- Prior art keywords
- audio
- frame
- audio data
- application processor
- data frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
- G06F13/4291—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus using a clocked protocol
Abstract
The invention discloses an audio control method, an audio control apparatus, and an audio device, wherein the audio control method is applied to the audio device and comprises the following steps: a codec or an application processor of the audio device generates an audio control frame in a preset format and sends the audio control frame within the frame period of an audio data frame. An audio control frame generated by the codec and sent to the application processor enables the application processor to obtain the attributes of the audio data frame based on the audio control frame; an audio control frame generated by the application processor and sent to the codec enables the codec to control the output of the audio data frame on the audio device based on the audio control frame. The scheme of the invention transfers audio parameters dynamically without occupying an additional data stream while the audio device is operating, and allows the output of the audio data to be adjusted dynamically.
Description
Technical Field
The present invention relates to the field of audio data transmission, and in particular, to an audio control method, an audio control apparatus, an audio device, and a computer-readable storage medium.
Background
Inter-IC Sound (I2S) is a bus standard established by Philips for transmitting audio data between digital audio devices, such as Compact Disc (CD) players, digital sound processors, and digital television sound systems. I2S carries the clock and data on separate wires; separating the data and clock signals avoids distortion caused by timing skew, and most audio devices on the market currently transmit data via the I2S protocol. The I2S protocol defines three signals: the serial clock, the frame clock, and serial data. The serial data is audio data represented in two's complement; each pulse of the serial clock corresponds to one bit of audio data on the serial data signal, and the frame clock switches between the audio data of the left and right channels. When an audio device needs to adjust its output during playback, it typically has to occupy an additional data stream to adjust and control the relevant playback parameters.
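The serial-data signal described above can be sketched in a few lines of Python. This is an illustrative model only (the patent itself contains no code): it serializes a signed sample into the MSB-first bit sequence that I2S clocks out, one bit per serial-clock pulse.

```python
# Illustrative sketch: serialize one 16-bit two's-complement sample the way
# I2S clocks it out on the SD line, MSB first, one bit per SCK pulse.
def sample_to_bits(sample: int, bits: int = 16) -> list[int]:
    """Return the MSB-first bit sequence for a signed sample."""
    raw = sample & ((1 << bits) - 1)          # two's-complement encoding
    return [(raw >> i) & 1 for i in range(bits - 1, -1, -1)]

# In I2S, WS low selects the left channel and WS high the right channel;
# each list below is what one channel slot would carry.
left = sample_to_bits(-1)    # -1 is all ones in two's complement
right = sample_to_bits(1)    # 0b0000000000000001
```

The two's-complement encoding means negative samples come out with the sign bit (the first bit sent) set, matching the representation the I2S standard prescribes.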
Disclosure of Invention
In view of this, the present invention provides an audio control method, an audio control apparatus, an audio device, and a computer-readable storage medium, which are intended to enable the audio device to dynamically transfer audio parameters without occupying additional data streams during playing, and to enable dynamic adjustment of audio data output.
A first aspect of the present invention provides an audio control method applied to an audio device, the audio control method including:
a codec or an application processor of the audio device generates an audio control frame in a preset format, wherein the audio control frame carries audio parameters, and the audio parameters are used for transmitting the attribute of an audio data frame or controlling the output of the audio data frame;
the codec sending an audio control frame to the application processor within a frame period of an audio data frame after sending the audio data frame to the application processor, so that the application processor obtains an attribute of the audio data frame based on the audio control frame;
alternatively, the application processor sends the audio control frame to the codec within a frame period of an audio data frame after sending the audio data frame to the codec, so that the codec controls output of the audio data frame on the audio device based on the audio control frame.
A second aspect of the present invention provides an audio control apparatus applied to an audio device including a codec and an application processor, the audio control apparatus including:
a generating unit, configured to generate an audio control frame in a preset format through the codec or the application processor, where the audio control frame carries an audio parameter, and the audio parameter is used to transmit an attribute of an audio data frame or control output of the audio data frame;
a first sending unit, configured to send the audio control frame to the application processor within a frame period of the audio data frame after the codec sends the audio data frame to the application processor, so that the application processor obtains an attribute of the audio data frame based on the audio control frame;
a second sending unit, configured to send, after the application processor sends an audio data frame to the codec, the audio control frame to the codec within a frame period of the audio data frame, so that the codec controls output of the audio data frame on the audio device based on the audio control frame.
A third aspect of the invention provides an audio device comprising a memory, a main processor, a codec, an application processor, and a computer program stored in the memory and executable on the main processor, wherein the main processor, when executing the computer program, controls the codec and/or the application processor to implement the steps of the audio control method described above.
A fourth aspect of the invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the audio control method as described above.
As can be seen from the above, in the present invention, an audio control frame in a preset format is generated by a codec of the audio device, where the audio control frame carries audio parameters, and the audio parameters are used to transmit attributes of an audio data frame or control output of the audio data frame, and then, after sending an audio data frame to an application processor of the audio device, the codec continues to send the audio control frame to the application processor within a frame period of the audio data frame, so that the application processor obtains the attributes of the audio data frame based on the audio control frame. Alternatively, the application processor sends the audio control frame to the codec within a frame period of an audio data frame after sending the audio data frame to the codec, so that the codec controls output of the audio data frame on the audio device based on the audio control frame.
The scheme of the invention does not need to occupy an additional data stream: by using only the idle bits of the existing serial data line when transmitting audio data frames, the audio parameters required during audio playback can be transferred to the application processor, realizing dynamic adjustment of the audio data output.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of an audio control method according to an embodiment of the present invention;
FIG. 2 is an exemplary graph of a standard output waveform for each data line based on the I2S protocol provided by an embodiment of the present invention;
fig. 3 is a schematic flow chart of another implementation of the audio control method according to the embodiment of the present invention;
fig. 4 is an exemplary diagram of an audio control apparatus provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of an audio device provided by an embodiment of the invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Example one
Fig. 1 shows an implementation flow of an audio control method according to an embodiment of the present invention, which is detailed as follows:
in step S101, a codec or an application processor of the audio device generates an audio control frame in a preset format.
In an embodiment of the present invention, the codec or the application processor of the audio device may generate an audio control frame in a preset format, where the audio control frame carries an audio parameter, and the audio parameter is used to control output of an audio data frame or to transmit an attribute of the audio data frame. For example, when the audio data frame is used as output data, the audio parameter may be a relevant parameter for adjusting audio output quality, a relevant parameter for modifying an audio output device, or a relevant parameter for controlling a Digital Signal Processing (DSP) application in the audio device, which is not limited herein. When the audio data frame is used as input data, the audio parameter may be used to report or transfer attribute information of the audio data frame, such as sound source position information, and the like, which is not limited herein. Different audio device manufacturers may preset the format of the audio control frame and the usage of the audio parameters in different scenes according to their purposes, which is not limited herein.
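Since the patent leaves the "preset format" of the audio control frame to each manufacturer, the layout below is purely hypothetical: it assumes an 8-bit frame split into a 3-bit parameter-type field and a 5-bit value field, with made-up parameter IDs, just to make the encode/decode round trip concrete.

```python
# Hypothetical 8-bit control-frame layout (the patent leaves the preset
# format to the manufacturer): a 3-bit parameter type plus a 5-bit value.
PARAM_VOLUME, PARAM_SOURCE_ANGLE, PARAM_DSP_MODE = 0, 1, 2  # assumed IDs

def encode_control_frame(param_type: int, value: int) -> int:
    """Pack a parameter type and value into one 8-bit control frame."""
    assert 0 <= param_type < 8 and 0 <= value < 32
    return (param_type << 5) | value

def decode_control_frame(frame: int) -> tuple[int, int]:
    """Split an 8-bit control frame back into (parameter type, value)."""
    return (frame >> 5) & 0x7, frame & 0x1F
```

A manufacturer that needed more parameter types or wider values would simply define a different split; the point is only that both ends of the I2S bus must agree on the same preset format.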
In step S102, after sending an audio data frame to the application processor, the codec sends the audio control frame to the application processor within a frame period of the audio data frame, so that the application processor obtains an attribute of the audio data frame based on the audio control frame.
In an embodiment of the present invention, after the codec sends an audio data frame to the application processor, the codec continues to send the audio control frame to the application processor within the frame period of that audio data frame, so that the application processor obtains the attribute of the audio data frame based on the audio control frame. The application processor is a low-power multimedia application processor capable of processing multimedia data such as audio, video, and images. Fig. 2 shows the waveforms on the respective data lines when audio data is transmitted based on the I2S protocol: WS is the channel select, where a low level (WS = 0) indicates that audio data of the left channel is being transmitted and a high level (WS = 1) indicates that audio data of the right channel is being transmitted; SCK is the serial clock; and SD is the serial data line on which the audio data frame is transmitted. As can be seen, each data bit of the audio data frame corresponds to a rising edge of the serial clock. Taking the left-channel audio data in Fig. 2 as an example, although 32 data bits can be transmitted within the left channel's transmission time, the audio data frame on the serial data line actually occupies only 24 data bits, meaning that 8 data bits go unused; these may be called idle bits. In this step, immediately after one audio data frame is transmitted, the audio control frame obtained in step S101 is transmitted using the remaining idle bits. Note that the number of data bits occupied by the audio control frame must not exceed the number of idle bits following one audio data frame.
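The idle-bit scheme of Fig. 2 can be sketched as a simple pack/unpack of a 32-bit channel slot. This is an illustrative model under the figure's assumptions (32-bit slot, 24-bit sample), not the patent's actual implementation:

```python
# Sketch of the idle-bit scheme from Fig. 2: a 32-bit channel slot carries
# a 24-bit audio sample, leaving 8 idle bits that can hold a control frame.
SLOT_BITS, DATA_BITS = 32, 24
IDLE_BITS = SLOT_BITS - DATA_BITS  # 8 idle bits per slot

def pack_slot(sample_24bit: int, control_frame: int = 0) -> int:
    """Left-justify the sample; the control frame goes in the idle bits.
    Per the patent, the control frame must not exceed the idle bits."""
    assert control_frame < (1 << IDLE_BITS)
    return ((sample_24bit & 0xFFFFFF) << IDLE_BITS) | control_frame

def unpack_slot(slot: int) -> tuple[int, int]:
    """Split a slot back into (24-bit sample, 8-bit control frame)."""
    return slot >> IDLE_BITS, slot & ((1 << IDLE_BITS) - 1)
```

A receiver that does not support control frames simply keeps the sample and ignores the low bits, which is exactly why the scheme degrades gracefully on devices that only read the audio data.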
Therefore, in step S101, when the manufacturer presets the format of the audio control frame, the maximum number of idle bits that can be occupied consecutively on the serial data line needs to be considered, so that the generated audio control frame is not too long. After the audio control frame is sent to the application processor, if the audio device supports reading audio control frames, the application processor parses the frame to obtain the audio parameters; if the audio device does not support reading audio control frames, the application processor can still read the audio data frame sent on the serial data line and simply discard the audio control frame that follows it, with no adverse effect on the transmission of the audio data. Further, even when different audio devices are connected to one I2S bus, each device may control the output of audio data differently, or obtain different types of audio data frame attributes, according to the audio parameters it supports.
In step S103, after sending an audio data frame to the codec, the application processor sends the audio control frame to the codec within a frame period of the audio data frame, so that the codec controls the output of the audio data frame on the audio device based on the audio control frame.
In the embodiment of the present invention, the process of the application processor sending the audio control frame to the codec is similar: the audio control frame obtained in step S101 is transmitted in the remaining idle bits after one audio data frame is transmitted. After the codec receives the audio control frame sent by the application processor, if the audio device supports reading audio control frames, the codec parses the frame to obtain the audio parameters; if the audio device does not support reading audio control frames, the codec can still read the audio data frame sent on the serial data line and simply discard the audio control frame that follows it, with no adverse effect on the output of the audio data. Note that in practical applications the audio device may execute only step S102, only step S103, or both steps S102 and S103 simultaneously or sequentially, which is not limited herein.
Optionally, in order to better control the audio data, the step S101 may include:
and the codec or the application processor generates audio control frames corresponding to the audio data frames to be sent in real time according to the sending sequence of the audio data frames to be sent.
The codec or the application processor may autonomously generate, in real time, the audio control frame corresponding to each audio data frame to be sent, following the sending sequence of those frames. For example, if the application processor determines that the current audio output needs to be controlled, it can generate the audio control frame corresponding to each audio data frame in real time, and the codec parses each audio control frame upon receipt, realizing real-time control; if the codec determines that control is not currently needed, it can discard the audio control frame directly without affecting the original audio output.
In order to make the embodiments of the present invention clearer and more obvious, the following describes the embodiments of the present invention with practical examples:
the microphone array is already a mainstream device in audio devices, and through the microphone array, remote sound identification can be realized, and meanwhile, the direction of audio can be judged. The current position information is mostly transmitted through additional buses such as I2C, on one hand, complete synchronization with voice cannot be achieved, and on the other hand, power consumption of the additional buses and the controller is increased.
Based on the embodiment of the invention, after the microphone collects 16 bits of voice data — that is, one frame of voice data — the position information in the preset format is transferred within the frame period of that voice data frame. Taking a preset format of 8 bits as an example, more than 180 angles can be described, reaching a resolution of 2 degrees. Since most current microphone arrays do not achieve such high precision, 8 bits of position information are sufficient to convey the real-time position of the voice source, and any remaining idle bits can carry other audio parameters the audio device may need.
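The 8-bit angle encoding above can be made concrete with a small sketch. The step size follows the text's 2-degree figure; the rounding and wrap-around behavior are assumptions of this illustration, not requirements of the patent:

```python
# Illustrative: encode a microphone-array source angle into the 8 idle bits
# after a 16-bit voice sample, at the 2-degree resolution the text mentions.
ANGLE_STEP_DEG = 2  # 180 steps of 2 degrees cover the full 360-degree circle

def encode_angle(angle_deg: float) -> int:
    """Quantize an angle to a code in [0, 180), which fits in 8 bits."""
    return int(round(angle_deg / ANGLE_STEP_DEG)) % 180

def decode_angle(code: int) -> int:
    """Recover the quantized angle in degrees."""
    return (code * ANGLE_STEP_DEG) % 360
```

Even with codes limited to 180 values, an 8-bit field leaves headroom (codes 180 to 255) that a manufacturer could reserve for other per-frame attributes.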
Specifically, the position information may be transmitted after a voice data frame of a left channel or a voice data frame of a right channel, or may be transmitted after a voice data frame of each channel, which may be set by a developer, and is not limited herein as long as the devices at both ends of the I2S bus are consistent. For devices that do not use location information, the latter location information can be completely ignored after receiving the voice data.
As can be seen from the above, during audio transmission based on the I2S protocol, the idle bits following one audio data frame on the serial data line are used to transfer the audio parameters in the audio control frame, so that the audio device can adjust the audio output or obtain the attributes of the audio data based on those parameters. Because the audio control frame is transmitted under the I2S protocol itself, it has good real-time characteristics and corresponds exactly to its audio data frame, avoiding the mismatch that arises when parameters sent over a separate bus cannot be aligned with the audio data frames.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two
Fig. 3 shows an implementation flow of another audio control method provided by the second embodiment of the present invention, which is detailed as follows:
In step S301, if the codec receives a control start instruction sent by the application processor, the codec determines a start audio data frame under the indication of the control start instruction.
In this embodiment of the present invention, when the codec receives the control start instruction sent by the application processor, it may determine the start audio data frame under the indication of that instruction, where the start audio data frame is the first audio data frame to which audio parameters need to be transferred this time. The control start instruction may be generated by the application processor after its own comprehensive judgment, or in response to an operation request from the user, which is not limited herein.
In step S302, a start audio control frame corresponding to the start audio data frame is generated.
In this embodiment of the present invention, after the start audio data frame is determined, the codec may generate a start audio control frame corresponding to it; in addition to carrying the audio parameters, the start audio control frame indicates that its corresponding audio data frame is the first audio data frame to which the audio parameters need to be transferred.
In step S303, after the codec transmits the start audio data frame to the application processor, the codec transmits the start audio control frame to the application processor within a frame period of the start audio data frame.
In an embodiment of the present invention, after sending the start audio data frame to the application processor, the codec may send the start audio control frame to the application processor within the frame period of the start audio data frame, so that the application processor obtains the attributes of the start audio data frame and of the audio data frames that follow it. After receiving the start audio control frame, the application processor can determine from it that the corresponding audio data frame is the start audio data frame, i.e., the first audio data frame whose attributes need to be acquired this time. If the acquired attribute does not change over a short period — that is, if the audio data frames following the start audio data frame share the same attribute — the application processor has in effect already acquired the attributes of those subsequent frames. For those subsequent audio data frames, the codec may therefore choose not to generate audio control frames, reducing the codec's workload.
Optionally, the audio control method further includes:
if the codec receives a control ending instruction sent by the application processor, determining a termination audio data frame under the instruction of the control ending instruction, wherein the termination audio data frame is the last audio data frame needing to transmit audio parameters at this time;
generating a termination audio control frame corresponding to the termination audio data frame;
after sending the termination audio data frame to the application processor, the codec sends the termination audio control frame to the application processor within the frame period of the termination audio data frame, so that the application processor, based on the termination audio control frame, stops obtaining the attributes of the audio data frames that follow the termination audio data frame.
After the codec receives the control start instruction, it determines the start audio data frame and generates the corresponding start audio control frame, so that the application processor can begin obtaining the attributes of audio data frames based on it. Clearly, such attributes do not normally need to be reported indefinitely. When the application processor no longer needs to obtain the attributes of the audio data, it may send a control ending instruction to the codec; upon receiving it, the codec determines the termination audio data frame, that is, the last audio data frame to which audio parameters need to be transferred this time, and generates the corresponding termination audio control frame. Since the termination audio control frame indicates, in addition to carrying audio parameters, that its corresponding audio data frame is the last one requiring audio parameters this time, the application processor can end its acquisition of audio data frame attributes under the indication of the termination audio control frame.
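The start/termination handshake of this embodiment amounts to a small state machine on the receiving side. The sketch below is an assumed illustration — the marker values and class names are inventions of this example, since the patent does not fix a concrete encoding:

```python
# Hedged sketch of Embodiment 2's start/stop handshake: the codec marks the
# first and last frames of a parameter-bearing run, and the application
# processor applies the attribute to every audio frame in between.
FRAME_START, FRAME_END = 0x80, 0x81  # assumed marker values

class ControlSession:
    """Tracks whether a start control frame is currently in effect."""

    def __init__(self) -> None:
        self.active = False
        self.frames_covered = 0

    def on_control_frame(self, marker: int) -> None:
        if marker == FRAME_START:      # start audio control frame seen
            self.active = True
        elif marker == FRAME_END:      # termination audio control frame seen
            self.active = False

    def on_audio_frame(self) -> None:
        if self.active:                # attribute applies to this frame
            self.frames_covered += 1
```

This captures the workload saving the text describes: between the start and termination frames, no per-frame control frames are needed at all.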
Optionally, the audio control method further includes:
the application processor acquires an environment noise value of the current environment of the audio equipment and an audio output state of the audio equipment;
then, the codec or the application processor of the audio device generates an audio control frame in a preset format, which may be represented as:
if the environmental noise value and the audio output state meet the preset audio control condition, the application processor generates an audio control frame with a preset format.
The application processor can acquire the ambient noise value of the environment where the audio device is located through a microphone, and can also acquire the current audio output state of the audio device. The audio output state reflects how the device is currently outputting audio, i.e., the application state of the audio device. For example, if the audio device also has a call function, then while the device is in a call state, detecting that the ambient noise value exceeds a preset ambient noise threshold means the audio output needs to be enhanced so that the user can hear the call clearly. That is, once the application processor finds that the current audio output state is the call state and the ambient noise value exceeds the preset threshold, it generates a control start instruction, so that the codec generates a start audio control frame based on it and the volume is amplified when the audio data frames are output. Of course, this audio control condition is only exemplary; developers may set audio control conditions for various application scenarios according to their requirements, which is not limited herein. Further, while the application processor is controlling the audio data based on audio control frames, once the ambient noise value and the audio output state no longer satisfy the preset audio control condition, the application processor may stop generating audio control frames in the preset format to end the control of the audio data frames.
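The call-state example reduces to a simple predicate over the two inputs. The threshold value and state names below are placeholders chosen for illustration — the patent only says the condition is preset by the developer:

```python
# Illustrative check for the call-state example: trigger a control start
# instruction only when the device is in a call AND ambient noise exceeds
# a preset threshold. Threshold and state names are assumptions.
NOISE_THRESHOLD_DB = 70.0
STATE_CALL = "call"

def should_boost_output(audio_state: str, noise_db: float) -> bool:
    """True when the preset audio control condition is satisfied."""
    return audio_state == STATE_CALL and noise_db > NOISE_THRESHOLD_DB
```

When this predicate flips from true to false, the application processor would stop generating control frames, which ends the control run exactly as the text describes.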
Further, when the application processor generates audio control frames, it may generate one in real time for each audio data frame. Alternatively, it may determine a start audio data frame when the ambient noise value and the audio output state satisfy the preset audio control condition, determine a termination audio data frame when they no longer satisfy it, and generate the corresponding start and termination audio control frames, so that the codec can perform output control on a whole series of audio data frames based on the start and termination audio control frames generated and sent by the application processor. To distinguish them from the start and termination audio control frames generated by the codec, those generated by the application processor may be referred to as the second start audio control frame and the second termination audio control frame.
Therefore, through the embodiment of the invention, the audio device can dynamically control its own audio output without requiring active control input from the user, making it more intelligent and user-friendly.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
EXAMPLE III
Fig. 4 shows a specific structural block diagram of an audio control apparatus according to a third embodiment of the present invention, and for convenience of description, only relevant portions of the third embodiment of the present invention are shown. The audio control device 4 is applied to an audio apparatus, and includes: a generating unit 41, a first transmitting unit 42, and a second transmitting unit 43.
The generating unit 41 is configured to generate an audio control frame in a preset format through the codec or the application processor, where the audio control frame carries audio parameters, and the audio parameters are used to transmit attributes of an audio data frame or control output of the audio data frame;
a first sending unit 42, configured to send, after the codec sends an audio data frame to an application processor of the audio device, the audio control frame to the application processor within a frame period of the audio data frame, so that the application processor obtains an attribute of the audio data frame based on the audio control frame;
a second sending unit 43, configured to send the audio control frame to the codec within a frame period of the audio data frame after the application processor sends an audio data frame to the codec, so that the codec controls output of the audio data frame on the audio device based on the audio control frame.
Optionally, the generating unit 41 includes:
and the first generation subunit is configured to generate, in real time, the audio control frame corresponding to each to-be-sent audio data frame according to the sending sequence of each to-be-sent audio data frame by using the codec or the application processor.
Optionally, the generating unit 41 includes:
an initial data frame determining subunit, configured to determine, when the codec receives a control initial instruction sent by the application processor, an initial audio data frame under an instruction of the control initial instruction, where the initial audio data frame is a first audio data frame that needs to transmit audio parameters this time;
a second generating subunit, configured to generate, by a codec, a start audio control frame corresponding to the start audio data frame;
the first sending unit 42 is specifically configured to, after sending the start audio data frame to the application processor, send the start audio control frame to the application processor within a frame period of the start audio data frame, so that the application processor obtains the attributes of the start audio data frame and of subsequent audio data frames based on the start audio control frame.
Optionally, the generating unit 41 includes:
a termination data frame determining subunit, configured to determine, when the codec receives a control termination instruction sent by the application processor, a termination audio data frame as directed by that instruction, where the termination audio data frame is the last audio data frame for which audio parameters need to be transmitted this time;
a third generating subunit, configured to generate, by the codec, a termination audio control frame corresponding to the termination audio data frame;
the first sending unit 42 is specifically configured to, after sending the termination audio data frame to the application processor, send the termination audio control frame to the application processor within a frame period of the termination audio data frame, so that the application processor stops obtaining, on the basis of the start audio control frame, the attributes of audio data frames following the termination audio data frame.
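The start/termination pair described above amounts to a small state machine on the receiving side: attributes carried by control frames apply from the start audio control frame until the termination audio control frame arrives. A minimal C sketch, in which the frame-kind enum and the sample-rate attribute are illustrative assumptions rather than details from the description:

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed kinds of control frame for this sketch. */
enum acf_kind { ACF_START, ACF_DATA_ATTR, ACF_STOP };

typedef struct {
    bool     attrs_valid;     /* true between start and termination frames */
    uint16_t sample_rate_khz; /* example attribute carried by the frames   */
} ap_rx_state;

/* Apply one received control frame to the application-processor state. */
static void ap_handle_ctrl(ap_rx_state *s, enum acf_kind kind, uint16_t val)
{
    switch (kind) {
    case ACF_START:       /* start frame: attributes apply from here on */
        s->attrs_valid = true;
        s->sample_rate_khz = val;
        break;
    case ACF_DATA_ATTR:   /* per-frame update while the transfer is active */
        if (s->attrs_valid)
            s->sample_rate_khz = val;
        break;
    case ACF_STOP:        /* termination frame: stop applying attributes */
        s->attrs_valid = false;
        break;
    }
}
```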
Optionally, the audio control apparatus 4 further includes:
an obtaining unit, configured to obtain, through the application processor, an ambient noise value of an environment where the audio device is currently located and an audio output state of the audio device;
the generating unit 41 includes:
and a fourth generating subunit, configured to generate, by the application processor, an audio control frame in a preset format when the environmental noise value and the audio output state satisfy a preset audio control condition.
As can be seen from the above, with the embodiments of the present invention, during audio transmission over the I2S protocol, the idle bits following each audio data frame on the serial data line are used to transfer the audio parameters in the audio control frame, so that the audio device can adjust its audio output or obtain the required parameters of the audio data based on those parameters. Because the audio control frame is transmitted under the I2S protocol itself, it has good real-time performance and corresponds exactly to its audio data frame, avoiding the problem that control data sent over a separate bus cannot be matched to the corresponding audio data frame. Furthermore, the audio device can dynamically control its audio output without any active control operation by the user, making it more intelligent and user-friendly.
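The idle bits mentioned above arise naturally in common I2S configurations: a 32-bit channel slot that carries only a 24-bit sample leaves 8 unused LSBs per channel, and those bits can carry the control payload alongside the sample. A sketch of such packing follows; the 24-in-32 layout is an assumption, since actual slot and sample widths vary by codec:

```c
#include <stdint.h>

#define SLOT_BITS    32
#define SAMPLE_BITS  24
#define IDLE_BITS    (SLOT_BITS - SAMPLE_BITS)  /* 8 idle bits per slot */

/* Place a 24-bit sample in the MSBs of a slot and an 8-bit control
   byte in the idle LSBs. Sign handling of negative samples is omitted
   for brevity; this only demonstrates the bit budget. */
static uint32_t slot_pack(int32_t sample24, uint8_t ctrl)
{
    return (((uint32_t)sample24 & 0xFFFFFFu) << IDLE_BITS) | ctrl;
}

static uint8_t slot_ctrl(uint32_t slot)   { return (uint8_t)(slot & 0xFFu); }
static int32_t slot_sample(uint32_t slot) { return (int32_t)(slot >> IDLE_BITS); }
```

Because the control byte rides in bits the codec would otherwise ignore, the audio samples themselves are untouched and the control data stays aligned with its frame.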
Example four
Fig. 5 is a schematic diagram of an audio device according to a fourth embodiment of the present invention. As shown in fig. 5, the audio apparatus 5 of this embodiment includes: a main processor 50, a memory 51, a codec 52, an application processor 53, and a computer program 54, such as an audio control program, stored in the memory 51 and executable on the main processor 50. The main processor 50 executes the computer program 54 to implement the steps of the audio control method embodiments, such as the steps S101 to S103 shown in fig. 1. Alternatively, the main processor 50 implements the functions of the units in the device embodiments, for example, the functions of the units 41 to 43 shown in fig. 4, when executing the computer program 54.
Illustratively, the computer program 54 may be divided into one or more units, which are stored in the memory 51 and executed by the main processor 50 to carry out the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 54 in the audio control apparatus 4. For example, the computer program 54 may be divided into a generating unit, a first sending unit, and a second sending unit, whose functions are as follows:
a generating unit, configured to generate an audio control frame in a preset format through the codec 52 or the application processor 53, where the audio control frame carries an audio parameter, and the audio parameter is used to transmit an attribute of an audio data frame or control output of the audio data frame;
a first sending unit, configured to send, after the codec 52 sends an audio data frame to the application processor 53 of the audio device 5, the audio control frame to the application processor 53 within a frame period of the audio data frame, so that the application processor 53 obtains an attribute of the audio data frame based on the audio control frame;
a second sending unit, configured to send the audio control frame to the codec 52 within a frame period of the audio data frame after the application processor 53 sends an audio data frame to the codec 52, so that the codec 52 controls the output of the audio data frame on the audio device 5 based on the audio control frame.
The audio device 5 may be a desktop computer, a notebook computer, a smartphone, a CD player, or the like. The audio device 5 may include, but is not limited to, the main processor 50, the memory 51, the codec 52, and the application processor 53. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the audio device 5 and does not constitute a limitation on it; the device may include more or fewer components than those shown, combine some components, or use different components. For example, the audio device may also include input/output devices, network access devices, buses, and so on.
The main processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 51 may be an internal storage unit of the audio device 5, such as a hard disk or internal memory of the audio device 5. The memory 51 may alternatively be an external storage device of the audio device 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the audio device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the audio device 5. The memory 51 is used to store the computer program 54 and other programs and data required by the audio device 5, and may also be used to temporarily store data that has been or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/audio device and method may be implemented in other ways. For example, the above-described apparatus/audio device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An audio control method applied to an audio device, the audio control method comprising:
a codec of the audio equipment generates an audio control frame in a preset format, wherein the audio control frame carries audio parameters, and the audio parameters are used for transmitting the attributes of an audio data frame;
after sending an audio data frame to an application processor of the audio device, the codec sends the audio control frame to the application processor within a frame period of the audio data frame, so that the application processor obtains an attribute of the audio data frame based on the audio control frame;
wherein, the codec of the audio device generates an audio control frame with a preset format, including:
if the codec receives a control start instruction sent by the application processor, determining a start audio data frame as directed by that instruction, wherein the start audio data frame is the first audio data frame for which audio parameters need to be transmitted this time;
generating a start audio control frame corresponding to the start audio data frame;
the codec sending the audio control frame to the application processor within a frame period of an audio data frame after sending the audio data frame to the application processor of the audio device, comprising:
the codec sends the start audio control frame to the application processor within a frame period of the start audio data frame after sending the start audio data frame to the application processor, such that the application processor obtains attributes of the start audio data frame and other audio data frames following the start audio data frame based on the audio control frame.
2. The audio control method of claim 1, wherein the codec of the audio device generates the audio control frame in a preset format, comprising:
and the codec generates audio control frames corresponding to the audio data frames to be sent in real time according to the sending sequence of the audio data frames to be sent.
3. The audio control method of claim 1, wherein the codec of the audio device generates the audio control frame in a preset format, comprising:
if the codec receives a control termination instruction sent by the application processor, determining a termination audio data frame as directed by that instruction, wherein the termination audio data frame is the last audio data frame for which audio parameters need to be transmitted this time;
generating a termination audio control frame corresponding to the termination audio data frame;
the codec sending the audio control frame to the application processor within a frame period of an audio data frame after sending the audio data frame to the application processor of the audio device, comprising:
the codec sends the termination audio control frame to the application processor within a frame period of the termination audio data frame after sending the termination audio data frame to the application processor, so that the application processor ends obtaining attributes of other audio data frames after the termination audio data frame based on the start audio control frame.
4. An audio control method applied to an audio device, the audio control method comprising:
an application processor of the audio equipment generates an audio control frame in a preset format, wherein the audio control frame carries audio parameters, and the audio parameters are used for controlling the output of an audio data frame;
the application processor sends the audio control frame to a codec of the audio device within a frame period of an audio data frame after sending the audio data frame to the codec, so that the codec controls output of the audio data frame on the audio device based on the audio control frame;
after the codec obtains the audio control frame sent by the application processor, if the audio device supports reading the audio control frame, the codec parses the audio control frame to obtain the audio parameters; if the audio device does not support reading the audio control frame, or if the codec determines that there is currently no control requirement for audio output, the codec reads the audio data frame sent by the application processor and discards the audio control frame transmitted after the audio data frame;
wherein, the application processor of the audio device generates an audio control frame with a preset format, including:
and if the application processor judges that the control requirement for audio output currently exists, generating an audio control frame corresponding to each frame of audio data frame in real time.
5. The audio control method of claim 4, wherein the application processor of the audio device generates an audio control frame in a preset format, and further comprising:
the application processor acquires an environment noise value of the current environment where the audio equipment is located and an audio output state of the audio equipment;
the application processor of the audio device generates an audio control frame in a preset format, and the method comprises the following steps:
and if the environment noise value and the audio output state meet preset audio control conditions, generating an audio control frame in a preset format by the application processor.
6. An audio control apparatus applied to an audio device including a codec and an application processor, the audio control apparatus comprising:
the generating unit is used for generating an audio control frame in a preset format through the codec, wherein the audio control frame carries audio parameters, and the audio parameters are used for transmitting the attribute of an audio data frame or controlling the output of the audio data frame;
a first sending unit, configured to send, after the codec sends an audio data frame to the application processor, the audio control frame to the application processor within a frame period of the audio data frame, so that the application processor obtains an attribute of the audio data frame based on the audio control frame;
the generation unit includes:
a starting data frame determining subunit, configured to determine, when the codec receives a control starting instruction sent by the application processor, a starting audio data frame under an instruction of the control starting instruction, where the starting audio data frame is a first audio data frame that needs to transmit audio parameters this time;
a second generating subunit, configured to generate, by the codec, a start audio control frame corresponding to the start audio data frame;
the first sending unit is specifically configured to, after sending the start audio data frame to the application processor, send the start audio control frame to the application processor within a frame period of the start audio data frame, so that the application processor obtains the attributes of the start audio data frame and other audio data frames following the start audio data frame based on the audio control frame.
7. The audio control apparatus of claim 6, wherein the generating unit comprises:
and the first generation subunit is used for generating the audio control frame corresponding to each audio data frame to be sent in real time according to the sending sequence of each audio data frame to be sent by the codec.
8. An audio control apparatus applied to an audio device including a codec and an application processor, the audio control apparatus comprising:
the generating unit is used for generating an audio control frame in a preset format through the application processor, wherein the audio control frame carries audio parameters, and the audio parameters are used for controlling the output of an audio data frame;
a second sending unit, configured to send, after the application processor sends an audio data frame to the codec, the audio control frame to the codec within a frame period of the audio data frame, so that the codec controls output of the audio data frame on the audio device based on the audio control frame;
the audio control apparatus is further configured such that, after the codec obtains the audio control frame sent by the application processor, if the audio device supports reading the audio control frame, the codec parses the audio control frame to obtain the audio parameters; if the audio device does not support reading the audio control frame, or if the codec determines that there is currently no control requirement for audio output, the codec reads the audio data frame sent by the application processor and discards the audio control frame transmitted after the audio data frame;
the generating unit is specifically configured to generate an audio control frame corresponding to each audio data frame in real time if the application processor determines that there is currently a control requirement for audio output.
9. An audio device comprising a memory, a main processor, a codec, an application processor, and a computer program stored in said memory and executable on said main processor, wherein said main processor, when executing said computer program, controls said codec and/or said application processor to carry out the steps of the method according to any one of claims 1 to 3; alternatively, said main processor, when executing said computer program, controls said codec and/or said application processor to carry out the steps of the method according to any one of claims 4 to 5.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3; alternatively, the computer program realizes the steps of the method according to any one of claims 4 to 5 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710454578.8A CN107357547B (en) | 2017-06-15 | 2017-06-15 | Audio control method, audio control device and audio equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710454578.8A CN107357547B (en) | 2017-06-15 | 2017-06-15 | Audio control method, audio control device and audio equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107357547A CN107357547A (en) | 2017-11-17 |
CN107357547B true CN107357547B (en) | 2020-06-26 |
Family
ID=60273682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710454578.8A Active CN107357547B (en) | 2017-06-15 | 2017-06-15 | Audio control method, audio control device and audio equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107357547B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109874097B (en) * | 2017-12-05 | 2020-10-30 | 炬芯科技股份有限公司 | Active noise reduction earphone and test system thereof |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101198068A (en) * | 2006-12-08 | 2008-06-11 | 中兴通讯股份有限公司 | Auxiliary data encapsulation method for media data flow of multimedia broadcasting system |
CN101213795A (en) * | 2005-06-30 | 2008-07-02 | 诺基亚公司 | Method and apparatus for padding time-slice frames with useful data |
CN101272205A (en) * | 2008-05-14 | 2008-09-24 | 中兴通讯股份有限公司 | Transmission method and device of mobile multimedia broadcasting service parameter information |
CN104104800A (en) * | 2014-07-25 | 2014-10-15 | Tcl通讯(宁波)有限公司 | Method for adjusting volume output of mobile terminal according to environment noises and mobile terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI116439B (en) * | 2004-06-04 | 2005-11-15 | Nokia Corp | Video and audio synchronization |
- 2017-06-15 CN CN201710454578.8A patent/CN107357547B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101213795A (en) * | 2005-06-30 | 2008-07-02 | 诺基亚公司 | Method and apparatus for padding time-slice frames with useful data |
CN101198068A (en) * | 2006-12-08 | 2008-06-11 | 中兴通讯股份有限公司 | Auxiliary data encapsulation method for media data flow of multimedia broadcasting system |
CN101272205A (en) * | 2008-05-14 | 2008-09-24 | 中兴通讯股份有限公司 | Transmission method and device of mobile multimedia broadcasting service parameter information |
CN104104800A (en) * | 2014-07-25 | 2014-10-15 | Tcl通讯(宁波)有限公司 | Method for adjusting volume output of mobile terminal according to environment noises and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107357547A (en) | 2017-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10839782B2 (en) | External extended device and audio playback method | |
CN101237441A (en) | I2C bus control circuit | |
US20140132835A1 (en) | Electronic device with thunderbolt interface, connecting method thereof, and docking apparatus | |
CN108124172B (en) | Cloud projection method, device and system | |
CN107197414B (en) | Method for configuring an audio rendering and/or acquisition device | |
CN111415673A (en) | Customized audio processing based on user-specific and hardware-specific audio information | |
EP4171066A1 (en) | Audio processing method, apparatus, and system, and storage medium | |
CN108289232B (en) | Control method of playing device, terminal device and storage medium | |
CN107357547B (en) | Audio control method, audio control device and audio equipment | |
CN105812439A (en) | Audio transmission method and device | |
KR100787220B1 (en) | Electronic apparatus with device possible to both read and write simultaneously and method thereof | |
CN108616767B (en) | Audio data transmission method and device | |
US20050120130A1 (en) | Data output device, data transmitting device, data processing system, data output method, data transmitting method, data processing method, their programs and recording media storing these programs | |
US8976222B2 (en) | Image processing apparatus and image processing method | |
CN113132044B (en) | Audio synchronous processing circuit and method thereof | |
JP4970607B1 (en) | Transmitting apparatus, receiving apparatus, and transmission method | |
JP4502686B2 (en) | Communication method and processor | |
CN108628573B (en) | Method and device for supporting input of multiple types of audio signals | |
CN105451134A (en) | Audio transmission method and terminal equipment | |
JP2004354677A (en) | Information processing device, method therefor, program therefor, recording medium recorded with he program, and reproducing device | |
CN110704340B (en) | Data transmission device, system and method | |
CN115103291B (en) | FIFO cache control method, device and system | |
CN109343822A (en) | A kind of determination method, apparatus, equipment and the storage medium of audio frequency apparatus | |
CN111083137B (en) | Method and device for adjusting state of terminal equipment of Internet of things and operating system | |
US10510359B1 (en) | Command processing device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||