CN109102816B - Encoding control method and device and electronic equipment - Google Patents

Encoding control method and device and electronic equipment

Info

Publication number
CN109102816B
CN109102816B (granted publication of application CN201810925240.0A)
Authority
CN
China
Prior art keywords
call
audio
electronic device
quality information
audio data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810925240.0A
Other languages
Chinese (zh)
Other versions
CN109102816A (en)
Inventor
龚金华
严锋贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810925240.0A priority Critical patent/CN109102816B/en
Publication of CN109102816A publication Critical patent/CN109102816A/en
Application granted granted Critical
Publication of CN109102816B publication Critical patent/CN109102816B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16: Vocoder architecture
    • G10L19/18: Vocoders using multiple modes
    • G10L19/20: Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • G10L19/22: Mode decision, i.e. based on audio signal content versus external parameters
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the present application disclose an encoding control method, an encoding control apparatus, and an electronic device. The method comprises the following steps: acquiring a current call characteristic parameter in a call state; acquiring an audio coding type corresponding to the call characteristic parameter; and encoding the call audio data transmitted to the audio playing device based on the audio coding type. By acquiring the current call characteristic parameter and its corresponding audio coding type in the call state, and encoding the call audio data transmitted to the audio playing device accordingly, the audio coding type of the audio data to be transmitted to the audio playing device can be determined from the call characteristic parameter during the call, so that the determination of the audio coding type becomes more flexible.

Description

Encoding control method and device and electronic equipment
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to a method and an apparatus for controlling encoding, and an electronic device.
Background
With the development and popularization of Bluetooth technology, more and more audio playing devices support data transmission with electronic devices over the Bluetooth protocol. For example, a vehicle-mounted device may exchange music with an electronic device over the Bluetooth protocol, and a headset may receive call voice transmitted by an electronic device over the Bluetooth protocol.
However, when the electronic device transmits audio data to other electronic devices over Bluetooth, the audio coding type it uses is fixed.
Disclosure of Invention
In view of the foregoing, the present application provides an encoding control method, an encoding control apparatus, and an electronic device to address the foregoing problem.
In a first aspect, the present application provides an encoding control method applied to an electronic device, the method including: acquiring a current call characteristic parameter in a call state; acquiring an audio coding type corresponding to the call characteristic parameter; and coding the call audio data transmitted to the audio playing equipment based on the audio coding type.
In a second aspect, the present application provides a transmission control apparatus, operable on an electronic device, the apparatus comprising: the parameter acquisition unit is used for acquiring the current call characteristic parameters in the call state; the coding type acquisition unit is used for acquiring an audio coding type corresponding to the call characteristic parameter; and the transmission control unit is used for encoding the call audio data transmitted to the audio playing equipment based on the audio encoding type.
In a third aspect, the present application provides an electronic device comprising one or more processors and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium having a program code stored therein, wherein the program code performs the above method when running.
According to the encoding control method, the encoding control device and the electronic equipment, under a call state, the current call characteristic parameters are obtained, the audio encoding types corresponding to the call characteristic parameters are obtained, and then call audio data transmitted to the audio playing equipment are encoded based on the audio encoding types. Therefore, in the process of transmitting the audio data in the call process to the audio playing device, the audio coding type of the audio data to be transmitted to the audio playing device can be determined according to the call characteristic parameters, and the determination of the audio coding type can be more flexible.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a flow chart of a coding control method proposed in the present application;
fig. 2 is a schematic diagram illustrating an electronic device interacting audio data with other terminal devices in an encoding control method proposed in the present application;
fig. 3 is a schematic diagram illustrating another electronic device interacting audio data with other terminal devices in an encoding control method proposed in the present application;
fig. 4 shows a flow chart of another encoding control method proposed by the present application;
fig. 5 is a block diagram illustrating a structure of an encoding control apparatus proposed in the present application;
fig. 6 is a block diagram showing another encoding control apparatus proposed in the present application;
fig. 7 shows a block diagram of an electronic device proposed in the present application;
fig. 8 is a storage unit for storing or carrying program codes for implementing the encoding control method according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As users of electronic devices demand a better music experience, electronic devices have come to support playing audio files. The audio architecture of an electronic device is similar to that of a computer and is mainly implemented by a processor and a built-in audio CODEC (coder-decoder): the processor receives the audio file as input, converts it into an I2S signal and transmits it to the CODEC, which converts it into an analog signal for playback.
The electronic device may be any of various types of computer system devices that are mobile or portable and perform wireless communications, among others.
As users demand more from music playback, a growing number of audio playing devices external to the electronic device are available; these devices can receive audio data transmitted by the electronic device and play it. Common audio playing devices include earphones, loudspeaker boxes, vehicle-mounted audio playing devices, and the like. As one transmission mode, the electronic device can transmit the audio data over Bluetooth to an external audio playing device for playback. The electronic device can also transmit the audio data to the audio playing device over a connection based on the IEEE 802.11 protocol.
When transmitting audio data to an external audio playing device over Bluetooth, the electronic device encodes the audio data to be transmitted. Common Bluetooth audio coding schemes include Apt-X encoding, SBC encoding, and AAC encoding.
In many scenarios, however, the electronic device always applies a single one of these schemes, for example always encoding the audio data with Apt-X, always with SBC, or always with AAC.
Therefore, the inventor proposes an encoding control method, an encoding control device and an electronic device, which can flexibly determine an encoding mode and improve the flexibility of the encoding mode.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, an encoding control method provided by the present application is applied to an electronic device, and the method includes:
step S110: and acquiring the current call characteristic parameters in the call state.
The call state may be a state in which the electronic device is establishing a voice communication channel with another terminal device, or a state in which such a channel has already been established. Optionally, the voice communication channel may be a circuit-switched voice communication channel or a voice communication channel established in a packet-switched manner. As one mode, circuit switching is a switching mode aimed at establishing a circuit connection: a physical channel exclusively occupied by the two communicating parties is set up before communication starts. Packet switching is a store-and-forward switching mode: packets arriving at the switching equipment are temporarily stored and processed in memory, and are sent out when the corresponding output circuit becomes available.
The electronic device may determine whether it is in a call state in various ways. Optionally, for an electronic device running the Android operating system, the call state may be determined from the callbacks received by an instantiated PhoneStateListener object. As one way, the electronic device may register a PhoneStateListener object (typically a subclass of PhoneStateListener that adds some packaging) with the system telephony management service (TelephonyManager), and then detect the call state through the PhoneStateListener callback onCallStateChanged(int state, String incomingNumber). For example, if the value of state is TelephonyManager.CALL_STATE_IDLE, the call has been hung up; if the value of state is TelephonyManager.CALL_STATE_RINGING, there is an incoming call that has not yet been answered; and if the value of state is TelephonyManager.CALL_STATE_OFFHOOK, the electronic device is in a call.
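For illustration, a minimal Java sketch of this detection is given below. It assumes an Android application context, the READ_PHONE_STATE permission, and an API level on which PhoneStateListener is still available; the class name CallStateMonitor is hypothetical.

```java
import android.content.Context;
import android.telephony.PhoneStateListener;
import android.telephony.TelephonyManager;

// Minimal sketch: register a PhoneStateListener and react to call-state changes.
public class CallStateMonitor {

    public void register(Context context) {
        TelephonyManager telephonyManager =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);

        PhoneStateListener listener = new PhoneStateListener() {
            @Override
            public void onCallStateChanged(int state, String incomingNumber) {
                switch (state) {
                    case TelephonyManager.CALL_STATE_IDLE:
                        // No call, or the previous call has been hung up.
                        break;
                    case TelephonyManager.CALL_STATE_RINGING:
                        // Incoming call that has not been answered yet.
                        break;
                    case TelephonyManager.CALL_STATE_OFFHOOK:
                        // The device is in a call; encoding control can be applied here.
                        break;
                }
            }
        };

        telephonyManager.listen(listener, PhoneStateListener.LISTEN_CALL_STATE);
    }
}
```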
In addition, whether the electronic device is currently in a call state can also be determined by inspecting communication requests sent or received by the electronic device. For example, if the electronic device establishes the voice communication channel over mobile communication, it may be determined to be in a call state when it is detected that the electronic device has sent a traffic channel allocation request to the base station or has received a traffic channel allocation instruction from the base station.
Step S120: and acquiring the audio coding type corresponding to the call characteristic parameter.
It can be understood that the audio data to be transmitted to the audio playing device by the electronic device in the call state is acquired by the electronic device from another terminal device. For example, if the electronic device performs a voice call with another terminal through the base station, in one case, the other terminal transmits the collected audio data to the electronic device, and then the electronic device transmits the audio data to the audio playing device. In this case, the electromagnetic environment of the other terminal and the signal environment between the other terminal and the electronic device affect the quality of the audio data.
As one mode, the call characteristic parameter includes call quality information. In this case, the step of obtaining the audio coding type corresponding to the call characteristic parameter may comprise: obtaining an audio coding type corresponding to the call quality information, where the lower the call quality represented by the call quality information, the higher the code rate of the corresponding audio coding type. In this way, when the call environment is poor, the electronic device preserves the quality of the original audio data as faithfully as possible, and therefore also the quality of the audio played by the audio playing device, so that the user can clearly hear the voice content.
Under the condition that the call characteristic parameters include call quality information, the call quality information can be determined by acquiring parameters of downlink voice in the call process. In this case, the step of acquiring the current call characteristic parameter includes: and acquiring parameters of downlink voice in the call process, and acquiring call quality information according to the parameters of the downlink voice.
As one mode, the acquired parameters of the downlink voice include the signal-to-noise ratio of the downlink voice. It should be noted that the signal-to-noise ratio, abbreviated SNR or S/N, refers to the ratio of signal to noise in an electronic device or system. The signal refers to the electronic signal from outside the device that the electronic device needs to process (for example, the aforementioned downlink voice containing audio data), while the noise refers to the irregular extra signal (or information) that is not present in the original signal and is generated after passing through the device; this noise does not change as the original signal changes.
In this application, as one approach, the call quality information includes quality levels, e.g., the number 1 indicates the highest quality level, the number 2 indicates the next highest quality level, and so on. Then in this case the call quality information is a number that characterizes the quality class. As one approach, a mapping between the signal-to-noise ratio and a number characterizing the quality level may be established. As shown in the following table:
ID    SNR    grade
1     a      1
2     b      2
Table 1
In Table 1, the field SNR represents the signal-to-noise ratio and the field grade represents the quality level. For a signal-to-noise ratio of a, the corresponding quality level is 1; for a signal-to-noise ratio of b, the corresponding quality level is 2. Accordingly, different audio coding types may be determined for different quality levels. For example, level 1, the highest quality level, may correspond to an audio coding type with an ordinary coding rate, while the lower quality levels 2 or 3 may correspond to the audio coding type with the highest coding rate. It can be understood that the higher the coding rate, the higher the fidelity of the encoded audio data, and the audio playing device can reproduce the original content of the audio data with higher quality.
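As a rough illustration of this two-step mapping, the following Java sketch first maps a signal-to-noise ratio to a quality level and then maps the level to a coding type. The thresholds standing in for a and b, and the concrete codec chosen for each level, are assumptions rather than values specified by the application.

```java
// Illustrative sketch of the mapping in Table 1: SNR -> quality level -> coding type.
public class CodingTypeSelector {

    enum AudioCodingType { SBC, AAC, APTX }

    // Hypothetical SNR thresholds standing in for "a" and "b" in Table 1.
    private static final double SNR_A = 25.0; // dB
    private static final double SNR_B = 15.0; // dB

    static int qualityLevel(double snrDb) {
        if (snrDb >= SNR_A) return 1;  // best quality
        if (snrDb >= SNR_B) return 2;
        return 3;                      // worst quality
    }

    static AudioCodingType codingTypeForLevel(int level) {
        // Lower call quality maps to a higher-rate coding type, as described above.
        switch (level) {
            case 1:  return AudioCodingType.SBC;   // ordinary coding rate
            default: return AudioCodingType.APTX;  // highest coding rate
        }
    }

    public static void main(String[] args) {
        double snr = 18.0;
        System.out.println(codingTypeForLevel(qualityLevel(snr)));
    }
}
```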
Furthermore, as another mode, the call quality information may be determined according to the surrounding environment where the terminal device that has established the voice communication channel with the electronic device is located. It is understood that as shown in fig. 2, the terminal device 99 transmits the collected audio data to the base station 98, the base station 98 transmits the audio data to the electronic device 100, and the electronic device 100 transmits the audio data to the audio playing device (illustrated as the bluetooth headset 97). Alternatively, as shown in fig. 3, unlike the method shown in fig. 2, the terminal device 99 is connected to the network through the wireless access point 96, and the electronic device 100 is connected to the network through the wireless access point 95, so that the audio data collected by the terminal device may be transmitted to the network through the wireless access point 96, then to the wireless access point 95, and then to the electronic device 100.
In this case, the audio data to be transmitted to the audio playing device by the electronic device is originally collected by the terminal device that establishes the voice communication channel with the electronic device, and if the surrounding environment of the terminal device that establishes the voice communication channel with the electronic device is not good, the quality of the audio data itself may be poor. For example, if the ambient environment is noisy, then the captured audio data may include loud ambient noise. For another example, if the signal strength between the terminal device and the base station that establish the voice communication channel with the electronic device is not strong, a large loss of audio data transmitted to the base station may be caused.
Then, as one way, the terminal device that has established a voice communication channel with the electronic device may transmit an environmental parameter (a parameter representing the intensity of noise or the intensity of a signal connected to a base station) representing the current surrounding environment to the electronic device, so that the electronic device determines call quality information according to the environmental parameter. The terminal device establishing the voice communication channel with the electronic device can transmit the environment parameters to the electronic device in various ways.
Optionally, the terminal device that establishes the voice communication channel with the electronic device may add the environment parameter to a certain parameter of the audio data, and in this case, the electronic device may extract a value of the certain parameter from the audio data after receiving the audio data, so as to obtain the environment parameter. Furthermore, as another mode, if both the electronic device and the terminal device that establishes the voice communication channel with the electronic device can access the network in a communication mode other than the network protocol mode that establishes the voice communication channel, the terminal device that establishes the voice communication channel with the electronic device may also upload the environment parameter to the server first, and then the server pushes the environment parameter to the electronic device.
For example, suppose the voice communication channel between the electronic device and the terminal device is established based on a mobile communication protocol such as 2G, 3G, or 4G. If both the electronic device and the terminal device support WIFI communication, or both support simultaneously establishing packet-switched and circuit-switched connections, the terminal device can transmit the collected environment parameter to the server over WIFI or a packet-switched connection, and the server then forwards the environment parameter to the electronic device in the same manner. It can be understood that, so the server knows to which electronic device the received environment parameter should be delivered, the data the terminal device uploads to the server includes not only the environment parameter but also an identifier of the corresponding electronic device, for example the MAC address of the electronic device, so that the server can select, from the electronic devices that have established a network connection with it, the electronic device corresponding to that MAC address as the receiver of the environment parameter.
As one way, the terminal device may organize the data sent to the server in a custom data-encapsulation format. In this case, the data the terminal device transmits to the server may be "#" + "44-45-53-54-00-00" + "N" + "80db" + "#", where the leading "#" marks the beginning of the data and the trailing "#" marks its end, "44-45-53-54-00-00" is the MAC address of the electronic device, "N" marks an entry describing ambient noise, and "80db" is the ambient-noise value. It can be understood that, after receiving this piece of data, the server looks up the IP address corresponding to the MAC address 44-45-53-54-00-00 and, if it is found, sends the noise parameter of 80db to the electronic device at that IP address. The server may likewise package the data it transmits using the aforementioned custom data-encapsulation format.
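A simple Java sketch of building and parsing a frame in this format is shown below. The frame layout follows the example above; the class and method names are hypothetical, and the parser assumes the MAC address always precedes the single type marker "N".

```java
// Sketch of the custom frame: "#" + <MAC address> + "N" + <noise value> + "#",
// e.g. "#44-45-53-54-00-00N80db#".
public class EnvParameterFrame {

    static String build(String macAddress, String noise) {
        return "#" + macAddress + "N" + noise + "#";
    }

    // Returns {macAddress, noise}, or null if the frame is malformed.
    static String[] parse(String frame) {
        if (!frame.startsWith("#") || !frame.endsWith("#")) return null;
        String body = frame.substring(1, frame.length() - 1);
        int marker = body.indexOf('N');
        if (marker < 0) return null;
        return new String[] { body.substring(0, marker), body.substring(marker + 1) };
    }

    public static void main(String[] args) {
        String frame = build("44-45-53-54-00-00", "80db");
        String[] fields = parse(frame);
        System.out.println(fields[0] + " -> " + fields[1]); // 44-45-53-54-00-00 -> 80db
    }
}
```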
It should be noted that, in this case, the communication quality information may still be the quality level, and the association relationship between the environment parameter and the quality level may still adopt the manner of the data table, which is not described herein again.
In addition, in addition to the aforementioned manner of acquiring call quality information, the call quality information may also be determined according to the uplink speech. In this case, the step of acquiring the current call characteristic parameter includes: identifying uplink voice in the communication process; and acquiring the call quality information according to the content identified in the uplink voice.
It can be understood that the uplink voice is the voice collected by the electronic device and sent to the communication peer, that is, the voice sent to the terminal communicating with the electronic device. The electronic device may store, in a local dictionary library in advance, voice content that indicates poor quality of the voice signal. After acquiring the uplink voice, the electronic device recognizes its content, checks whether the recognized content contains any entry present in the dictionary library, and if so, determines the call quality information corresponding to the recognized content.
For example, as one mode, if the call quality information is still expressed as a level: if a phrase indicating that the far end cannot be heard clearly is recognized in the uplink voice content, the current call quality is judged to be level 2; if a phrase indicating that the far end cannot be heard at all is recognized, the current call quality is judged to be level 1; and if no such content is recognized in the uplink voice, the current call quality can be determined to be level 3. Different Bluetooth audio encodings may then correspond to different call quality levels. Optionally, level 3, representing the best call quality, may correspond to an audio coding scheme with a relatively low coding rate, which still allows the audio data to be transmitted to the audio playing device with high fidelity without consuming excessive power and processor resources of the electronic device. For levels 1 and 2, where the call quality is lower and the quality of the audio data received by the electronic device is already reduced, the audio coding scheme with the highest coding rate is used, so that as little distortion as possible is introduced while the electronic device transmits the audio data to the audio playing device, avoiding further degradation of the audio data.
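The following Java sketch illustrates this dictionary lookup. The phrases in the dictionary and the levels they map to are purely illustrative assumptions; the point is only that recognized uplink content found in the dictionary yields a corresponding quality level.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of matching recognized uplink speech against a locally stored dictionary of
// phrases that indicate poor call quality.
public class UplinkQualityEstimator {

    // Phrase -> quality level (1 = worst heard quality). Phrases are illustrative only.
    private static final Map<String, Integer> DICTIONARY = new LinkedHashMap<>();
    static {
        DICTIONARY.put("can't hear you at all", 1);
        DICTIONARY.put("can't hear you clearly", 2);
    }

    static int qualityLevel(String recognizedUplinkText) {
        String text = recognizedUplinkText.toLowerCase();
        for (Map.Entry<String, Integer> entry : DICTIONARY.entrySet()) {
            if (text.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return 3; // nothing in the dictionary was recognized: best quality level
    }
}
```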
It should be noted that, the aforementioned call characteristic parameter including the call quality information is an optional implementation manner of the present application, and besides this manner, the call characteristic parameter may include a call object level. In this case, the step of obtaining the audio coding type corresponding to the call characteristic parameter comprises: and acquiring an audio coding type corresponding to the call object grade, wherein the higher the call object grade is, the higher the code rate of the audio coding type corresponding to the call object grade is.
When a user communicates using the electronic device, the user may know that certain call partners, for some reason, speak with unclear wording or without fluency, for example older call partners. As one mode, the user of the electronic device may mark such users in the electronic device. When the electronic device detects that it is in a call state, it can check whether the current call partner has been marked as a user who speaks with unclear wording or without fluency, or as a user requiring a higher Bluetooth audio coding rate; if so, that partner is treated as a user with a higher user level, and the code rate of the audio coding type corresponding to that level is higher. In addition, for some more important call partners, such as important clients, the user may also mark them as requiring a higher Bluetooth audio coding rate.
Further, the call characteristic parameter may alternatively comprise the complexity of the electromagnetic environment in which the electronic device is located. In the present application, the electronic device may determine the complexity of the current electromagnetic environment in a number of ways. Optionally, as one manner, the electronic device may determine the number of WLAN networks it can currently detect and map that number to a complexity of the electromagnetic environment. For example, when the number of WLAN networks falls in the interval from 0 to a, the corresponding complexity is 1; if it falls in the interval from a to b, the corresponding complexity is 2; and so on. In this case, a higher complexity may correspond to an audio coding scheme with a higher coding rate.
It will be appreciated that both WLAN networks and Bluetooth may operate in the 2.4 GHz band, although the specific channel on which a signal is broadcast may differ. Besides counting WLAN networks, the electronic device can also determine how many other devices are using the channel on which it is currently broadcasting its Bluetooth signal, and derive the complexity of the electromagnetic environment from that number. For example, when the number of occupying devices falls in the interval from 0 to a, the corresponding complexity is 1; if it falls in the interval from a to b, the corresponding complexity is 2; and so on. In this case, a higher complexity may correspond to an audio coding scheme with a higher coding rate. Here, channel occupation refers to the number of other electronic devices that are also using the channel on which the current electronic device broadcasts its Bluetooth signal.
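A minimal Java sketch of this interval mapping is given below; the boundary values standing in for a and b are assumptions chosen for illustration.

```java
// Sketch of mapping the number of WLAN networks (or of devices occupying the current
// Bluetooth channel) to an electromagnetic-environment complexity level.
public class ElectromagneticComplexity {

    private static final int A = 5;   // assumed boundary "a"
    private static final int B = 15;  // assumed boundary "b"

    static int complexity(int count) {
        if (count <= A) return 1;
        if (count <= B) return 2;
        return 3; // higher complexity corresponds to a higher-rate audio coding scheme
    }
}
```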
Step S130: and coding the call audio data transmitted to the audio playing equipment based on the audio coding type.
As one way, the electronic device may encode call audio data transmitted to the audio playback device based on the audio encoding type if it is detected that the electronic device is in the audio playback device connection state. It should be noted that the electronic device may play audio through an audio playing module (e.g., a speaker or an earphone) configured by the electronic device, or may transmit the audio data to an external audio playing device, so that the external audio playing device plays the audio data. In this case, the connection status of the audio playback device is used to indicate that the electronic device is currently transmitting audio data to an external audio playback device for playback.
Either a wired or a wireless connection between the electronic device and an external audio playing device can trigger the electronic device to enter the audio playing device connection state. Wireless connections include connections based on the IEEE 802.11 protocol and connections based on the Bluetooth communication protocol.
In the case of a connection based on the Bluetooth communication protocol, as one mode, the electronic device detects whether it is connected to a Bluetooth device; if so, it detects whether the type of the Bluetooth device is an audio device type, and if it is, the electronic device is judged to be in the audio playing device connection state. For example, on an electronic device based on the Android operating system, the type of the currently connected device may be determined from the device class reported by that device's BluetoothClass. If the returned value is AUDIO_VIDEO_CAMCORDER, the device is an image acquisition device; if it is AUDIO_VIDEO_CAR_AUDIO, the device is a vehicle-mounted device; if it is AUDIO_VIDEO_HIFI_AUDIO, the device is an audio or video playback device; and if it is AUDIO_VIDEO_MICROPHONE, the device is a microphone.
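For illustration, the following Java sketch checks the device class of a connected Bluetooth device against the constants mentioned above. Obtaining the connected BluetoothDevice instance (for example from an A2DP profile proxy) is assumed to have happened elsewhere, and the treatment of the headphone and headset classes as audio playback devices is an assumption.

```java
import android.bluetooth.BluetoothClass;
import android.bluetooth.BluetoothDevice;

// Sketch: decide whether a connected Bluetooth device is an audio playback device.
public class BluetoothDeviceTypeChecker {

    static boolean isAudioPlaybackDevice(BluetoothDevice device) {
        BluetoothClass bluetoothClass = device.getBluetoothClass();
        if (bluetoothClass == null) return false;

        switch (bluetoothClass.getDeviceClass()) {
            case BluetoothClass.Device.AUDIO_VIDEO_CAR_AUDIO:        // vehicle-mounted device
            case BluetoothClass.Device.AUDIO_VIDEO_HIFI_AUDIO:       // hi-fi audio playback device
            case BluetoothClass.Device.AUDIO_VIDEO_HEADPHONES:       // headphones (assumed audio type)
            case BluetoothClass.Device.AUDIO_VIDEO_WEARABLE_HEADSET: // headset (assumed audio type)
                return true;
            case BluetoothClass.Device.AUDIO_VIDEO_CAMCORDER:        // image acquisition device
            case BluetoothClass.Device.AUDIO_VIDEO_MICROPHONE:       // microphone
            default:
                return false;
        }
    }
}
```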
According to the coding control method, in a call state, a current call characteristic parameter is acquired, an audio coding type corresponding to the call characteristic parameter is acquired, and call audio data transmitted to audio playing equipment is coded based on the audio coding type. Therefore, in the process of transmitting the audio data in the call process to the audio playing device, the audio coding type of the audio data to be transmitted to the audio playing device can be determined according to the call characteristic parameters, and the determination of the audio coding type can be more flexible.
Referring to fig. 4, an encoding control method provided by the present application is applied to an electronic device, and the method includes:
step S210: and detecting whether the electronic equipment is in a vehicle-mounted mode or not in a call state.
The electronic device can determine whether the vehicle is in a driving state in various ways. As one way, when the electronic device detects that a navigation application installed on it has turned on navigation mode or started navigation, the vehicle is determined to be in a driving state. If the electronic device collected the driver's facial feature parameters in real time from the very beginning of navigation, it would consume more power; moreover, driving fatigue is unlikely at the start of a trip and is more likely to occur after the vehicle has been driving on a straight stretch for a long period. Therefore, as one mode, the electronic device starts to acquire the driver's facial feature parameters only when it detects that the installed navigation application has turned on navigation mode or started navigation and further detects that the navigation route contains, after the current position, a straight road section longer than a preset length. A straight road section is a section of the route that requires neither changing roads nor making right-angle turns.
Alternatively, the electronic device may determine that the vehicle it is in is in a driving state when it detects shaking at a preset frequency. It can be understood that when the vehicle is running, it shakes slightly and the shaking frequency fluctuates within a certain range; when the electronic device detects that its own shaking falls within that range, it can be determined that the vehicle is in a driving state.
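As a very rough sketch of the shaking-based detection, the following Java code samples the accelerometer and treats sustained vibration within an assumed magnitude range as an indication of driving. The thresholds and the simple magnitude test are illustrative assumptions, not the detection scheme defined by the application.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: treat low-amplitude vibration within an assumed range as "likely driving".
public class DrivingStateDetector implements SensorEventListener {

    private static final float MIN_MAGNITUDE = 0.3f; // assumed lower bound of vibration
    private static final float MAX_MAGNITUDE = 2.0f; // assumed upper bound of vibration

    private boolean likelyDriving;

    public void start(Context context) {
        SensorManager sensorManager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        likelyDriving = magnitude >= MIN_MAGNITUDE && magnitude <= MAX_MAGNITUDE;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    public boolean isLikelyDriving() {
        return likelyDriving;
    }
}
```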
Step S220: and if the electronic equipment is detected not to be in the vehicle-mounted mode, encoding the call audio data transmitted to the audio playing equipment based on the default audio encoding type.
Step S230: and if the electronic equipment is detected to be in the vehicle-mounted mode, acquiring the current call characteristic parameters.
Step S240: and acquiring the audio coding type corresponding to the call characteristic parameter.
Step S250: and coding the call audio data transmitted to the audio playing equipment based on the audio coding type.
According to the encoding control method, whether the electronic device is in the vehicle-mounted mode is detected in the call state; if the electronic device is detected to be in the vehicle-mounted mode, the current call characteristic parameter is acquired, the audio coding type corresponding to the call characteristic parameter is acquired, and the call audio data transmitted to the audio playing device is then encoded based on that audio coding type. Therefore, when audio data from the call is transmitted to the audio playing device, the audio coding type used for that data can be determined according to the call characteristic parameter, making the determination of the audio coding type more flexible. Moreover, an encoding mode with better transmission quality can be selected in the vehicle-mounted mode, so that the user hears the audio content more clearly and does not need to divert attention to make out what is being said, which improves driving safety.
Referring to fig. 5, a transmission control apparatus 300 provided by the present application operates on an electronic device, the apparatus 300 includes: a parameter acquisition unit 310, a coding type acquisition unit 320, and a transmission control unit 330.
The parameter obtaining unit 310 is configured to obtain a current call characteristic parameter in a call state.
An encoding type obtaining unit 320, configured to obtain an audio encoding type corresponding to the call characteristic parameter.
As one way, the call characteristic parameter includes call quality information. The encoding type obtaining unit 320 is specifically configured to obtain an audio encoding type corresponding to the call quality information, where the lower the call quality represented by the call quality information, the higher the code rate of the corresponding audio encoding type.
Optionally, the parameter obtaining unit 310 is specifically configured to obtain a parameter of the downlink voice in the call process, and obtain the call quality information according to the parameter of the downlink voice.
Optionally, the parameter obtaining unit 310 is specifically configured to identify the uplink speech in the call process, and obtain the call quality information according to the content identified in the uplink speech.
Alternatively, the call characteristic parameter includes a call object class. The encoding type obtaining unit 320 is specifically configured to obtain an audio encoding type corresponding to the call object level, where the higher the call object level is, the higher the code rate of the audio encoding type corresponding to the call object level is.
A transmission control unit 330, configured to encode call audio data transmitted to the audio playing device based on the audio encoding type.
Referring to fig. 6, a transmission control apparatus 400 provided in the present application operates on an electronic device, where the apparatus 400 includes:
a call mode detection unit 410, configured to detect whether the electronic device is in a call mode.
The vehicle-mounted mode detection unit 420 is configured to detect whether the electronic device is in the vehicle-mounted mode when the call mode detection unit 410 detects that the electronic device is in the call mode.
A transmission control unit 430, configured to encode call audio data transmitted to the audio playing device based on a default audio encoding type if the in-vehicle mode detection unit 420 detects that the electronic device is not in the in-vehicle mode.
A parameter obtaining unit 440, configured to obtain a current call characteristic parameter if the vehicle-mounted mode detecting unit 420 detects that the electronic device is in the vehicle-mounted mode;
an encoding type obtaining unit 450, configured to obtain an audio encoding type corresponding to the call characteristic parameter.
And the transmission control unit 430 is further configured to encode call audio data transmitted to the audio playing device based on the audio encoding type.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling. In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
It should be noted that how to encode audio data according to a given audio coding type is known in the prior art, and this application does not describe it in detail.
To sum up, the encoding control method, the encoding control apparatus, and the electronic device provided in the present application acquire, in a call state, the current call characteristic parameter and the audio coding type corresponding to that parameter, and encode the call audio data transmitted to the audio playing device based on that audio coding type. Therefore, when audio data from a call is transmitted to the audio playing device, the audio coding type used for that data can be determined according to the call characteristic parameter, making the determination of the audio coding type more flexible.
An electronic device provided by the present application will be described with reference to fig. 7.
Referring to fig. 7, based on the foregoing encoding control method and apparatus, an embodiment of the present application further provides another electronic device 100 capable of executing the encoding control method. The electronic device 100 includes one or more processors 102 (only one is shown) and a memory 104, coupled to each other. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
Processor 102 may include one or more processing cores. The processor 102 connects the various components of the electronic device 100 using various interfaces and circuitry, and performs the functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and by invoking data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The Memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing various method embodiments described below, and the like. The storage data area may also store data created by the terminal 100 in use, such as a phonebook, audio-video data, chat log data, and the like.
The wireless module 106 is configured to receive and transmit electromagnetic waves, and achieve interconversion between the electromagnetic waves and the electrical signals, so as to communicate with a communication network or other devices, for example, an audio playing device. The wireless module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The wireless module 106 may communicate with various networks, such as the internet, an intranet, a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network described above may use various communication standards, protocols, and technologies, including but not limited to the IEEE802 protocol and the bluetooth protocol, and may even include those that are not yet developed.
Furthermore, the audio circuit 110 may be further configured to encode audio data to be transmitted via bluetooth (when the wireless module 106 transmits data based on the bluetooth communication protocol), wherein the type of audio encoding used in encoding may be obtained according to the manner provided in the foregoing embodiments.
Referring to fig. 8, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 800 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (7)

1. An encoding control method applied to an electronic device, the method comprising:
detecting whether the electronic equipment is in a vehicle-mounted mode or not in a call state;
if the electronic equipment is detected to be in the vehicle-mounted mode, acquiring current call characteristic parameters, wherein the call characteristic parameters comprise call quality information;
acquiring an audio coding type corresponding to the call quality information, wherein the lower the call quality represented by the call quality information is, the higher the code rate of the audio coding type corresponding to the call quality information is;
coding call audio data transmitted to the audio playing equipment based on the audio coding type;
and if the electronic equipment is detected not to be in the vehicle-mounted mode, encoding the call audio data transmitted to the audio playing equipment based on the default audio encoding type.
2. The method of claim 1, wherein the step of obtaining the current call feature parameters comprises:
acquiring parameters of downlink voice in the call process;
and acquiring the call quality information according to the parameters of the downlink voice.
3. The method of claim 1, wherein the step of obtaining the current call feature parameters comprises:
identifying uplink voice in the communication process;
and acquiring the call quality information according to the content identified in the uplink voice.
4. The method of claim 1, wherein the step of detecting whether the electronic device is in an in-vehicle mode comprises:
detecting whether the electronic equipment is in a navigation state;
or detecting whether the electronic equipment is in a preset shaking state and moves at a preset speed.
5. A transmission control apparatus, operable with an electronic device, the apparatus comprising:
the vehicle-mounted mode detection unit is used for detecting whether the electronic equipment is in a vehicle-mounted mode or not in a call state;
the parameter acquisition unit is used for acquiring current call characteristic parameters if the electronic equipment is detected to be in the vehicle-mounted mode, wherein the call characteristic parameters comprise call quality information;
the coding type obtaining unit is used for obtaining an audio coding type corresponding to the call quality information, wherein the lower the call quality represented by the call quality information is, the higher the code rate of the audio coding type corresponding to the call quality information is;
the transmission control unit is used for encoding call audio data transmitted to the audio playing equipment based on the audio encoding type;
the transmission control unit is further configured to encode the call audio data transmitted to the audio playing device based on a default audio encoding type if it is detected that the electronic device is not in the vehicle-mounted mode.
6. An electronic device comprising one or more processors and memory;
one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-4.
7. A computer-readable storage medium, having a program code stored therein, wherein the program code when executed by a processor performs the method of any of claims 1-4.
CN201810925240.0A 2018-08-14 2018-08-14 Encoding control method and device and electronic equipment Active CN109102816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810925240.0A CN109102816B (en) 2018-08-14 2018-08-14 Encoding control method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810925240.0A CN109102816B (en) 2018-08-14 2018-08-14 Encoding control method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109102816A CN109102816A (en) 2018-12-28
CN109102816B true CN109102816B (en) 2020-12-29

Family

ID=64849650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810925240.0A Active CN109102816B (en) 2018-08-14 2018-08-14 Encoding control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109102816B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110072011B (en) * 2019-04-24 2021-07-20 Oppo广东移动通信有限公司 Method for adjusting code rate and related product
CN110689899A (en) * 2019-09-17 2020-01-14 重庆百瑞互联电子技术有限公司 Dynamic adjustment method and system for Bluetooth audio
CN111081264B (en) * 2019-12-06 2022-03-29 北京明略软件系统有限公司 Voice signal processing method, device, equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1728598A (en) * 2004-07-31 2006-02-01 西门子(中国)有限公司 Method for adjusting coding speed in procedure of voice mobile communication
CN102752809A (en) * 2011-04-21 2012-10-24 联芯科技有限公司 Uplink adaptive multiple rate (AMR) regulation and control method and system
CN104219402A (en) * 2014-02-14 2014-12-17 爱沟通网络科技有限公司 System for improving VoIP communication quality
CN105023579A (en) * 2014-04-30 2015-11-04 中国电信股份有限公司 Voice coding realization method and apparatus in voice communication, and communication terminal
CN105280188A (en) * 2014-06-30 2016-01-27 美的集团股份有限公司 Audio signal encoding method and system based on terminal operating environment
CN105374363A (en) * 2014-08-25 2016-03-02 广东美的集团芜湖制冷设备有限公司 Audio signal encoding method and system
CN106028401A (en) * 2016-05-09 2016-10-12 Tcl移动通信科技(宁波)有限公司 Selection method and system for voice coding rate in call
CN107105360A (en) * 2017-06-20 2017-08-29 惠州Tcl移动通信有限公司 A kind of Anti-interference earphone system and its anti-interference method
CN107170460A (en) * 2017-06-30 2017-09-15 深圳Tcl新技术有限公司 Sound quality adjusting method, system, host side and storage medium
CN107404462A (en) * 2016-05-20 2017-11-28 北京小米移动软件有限公司 Adjust the method and device of session parameter

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE9601606D0 (en) * 1996-04-26 1996-04-26 Ericsson Telefon Ab L M Ways for radio telecommunication systems
KR100294701B1 (en) * 1998-03-25 2001-07-12 서평원 method for removal interferemce seguential in CDMA system with multi transmission loss
JP3273599B2 (en) * 1998-06-19 2002-04-08 沖電気工業株式会社 Speech coding rate selector and speech coding device
JP2000091985A (en) * 1998-09-08 2000-03-31 Hitachi Ltd Power control method for communication system
JP3805205B2 (en) * 2000-04-06 2006-08-02 株式会社エヌ・ティ・ティ・ドコモ Method and apparatus for measuring communication quality in CDMA cellular system
CN1275400C (en) * 2002-12-31 2006-09-13 中兴通讯股份有限公司 Velocity regulating method of speech sound self adaptive multivelocity
CN104837042B (en) * 2015-05-06 2018-01-16 腾讯科技(深圳)有限公司 The coding method of digital multimedia data and device
CN106816155B (en) * 2016-12-23 2020-04-24 维沃移动通信有限公司 Method and device for improving signal-to-noise ratio of voice transmission

Also Published As

Publication number Publication date
CN109102816A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN109150399B (en) Data transmission method and device, electronic equipment and computer readable medium
EP3629561B1 (en) Data transmission method and system, and bluetooth headphone
CN109445740B (en) Audio playing method and device, electronic equipment and storage medium
CN109257500B (en) Audio playing method and device, electronic equipment and storage medium
CN109041024B (en) Code rate optimization method and device, electronic equipment and storage medium
CN109102816B (en) Encoding control method and device and electronic equipment
US10950238B2 (en) Bluetooth speaker base, method and system for controlling thereof
CN109151194B (en) Data transmission method, device, electronic equipment and storage medium
CN109274407B (en) Data transmission method, device, electronic equipment and storage medium
CN109151212B (en) Equipment control method and device and electronic equipment
US8977202B2 (en) Communication apparatus having a unit to determine whether a profile is operating
CN103327014B (en) A kind of method of speech processing, Apparatus and system
CN109144463B (en) Transmission control method and device and electronic equipment
CN103402171A (en) Method and terminal for sharing background music during communication
CN109348453B (en) Audio transmission method and system based on vehicle-mounted Ethernet
CN111199743A (en) Audio coding format determining method and device, storage medium and electronic equipment
CN104092658B (en) Audio data play method and device
US20060044120A1 (en) Car audio system and method combining with MP3 player
CN114006890B (en) Data transmission method, device, storage medium and terminal device
EP3745813A1 (en) Method for operating a bluetooth device
KR20230018495A (en) Audio coding method and apparatus
CN110366752B (en) Voice frequency division transmission method, source terminal, play terminal, source terminal circuit and play terminal circuit
CN109298846B (en) Audio transmission method and device, electronic equipment and storage medium
CN115482827A (en) Audio data transmission method and device and chip
CN109150400B (en) Data transmission method and device, electronic equipment and computer readable medium

Legal Events

Code    Description
PB01    Publication
SE01    Entry into force of request for substantive examination
GR01    Patent grant