WO2020232631A1 - Voice frequency division transmission method, source end, playback end, source end circuit, and playback end circuit - Google Patents


Info

Publication number
WO2020232631A1
Authority
WO
WIPO (PCT)
Prior art keywords
frequency band
voice
source
link
voice signal
Prior art date
Application number
PCT/CN2019/087811
Other languages
English (en)
Chinese (zh)
Inventor
郭仕林
Original Assignee
深圳市汇顶科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to CN201980000976.XA (CN110366752B)
Priority to PCT/CN2019/087811 (WO2020232631A1)
Publication of WO2020232631A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00: Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/0017: Lossless audio signal coding; Perfect reconstruction of coded audio signal by transmission of coding error
    • G10L 19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • This application relates to the field of communications, in particular to a voice frequency division transmission method, a source end, a playback end, a source end circuit, and a playback end circuit.
  • Because transmission bandwidth is limited, lossy encoding with an ultra-high compression rate is usually used for transmission, sacrificing sound quality in exchange for longer battery life.
  • Commonly used lossy coding methods include the AAC (Advanced Audio Coding) coding method, the SBC (Sub-band Coding) coding method and the MP3 (Moving Picture Experts Group Audio Layer III) coding method. To save bandwidth, the high-frequency part of the signal is usually discarded directly; when the encoded data is received, the original voice signal cannot be recovered because the high-frequency part has been lost, so a good sound quality experience cannot be provided to the user.
  • To solve this, this application provides a voice frequency division transmission method, a source end, a playback end, a source end circuit, and a playback end circuit.
  • the first aspect of the embodiments of the present application provides a voice frequency division transmission method, including:
  • The source end encodes the first frequency band speech signal and the second frequency band speech signal; the source end marks frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal; and the source end sends the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, respectively.
  • In an implementation of the first aspect, the compression rate of the encoding method of the second frequency band speech signal is higher than the compression rate of the encoding method of the first frequency band speech signal; or the compression rate of the encoding method of the second frequency band speech signal is lower than the compression rate of the encoding method of the first frequency band speech signal.
  • The source end encoding the second frequency band voice signal includes:
  • the source end encodes a high-frequency speech signal, where the encoding method for the high-frequency speech signal includes CELT encoding or SBR encoding; or
  • the source end encodes a non-high-frequency speech signal, where the encoding method for the non-high-frequency speech signal includes SILK encoding, SBC encoding, AAC encoding or MP3 encoding.
  • The source end marking the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal includes:
  • the source end marks the encoded voice signal of any frequency band detected within a preset delay as one frame of data, where the frame synchronization information includes the start time and end time of a frame of data.
  • Before the source end sends the encoded second frequency band voice signal with frame synchronization information to the playback end through the second synchronization link, the method includes:
  • the source end establishes the second synchronization link with the playback end according to the used link parameters.
  • the source end sends a second synchronization link request to the playback end
  • the source receives the reply to the second synchronization link request
  • the source terminal determines whether to establish the second synchronization link according to the response to the second synchronization link request.
  • Before the source end encodes the voice signal in the second frequency band, the method further includes:
  • the source terminal judges whether the playback terminal supports voice frequency division transmission according to the control data stream, and the control data stream is transmitted through the asynchronous link.
  • The source end judging whether the playback end supports voice frequency division transmission according to the control data stream includes:
  • the control data stream includes the value of the custom UUID. If the custom UUID value of the player received by the source is equal to the preset UUID value, the source determines that the player supports voice frequency division transmission.
  • If the source end determines that the playback end supports voice frequency division transmission according to the control data stream, the source end sends an audio configuration parameter request to the playback end;
  • the source terminal receives the audio configuration parameters corresponding to the voice signal in the second frequency band supported by the playback terminal.
  • the audio configuration parameters include coding and decoding parameters and bit rates, and the coding and decoding parameters include one or both of coding and decoding methods;
  • the source terminal determines the used audio configuration parameters according to the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal and sends the used audio configuration parameters to the playback terminal.
  • the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal
  • the source disconnects one or more second synchronization links according to the power situation or the link quality of the second synchronization link;
  • the source establishes one or more second synchronization links according to the power condition or the link quality of the second synchronization link.
  • the second aspect of the embodiments of the present application provides a voice frequency division transmission method, including:
  • the playback terminal receives the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively;
  • the playback terminal decodes the received voice signal in the first frequency band and the voice signal in the second frequency band;
  • the player uses the frame synchronization information to synchronize the decoded first frequency band speech signal and the decoded second frequency band speech signal.
  • the method includes:
  • the playback terminal performs digital-to-analog conversion on the synchronized voice signal in the first frequency band and the synchronized voice signal in the second frequency band through one or more different digital-to-analog converters;
  • the playback terminal amplifies the first frequency band voice signal after digital-to-analog conversion and the second frequency band voice signal after digital-to-analog conversion through one or more different amplifiers respectively;
  • the playback terminal performs electroacoustic conversion on the amplified first frequency band speech signal and the amplified second frequency band speech signal through one or more different electroacoustic converters.
  • the playback end sends the control data stream to the source end, so that the source end determines whether the playback end supports voice frequency division transmission according to the control data stream.
  • The playback end receives the second synchronization link request sent by the source end and sends a reply to the second synchronization link request;
  • if the playback end supports voice frequency division transmission, the playback end sends the link parameters supported by the playback end to the source end.
  • Before the playback end receives the second synchronization link request sent by the source end, the method includes:
  • the player receives the audio configuration parameter request sent by the source;
  • the playback terminal sends the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal to the source terminal;
  • the player receives the used audio configuration parameters sent by the source and configures the used audio configuration parameters.
  • the player disconnects one or more second synchronization links according to the power condition or the link quality of the second synchronization link ;
  • the playback end requests the source end to establish one or more second synchronization links according to the power condition or the link quality of the second synchronization link.
  • the third aspect of the embodiments of the present application provides a source terminal for voice frequency division transmission, and the source terminal includes:
  • Encoding module used to encode the first frequency band speech signal and the second frequency band speech signal
  • the pre-synchronization module is used to mark the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal;
  • the first sending module is configured to send the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively To the player.
  • the compression rate of the encoding method of the second frequency band speech signal is higher than the compression rate of the encoding method of the first frequency band speech signal
  • the compression rate of the encoding method of the speech signal in the second frequency band is lower than the compression rate of the encoding method of the speech signal in the first frequency band.
  • the encoding module includes:
  • the high-frequency encoding module is used to encode high-frequency speech signals, and the encoding methods for high-frequency speech signals include CELT encoding or SBR encoding; or
  • the non-high frequency encoding module is used to encode non-high frequency speech signals.
  • the encoding methods for non-high frequency speech signals include SILK encoding, SBC encoding, AAC encoding or MP3 encoding.
  • the pre-synchronization module includes:
  • the data frame marking module is used to mark the encoded voice signal of any frequency band detected within a preset delay as one frame of data, and the frame synchronization information includes the start time and end time of a frame of data.
  • the source further includes:
  • The first parameter determination module, used to determine the link parameters to be used according to the link parameters supported by the playback end sent by the playback end, before the first sending module sends the encoded second frequency band voice signal with frame synchronization information to the playback end through the second synchronization link;
  • the link establishment module is used to establish a second synchronization link with the playback terminal according to the used link parameters.
  • the source further includes:
  • The second sending module, used to send a second synchronization link request to the playback end before the first parameter determination module determines the link parameters to be used according to the link parameters supported by the playback end sent by the playback end;
  • the first receiving module is configured to receive a reply to the second synchronization link request
  • the first parameter determination module is further configured to determine whether to establish the second synchronization link according to the reply to the second synchronization link request.
  • the source further includes:
  • The first judging module, used to judge, before the encoding module encodes the voice signal in the second frequency band, whether the playback end supports voice frequency division transmission according to the control data stream, where the control data stream is transmitted through an asynchronous link.
  • the source also includes a UUID module for identifying the voice frequency division transmission service through a custom universally unique identification code (UUID);
  • the first judgment module includes:
  • The second judgment module, used to determine that the playback end supports voice frequency division transmission if the custom UUID value of the playback end carried in the control data stream and received by the source end is equal to the preset UUID value.
  • the source further includes:
  • The third sending module, used to send an audio configuration parameter request to the playback end if the first judging module determines that the playback end supports voice frequency division transmission according to the control data stream;
  • the second receiving module used to receive the audio configuration parameters corresponding to the voice signal in the second frequency band supported by the player.
  • The audio configuration parameters include codec parameters and bit rates, and the codec parameters include one or both of the coding mode and the decoding mode; as well as
  • the second parameter determination module is configured to determine the audio configuration parameters to be used according to the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal;
  • the third sending module is also used to send the used audio configuration parameters to the playback terminal.
  • the source further includes:
  • The first transmission control module, configured to disconnect one or more second synchronization links according to the power condition or the link quality of the second synchronization link, where the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal; or
  • the first transmission control module is also used to establish one or more second synchronization links according to the power condition or the link quality of the second synchronization link, where the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal.
  • the fourth aspect of the embodiments of the present application provides a playback terminal for voice frequency division transmission.
  • the playback terminal includes:
  • the third receiving module is used to receive the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively ;
  • the decoding module is used to decode the received coded first frequency band speech signal and the coded second frequency band speech signal;
  • the synchronization module is used to synchronize the decoded first frequency band speech signal and the decoded second frequency band speech signal through frame synchronization information.
  • the playback terminal further includes:
  • One or more different digital-to-analog conversion modules for performing digital-to-analog conversion on the synchronized voice signal in the first frequency band and the voice signal in the second frequency band respectively;
  • One or more different amplifying modules for respectively amplifying the first frequency band voice signal and the second frequency band voice signal after digital-to-analog conversion
  • One or more different electro-acoustic conversion modules are used to perform electro-acoustic conversion on the amplified first frequency band voice signal and the second frequency band voice signal respectively.
  • the playback terminal further includes:
  • The fourth sending module, used to send the control data stream to the source end before the third receiving module receives the encoded second frequency band speech signal with frame synchronization information through the second synchronization link, so that the source end can determine whether the playback end supports voice frequency division transmission according to the control data stream.
  • the playback terminal further includes:
  • The fourth receiving module, used to receive the second synchronization link request sent by the source end if the source end determines that the playback end supports voice frequency division transmission according to the control data stream;
  • the fifth sending module is used to send a reply to the second synchronization link request
  • the fifth sending module is also used to send link parameters supported by the playback end to the source end.
  • the playback terminal further includes:
  • The fifth receiving module, used to receive the audio configuration parameter request sent by the source end before the fourth receiving module receives the second synchronization link request sent by the source end;
  • the sixth sending module is used to send the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal to the source terminal;
  • the fifth receiving module is also used to receive the audio configuration parameters used by the source end.
  • the parameter configuration module is used to configure the audio configuration parameters used.
  • the playback terminal further includes:
  • the second transmission control module is configured to disconnect one or more second synchronization links according to the power condition or the link quality of the second synchronization link;
  • the second transmission control module is further configured to request the source end to establish one or more second synchronization links according to the power condition or the link quality of the second synchronization link.
  • the fifth aspect of the embodiments of the present application provides a source terminal for voice frequency division transmission, including: a memory and a processor;
  • the memory is coupled to the processor
  • Memory used to store program instructions
  • the processor is configured to call the program instructions stored in the memory to enable the source to execute the voice frequency division transmission method described in the first aspect.
  • the sixth aspect of the embodiments of the present application provides a playback terminal for voice frequency division transmission, including: a memory and a processor;
  • the memory is coupled to the processor
  • Memory used to store program instructions
  • the processor is configured to call the program instructions stored in the memory to make the playback terminal execute the voice frequency division transmission method described in the second aspect.
  • the seventh aspect of the embodiments of the present application provides a computer-readable storage medium, including a computer program stored thereon, and the computer program is executed by a processor to implement the voice frequency division transmission method described in the first aspect.
  • An eighth aspect of the embodiments of the present application provides a computer-readable storage medium, including a computer program stored thereon, wherein the computer program is executed by a processor to implement the voice frequency division transmission method described in the second aspect.
  • a ninth aspect of the embodiments of the present application provides a source circuit, including:
  • An encoder for encoding the first frequency band speech signal and the second frequency band speech signal
  • The source end controller, connected to the encoder, is used to send the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information to the playback end circuit through the first synchronization link and the second synchronization link, respectively.
  • the source circuit further includes a filter, which is connected to the encoder and is used to separate the voice signal in the first frequency band from the voice signal in the second frequency band.
  • a tenth aspect of the embodiments of the present application provides a playback end circuit, including:
  • The playback end controller, used to receive the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively; and
  • the decoder is connected to the controller of the playback terminal and is used to decode the received voice signal in the first frequency band and the voice signal in the second frequency band.
  • the playback end circuit further includes:
  • One or more different digital-to-analog converters respectively connected to the decoder, for performing digital-to-analog conversion on the decoded first frequency band speech signal and the decoded second frequency band speech signal respectively;
  • One or more different amplifiers respectively connected to one or more digital-to-analog converters for respectively amplifying the first-band voice signal after digital-to-analog conversion and the second-band voice signal after digital-to-analog conversion;
  • One or more different electro-acoustic converters are respectively connected to one or more amplifiers, and are used to perform electro-acoustic conversion on the amplified first frequency band speech signal and the amplified second frequency band speech signal respectively.
  • The advantageous effect of the embodiments of the present application is as follows: the embodiments of the present application provide a voice frequency division transmission method, a source end, a playback end, a source end circuit, and a playback end circuit, which send the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information to the playback end through a first synchronization link and a second synchronization link, respectively. This solves the problem of sound quality degradation caused by the limitation of transmission bandwidth and the problem of affecting the audio being played while the sound quality is improved.
  • FIG. 1 is a flowchart of a voice frequency division transmission method according to an embodiment of the application
  • FIG. 2 is a flowchart of an embodiment of the application in which the source end marks the encoded voice signal of any frequency band detected within a preset delay as one frame of data;
  • FIG. 3 is a schematic diagram of a source terminal acquiring frame synchronization information and a playback terminal performing synchronization according to an embodiment of the application;
  • Fig. 4 is a schematic diagram of a UUID setting method according to an embodiment of the application.
  • FIG. 5 is a flowchart of another voice frequency division transmission method according to an embodiment of the application.
  • FIG. 6 is a schematic diagram of the structure of the source end of an embodiment of the application.
  • FIG. 7 is a schematic diagram of the structure of the playback terminal according to an embodiment of the application.
  • FIG. 8 is a schematic diagram of another source structure according to an embodiment of the application.
  • FIG. 9 is a schematic structural diagram of another playback terminal according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of the structure of the source circuit and the playback circuit of an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of another source-end circuit and a playback-end circuit according to an embodiment of the application.
  • the embodiment of the present application provides a voice frequency division transmission method, which can be applied to various audio electronic devices.
  • This embodiment describes a voice frequency division transmission method provided in the embodiment of the present application from the perspective of the source end.
  • the source end is the transmitting end of the voice signal
  • the playback end is the receiving end of the voice signal.
  • The source end can be an electronic device that stores the voice signal, such as a mobile phone, TV, computer, tablet or MP3 player, or one or more chips in such an electronic device;
  • the playback end can be an electronic device that plays the voice signal, such as a speaker or earphone, or one or more chips in such an electronic device.
  • the source terminal and the playback terminal in this embodiment can support various transmission protocols, such as Bluetooth low energy protocol, classic Bluetooth protocol, or wifi, which is not limited in this embodiment.
  • FIG. 1 is a flowchart of a voice frequency division transmission method according to an embodiment of the present application. The method includes the following steps:
  • the source end encodes the voice signal in the first frequency band and the voice signal in the second frequency band;
  • the number of frequency bands of the first frequency band speech signal and the second frequency band speech signal is not limited, and the first frequency band speech signal or the second frequency band speech signal may be a speech signal in one frequency band or multiple sub-band speech signals.
  • the first frequency band speech signal and the second frequency band speech signal can be obtained through filtering.
  • This embodiment does not limit the type of filter, which can be a low-pass, high-pass or band-pass filter, or a combination of multiple filters; in this embodiment, the first frequency band speech signal and the second frequency band speech signal can also be obtained by methods other than filtering.
  • If the encoding method only encodes the voice signal of a specific frequency band, there is no need to filter to obtain the voice signal of that frequency band, and the voice signal of all frequency bands can be transmitted directly to the encoder.
  • Taking alphabet coding as an example, its syntax can realize the coding of voice signals in a specific frequency band without the need to obtain the specific frequency band by means of filtering or the like.
  • The full frequency band of the voice signal is 20 Hz-20 kHz.
  • The voice signal of the first frequency band and the voice signal of the second frequency band in this embodiment can be any frequency band within 20 Hz-20 kHz, or any frequency band outside 20 Hz-20 kHz; a band-splitting sketch is given below.
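  • The following is a minimal, illustrative sketch of such band splitting by filtering, not part of the patent: the 48 kHz sample rate, the 18 kHz crossover and the Butterworth filters are assumptions chosen only for illustration.

```python
# Illustrative sketch (assumed parameters): splitting a voice signal into a
# first (non-high-frequency) band and a second (high-frequency) band with
# complementary low-pass / high-pass filters.
import numpy as np
from scipy import signal

FS = 48_000          # assumed sample rate, Hz
CROSSOVER = 18_000   # assumed split point between the two bands, Hz

def split_bands(pcm: np.ndarray):
    """Return (first_band, second_band) from a mono PCM float array."""
    sos_lo = signal.butter(8, CROSSOVER, btype="lowpass", fs=FS, output="sos")
    sos_hi = signal.butter(8, CROSSOVER, btype="highpass", fs=FS, output="sos")
    first_band = signal.sosfilt(sos_lo, pcm)    # roughly 20 Hz - 18 kHz content
    second_band = signal.sosfilt(sos_hi, pcm)   # roughly 18 kHz - 20 kHz content
    return first_band, second_band

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone = 0.5 * np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 19_000 * t)
    low, high = split_bands(tone)
    print(low.shape, high.shape)
```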
  • the source terminal marks the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal;
  • Since the source end sends the encoded first frequency band speech signal and the encoded second frequency band speech signal to the playback end through the first synchronization link and the second synchronization link, respectively, synchronization needs to be performed at the playback end to improve audio quality, and the playback end needs frame synchronization information when performing synchronization. Therefore, the source end can mark the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal.
  • the source end sends the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, respectively .
  • the synchronization link can be used to transmit voice signals.
  • The synchronization link only allows a limited number of retransmissions. It is assumed that the original audio transmission method of the system uses the first synchronization link to transmit voice signals. This embodiment of the application can be fully compatible with the original audio transmission method of the system: when the source end and the playback end use the original audio transmission method for audio transmission, this solution only needs, on that basis, to send the encoded first frequency band speech signal and second frequency band speech signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, without affecting the original audio transmission of the system.
  • If the second frequency band speech signal is divided into multiple sub-band speech signals, multiple second synchronization links can be established to respectively transmit the multiple sub-band voice signals.
  • the number of second synchronization links is not limited, and it can be one or more.
  • It is also possible for the source end to establish only one second synchronization link with the playback end, and to send the encoded second frequency band voice signal with frame synchronization information to the playback end through that second synchronization link.
  • If the second frequency band voice signal is divided into multiple sub-band voice signals, the multiple sub-band voice signals can also all be transmitted through the one second synchronization link; in that case the multiple sub-band voice signals are transmitted through the second synchronization link in a time division multiplexing manner, as in the sketch below.
  • For example, only one second synchronization link may be established to transmit the second frequency band voice signal; if the second frequency band voice signal is divided into two sub-band voice signals, one or two second synchronization links may be established to transmit these two sub-band voice signals.
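  • The sketch below illustrates, under assumed packet and tag formats that are not part of the patent, how two encoded sub-band streams could share one second synchronization link by time division multiplexing.

```python
# Illustrative sketch (assumed wire format): time-division multiplexing two
# encoded sub-band streams over one second synchronization link by
# alternating tagged packets.
from typing import Iterator, List, Tuple

def tdm_interleave(subband_a: List[bytes], subband_b: List[bytes]) -> Iterator[Tuple[int, bytes]]:
    """Yield (subband_id, packet) pairs, alternating sub-band A and sub-band B."""
    for pkt_a, pkt_b in zip(subband_a, subband_b):
        yield (0, pkt_a)   # sub-band A occupies the first time slot
        yield (1, pkt_b)   # sub-band B occupies the next time slot

# Usage: the playback end demultiplexes packets by their sub-band id tag.
frames_a = [b"01DATA1a", b"02DATA1a"]
frames_b = [b"01DATA1b", b"02DATA1b"]
for subband_id, packet in tdm_interleave(frames_a, frames_b):
    print(subband_id, packet)
```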
  • The source end transmits the encoded first frequency band speech signal and second frequency band speech signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, so that the playback end can further process the received first frequency band speech signal and second frequency band speech signal.
  • For example, the original audio transmission method of the system uses the first synchronization link to transmit the voice signal of the main frequency band, where the main frequency band is the 20 Hz-18 kHz speech signal. The original audio transmission method of the system can use the first synchronization link to transmit the 20 Hz-18 kHz voice signal, while the 18 kHz-20 kHz voice signal is usually discarded directly to save bandwidth; this causes the high-frequency part of the voice signal received by the playback end to be lost and the sound quality to be reduced. In this embodiment, the encoded 18 kHz-20 kHz voice signal can be transmitted to the playback end through the second synchronization link, and the integrity of the voice signal can be ensured at the source end, so that all or most of the frequency bands of the audio signal are encoded and transmitted to improve the sound quality.
  • the first synchronization link and the second synchronization link can transmit voice signals in any frequency band, which is not limited in this embodiment.
  • the first synchronization link and the second synchronization link are used to respectively transmit the first frequency band voice signal and the second frequency band voice signal.
  • The system only needs to establish a second synchronization link to transmit the encoded second frequency band speech signal to improve the sound quality. Moreover, establishing the second synchronization link or transmitting the encoded second frequency band voice signal does not affect the first synchronization link transmitting the encoded first frequency band voice signal, so the user will not hear a short pause in the audio being played.
  • In summary, the embodiment of the present application provides a voice frequency division transmission method, which sends the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information to the playback end through a first synchronization link and a second synchronization link, respectively. This solves the problem of sound quality degradation caused by the limitation of transmission bandwidth and the problem of affecting the audio being played while the sound quality is improved.
  • the compression rate of the encoding method of the voice signal in the second frequency band is higher than the compression rate of the encoding method of the voice signal in the first frequency band;
  • the compression rate of the encoding method of the speech signal in the second frequency band is lower than the compression rate of the encoding method of the speech signal in the first frequency band.
  • the speech signal is divided into the first frequency band speech signal and the second frequency band speech signal.
  • the compression rate of the encoding method of the first frequency band speech signal and the second frequency band speech signal can be different.
  • For example, if the speech signal is mainly composed of the first frequency band speech signal, the compression rate of the encoding method of the second frequency band voice signal can be set to be lower than the compression rate of the encoding method of the first frequency band voice signal.
  • the source end encoding the second frequency band voice signal includes: the source end encoding the high-frequency voice signal, or the source end encoding the non-high-frequency voice signal.
  • the voice signal in the second frequency band can be a high-frequency voice signal or a non-high-frequency voice signal.
  • Because high-frequency voice signals occupy more bandwidth, an encoding method suitable for high-frequency voice signals can be selected when encoding them, improving sound quality while saving some bandwidth. Due to the different characteristics of high-frequency voice signals and non-high-frequency voice signals, different encoding methods can be adopted for high-frequency voice signals and non-high-frequency voice signals.
  • The encoding methods for the 18 kHz-20 kHz high-frequency voice signal can include the CELT encoding method or the SBR (Spectral Band Replication) encoding method, where the CELT encoding method is a core encoding algorithm built into the alphabet encoder; the encoding methods for non-high-frequency speech signals can include the SILK encoding method, the SBC encoding method, the AAC encoding method or the MP3 encoding method, where the SILK encoding method is also a core encoding algorithm built into the alphabet encoder. It should be noted that this embodiment only takes the second frequency band speech signal being a high-frequency speech signal or a non-high-frequency speech signal as an example.
  • the user can divide the voice signal in the second frequency band into multiple sub-band voice signals according to requirements and then encode them separately, which is not limited in this embodiment.
  • the second synchronization link may be used to transmit the encoded high-frequency voice signal or the non-high-frequency voice signal.
  • The encoding methods used by the source end for the first frequency band speech signal and the second frequency band speech signal may be the same or different.
  • If the source end encodes the first frequency band speech signal and the second frequency band speech signal in different ways, some bandwidth can be saved while improving sound quality.
  • The selection of encoding methods in this embodiment can be diversified. Taking the voice signal in the second frequency band as an example, if it is divided into multiple sub-band voice signals, the multiple sub-band voice signals can also use the same or different encoding methods. Choosing different encoding methods, or setting different code rates or compression rates, for voice signals of different frequency bands can provide users with a better sound quality experience by adjusting the data compression ratio while only slightly increasing the bandwidth occupation, for example as in the configuration sketch below.
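  • The configuration sketch below is a simple illustration of per-band codec and bit rate selection; the band labels, the codec choices and the bit rate values are assumptions, and the codecs named (CELT, SILK, etc.) would in practice be driven through their own encoder libraries.

```python
# Illustrative sketch (assumed values): selecting a different encoding method
# and bit rate per frequency band.
from dataclasses import dataclass

@dataclass
class BandConfig:
    band: str          # e.g. "20Hz-18kHz" or "18kHz-20kHz"
    codec: str         # encoding method chosen for this band
    bitrate_kbps: int  # code rate chosen for this band

# Non-high-frequency band: a SILK/SBC/AAC/MP3-style codec at a higher bit rate.
first_band_cfg = BandConfig(band="20Hz-18kHz", codec="SILK", bitrate_kbps=160)
# High-frequency band: a CELT/SBR-style codec at a much lower bit rate
# (assumed here to be 20% of the non-high-frequency bit rate).
second_band_cfg = BandConfig(band="18kHz-20kHz", codec="CELT", bitrate_kbps=32)

for cfg in (first_band_cfg, second_band_cfg):
    print(f"{cfg.band}: {cfg.codec} @ {cfg.bitrate_kbps} kbps")
```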
  • The source end marking the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal includes: the source end marks the encoded voice signal of any frequency band detected within a preset delay as one frame of data, where the frame synchronization information includes the start time and end time of a frame of data.
  • the source can obtain the frame synchronization information and send it to the playback end, so that the playback end can obtain the frame synchronization information to perform synchronization.
  • FIG. 2 is a flowchart of an embodiment of the present application in which the source end marks the encoded voice signal of any frequency band detected within a preset delay as one frame of data; the flow can specifically include the following steps:
  • the source detects a voice signal in any frequency band among the encoded voice signals in multiple frequency bands;
  • the source starts timing the first delay until the first delay is equal to the preset delay
  • the source marks the detected voice signals in multiple frequency bands within the first delay as the first frame of data;
  • after the first delay, the source end again detects a voice signal of any frequency band among the encoded voice signals of multiple frequency bands;
  • the source starts timing the second delay until the second delay is equal to the preset delay
  • the source marks the voice signals of multiple frequency bands detected within the second delay as the second frame of data
  • after the second delay, the source end again detects a voice signal of any frequency band among the encoded voice signals of multiple frequency bands, and so on;
  • the source end starts timing the Nth delay until the Nth delay is equal to the preset delay, where N > 2 and N is an integer;
  • the source marks the voice signals of multiple frequency bands detected within the Nth delay as the Nth frame of data.
  • After the source end marks the encoded speech signal of any frequency band detected within the preset delay as one frame of data, the speech signal of each frequency band contains the marking information, so the playback end can recognize the speech signals belonging to the same frame.
  • the source terminal obtains the frame synchronization information, and the frame synchronization information includes the start time and end time of a frame of data.
  • the value of N can be set by the user according to the data volume of the voice signal, which is not limited in this embodiment.
  • the source end sends the frame synchronization information to the playback end.
  • The playback end can use the frame synchronization information to synchronize the voice signals it receives, to further improve audio quality. The start time and end time of the first frame of data, the second frame of data, ..., and the Nth frame of data can be marked into the corresponding frame of data, so that the playback end can synchronize according to the start time and end time; a sketch of this frame-marking procedure is given below.
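  • The sketch below is an assumed, simplified version of the frame-marking procedure described above, not the patent's exact algorithm: every encoded packet of any frequency band detected within one preset delay window is marked as belonging to the same frame, and the frame's start and end times are recorded as the frame synchronization information.

```python
# Illustrative sketch (assumed preset delay and packet source interface).
import time
from typing import Dict, List, Optional, Tuple

PRESET_DELAY_S = 0.010  # assumed preset delay (10 ms)

def mark_frames(packet_source, num_frames: int) -> List[Dict]:
    """packet_source() returns (band_id, encoded_bytes), or None if nothing new."""
    frames = []
    for n in range(1, num_frames + 1):
        # Wait until a packet of any band is detected: this starts frame n.
        pkt: Optional[Tuple[int, bytes]] = None
        while pkt is None:
            pkt = packet_source()
        start = time.monotonic()
        packets = [pkt]
        # Collect every packet of any band detected within the preset delay.
        while time.monotonic() - start < PRESET_DELAY_S:
            pkt = packet_source()
            if pkt is not None:
                packets.append(pkt)
        end = time.monotonic()
        # Frame synchronization information: start and end time of frame n.
        frames.append({"frame": n, "start": start, "end": end, "packets": packets})
    return frames
```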
  • The second synchronization link may also be used to transmit the encoded non-high-frequency voice signal; this embodiment does not restrict this.
  • Figure 3(a) shows the source end processing the voice signal of the first frequency band and the voice signal of the second frequency band. As shown in Figure 3(a), this embodiment only takes the case where the first frequency band speech signal and the second frequency band speech signal each have only one sub-band speech signal as an example, but the method in which the source end marks the encoded voice signal of any frequency band detected within the preset delay as one frame of data can also be applied to an application scenario where the first frequency band voice signal and the second frequency band voice signal each have multiple sub-band voice signals.
  • the unencoded speech signal is divided into data frames.
  • The first frame can be divided into a high-frequency speech signal and a non-high-frequency speech signal through filters, and then the high-frequency voice signal and the non-high-frequency voice signal are encoded separately.
  • After the high-frequency voice signal is encoded, it is 01DATA1, and after the non-high-frequency voice signal is encoded, it is 01DATA2.
  • the voice signal 01DATA2 is transmitted to the playback end through the first synchronization link, and the encoded high-frequency voice signal 01DATA1 is transmitted to the playback end through the second synchronization link.
  • Subsequent frames of the encoded first frequency band voice signal and second frequency band voice signal are also transmitted to the playback end in the same processing manner as the first frame.
  • Figure 3(c) shows the process, at the source end, of marking the encoded voice signal of any frequency band detected within the preset delay as one frame of data.
  • the source sends a frame of data at an interval Interval.
  • the source starts to count the first delay when it detects a voice signal in any one of the encoded voice signals in multiple frequency bands
  • When the source end detects the encoded high-frequency voice signal 01DATA1 of the first frame of data, it starts timing the first delay Delay1 until the first delay is equal to the preset delay; the preset delay can be greater than, less than, or equal to the interval Interval, and this embodiment does not limit the specific length of the preset delay.
  • the source marks the detected voice signals in multiple frequency bands within the first delay as the first frame of data.
  • the first frame of data is 01DATA1 and 01DATA2
  • the frame synchronization information includes the start time and end time of 01DATA1 and 01DATA2;
  • After the first delay Delay1, the source end again detects the voice signal 02DATA1 of any frequency band among the encoded voice signals of multiple frequency bands, and then starts timing the second delay Delay2 until the second delay is equal to the preset delay;
  • the source end marks the 02DATA1 and 02DATA2 detected within the second delay as the second frame of data, and the frame synchronization information includes the start and end times of 02DATA1 and 02DATA2, and so on, until the source end has marked all the data.
  • the frame synchronization information can also include the end moments End1 and End2 of the first delay Delay1 and the second delay Delay2.
  • The frame synchronization information of the first frame of data can be directly written into that frame of data, for example written into 01DATA1 and 01DATA2, so that the playback end can obtain the frame synchronization information and perform synchronization.
  • After the playback end obtains the frame synchronization information, synchronization can be performed: at the end of the first delay, End1, the playback end starts to decode the received encoded first frame of data, and at the end of the second delay, End2, it starts to decode the received encoded second frame of data, and so on, to achieve synchronization; alternatively, decoding can be started once a complete frame of data has been received, to achieve synchronization. Since the frame synchronization information has been written into 01DATA1 and 01DATA2, the playback end can also perform synchronization based on the specific data in 01DATA1 and 01DATA2.
  • the method in which the source terminal marks the encoded voice signal in any frequency band detected within the preset delay time as one frame of data is only an exemplary description.
  • Those skilled in the art can refer to the solution of the embodiment of the present application and, without creative work, obtain other methods of acquiring frame synchronization information according to this embodiment; a sketch of playback end synchronization is given below.
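  • The sketch below is one assumed way, not the patent's implementation, for the playback end to use the frame synchronization information: packets arriving on the two synchronization links are grouped by frame number, and a frame is handed to the decoders only when both bands of that frame are present.

```python
# Illustrative sketch (assumed data model) of playback-end frame synchronization.
from collections import defaultdict
from typing import Dict, Optional

class FrameSynchronizer:
    def __init__(self, num_bands: int = 2):
        self.num_bands = num_bands
        self.pending: Dict[int, Dict[int, bytes]] = defaultdict(dict)

    def on_packet(self, frame_no: int, band_id: int, payload: bytes) -> Optional[Dict[int, bytes]]:
        """Called for each packet from either link; returns a complete frame or None."""
        self.pending[frame_no][band_id] = payload
        if len(self.pending[frame_no]) == self.num_bands:
            # Both bands of this frame are present: decode them together.
            return self.pending.pop(frame_no)
        return None

sync = FrameSynchronizer()
assert sync.on_packet(1, 0, b"01DATA2") is None   # first band of frame 1 arrives
frame = sync.on_packet(1, 1, b"01DATA1")          # second band completes frame 1
print(frame)
```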
  • the source end establishes a second synchronization link with the playback end according to the used link parameters.
  • Before the source end sends the encoded second frequency band voice signal with frame synchronization information to the playback end through the second synchronization link, it needs to establish the second synchronization link with the playback end according to the used link parameters, and the source end can determine the used link parameters according to the link parameters supported by the playback end sent by the playback end.
  • The source end can also send the link parameters supported by the source end to the playback end, so that the playback end can select, according to the link parameters supported by the source end and the link parameters supported by the playback end, one or more link parameters supported by both the source end and the playback end, and send them to the source end so that the source end can determine the used link parameters.
  • Alternatively, the source end sends the link parameters supported by the source end to the playback end, which enables the playback end to determine the used link parameters: the link parameters selected by the playback end are the used link parameters, that is, the playback end can also determine the used link parameters according to the link parameters supported by the source end sent by the source end.
  • the link parameters used are determined to facilitate the establishment of the second synchronization link.
  • The link parameters include the data frame size, the data frame interval, PHY (Physical Layer) communication information, etc.
  • the source end sends a second synchronization link request to the playback end
  • the source receives the reply to the second synchronization link request
  • the source terminal determines whether to establish the second synchronization link according to the reply of the second synchronization link request.
  • After the playback end receives the second synchronization link request sent by the source end, the playback end sends a reply to the second synchronization link request to the source end, and the source end determines whether to establish the second synchronization link according to the reply to the second synchronization link request.
  • the source sends a second synchronization link request to the playback end, so that the playback end can choose whether to accept the establishment of the second synchronization link.
  • If the playback end chooses to accept the establishment of the second synchronization link, it sends the link parameters supported by the playback end to the source end, so that the source end can determine the used link parameters.
  • In other words, after the playback end receives the second synchronization link request sent by the source end, the playback end sends a reply to the second synchronization link request so that the source end can determine whether to establish the second synchronization link; after the source end determines to establish the second synchronization link, the source end determines the used link parameters according to the link parameters supported by the playback end. It should be noted that the used link parameters need to be supported by both the playback end and the source end; a sketch of this handshake is given below.
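  • The sketch below illustrates this handshake with assumed message contents and parameter names (they are not the patent's protocol encoding): the source end requests the second synchronization link, the playback end replies with the link parameters it supports, and the source end chooses parameters supported by both ends.

```python
# Illustrative sketch (assumed parameters) of second synchronization link negotiation.
from typing import Optional

SOURCE_SUPPORTED = {"frame_size": {60, 80, 120}, "frame_interval_ms": {7.5, 10}}

def playback_reply(accept: bool) -> dict:
    """Playback end's reply to the second synchronization link request."""
    if not accept:
        return {"accept": False}
    return {"accept": True,
            "frame_size": {80, 120, 160},
            "frame_interval_ms": {10, 15}}

def negotiate() -> Optional[dict]:
    reply = playback_reply(accept=True)
    if not reply["accept"]:
        return None  # the second synchronization link is not established
    # The used link parameters must be supported by both the source end and the playback end.
    return {
        "frame_size": max(SOURCE_SUPPORTED["frame_size"] & reply["frame_size"]),
        "frame_interval_ms": min(SOURCE_SUPPORTED["frame_interval_ms"] & reply["frame_interval_ms"]),
    }

print(negotiate())   # {'frame_size': 120, 'frame_interval_ms': 10}
```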
  • Before the source end encodes the voice signal in the second frequency band, the source end determines whether the playback end supports voice frequency division transmission according to the control data stream.
  • The source end can send a request for the control data stream to the playback end through the asynchronous link.
  • the control data stream can be transmitted through the asynchronous link between the source and the player.
  • The asynchronous link allows an unlimited number of retransmissions, so no data packets will be lost. After the playback end receives the request for the control data stream sent by the source end, the playback end sends the control data stream to the source end, and the source end can judge from the control data stream sent by the playback end whether the playback end supports voice frequency division transmission. This judgment is performed before starting voice frequency division transmission so as not to increase energy consumption when the playback end does not support it; when the source end judges that the playback end does not support voice frequency division transmission, there is no need for voice frequency division transmission, and the source end will use the original audio transmission method of the system for audio transmission.
  • the source end judges whether the playback end supports voice frequency division transmission according to the control data stream includes:
  • The control data stream includes the value of the custom UUID. If the custom UUID value of the playback end received by the source end is equal to the preset UUID value, the source end determines that the playback end supports voice frequency division transmission. Using the value of a custom UUID to determine whether the playback end supports voice frequency division transmission can improve device compatibility between the source end and the playback end. Take a Bluetooth connection between the source end and the playback end as an example: in the Bluetooth protocol, a UUID is used to identify a service provided by a Bluetooth device.
  • The UUID type can be a primary service (Primary Service), a characteristic (Characteristic), etc., and the user can use a custom Universally Unique Identifier (UUID) to identify the voice frequency division transmission service.
  • The UUID setting method can refer to Figure 4. In Figure 4, Enhance Audio Value can represent the voice frequency division transmission service, the user can set the preset UUID value corresponding to Enhance Audio Value, and the preset UUID values of the main service and the enhanced service can be the same or different. In Figure 4, handle represents an index, which can help find the address of the UUID in memory.
  • The specific value of 0xXXXX is determined by the specific address of the UUID in memory; this embodiment does not limit the value of 0xXXXX.
  • The values of Y and Z are determined by the size of the data corresponding to the UUID, and this embodiment does not limit this either; in Figure 4, 0xOPQ is a user-defined UUID, and the user can set it by referring to the format of the UUID values in Figure 4.
  • Fig. 4 is only an exemplary description. In actual use, those skilled in the art can refer to the solutions of the embodiments of the present application and, without creative work, obtain other ways of setting the UUID value according to this embodiment; a sketch of the UUID check is given below.
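  • The sketch below shows the UUID check in the simplest possible form; the concrete UUID value and the attribute layout of the control data stream are assumptions for illustration only.

```python
# Illustrative sketch (assumed UUID value and attribute names): the source end
# compares the custom UUID value reported by the playback end with a preset
# value to decide whether the playback end supports voice frequency division
# transmission.
import uuid

# Preset custom UUID standing for the "Enhance Audio" / voice frequency
# division transmission service; this value is purely illustrative.
PRESET_ENHANCE_AUDIO_UUID = uuid.UUID("0000a0b1-0000-1000-8000-00805f9b34fb")

def supports_frequency_division(control_data_stream: dict) -> bool:
    """control_data_stream: attribute values read from the playback end."""
    reported = control_data_stream.get("enhance_audio_uuid")
    return reported is not None and uuid.UUID(reported) == PRESET_ENHANCE_AUDIO_UUID

# Usage: if the check fails, the source end falls back to the system's
# original audio transmission method.
stream = {"enhance_audio_uuid": "0000a0b1-0000-1000-8000-00805f9b34fb"}
print(supports_frequency_division(stream))   # True
```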
  • If the source end determines that the playback end supports voice frequency division transmission according to the control data stream, the following steps are performed:
  • the source sends an audio configuration parameter request to the player:
  • the source terminal receives the audio configuration parameters corresponding to the voice signal in the second frequency band supported by the playback terminal.
  • the audio configuration parameters include coding and decoding parameters and bit rates, and the coding and decoding parameters include one or both of coding and decoding methods;
  • the source terminal determines the used audio configuration parameters according to the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal and sends the used audio configuration parameters to the playback terminal.
  • The playback end will receive the audio configuration parameter request sent by the source end. After the playback end receives the audio configuration parameter request from the source end, it sends the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback end to the source end. After the source end receives these audio configuration parameters, it determines the audio configuration parameters to be used according to them and sends the used audio configuration parameters to the playback end. In addition, the used audio configuration parameters need to be audio configuration parameters supported by the source end. The playback end receives the used audio configuration parameters sent by the source end and configures them; that is, after the used audio configuration parameters have been determined, the playback end can also configure them.
  • The source end can also send the audio configuration parameters supported by the source end to the playback end, so that the playback end can select, according to the audio configuration parameters supported by the source end and those supported by the playback end, one or more audio configuration parameters supported by both the source end and the playback end and send them to the source end so that the source end can determine the used audio configuration parameters.
  • the source end can send the audio configuration parameters supported by the source end to the playback end, so that the playback end can also determine the audio configuration parameters used.
  • the playback end selects one according to the audio configuration parameters supported by the source end and the audio configuration parameters supported by the playback end
  • the audio configuration parameter selected by the player is the audio configuration parameter used;
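  • the negotiation described above can be pictured with the following illustrative sketch; the parameter names, the codec labels and the rule of preferring the highest mutually supported bit rate are assumptions made for the example only and are not prescribed by the embodiment.

```python
# Sketch of the audio-configuration negotiation between source and player.
# Parameter names and the tie-breaking rule are illustrative assumptions.

SOURCE_SUPPORTED = [
    {"codec": "codec_A", "bitrate_kbps": 64},
    {"codec": "codec_A", "bitrate_kbps": 32},
    {"codec": "codec_B", "bitrate_kbps": 48},
]

def player_report_configs():
    """Player's reply to the audio configuration parameter request."""
    return [
        {"codec": "codec_A", "bitrate_kbps": 32},
        {"codec": "codec_B", "bitrate_kbps": 48},
    ]

def negotiate(source_cfgs, player_cfgs):
    """Source picks a configuration supported by both ends."""
    common = [c for c in source_cfgs if c in player_cfgs]
    if not common:
        return None                       # fall back to the original transmission
    # Assumed rule: prefer the highest bit rate that both ends support.
    return max(common, key=lambda c: c["bitrate_kbps"])

used = negotiate(SOURCE_SUPPORTED, player_report_configs())
print(used)   # {'codec': 'codec_B', 'bitrate_kbps': 48} -> sent back to the player
```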
  • the audio configuration parameters include encoding and decoding parameters and bit rates, and the encoding and decoding parameters include one or both of the encoding method and the decoding method.
  • after the source end receives one or both of the encoding and decoding methods supported by the playback end, the source end can determine the encoding method to be used, so that the playback end can decode the voice signal after receiving it; after the playback end receives one or both of the encoding and decoding methods supported by the source end, the playback end can determine the decoding method to be used, so that it can decode the received encoded first frequency band voice signal and second frequency band voice signal.
  • the encoding and decoding parameters include one or both of the encoding method and the decoding method, and can also include the sampling depth and sampling rate, so that the source end can perform the corresponding encoding according to the encoding method and the playback end can perform the corresponding decoding according to the decoding method.
  • the audio configuration parameters can also include the device number, device address, etc., to facilitate the mutual recognition of the source and the player.
  • the audio configuration parameters can also include the bit rate.
  • the bit rate can be set according to the data frame size, and setting the bit rate can further save bandwidth. The bit rate of the voice signal in the second frequency band can be set higher or lower than that of the voice signal in the first frequency band; for example, when the second frequency band voice signal is a high-frequency voice signal, its bit rate can be set lower than the bit rate of the non-high-frequency voice signal, for instance lower than or equal to twenty percent of the bit rate of the non-high-frequency voice signal. This embodiment does not limit the specific value of the bit rate.
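  • the bit-rate relation mentioned above (for example, capping the high-frequency bit rate at twenty percent of the non-high-frequency bit rate) can be sketched as follows; the 64 kbps figure and the 20 ms frame duration are illustrative assumptions, not values given by the embodiment.

```python
# Sketch of the bit-rate relation described above: the high-frequency band is
# given a much smaller budget than the non-high-frequency band (<= 20% here).

def high_band_bitrate(non_high_bitrate_bps, ratio=0.20):
    """Cap the high-frequency bit rate at a fraction of the non-high-frequency one."""
    return int(non_high_bitrate_bps * ratio)

def frame_bytes(bitrate_bps, frame_ms):
    """Data-frame size implied by a bit rate and a frame duration."""
    return bitrate_bps * frame_ms // 1000 // 8

non_high_bps = 64_000                    # illustrative value only
high_bps = high_band_bitrate(non_high_bps)
print(high_bps, frame_bytes(high_bps, frame_ms=20))   # 12800 bits/s, 32 bytes/frame
```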
  • the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal.
  • the number of second synchronization links may be equal to the number of frequency bands of the second frequency band voice signal, that is, each second synchronization link may only transmit one sub-band voice signal in the second frequency band voice signal;
  • the number of second synchronization links may also be less than the number of frequency bands of the second frequency band voice signal, that is, a second synchronization link may transmit two or more sub-band voice signals of the second frequency band voice signal.
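  • one possible (purely illustrative) way to map sub-band voice signals onto an equal or smaller number of second synchronization links is a round-robin assignment, as sketched below; the embodiment itself does not prescribe any particular mapping.

```python
# Sketch of mapping sub-bands of the second frequency band onto second
# synchronization links; each link may carry one or more sub-bands.

def assign_subbands_to_links(num_subbands, num_links):
    """Round-robin assignment; num_links must not exceed num_subbands."""
    assert 1 <= num_links <= num_subbands
    mapping = {link: [] for link in range(num_links)}
    for band in range(num_subbands):
        mapping[band % num_links].append(band)
    return mapping

print(assign_subbands_to_links(num_subbands=4, num_links=2))
# {0: [0, 2], 1: [1, 3]} -> each second synchronization link carries two sub-bands
```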
  • the source can suspend the voice frequency division transmission according to the power situation or the link quality of the second synchronization link.
  • disconnecting one or more second synchronization links includes disconnecting part or all of the second synchronization links.
  • for example, when the power is low, the source end can disconnect one or more second synchronization links to partially or completely suspend the voice frequency division transmission and extend the battery life; when the link quality of a second synchronization link is poor, disconnecting it avoids occupying bandwidth while the sound quality improvement is not being achieved.
  • after the voice frequency division transmission is partially or completely suspended, the source end can re-establish the one or more second synchronization links according to the power situation or the link quality of the second synchronization links, so as to restart or enhance the voice frequency division transmission.
  • one or more second synchronization links can be disconnected or re-established according to the power situation or the link quality of the second synchronization links; whether it is the source end or the playback end that initiates the disconnection of one or more second synchronization links, the other end should unconditionally accept the disconnection.
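  • a minimal sketch of such a suspend/resume policy is shown below; the battery and link-quality thresholds are invented for illustration and are not values given by the embodiment.

```python
# Sketch of the suspend/resume policy: either end may drop or re-establish
# second synchronization links based on battery level or link quality.
# Thresholds are illustrative assumptions, not values from the embodiment.

LOW_BATTERY = 0.15          # 15% remaining
MIN_LINK_QUALITY = 0.5      # normalized quality score

def links_to_keep(links, battery_level):
    """Return the second synchronization links that should stay up."""
    if battery_level < LOW_BATTERY:
        return []                                   # suspend frequency-division fully
    return [l for l in links if l["quality"] >= MIN_LINK_QUALITY]

links = [{"id": 1, "quality": 0.9}, {"id": 2, "quality": 0.3}]
print(links_to_keep(links, battery_level=0.8))      # keeps link 1, drops link 2
print(links_to_keep(links, battery_level=0.1))      # [] -> all second links dropped
```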
  • FIG. 5 is a flowchart of a voice frequency division transmission method provided by an embodiment of this application. The method includes:
  • the playback terminal receives the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively;
  • the playback terminal decodes the received voice signal in the first frequency band and the voice signal in the second frequency band;
  • the player uses the frame synchronization information to synchronize the decoded first frequency band speech signal and the decoded second frequency band speech signal.
  • the source end sends the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, respectively.
  • after the playback end receives the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information, it decodes the encoded first frequency band voice signal and the encoded second frequency band voice signal separately; this embodiment does not limit the type of decoding mode.
  • the playback terminal can use the same decoding method for the received voice signal in the first frequency band and the voice signal in the second frequency band, or different decoding methods.
  • the decoding methods used by the playback end for the received encoded multiple sub-band voice signals can be the same or different; for example, if the playback end decodes four received encoded sub-band voice signals, one, two, three or four decoding methods can be chosen.
  • the playback end uses the frame synchronization information to synchronize the decoded first frequency band voice signal and the decoded second frequency band voice signal, which can further improve the audio quality; the specific synchronization method is not limited in this embodiment.
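  • since the specific synchronization method is not limited, the following sketch shows just one conceivable alignment step, pairing decoded frames whose frame-synchronization windows (start time and end time) overlap; the field names and time values are illustrative assumptions.

```python
# One possible alignment step (the embodiment does not limit the method):
# pair decoded frames from the two links whose frame-synchronization windows
# (start time, end time) overlap, and play them together.

def frames_overlap(a, b):
    """a and b are (start_ms, end_ms) taken from the frame synchronization info."""
    return a[0] < b[1] and b[0] < a[1]

def pair_frames(first_band_frames, second_band_frames):
    """Greedy pairing of first-band and second-band frames by time window."""
    pairs, used = [], set()
    for f1 in first_band_frames:
        for f2 in second_band_frames:
            if f2["seq"] not in used and frames_overlap(f1["window"], f2["window"]):
                pairs.append((f1["seq"], f2["seq"]))
                used.add(f2["seq"])
                break
    return pairs

first = [{"seq": 0, "window": (0, 20)}, {"seq": 1, "window": (20, 40)}]
second = [{"seq": 10, "window": (1, 21)}, {"seq": 11, "window": (21, 41)}]
print(pair_frames(first, second))   # [(0, 10), (1, 11)]
```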
  • receiving the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively, solves the problem of sound quality degradation caused by the limitation of the transmission bandwidth and the problem of affecting the audio being played when the sound quality is improved.
  • the user can hear the sound.
  • optionally, after the playback end uses the frame synchronization information to synchronize the decoded first frequency band voice signal and the decoded second frequency band voice signal, the following steps are included:
  • the playback terminal performs digital-to-analog conversion on the synchronized voice signal in the first frequency band and the synchronized voice signal in the second frequency band through one or more different digital-to-analog converters;
  • the playback end amplifies the voice signal in the first frequency band after digital-to-analog conversion and the voice signal in the second frequency band after digital-to-analog conversion through one or more different amplifiers;
  • the playback terminal performs electroacoustic conversion on the amplified first frequency band speech signal and the amplified second frequency band speech signal through one or more different electroacoustic converters.
  • after the playback end synchronizes the decoded first frequency band voice signal and the decoded second frequency band voice signal through the frame synchronization information sent by the source end, the playback end can perform digital-to-analog conversion through one digital-to-analog converter, amplification through one amplifier, and then electro-acoustic conversion through one electro-acoustic converter.
  • alternatively, after the synchronization, the playback end can perform digital-to-analog conversion on the synchronized first frequency band voice signal and the synchronized second frequency band voice signal separately through different digital-to-analog converters; two or more different digital-to-analog converters may be used to adapt to the characteristics of the voice signals in different frequency bands.
  • the playback end can also amplify the first frequency band voice signal and the second frequency band voice signal after digital-to-analog conversion through different amplifiers; two or more different amplifiers can be used to adapt to the voice signals in different frequency bands.
  • the playback end can further perform electro-acoustic conversion on the amplified first frequency band voice signal and the amplified second frequency band voice signal through different electro-acoustic converters; two or more different electro-acoustic converters can be used to adapt to the characteristics of the voice signals in different frequency bands.
  • the electro-acoustic converter may be a device that converts electrical energy into sound energy, such as a speaker or a horn. The present embodiment does not limit the type of the electro-acoustic converter.
  • when the playback end performs digital-to-analog conversion on the synchronized first frequency band voice signal and the synchronized second frequency band voice signal separately, different digital-to-analog converters can be chosen according to the characteristics of the voice signals in the different frequency bands, which further improves the audio quality.
  • likewise, when the playback end amplifies the first frequency band voice signal and the second frequency band voice signal after digital-to-analog conversion, different amplifiers can be chosen according to the characteristics of the voice signals in the different frequency bands to further improve the audio quality; and when the amplified first frequency band voice signal and second frequency band voice signal are converted electro-acoustically, different electro-acoustic converters can be chosen for the same reason.
  • it is also possible to differentiate only one of the stages: the playback end may use different digital-to-analog converters only, different amplifiers only, or different electro-acoustic converters only for the first frequency band voice signal and the second frequency band voice signal, and each of these choices can further improve the audio quality.
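  • the per-band chain choices described above can be summarized by the following illustrative configuration sketch; the component names are placeholders, and both the dedicated and the shared arrangement are possible.

```python
# Sketch of the per-band playback chain: each decoded band may pass through its
# own DAC, amplifier and electro-acoustic converter, or the bands may share one
# chain. Component names are placeholders for illustration.

def build_chains(shared=False):
    if shared:
        chain = {"dac": "dac0", "amp": "amp0", "transducer": "speaker0"}
        return {"first_band": chain, "second_band": chain}
    return {
        "first_band":  {"dac": "dac0", "amp": "amp0", "transducer": "speaker0"},
        "second_band": {"dac": "dac1", "amp": "amp1", "transducer": "speaker1"},
    }

print(build_chains(shared=False)["second_band"]["dac"])   # 'dac1'
```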
  • before the playback end receives the encoded second frequency band voice signal with frame synchronization information through the second synchronization link, the playback end can send a control data stream to the source end so that the source end can judge, according to the control data stream, whether the playback end supports voice frequency division transmission. Determining this before starting voice frequency division transmission avoids increasing energy consumption when the playback end does not support it: if the playback end does not support voice frequency division transmission and the source end does not check the control data stream, the source end would still encode both the first frequency band voice signal and the second frequency band voice signal when starting voice frequency division transmission, and energy consumption would increase. When the source end determines that the playback end does not support voice frequency division transmission, the source end continues to use the system's original audio transmission method instead of encoding both the first frequency band voice signal and the second frequency band voice signal.
  • if the source end determines according to the control data stream that the playback end supports voice frequency division transmission, the playback end will receive the second synchronization link request sent by the source end; after receiving the request, the playback end sends a reply to the second synchronization link request to the source end, and if the playback end supports voice frequency division transmission it also sends the link parameters it supports to the source end so that the source end can establish the second synchronization link.
  • the player receives the audio configuration parameter request sent by the source;
  • the playback terminal sends the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal to the source terminal;
  • the player receives the used audio configuration parameters sent by the source and configures the used audio configuration parameters.
  • specifically, the source end sends an audio configuration parameter request to the playback end; after receiving it, the playback end sends the audio configuration parameters corresponding to the second frequency band voice signal that it supports to the source end; the source end then determines the audio configuration parameters to be used according to the playback end's supported parameters and sends the used audio configuration parameters to the playback end; the playback end receives the used audio configuration parameters and configures them, and the source end also configures the used audio configuration parameters.
  • the playback end can disconnect one or more second synchronization links according to the power condition or the link quality of the second synchronization link.
  • for example, when the power is low, the playback end can disconnect one or more second synchronization links to partially or completely suspend the voice frequency division transmission and extend the endurance time; the voice signals of the frequency bands corresponding to the disconnected second synchronization links are no longer encoded or decoded. When the link quality of a second synchronization link is poor, the playback end can likewise disconnect one or more second synchronization links to avoid occupying bandwidth while the sound quality improvement is not being achieved.
  • in addition, after the voice frequency division transmission is partially or completely suspended, the playback end can request the source end, according to the power situation or the link quality of the second synchronization links, to re-establish the disconnected one or more second synchronization links so as to restart or enhance the voice frequency division transmission.
  • one or more second synchronization links can be disconnected or re-established according to the power situation or the link quality of the second synchronization links; whether it is the source end or the playback end that initiates the disconnection of one or more second synchronization links, the other end should unconditionally accept the disconnection.
  • the embodiment of the present application provides a voice frequency division transmission method in which the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information are received through the first synchronization link and the second synchronization link, respectively, which solves the problem of sound quality degradation caused by the limitation of the transmission bandwidth and the problem of affecting the audio being played when the sound quality is improved.
  • FIG. 6 is a schematic structural diagram of the source end provided in this embodiment. As shown in FIG. 6, the source end 50 includes:
  • the encoding module 51 is configured to encode the first frequency band speech signal and the second frequency band speech signal;
  • the pre-synchronization module 52 is used to mark the frame synchronization information into the encoded first frequency band speech signal and the encoded second frequency band speech signal;
  • the first sending module 53 is configured to send the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively Signal to the playback terminal.
  • optionally, the compression rate of the encoding method of the second frequency band voice signal is higher than the compression rate of the encoding method of the first frequency band voice signal; or
  • the compression rate of the encoding method of the second frequency band voice signal is lower than the compression rate of the encoding method of the first frequency band voice signal.
  • the encoding module includes:
  • the high-frequency encoding module is used to encode high-frequency speech signals, and the encoding methods for high-frequency speech signals include CELT encoding or SBR encoding; or
  • the non-high frequency encoding module is used to encode non-high frequency speech signals.
  • the encoding methods for non-high frequency speech signals include SILK encoding, SBC encoding, AAC encoding or MP3 encoding.
  • the pre-synchronization module includes:
  • the data frame marking module is used to mark the encoded voice signal of any frequency band detected within a preset delay as one frame of data, and the frame synchronization information includes the start time and end time of a frame of data.
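  • a minimal sketch of this marking rule is shown below; the 20 ms preset delay and the field names are assumptions made only for illustration.

```python
# Sketch of the data-frame marking rule: encoded data of any band detected
# within one preset delay window is marked as a single frame, and the frame
# synchronization information records that window's start and end time.

PRESET_DELAY_MS = 20    # illustrative window length

def mark_frames(encoded_chunks):
    """encoded_chunks: list of dicts with 'band' and 'timestamp_ms' fields."""
    frames = []
    for chunk in encoded_chunks:
        start = (chunk["timestamp_ms"] // PRESET_DELAY_MS) * PRESET_DELAY_MS
        chunk["frame_sync"] = {"start_ms": start, "end_ms": start + PRESET_DELAY_MS}
        frames.append(chunk)
    return frames

chunks = [{"band": "first", "timestamp_ms": 23}, {"band": "second", "timestamp_ms": 31}]
for c in mark_frames(chunks):
    print(c["band"], c["frame_sync"])   # both fall in the 20-40 ms frame window
```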
  • the source also includes:
  • the first parameter determination module is used to determine the link parameters to be used according to the link parameters supported by the playback end and sent by the playback end, before the first sending module sends the encoded second frequency band voice signal with frame synchronization information to the playback end through the second synchronization link;
  • the link establishment module is used to establish a second synchronization link with the playback terminal according to the used link parameters.
  • the source also includes:
  • the second sending module is used to send a second synchronization link request to the playback end before the first parameter determination module determines the link parameters to be used according to the link parameters supported by the playback end and sent by the playback end;
  • the first receiving module is configured to receive a reply to the second synchronization link request
  • the first parameter determination module is further configured to determine whether to establish the second synchronization link according to the reply to the second synchronization link request.
  • the source also includes:
  • the first judgment module is used for judging whether the playback end supports voice frequency division transmission according to the control data stream before the encoding module encodes the voice signal in the second frequency band, and the control data stream is transmitted through the asynchronous link.
  • the source terminal also includes a UUID module, which is used to identify the voice frequency division transmission service through a custom universally unique identifier (UUID);
  • the first judgment module includes:
  • the second judgment module: the control data stream includes the value of the custom UUID, and the second judgment module is used to determine that the playback end supports voice frequency division transmission if the received custom UUID value of the playback end is equal to the preset UUID value.
  • the source also includes:
  • the third sending module is used to send an audio configuration parameter request to the playback end if the first judgment module determines according to the control data stream that the playback end supports voice frequency division transmission;
  • the second receiving module is used to receive the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback end, where the audio configuration parameters include codec parameters and bit rates, and the codec parameters include one or both of the coding mode and the decoding mode; as well as
  • the second parameter determination module is configured to determine the audio configuration parameters to be used according to the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal;
  • the third sending module is also used to send the used audio configuration parameters to the playback terminal.
  • the source also includes:
  • the first transmission control module is configured to disconnect one or more second synchronization links according to the power condition or the link quality of the second synchronization links, where the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal; or
  • the first transmission control module is further configured to establish one or more second synchronization links according to the power condition or the link quality of the second synchronization links, where the number of second synchronization links is less than or equal to the number of frequency bands of the second frequency band voice signal.
  • the embodiment of the application provides a source end for executing the voice frequency division transmission method proposed in the foregoing embodiments, which sends the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information to the playback end through the first synchronization link and the second synchronization link, respectively, which solves the problem of sound quality degradation caused by the transmission bandwidth limitation and the problem of affecting the audio being played when the sound quality is improved.
  • FIG. 7 is a schematic structural diagram of the playback end provided in this embodiment. As shown in FIG. 7, the playback end 60 includes:
  • the third receiving module 61 is configured to receive the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively;
  • the decoding module 62 is configured to decode the received coded first frequency band speech signal and the coded second frequency band speech signal;
  • the synchronization module 63 is used to synchronize the decoded first frequency band speech signal and the decoded second frequency band speech signal through frame synchronization information.
  • the playback terminal also includes:
  • One or more different digital-to-analog conversion modules for performing digital-to-analog conversion on the synchronized voice signal in the first frequency band and the voice signal in the second frequency band respectively;
  • One or more different amplifying modules for respectively amplifying the first frequency band voice signal and the second frequency band voice signal after digital-to-analog conversion
  • One or more different electro-acoustic conversion modules are used to perform electro-acoustic conversion on the amplified first frequency band voice signal and the second frequency band voice signal respectively.
  • the playback terminal also includes:
  • the fourth sending module is used to send the control data stream to the source end before the third receiving module receives the encoded second frequency band voice signal with frame synchronization information through the second synchronization link, so that the source end can determine according to the control data stream whether the playback end supports voice frequency division transmission.
  • the playback terminal also includes:
  • the fourth receiving module is used to receive the second synchronization link request sent by the source end if the source end determines according to the control data stream that the playback end supports voice frequency division transmission;
  • the fifth sending module is used to send a reply to the second synchronization link request
  • the fifth sending module is also used to send link parameters supported by the playback end to the source end.
  • the playback terminal also includes:
  • the fifth receiving module is used to receive the audio configuration parameter request sent by the source end before the fourth receiving module receives the second synchronization link request sent by the source end;
  • the sixth sending module is used to send the audio configuration parameters corresponding to the second frequency band voice signal supported by the playback terminal to the source terminal;
  • the fifth receiving module is also used to receive the audio configuration parameters used by the source end.
  • the parameter configuration module is used to configure the audio configuration parameters used.
  • the playback terminal also includes:
  • the second transmission control module is configured to disconnect one or more second synchronization links according to the power condition or the link quality of the second synchronization link;
  • the second transmission control module is further configured to request the source end to establish one or more second synchronization links according to the power condition or the link quality of the second synchronization link.
  • the embodiment of the present application provides a playback end that receives the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively, which solves the problem of sound quality degradation caused by the transmission bandwidth limitation and the problem of affecting the audio being played when the sound quality is improved.
  • the embodiment of the present application may also provide a source end for executing the voice frequency division transmission method proposed in the embodiment.
  • the source end 70 includes a memory 71 and a processor 72;
  • the memory 71 is coupled with the processor 72;
  • the memory 71 is used to store program instructions
  • the processor 72 is configured to call the program instructions stored in the memory to make the source end execute the voice frequency division transmission method.
  • the source provided in the embodiment of the present application can execute the voice frequency division transmission method provided in any of the above-mentioned embodiments.
  • the embodiment of the present application may also provide a playback terminal for executing the voice frequency division transmission method proposed in the embodiment.
  • the playback terminal 80 includes a memory 81 and a processor 82;
  • the memory 81 is coupled with the processor 82;
  • the memory 81 is used to store program instructions
  • the processor 82 is configured to call the program instructions stored in the memory to make the playback terminal execute the voice frequency division transmission method.
  • the playback terminal provided in the embodiment of the present application can execute the voice frequency division transmission method provided in any of the above-mentioned embodiments.
  • the embodiment of the present application may also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by the processor 72, the voice frequency division transmission method executed by the source end is performed.
  • the computer-readable storage medium provided by the embodiment of the present application can execute the voice frequency division transmission method executed by the source provided in any of the above-mentioned embodiments.
  • the embodiment of the present application may also provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by the processor 82, the voice frequency division transmission method executed by the player is executed.
  • the computer-readable storage medium provided by the embodiment of the present application can execute the voice frequency division transmission method executed by the player provided in any of the above-mentioned embodiments.
  • the embodiment of the present application may also provide a source-end circuit, which may be used to implement the voice frequency division transmission method proposed in the foregoing embodiment.
  • FIG. 10 is a schematic structural diagram of the source-end circuit provided in this embodiment.
  • the source circuit includes:
  • an encoder for encoding the first frequency band voice signal and the second frequency band voice signal; and
  • a source end controller connected to the encoder, used to send the encoded first frequency band voice signal with frame synchronization information and the encoded second frequency band voice signal with frame synchronization information to the playback end circuit through the first synchronization link and the second synchronization link, respectively.
  • encoder 0 is the original encoder of the system.
  • the first synchronization link 0 of the system is used to transmit the encoded first frequency band speech signal.
  • the speech signal is separated into a first frequency band speech signal and a second frequency band speech signal.
  • the second frequency band speech signal is a high frequency speech signal
  • the first frequency band speech signal is a non-high frequency speech signal;
  • encoder 0 can use an Opus encoder, which has two core encoding algorithms built in: the CELT encoding method and the SILK encoding method; such an encoder can use CELT encoding to encode the high-frequency voice signal, and after encoder 0 encodes it, the result is transmitted to the playback end through the second synchronization link 1.
  • this embodiment can also use multiple encoders, and the number of encoders is not limited. Taking a source end with encoder 0 and encoder 1 as an example, encoder 0 can encode the non-high-frequency voice signal while encoder 1 encodes the high-frequency voice signal; because the characteristics of high-frequency and non-high-frequency voice signals differ, encoding them separately with different encoding methods can improve the audio quality while adding only a small amount of bandwidth.
  • the source end controller is connected to encoder 0 and is used to transmit the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information through the first synchronization link 0 and the second synchronization link 1.
  • if the second frequency band voice signal contains m sub-band voice signals, where m>1 is an integer, they can correspond to m second synchronization links respectively; the use of m second synchronization links here is merely illustrative, and in this embodiment fewer than m second synchronization links may also be used to transmit the m sub-band voice signals. This embodiment does not limit the number of second synchronization links.
  • the source end circuit may further include a filter, which is connected to the encoder and is used to separate the first frequency band voice signal and the second frequency band voice signal; for example, the second frequency band voice signal separated by filter 1 is encoded and then transmitted to the playback end through the corresponding second synchronization link 1.
  • the separation of the first frequency band voice signal can also be implemented by a filter, or no filter may be needed and encoder 0 alone encodes the voice signal of a specific frequency band.
  • the number of filters is not limited in this embodiment; only one filter may be used, and if the encoding method used by an encoder only encodes the voice signal of a specific frequency band, that encoder may not need a corresponding filter; for example, no filter may be required before the Opus encoder.
  • the embodiment does not limit the types of filters. Low-pass filters, high-pass filters, band-pass filters, or other filters and any combination thereof can be used.
  • the filter can also be used to separate multiple sub-band voice signals in the first frequency band voice signal, and can also be used to separate multiple sub-band voice signals in the second frequency band voice signal.
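  • purely as an illustration of the band separation that such filters perform (using an FFT-based complementary split rather than any particular filter design of the embodiment, and an assumed 8 kHz cutoff), a short sketch follows.

```python
# Sketch of separating a voice signal into a non-high-frequency band and a
# high-frequency band with a complementary low-pass/high-pass split, before
# handing each band to its own encoder. The 8 kHz cutoff is an assumption.

import numpy as np

def split_bands(signal, sample_rate, cutoff_hz=8000):
    """FFT-based complementary low-pass / high-pass split (illustrative only)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low = spectrum.copy()
    high = spectrum.copy()
    low[freqs > cutoff_hz] = 0          # non-high-frequency (first band) part
    high[freqs <= cutoff_hz] = 0        # high-frequency (second band) part
    return np.fft.irfft(low, len(signal)), np.fft.irfft(high, len(signal))

sr = 48_000
t = np.arange(sr // 100) / sr                        # 10 ms of audio
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 12_000 * t)
low_band, high_band = split_bands(x, sr)
print(low_band.shape, high_band.shape)               # both (480,)
```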
  • the embodiment of the application provides a source end circuit, which encodes the first frequency band voice signal and the second frequency band voice signal and transmits the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information through the first synchronization link and the second synchronization link, which solves the problem of sound quality degradation caused by the transmission bandwidth limitation and the problem of affecting the audio being played when the sound quality is improved.
  • the embodiment of the present application may also provide a playback end circuit, which can be used to implement the voice frequency division transmission method proposed in the foregoing embodiments. Please refer to FIG. 10, which is a schematic structural diagram of the source end circuit and the playback end circuit provided in this embodiment.
  • the playback terminal circuit includes:
  • the player controller is configured to receive the encoded first frequency band speech signal with frame synchronization information and the encoded second frequency band speech signal with frame synchronization information through the first synchronization link and the second synchronization link, respectively; as well as
  • a plurality of decoders are connected with the controller of the playback end, and are used for decoding the received voice signal of the first frequency band and the voice signal of the second frequency band.
  • decoder 0 can be the original decoder of the system, and the first synchronization link 0 of the system is used to transmit the encoded first frequency band speech signal.
  • This embodiment can use one or more decoders. This embodiment does not limit the number of decoders.
  • one decoder can use different decoding methods to decode the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information; taking decoder 0 and decoder 1 on the playback side as an example, decoder 0 can decode the non-high-frequency voice signal while decoder 1 decodes the high-frequency voice signal.
  • the number of decoders may be equal to the number of encoders, or may not be equal to the number of encoders.
  • the decoded voice signal in the first frequency band and the voice signal in the second frequency band pass through a digital-to-analog converter and an amplifier, and then undergo electro-acoustic conversion through an electro-acoustic converter.
  • the playback end circuit further includes:
  • One or more different digital-to-analog converters respectively connected to the decoder, for performing digital-to-analog conversion on the decoded first frequency band speech signal and the decoded second frequency band speech signal respectively;
  • One or more different amplifiers respectively connected to one or more digital-to-analog converters for respectively amplifying the first-band voice signal after digital-to-analog conversion and the second-band voice signal after digital-to-analog conversion;
  • One or more different electro-acoustic converters are respectively connected to one or more amplifiers, and are used to perform electro-acoustic conversion on the amplified first frequency band speech signal and the amplified second frequency band speech signal respectively.
  • FIG. 11 is a schematic diagram of the structure of the source-end circuit and the playback-end circuit of an embodiment of this application.
  • the decoded first frequency band voice signal and second frequency band voice signal can pass through one digital-to-analog converter together, or can pass through digital-to-analog converter 0 and digital-to-analog converter 1 to perform digital-to-analog conversion separately; for voice signals in different frequency bands, different digital-to-analog converters can be selected according to their characteristics to further improve the sound quality.
  • the first frequency band voice signal and the second frequency band voice signal can be amplified in one amplifier and then converted by one electro-acoustic converter; alternatively, the first frequency band voice signal and the second frequency band voice signal after digital-to-analog conversion can be amplified separately through amplifier 0 and amplifier 1, and for voice signals in different frequency bands, different amplifiers can be selected according to their characteristics to further improve the sound quality.
  • similarly, the amplified first frequency band voice signal and second frequency band voice signal can be converted by one electro-acoustic converter, or by multiple electro-acoustic converters, for example speaker 0 and speaker 1; electro-acoustic converters with different frequency responses can be selected according to the characteristics of the voice signals in different frequency bands to perform electro-acoustic conversion on the amplified first frequency band voice signal and second frequency band voice signal and further improve the audio quality.
  • in Figure 11, each decoder corresponds to one digital-to-analog converter, one amplifier and one speaker; Figure 11 is only an exemplary illustration, and multiple decoders can also share one or more digital-to-analog converters, one or more amplifiers or one or more speakers, which this embodiment does not limit.
  • if the second frequency band voice signal contains m sub-band voice signals, they can correspond to m second synchronization links respectively.
  • the number of corresponding decoders may be less than or equal to m
  • the number of corresponding digital-to-analog converters, amplifiers, and electro-acoustic converters may be less than or equal to m
  • the specific value of m is not limited in this embodiment.
  • the embodiment of the present application provides a playback end circuit, which receives the encoded first frequency band voice signal and second frequency band voice signal with frame synchronization information through the first synchronization link and the second synchronization link respectively, which solves the problem of sound quality degradation caused by the transmission bandwidth limitation and the problem of affecting the audio being played when the sound quality is improved.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM) and direct rambus random access memory (direct rambus RAM, DR RAM).
  • B corresponding to A means that B is associated with A, and B can be determined according to A.
  • determining B according to A does not mean that B is determined only according to A, and B can also be determined according to A and/or other information.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this application in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present invention relates to the field of communications, and in particular to a voice frequency division transmission method, a source terminal, a playback terminal, a source terminal circuit and a playback terminal circuit. The voice frequency division transmission method comprises the steps in which: the source terminal encodes a voice signal of a first frequency band and a voice signal of a second frequency band; the source terminal marks frame synchronization information into the encoded first frequency band voice signal and the encoded second frequency band voice signal; and the source terminal sends the encoded first frequency band voice signal with the frame synchronization information and the encoded second frequency band voice signal with the frame synchronization information to the playback terminal through a first synchronization link and a second synchronization link, respectively. According to the present invention, sending the encoded first frequency band voice signal with the frame synchronization information and the encoded second frequency band voice signal with the frame synchronization information to the playback terminal through the first synchronization link and the second synchronization link, respectively, solves the problem of sound quality degradation caused by the limitation of the transmission bandwidth and the problem of affecting the audio being played when the sound quality is improved.
PCT/CN2019/087811 2019-05-21 2019-05-21 Procédé de transmission de la voix par répartition en fréquence, terminal source, terminal de lecture, circuit de terminal source et circuit de terminal de lecture WO2020232631A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000976.XA CN110366752B (zh) 2019-05-21 2019-05-21 一种语音分频传输方法、源端、播放端、源端电路和播放端电路
PCT/CN2019/087811 WO2020232631A1 (fr) 2019-05-21 2019-05-21 Procédé de transmission de la voix par répartition en fréquence, terminal source, terminal de lecture, circuit de terminal source et circuit de terminal de lecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/087811 WO2020232631A1 (fr) 2019-05-21 2019-05-21 Procédé de transmission de la voix par répartition en fréquence, terminal source, terminal de lecture, circuit de terminal source et circuit de terminal de lecture

Publications (1)

Publication Number Publication Date
WO2020232631A1 true WO2020232631A1 (fr) 2020-11-26

Family

ID=68225504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087811 WO2020232631A1 (fr) 2019-05-21 2019-05-21 Procédé de transmission de la voix par répartition en fréquence, terminal source, terminal de lecture, circuit de terminal source et circuit de terminal de lecture

Country Status (2)

Country Link
CN (1) CN110366752B (fr)
WO (1) WO2020232631A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738732B (zh) * 2019-10-28 2022-07-29 成都鼎桥通信技术有限公司 一种音频播放方法和装置
CN112992161A (zh) * 2021-04-12 2021-06-18 北京世纪好未来教育科技有限公司 音频编码方法、音频解码方法、装置、介质及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848241A (zh) * 1995-12-01 2006-10-18 数字剧场系统股份有限公司 多通道音频编码器
CN105163155A (zh) * 2015-08-26 2015-12-16 小米科技有限责任公司 同步播放方法及装置
CN105653237A (zh) * 2016-02-01 2016-06-08 宇龙计算机通信科技(深圳)有限公司 音频播放控制方法、音频播放控制系统和终端
US20190050192A1 (en) * 2014-07-30 2019-02-14 Sonos, Inc. Contextual Indexing of Media Items

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4402073A (en) * 1980-03-11 1983-08-30 Vanderhoff Communications Ltd. Speech and data communication network
CA2149006C (fr) * 1994-06-07 2003-07-15 Cecil Henry Bannister Systeme synchrone de transmissions de paroles et de donnees
JP3484341B2 (ja) * 1998-03-30 2004-01-06 三菱電機株式会社 音声信号伝送装置
JP2003023683A (ja) * 2001-07-06 2003-01-24 Mitsubishi Electric Corp 音声中継伝送システム
US7805313B2 (en) * 2004-03-04 2010-09-28 Agere Systems Inc. Frequency-based coding of channels in parametric multi-channel coding systems
US7272567B2 (en) * 2004-03-25 2007-09-18 Zoran Fejzo Scalable lossless audio codec and authoring tool
US8755407B2 (en) * 2005-02-18 2014-06-17 Qualcomm Incorporated Radio link protocols for enhancing efficiency of multi-link communication systems
RU2396726C2 (ru) * 2005-02-18 2010-08-10 Квэлкомм Инкорпорейтед Протоколы радиосвязи для многоканальных систем связи
US8769046B2 (en) * 2005-03-23 2014-07-01 Qualcomm Incorporated Methods and apparatus for using multiple wireless links with a wireless terminal
US7464313B2 (en) * 2006-03-09 2008-12-09 Motorola, Inc. Hybrid approach for data transmission using a combination of single-user and multi-user packets
KR101379263B1 (ko) * 2007-01-12 2014-03-28 삼성전자주식회사 대역폭 확장 복호화 방법 및 장치
US20080300025A1 (en) * 2007-05-31 2008-12-04 Motorola, Inc. Method and system to configure audio processing paths for voice recognition
CN103812824A (zh) * 2012-11-07 2014-05-21 中兴通讯股份有限公司 音频多编码传输方法及相应装置
WO2014108738A1 (fr) * 2013-01-08 2014-07-17 Nokia Corporation Encodeur de paramètres de multiples canaux de signal audio

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1848241A (zh) * 1995-12-01 2006-10-18 数字剧场系统股份有限公司 多通道音频编码器
US20190050192A1 (en) * 2014-07-30 2019-02-14 Sonos, Inc. Contextual Indexing of Media Items
CN105163155A (zh) * 2015-08-26 2015-12-16 小米科技有限责任公司 同步播放方法及装置
CN105653237A (zh) * 2016-02-01 2016-06-08 宇龙计算机通信科技(深圳)有限公司 音频播放控制方法、音频播放控制系统和终端

Also Published As

Publication number Publication date
CN110366752A (zh) 2019-10-22
CN110366752B (zh) 2023-10-10

Similar Documents

Publication Publication Date Title
US11109138B2 (en) Data transmission method and system, and bluetooth headphone
KR102569374B1 (ko) 블루투스 장치 동작 방법
TWI287371B (en) Method and system for dynamically changing audio stream bit rate based on condition of a bluetooth connection
JP7246307B2 (ja) 接続されたマルチメディアデバイスの制御
CN109785841B (zh) 一种蓝牙智能设备语音交互系统及方法
TW200805901A (en) Method and system for optimized architecture for bluetooth streaming audio applications
WO2008122212A1 (fr) Procédé et système d'auto-réglage de formats de codage audio d'une émission bluetooth a2dp
TW201118737A (en) Dynamically provisioning a device with audio processing capability
EP3745813A1 (fr) Procédé de fonctionnement d'un dispositif bluetooth
WO2021052293A1 (fr) Procédé de codage audio et appareil
WO2021160040A1 (fr) Procédé de transmission audio et dispositif électronique
WO2020232631A1 (fr) Procédé de transmission de la voix par répartition en fréquence, terminal source, terminal de lecture, circuit de terminal source et circuit de terminal de lecture
CN115174538A (zh) 数据传输方法、装置、电子设备及计算机可读介质
US20110235632A1 (en) Method And Apparatus For Performing High-Quality Speech Communication Across Voice Over Internet Protocol (VoIP) Communications Networks
CN111385780A (zh) 一种蓝牙音频信号传输方法和装置
US20140163971A1 (en) Method of using a mobile device as a microphone, method of audio playback, and related device and system
CN117062034A (zh) 蓝牙数据的传输方法、装置、设备及存储介质
CN214413006U (zh) 真无线立体声耳机
US11581002B2 (en) Communication method, apparatus, and system for digital enhanced cordless telecommunications (DECT) base station
JP5006975B2 (ja) 背景雑音情報の復号化方法および背景雑音情報の復号化手段
CN113347614A (zh) 音频处理设备、系统和方法
CN110662205B (zh) 一种基于蓝牙的音频传输方法、装置、介质和设备
US12131741B2 (en) Audio transmission method and electronic device
TWI798890B (zh) 用於產生立體聲語音效果的藍牙語音通信系統及相關的電腦程式產品
TW202405793A (zh) 用於無線通訊之音頻壓縮裝置、音頻壓縮系統及音頻壓縮方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19929354

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19929354

Country of ref document: EP

Kind code of ref document: A1