WO2010009660A1 - Method and apparatus for data frame conversion - Google Patents

Method and apparatus for data frame conversion

Info

Publication number
WO2010009660A1
Authority
WO
WIPO (PCT)
Prior art keywords
format
frame
data
speech
payload
Prior art date
Application number
PCT/CN2009/072802
Other languages
English (en)
Chinese (zh)
Inventor
代金良
舒默特·艾雅
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2010009660A1

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/173 Transcoding, i.e. converting between two coded representations avoiding cascaded coding-decoding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/24 Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a data frame conversion method and apparatus.
  • G.729 is a narrowband speech compression scheme widely used in Voice over Internet Protocol (VoIP) communication.
  • Its encoding rate is 8 kb/s, with one frame every 10 ms.
  • G.729 Appendix B (G.729B) adds a silence compression scheme to G.729 to support discontinuous transmission, which can further save communication bandwidth.
  • G.729.1 is a new-generation speech codec standard with a layered, scalable architecture. It uses G.729 as its core and supports 12 codec rates from 8 kb/s to 32 kb/s.
  • Its frame length is 20 ms. To distinguish it from the 10 ms frame of G.729(B), a frame with a 20 ms frame length is referred to herein as a superframe.
  • G.729.1 Annex C (G.729.1C) is a new G.729.1 silence compression scheme with a 20 ms frame length. It also has a layered, scalable architecture, with G.729B as the base layer and a maximum SID frame payload of 43 bits.
  • G.729B defines three frame types: the speech frame (SP), with an 80-bit effective payload; the silence insertion descriptor frame (SID), with a 15-bit effective payload; and the no-data (silent) frame (NT), with an effective payload of 0 bits.
  • G.729.1C likewise defines the speech frame (SP), with an effective payload ranging from 160 to 640 bits; the silence description frame (SID), with an effective payload ranging from 15 to 43 bits; and the silent frame (NT), with an effective payload of 0 bits. Since the frame length of G.729B is 10 ms and that of G.729.1C is 20 ms, the code stream may need to be repackaged when the two interwork.
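The payload-size arithmetic above explains why naive repackaging can fail. The following sketch (in Python, with names of our own choosing) illustrates it; the set of valid G.729.1C speech superframe payload lengths is an assumption derived from the 12 rates of 8, 12, 14, ..., 32 kb/s over a 20 ms superframe.

```python
# Frame payload sizes (bits) per 10 ms G.729B frame, taken from the text above.
G729B_PAYLOAD_BITS = {"SP": 80, "SID": 15, "NT": 0}

# Assumption: a 20 ms superframe at rate r kb/s carries r * 20 bits,
# for the 12 G.729.1 rates 8, 12, 14, ..., 32 kb/s (160..640 bits).
G7291C_SP_PAYLOADS = {r * 20 for r in (8, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32)}

def concat_is_valid_superframe(type1, type2):
    """True if naively concatenating two 10 ms G.729B payloads yields a
    payload length the G.729.1C decoder recognizes as a speech superframe."""
    total = G729B_PAYLOAD_BITS[type1] + G729B_PAYLOAD_BITS[type2]
    return total in G7291C_SP_PAYLOADS
```

Two SP frames concatenate to 160 bits, the valid 8 kb/s superframe payload; an SP frame plus an SID frame gives 95 bits, a length no G.729.1C mode produces, so the decoder cannot recognize it.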
  • The G.729.1C encoder and decoder provided by the prior art support a mode, "G729B-BST", dedicated to handling the G.729B code stream, so that G.729.1C can be compatible with the G.729B code stream.
  • This G.729B-compatible mode of G.729.1C requires a special command input as an indication.
  • In this mode, the G.729.1C encoder actually encodes one frame every 10 ms according to the G.729B coding mode, and the input of the G.729.1C decoder is a G.729B code stream. In an actual communication system, this working mode allows G.729B and G.729.1C to interwork.
  • Figure 1 shows a simplified G.729B and G.729.1C interworking system:
  • When terminal 1, using a G.729B encoder, interworks with terminal 3, using a G.729.1C decoder, the gateway must first notify terminal 3 in advance that terminal 1 will transmit a G.729B code stream; the gateway therefore sends a command to start the G729B-BST mode of terminal 3, after which the two parties begin interworking.
  • When terminal 2, using a G.729.1C encoder, interworks with terminal 4, using a G.729B decoder, the gateway must first notify terminal 2 that terminal 4 can only decode a G.729B code stream; the gateway therefore sends a command to start the G729B-BST mode of terminal 2, after which the two parties can communicate normally.
  • If the command sent by the gateway to terminal 3 to start the G729B-BST mode is lost or corrupted in the channel, then although an interworking channel is established between terminal 1 and terminal 3, terminal 3 cannot correctly decode the code stream generated by terminal 1; similarly, if the command sent to terminal 2 to start the G729B-BST mode is lost or corrupted, terminal 2 and terminal 4 cannot interwork correctly.
  • the present invention provides a data frame conversion method and apparatus, which can improve the stability of data frame conversion.
  • The payload data of at least one of two consecutive received first format data frames is encapsulated into a second format data frame.
  • the embodiment of the invention further provides a data frame conversion device, including:
  • a receiving unit configured to receive a first format data frame
  • an encapsulating unit configured to encapsulate the second format data frame by using the payload data of at least one of the two consecutive first format data frames.
  • a receiving unit configured to receive a second format data frame
  • a data extracting unit configured to extract core layer data from the second format data frame; and an encapsulating unit configured to encapsulate the core layer data as payload data into first format data frames.
  • In the embodiments of the present invention, the payload data of a data frame in one format is extracted, and the extracted payload data is encapsulated into a data frame in another format, so that the code streams of the two formats can be converted directly into each other without a complicated network negotiation process, improving the reliability and stability of data frame conversion.
  • FIG. 1 is a schematic diagram of a system architecture for implementing G.729B and G.729.1C conversion in the prior art;
  • FIG. 2 is a schematic diagram of converting a G.729B data frame into a G.729.1C data frame according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames in an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of a G.729.1C data frame constructed from a G.729B data frame according to an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of the adjacency relationship of two consecutive G.729B data frames according to an embodiment of the present invention;
  • FIG. 12 is a schematic structural diagram of a G.729.1C data frame constructed from a G.729B data frame according to an embodiment of the present invention;
  • FIG. 13 is a schematic diagram of splitting a G.729.1C voice frame into two G.729B voice frames according to an embodiment of the present invention;
  • FIG. 14 is a schematic diagram of splitting a G.729.1C voice frame into two G.729B voice frames according to another embodiment of the present invention;
  • FIG. 15 is a schematic structural diagram of a data frame conversion apparatus according to an embodiment of the present invention;
  • FIG. 16 is a schematic structural diagram of a data frame conversion apparatus according to another embodiment of the present invention.

Detailed Description
  • The embodiments of the present invention provide a data frame format conversion method and apparatus that reconstruct data frames when interworking between data frames of different formats, so that the gateway does not need to send a special command to the peer device to start a corresponding working mode.
  • This ensures the stability of the communication system and greatly reduces the gateway's workload during interworking.
  • The data frames in the embodiments of the present invention may specifically include: speech frames, silence description frames (also called mute frames), and silent frames.
  • The mute frame and the silent frame are collectively referred to as non-speech frames.
  • A data frame conversion method provided by an embodiment of the present invention, converting from the first format to the second format, includes the following steps: receiving first format data frames, and encapsulating the payload data of at least one of two consecutive first format data frames into a second format data frame.
  • When the two consecutive first format data frames are both first format speech frames, encapsulating into the second format data frame includes: combining the payload data of the subsequent frame with the payload data of the previous frame, and encapsulating the combined payload data into a second format speech frame whose payload length value is the sum of the payload length values of the two first format speech frames;
  • When the two consecutive first format data frames include a first format speech frame and a first format non-speech frame, the data frame encapsulated into the second format can be obtained by: constructing a second format speech frame from the first format speech frame, or discarding the first format speech frame. These two cases are explained separately below:
  • Constructing a second format speech frame: parameters of the first format speech frame, including at least one of the line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain, are interpolated with the corresponding parameters of the previous speech frame and requantized to construct the second format speech frame.
  • Specifically, the method may include: decoding parameters from the first format speech frame payload, where the parameters include at least one of the line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain; interpolating them with the corresponding parameters of the previous speech frame to obtain interpolated parameters; quantizing the interpolated parameters and combining the quantized parameters, according to the first format, into a first format interpolated speech frame; and combining the payload of the first format speech frame with the payload data of the first format interpolated speech frame and encapsulating the combined payload data into a second format speech frame.
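A minimal sketch of the interpolate-and-requantize step just described. The parameter names, the fixed 0.5 interpolation weight, and the uniform scalar quantizer are illustrative assumptions; the actual G.729 codebook-based quantization tables are not reproduced here.

```python
def interpolate(current, previous, weight=0.5):
    """Interpolate each decoded parameter with the previous frame's value.
    The equal 0.5 weighting is an assumption for illustration."""
    return {name: weight * current[name] + (1.0 - weight) * previous[name]
            for name in current}

def requantize(params, step=0.5):
    """Toy uniform scalar quantizer standing in for the G.729 codebooks."""
    return {name: round(value / step) * step for name, value in params.items()}

# Hypothetical decoded parameters of the previous and current speech frames.
prev = {"lsp": 0.2, "pitch_delay": 40.0, "acb_gain": 0.8, "fcb_gain": 1.2}
curr = {"lsp": 0.6, "pitch_delay": 44.0, "acb_gain": 0.4, "fcb_gain": 1.6}

# Interpolated, then requantized, parameters for the constructed frame.
interp = requantize(interpolate(curr, prev))
```

As the text notes, only a subset of the four parameters need be interpolated in practice; the bits of the remaining parameters can be carried over unchanged.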
  • When constructing a second format speech frame, the processing of the first format non-speech frame can be: if the first format non-speech frame is a first format mute frame, the first format mute frame is encapsulated into a mute frame of the second format;
  • When discarding the first format speech frame, the processing of the first format non-speech frame may be the same: the first format mute frame is encapsulated into a mute frame of the second format;
  • When the two consecutive first format data frames include a first format mute frame and a first format silent frame, in either order (mute frame first or silent frame first), encapsulating into the second format data frame includes:
  • encapsulating the mute frame of the first format into a mute frame of the second format.
  • Another data frame conversion method provided by an embodiment of the present invention converts from the second format to the first format, including: receiving a second format data frame, and extracting core layer data from the second format data frame.
  • The core layer data is used as payload data and encapsulated into two first format data frames. Specifically, the following three situations are included:
  • When the second format data frame is a second format mute frame, the core layer data in the second format mute frame is encapsulated into a first format mute frame and a first format silent frame.
  • When the second format data frame is a second format speech frame, the first core layer data in the second format speech frame is encapsulated into a first format speech frame, and the second core layer data is encapsulated into another first format speech frame.
  • When the second format data frame is a second format silent frame, no data packet is received, so no data packet needs to be sent.
  • In the following embodiments, the first format may be G.729B and the second format G.729.1C.
  • The conversion is divided into two types: conversion between a G.729B encoder and a G.729.1C decoder, and conversion between a G.729.1C encoder and a G.729B decoder. These two cases are described separately below.
  • For G.729B: a voice frame (SP) has an effective payload of 80 bits, and a silence description frame (SID) has an effective payload of 15 bits.
  • For G.729.1C: a voice frame (SP) has an effective payload from 160 to 640 bits; a silence description frame (SID) has an effective payload from 15 to 43 bits; a silent frame (NT) has an effective payload of 0 bits.
  • Since the frame length of G.729B is 10 ms and the frame length of G.729.1C is 20 ms, in an actual communication system two 10 ms G.729B packets are received every 20 ms, combined, and packaged into one 20 ms G.729.1C packet. For a coded stream, one packet is one frame (the same below). As shown in Figure 2, two G.729B data packets are encapsulated into one G.729.1C data packet: the payload data of the latter G.729B data packet is merged with that of the previous G.729B data packet, the merged payload data is encapsulated into a G.729.1C packet, and the payload length of the G.729.1C packet is the sum of the payload length values of the two G.729B packets.
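The packing rule of Figure 2 can be sketched as follows; representing a payload as a bit string is an assumption for illustration.

```python
def pack_superframe(payload1: str, payload2: str) -> str:
    """Concatenate the payloads of two consecutive 10 ms G.729B packets into
    the payload of one 20 ms G.729.1C packet; the resulting payload length is
    the sum of the two input payload lengths (per Figure 2)."""
    return payload1 + payload2

# Two hypothetical 80-bit SP payloads yield one 160-bit superframe payload.
frame_a = "0" * 80
frame_b = "1" * 80
superframe = pack_superframe(frame_a, frame_b)
```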
  • If the G.729B code stream were always directly encapsulated into a G.729.1C code stream, packet lengths could arise that the G.729.1C decoder cannot recognize. For example, if a G.729B SP frame and an SID frame were encapsulated into one G.729.1C superframe, the effective payload would be 95 bits, but the G.729.1C decoder cannot identify a packet with a 95-bit payload. In this case, the G.729.1C decoder still cannot work.
  • Therefore, in such cases the two 10 ms G.729B data packets cannot be directly encapsulated into a 20 ms G.729.1C data packet; instead, the two frames must be processed as described below.
  • The silent frame does not need to be transmitted; that is, to the gateway a silent frame is only a time interval during which no information is received and none needs to be sent.
  • When a G.729B mute frame follows a speech frame, the 10 ms mute frame is buffered, and a G.729B speech frame is constructed from the first 10 ms speech frame.
  • The two are combined into a 20 ms G.729.1C speech frame and sent, with a data packet payload length of 160 bits.
  • In the next 20 ms, the buffered 10 ms G.729B mute frame is constructed into a 20 ms G.729.1C mute frame and sent.
  • The packet payload length is 15 bits, as shown in Figure 5, where "SP'" denotes the speech frame data constructed from the "SP" frame of G.729B. The construction method can be a direct copy, or the line spectrum pair, adaptive codebook delay, codebook gain, and other parameters in the G.729B speech frame can be decoded, interpolated with the corresponding parameters of the previous frame, and requantized. Note that in actual use only some of the parameters need to be interpolated; it is not necessary to interpolate all four. For example, only the line spectrum pair parameters may be interpolated, keeping the bits corresponding to the other parameters unchanged.
  • The positions of SP' and SP are flexible; that is, the constructed speech frame data can be placed in front, or the received speech frame data can be placed in front. The positions of SP' and SP in the following embodiments are similarly flexible.
  • In this case, the two 10 ms G.729B data packets can be directly encapsulated into a 20 ms G.729.1C data packet; the payload length after encapsulation is 15 bits, a packet length that G.729.1C can recognize, as shown in Figure 7.
  • the G.729B silent frame is adjacent to the voice frame, as shown in Figure 9.
  • The two 10 ms data packets form a 20 ms G.729.1C data packet, as shown in FIG. 10, where "SP'" represents the speech frame data constructed from the "SP" frame of G.729B. The construction method may be a direct copy, or the line spectrum pair, adaptive codebook delay, codebook gain, and other parameters in the G.729B speech frame may be decoded, interpolated with the corresponding parameters of the previous frame, and requantized.
  • Alternatively, after the gateway discards the 10 ms speech frame, the current 20 ms is treated as a silent frame, meaning the gateway does not need to send any data packet.
  • the silence frame is adjacent to the voice frame
  • The construction method may be a direct copy, or the line spectrum pair, adaptive codebook delay, codebook gain, and other parameters in the G.729B speech frame may be decoded and interpolated with the corresponding parameters of the previous frame. The positions of SP' and SP are flexible: the constructed speech frame data can be placed in front, or the received speech frame data can be placed in front.
  • For this direction of conversion, the gateway only needs to discard the data outside the core layer in the G.729.1C data packet and then divide the 20 ms packet into two 10 ms packets. According to the frame type of the G.729.1C packet, there are the following three cases:
  • The embodiment of the present invention extracts the payload data of a data frame in one format and encapsulates the extracted payload data into a data frame in another format, so that the code streams of the two formats can be converted directly into each other without a complicated network negotiation process; data frames are converted efficiently, and system performance, such as reliability and efficiency, is improved.
  • In addition, the embodiment of the present invention does not require the gateway to decode one format and re-encode into another format in order to convert between formats, so the processing load of the gateway can be greatly reduced and gateway resources are saved.
  • the transmitting end is a G.729B encoder
  • the receiving end is a G.729.1C decoder.
  • The gateway receives G.729B format data frames sent by the transmitting end, as shown in Table 1, which need to be converted to the G.729.1C format:
  • they are re-encapsulated into G.729.1C data frames with a 20 ms frame length, as shown in Table 2, and then output to the receiving end's G.729.1C decoder.
  • The G.729B code stream input to the gateway carries one frame every 10 ms; that is, the converter receives two G.729B data frames within 20 ms and then encapsulates them into a G.729.1C data frame that is sent out every 20 ms. The details are as follows:
  • the two frames can be directly combined into one frame according to the method shown in FIG. 2;
  • the output stream of the gateway is as shown in Table 2:
  • For the silent frames in Table 1, the gateway actually receives no data; they only occupy time slots. For the silent frames in Table 2, the gateway does not need to send any data. In Table 2, "(n)" in the payload segment of an encapsulated G.729.1C data frame represents the payload data corresponding to the nth frame in Table 1.
  • This embodiment is basically the same as the first embodiment. The difference is that, for two consecutive G.729B frames consisting of a voice frame and a non-voice frame, the voice frame data is not simply discarded; instead, the non-voice frame is buffered and processed according to the specific situation. The specific operation steps in this embodiment are as follows:
  • When the two consecutive G.729B frames are a voice frame followed by a silence frame, refer to FIG. 3: the 10 ms silence frame data is first buffered, the preceding 10 ms voice frame data is copied into the next 10 ms, and the two 10 ms voice frame data are combined into a 20 ms G.729.1C voice frame data packet according to the method shown in Figure 2.
  • In the next 20 ms, the buffered silence frame data is packed into a 20 ms G.729.1C silence frame data packet;
  • the two frames may be directly combined into one frame according to the method shown in FIG. 2;
  • the output code stream of the gateway is as shown in Table 3:
  • This embodiment is basically the same as the second embodiment. The difference is that each received speech frame is decoded by the G.729B decoder, and the line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain (short-term stable parameters) are buffered, without reconstructing the speech signal.
  • The latest received parameters are then interpolated with the corresponding buffered parameters of the previous frame, requantized, and the pitch delay parity bit is updated.
  • The symbol P can denote any one of the above four encoding parameters, and a different interpolation method can be selected for each of the four parameters as needed.
  • The four interpolated encoding parameters are denoted, respectively, as the interpolated line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain.
  • The interpolated parameters are quantized by the G.729 algorithm, and the quantized adaptive codebook delay is used to update the pitch delay parity bit. Then the bits corresponding to the fixed codebook index and fixed codebook sign received in the current speech frame, together with the requantized line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain, are combined into a new 10 ms G.729B speech frame according to the G.729B format.
  • The two 10 ms speech frames, that is, the interpolated frame and the newly received frame, are combined into a 20 ms G.729.1C speech frame, and the speech frame parameter buffers are updated with the newly received parameters.
  • the buffered silence frame data is packed into a 20ms G.729.1C silent frame data packet.
  • The payload data of the two frames can be placed at either position in the new G.729.1C data frame as needed; that is, the interpolated speech frame data can be placed in front, or the received speech frame data can be placed in front.
  • the two frames can be directly combined into one frame according to the method shown in FIG. 2;
  • the output code stream of the gateway is as shown in Table 4:
  • the transmitting end is a G.729.1C encoder
  • the receiving end is a G.729B decoder.
  • The gateway receives G.729.1C format data frames sent by the transmitting end, which need to be converted to the G.729B format;
  • that is, each G.729.1C data frame with a 20 ms frame length is split into two 10 ms G.729B data frames and then output to the receiving end's G.729B decoder.
  • The code stream input to the gateway is G.729.1C, as shown in Table 5:
  • The input G.729.1C code stream carries one frame every 20 ms; that is, the converter receives one G.729.1C data frame within 20 ms and then splits it into two 10 ms G.729B data frames. The specific processing flow is as follows:
  • bits other than the narrowband core layer are discarded.
  • G.729.1C data frame is a voice frame
  • If the received G.729.1C data frame is a mute frame:
  • The G.729.1C mute frame is split into a 10 ms G.729B mute frame data packet according to the method shown in FIG. 14 and sent. If the previous frame is a voice frame, the G.729B mute frame needs to be sent immediately in the current first 10 ms; if the first 10 ms G.729B frame is a mute frame, sending can be postponed until the next 10 ms, and the current 10 ms is processed as a silent frame.
  • The other 10 ms frame is a silent frame and does not actually need to be sent;
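The send-timing rule above can be sketched as follows; the function name and the slot representation are our own, and this is a sketch of the stated rule, not the normative procedure.

```python
def place_sid(prev_was_speech: bool):
    """Decide in which 10 ms slot of the 20 ms window the G.729B SID packet
    derived from a G.729.1C SID superframe is sent. If the previous frame was
    a voice frame, the SID goes out immediately in the first slot; otherwise
    it can be postponed to the second slot. The other slot is a silent (NT)
    frame, which is never actually transmitted."""
    return ("SID", "NT") if prev_was_speech else ("NT", "SID")
```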
  • the output stream of the gateway is as shown in Table 6.
  • a data frame conversion device 1500 according to an embodiment of the present invention, as shown in FIG. 15, the device may be a gateway or a network switching device, and the conversion device includes:
  • the receiving unit 1510 is configured to receive the first format data frame.
  • The encapsulating unit 1520 is configured to encapsulate the payload data of at least one of two consecutive first format data frames into a second format data frame.
  • The encapsulating unit 1520 may include at least one of a merging unit 1521, a constructing unit 1522, a discarding unit 1523, and a mute encapsulating unit 1524, wherein:
  • the merging unit 1521 is configured to, when the two consecutive first format data frames are both first format voice frames, combine the payload data of the subsequent frame with that of the previous frame and encapsulate the combined payload data into a second format speech frame;
  • the constructing unit 1522 is configured to construct a second format speech frame from the first format speech frame when the two first format data frames include a first format speech frame and a first format non-speech frame;
  • the discarding unit 1523 is configured to discard the first format voice frame when the two first format data frames include the first format voice frame and the first format non-voice frame;
  • the mute encapsulation unit 1524 is configured to encapsulate the mute frame of the first format into a mute frame of the second format when the two first format data frames include the first format mute frame.
  • The constructing unit 1522 may further include a copying module 1522a and/or an extracting module 1522b together with an interpolating module 1522c, wherein:
  • the copying module 1522a is configured to copy the payload in the first format speech frame to construct a second format speech frame.
  • the extracting module 1522b is configured to extract parameters of the first format speech frame, where the parameters include at least one of the line spectrum pair, adaptive codebook delay, adaptive codebook gain, and fixed codebook gain; and the interpolating module 1522c is configured to interpolate the parameters of the first format speech frame with the corresponding parameters of the previous speech frame and requantize them to construct a second format speech frame.
  • the first format may be G.729B
  • the second format may be G.729.1C.
  • Another data frame conversion apparatus 1600 provided by an embodiment of the present invention, as shown in FIG. 16, includes: a receiving unit 1610, configured to receive a second format data frame;
  • a data extracting unit 1620, configured to extract core layer data from the second format data frame; and an encapsulating unit 1630, configured to encapsulate the core layer data as payload data into first format data frames.
  • The encapsulating unit 1630 may include at least one of a first encapsulating module 1631, a second encapsulating module 1632, and a third encapsulating module 1633, wherein:
  • the first encapsulating module 1631 is configured to, when the second format data frame is a second format mute frame, encapsulate the core layer data in the second format mute frame into a first format mute frame and a first format silent frame;
  • the second encapsulating module 1632 is configured to, when the second format data frame is a second format speech frame, encapsulate the first core layer data in the second format speech frame into a first format speech frame;
  • the third encapsulating module 1633 is configured to, when the second format data frame is a second format speech frame, encapsulate the second core layer data in the second format speech frame into another first format speech frame.
  • To sum up, the payload data of a data frame in one format is extracted, and the extracted payload data is encapsulated into a data frame in another format, so that the code streams of the two formats can be converted directly into each other without a complicated network negotiation process; data frames are converted efficiently, and system performance, such as reliability and efficiency, is improved.
  • Moreover, the embodiment of the present invention does not require the gateway to decode one format and re-encode into another format in order to convert between different formats, thereby greatly reducing the processing load of the gateway and saving gateway resources.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiment of the present invention provides a data frame conversion method, which includes the steps of: receiving data frames in a first format; and encapsulating the payload of at least one of two consecutive received first format data frames into a second format data frame. Another data frame conversion method includes the steps of: receiving data frames in a second format; acquiring the core layer data of the second format data frames; and encapsulating the core layer data of the second format data frames as payload into two first format data frames respectively. The embodiment of the present invention also provides corresponding data frame conversion apparatuses. Code streams in the two formats can be interconverted directly without a complex network negotiation process, and the reliability and stability of data frame conversion are improved according to the embodiment of the present invention.
PCT/CN2009/072802 2008-07-25 2009-07-17 Method and apparatus for data frame conversion WO2010009660A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810134755.5 2008-07-25
CN200810134755 2008-07-25

Publications (1)

Publication Number Publication Date
WO2010009660A1 (fr) 2010-01-28

Family

ID=41570019

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/072802 WO2010009660A1 (fr) 2008-07-25 2009-07-17 Method and apparatus for data frame conversion

Country Status (2)

Country Link
CN (1) CN101635723A (fr)
WO (1) WO2010009660A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102143185B (zh) * 2011-03-31 2015-11-25 北京经纬恒润科技有限公司 Data transmission method and data transmission apparatus
CN106375063B (zh) * 2016-08-30 2020-02-21 上海华为技术有限公司 Data transmission method and device thereof
CN110557226A (zh) * 2019-09-05 2019-12-10 北京云中融信网络科技有限公司 Audio transmission method and apparatus
CN113726634B (zh) * 2021-08-19 2023-03-21 宏图智能物流股份有限公司 Voice transmission system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004064041A1 (fr) * 2003-01-09 2004-07-29 Dilithium Networks Pty Limited Procede et appareil visant a ameliorer la qualite du transcodage de la voix
CN1553723A (zh) * 2003-06-08 2004-12-08 华为技术有限公司 一种实现移动通信网络互通的方法
US20080160987A1 (en) * 2006-12-28 2008-07-03 Yanhua Wang Methods, systems, and computer program products for silence insertion descriptor (sid) conversion


Also Published As

Publication number Publication date
CN101635723A (zh) 2010-01-27

Similar Documents

Publication Publication Date Title
JP4541624B2 (ja) Method and apparatus for efficient data transmission control in a wireless voice-over-data communication system
JP4842075B2 (ja) Voice transmission apparatus
US20080117906A1 Payload header compression in an rtp session
JP2008514126A (ja) Call setup in a videotelephony network
EP1845691B1 (fr) Device and method for relaying a data stream
WO2011124161A1 (fr) Audio signal encoding and decoding method, device, and codec system
WO2010009660A1 (fr) Method and apparatus for data frame conversion
JP2005513542A (ja) Transmission of high-fidelity sound signals between wireless units
WO2007109960A1 (fr) Method, system and data signal detector for implementing a data service
US6947887B2 Low speed speech encoding method based on Internet protocol
JPH10262015A (ja) Multiplex transmission method and system, and voice jitter absorption method used therein
CN103188403A (zh) Online monitoring method for a voice gateway
JP4050961B2 (ja) Packet-type voice communication terminal
JP2004153471A (ja) Checksum calculation method, checksum recording method, and communication apparatus usable with these methods
US7796584B2 Method for connection between communication networks of different types and gateway apparatus
JP5255358B2 (ja) Voice transmission system
EP1475929B1 (fr) Control component for suppressing coded frames from a telecommunication stream
JP2000349824A (ja) Voice data transmission/reception system
JP3980592B2 (ja) Communication apparatus, coded stream transmitting apparatus, coded stream receiving apparatus, programs causing a computer to function as these apparatuses and a recording medium recording them, and coded stream receiving and decoding method and communication apparatus control method
US20050195861A1 Sound communication system and mobile station
JP2001308923A (ja) Voice packet communication apparatus
CN113450809B (zh) Voice data processing method, system, and medium
JP2002252644A (ja) Voice packet communication apparatus and voice packet communication method
JP3374815B2 (ja) Network handling voice, and voice fluctuation absorbing method used therein
KR100549610B1 (ko) Apparatus and method for sharing sound data in a terminal having a plurality of modems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09799975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09799975

Country of ref document: EP

Kind code of ref document: A1