CN108933786B - Method for improving cipher text voice quality of receiver of wireless digital communication system - Google Patents


Info

Publication number
CN108933786B
CN108933786B CN201810710872.5A
Authority
CN
China
Prior art keywords
frame
sequence number
voice
rtp packet
time
Prior art date
Legal status
Active
Application number
CN201810710872.5A
Other languages
Chinese (zh)
Other versions
CN108933786A (en)
Inventor
朱振荣
张莹
符东昇
史胜伟
Current Assignee
BEIJING SONICOM NETWORK SYSTEM CO LTD
First Research Institute of Ministry of Public Security
Original Assignee
BEIJING SONICOM NETWORK SYSTEM CO LTD
First Research Institute of Ministry of Public Security
Priority date
Filing date
Publication date
Application filed by BEIJING SONICOM NETWORK SYSTEM CO LTD, First Research Institute of Ministry of Public Security filed Critical BEIJING SONICOM NETWORK SYSTEM CO LTD
Priority to CN201810710872.5A priority Critical patent/CN108933786B/en
Publication of CN108933786A publication Critical patent/CN108933786A/en
Application granted granted Critical
Publication of CN108933786B publication Critical patent/CN108933786B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 - Network streaming of media packets
    • H04L 65/65 - Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/32 - Flow control or congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 47/50 - Queue scheduling
    • H04L 47/62 - Queue scheduling characterised by scheduling criteria
    • H04L 47/622 - Queue service order
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/04 - Network architectures or network communication protocols for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 - Network security arrangements wherein the data content is protected, e.g. by encrypting or encapsulating the payload


Abstract

The invention discloses a method for improving the ciphertext voice quality at the receiver of a wireless digital communication system, which solves the degradation of encrypted voice call quality caused by dynamically adjusted delay and lost voice frames. Through cooperation between the receiving-side base station (or system) and the mobile station, different abnormal scenarios are distinguished and targeted compensation is applied, solving the voice decryption failures caused by loss of encryption/decryption synchronization between the two call parties.

Description

Method for improving cipher text voice quality of receiver of wireless digital communication system
Technical Field
The invention relates to the technical field of communication, in particular to a method for improving the quality of ciphertext voice of a receiver of a wireless digital communication system.
Background
The PDT digital trunking system is a new dedicated mobile communication system based on TDMA technology; its 12.5 kHz channel is divided into two time slots, each lasting 30 ms, with a transmission rate of 9.6 kbps. The vocoder in the PDT system uses an algorithm with a code rate of 2.4 kbps; encoding operates on 60 ms of voice at a time, and each encoded frame is 144 bits long.
PDT system interconnection uses an extended RTP protocol for real-time transmission of voice data. Because RTP runs over UDP, network jitter, packet loss and similar problems must be handled to preserve real-time call quality. Under congestion, queuing delay inflates the end-to-end delay, so RTP packets carried over the same connection experience different delays; network jitter describes the degree of this delay variation. If the delay differences between RTP packets are not removed, the real-time conversation may be interrupted. A jitter buffer, combined with a suitable scheduling method, is normally used to remove network jitter. In essence, network-side jitter is eliminated at the cost of extra delay added at the receiving end, and the amount of delay introduced determines how much jitter can be removed. An RTP packet whose delay exceeds the allowed delay is actively treated by the receiver as lost, so the larger the extra delay, the more RTP packets arrive in time, the fewer packets are actively dropped, and the better the call quality. For a PDT system, however, the shorter the end-to-end voice delay the better, to ensure the timeliness of communication, which requires the added delay to be as short as possible. To achieve good call quality, the extra delay therefore has to be adjusted dynamically and continuously according to network conditions.
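The trade-off described in this paragraph (more buffering delay means fewer actively dropped packets, but worse end-to-end latency) can be illustrated with a toy playout model. This is a sketch only; the packet trace and the deadline rule are assumptions for illustration, not part of the PDT specification:

```python
def count_played(packets, extra_delay_ms):
    """packets: list of (send_ms, arrival_ms) pairs. A packet is played
    only if it arrives no later than send_ms + extra_delay_ms; a later
    arrival is actively dropped by the receiver, as described above."""
    return sum(1 for send, arrive in packets
               if arrive <= send + extra_delay_ms)

# Three 60 ms frames with jittery network delays (hypothetical values).
trace = [(0, 30), (60, 150), (120, 140)]
```

With `extra_delay_ms = 40` only two of the three packets arrive in time, while `extra_delay_ms = 100` saves all three, at the cost of 100 ms of added receive-side delay.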
The RTP standard header structure is defined as shown in fig. 1; wherein:
a) Version number (V): 2 bits, marking the RTP version in use; the PDT trunking system uses the fixed version number 2.
b) Padding (P): 1 bit, if the bit is set, the end of the RTP packet contains additional padding bytes.
c) Extension bit (X): 1 bit, if set, followed by an extension header after the RTP fixed header.
d) CSRC count (CC): 4 bits, containing the number of CSRC identifiers that follow the fixed header.
e) Marker bit (M): 1 bit, the interpretation of the flag being specified by the specific protocol. It is used to allow important events, such as frame boundaries, to be marked in the bitstream.
f) Payload type (PT): 7 bits, identifying the type of the RTP payload.
g) Sequence number: 16 bits; the sender increments this field by 1 for each RTP packet sent, and the receiver uses it to detect packet loss and to restore packet order. The initial value of the sequence number is random.
h) Timestamp: 32 bits, recording the sampling instant of the first data byte in the packet. At the start of a session the timestamp is initialized to an initial value, and it keeps increasing with time even when no signal is sent. Together with the sequence number, the timestamp is used to remove jitter and achieve synchronization.
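As an illustration, the 12-byte fixed header defined by the fields above can be unpacked as follows (a Python sketch following RFC 3550's layout; the dictionary keys are chosen for clarity and are not from the patent):

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # PDT uses fixed version 2
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,      # 16-bit, wraps modulo 65536
        "timestamp": ts,             # 32-bit sampling instant
        "ssrc": ssrc,
    }
```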
Existing RTP jitter-elimination measures work well for PDT plaintext voice communication and have little effect on voice quality. Problems arise, however, when PDT enables end-to-end encryption. The main idea of PDT end-to-end voice encryption is that voice is encrypted at the sending end and decrypted at the receiving end, so the voice never exists in plaintext in any equipment node between the calling and called parties. Because a stream cipher neither propagates bit errors nor prevents the key stream from being precomputed, a stream cipher algorithm is adopted for PDT end-to-end voice encryption, which has strict real-time requirements. A stream cipher requires the calling and called parties to remain synchronized throughout its use: both must apply the same key stream for encryption or decryption starting from the same position. If frames are lost or extra frames are inserted during voice transmission, the key streams of the two parties fall out of step and the receiver's voice decryption fails.
When the PDT end-to-end encryption function is enabled, some unimportant bits of each speech frame, bits with little impact on perceived voice quality, are stolen to carry the speech frame sequence number used for key stream generation and synchronization. The sender's vocoder encodes 60 ms of plaintext voice into a speech frame, XORs it with the key stream, embeds the 12-bit speech frame sequence number in information bits of the frame, and outputs the ciphertext speech frame. On receiving a ciphertext speech frame, the receiver's vocoder first extracts the 12-bit speech frame sequence number, then XORs the encrypted voice with the key stream to restore the plaintext voice. The interaction of the mobile station with the security module is illustrated in fig. 2. The end-to-end voice encryption synchronization mechanism is shown in fig. 3.
In the PDT end-to-end voice encryption scheme, the end-to-end voice encryption synchronization procedure between the sender and the receiver specifically includes:
a sender flow:
1) determining the key TEK used for the call from the Key Index (KI), the group-call/individual-call identifier (G/I), the TMSI and the SMSI;
2) generating a new Initial Vector (IV) by a random number generator when a voice call starts;
3) computing the cryptographic checksum (CCSUM) of the end-to-end encryption control frame;
4) constructing and sending an end-to-end encryption control frame;
5) generating a key CK by using a key derivation algorithm;
6) generating a voice frame sequence number (FN) by a voice frame counter, wherein the counter is automatically increased when one frame of voice is encrypted;
7) calculating a key stream KS by using a key stream generation algorithm;
8) performing XOR on the KS and a frame of plaintext voice data to obtain ciphertext voice data;
9) carrying FN by using information bits in the ciphertext voice data to form a ciphertext voice frame and then sending the ciphertext voice frame.
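The XOR-and-steal-bits scheme of steps 6) through 9) can be sketched as below. The real PDT key-stream generation algorithm is not public, so SHAKE-128 stands in as a hypothetical generator, and placing FN in the lowest 12 bits of the frame is likewise an assumption for illustration:

```python
import hashlib

FRAME_BITS = 144          # PDT vocoder frame: 144 bits per 60 ms
FN_BITS = 12              # speech frame sequence number in stolen bits

def keystream(ck: bytes, fn: int, nbits: int = FRAME_BITS) -> int:
    """Hypothetical key-stream generator deriving nbits of key stream
    from the call key CK and the frame number FN (SHAKE-128 stand-in)."""
    return int.from_bytes(
        hashlib.shake_128(ck + fn.to_bytes(2, "big")).digest(nbits // 8),
        "big")

def encrypt_frame(ck: bytes, fn: int, plain_frame: int) -> int:
    """XOR a 144-bit plaintext frame with the key stream, then overwrite
    the FN_BITS stolen bits with FN (illustrative bit placement)."""
    cipher = plain_frame ^ keystream(ck, fn)
    return (cipher & ~((1 << FN_BITS) - 1)) | (fn & ((1 << FN_BITS) - 1))

def decrypt_frame(ck: bytes, cipher_frame: int) -> tuple[int, int]:
    """Extract FN from the stolen bits, then XOR with the same key stream.
    The stolen bits themselves are sacrificed, as the text describes."""
    fn = cipher_frame & ((1 << FN_BITS) - 1)
    return fn, cipher_frame ^ keystream(ck, fn)
```

Because XOR is bitwise, all bits outside the stolen FN field decrypt exactly; only the unimportant stolen bits are lost, which is the design trade-off the text describes.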
The flow of the receiving party:
1) obtaining G/I, TMSI, SMSI and other information from signaling such as GRANT/Grp_V_Ch_Usr/UU_V_Ch_Usr;
2) extracting KI from the received end-to-end encryption control frame;
3) determining a TEK used by the communication according to the KI, the G/I, TMSI and the SMSI;
4) after the correctness of CCSUM is verified, extracting IV;
5) generating a key CK by using a key derivation algorithm;
6) extracting FN from the ciphertext voice frame;
7) calculating a key stream KS by using a key stream generation algorithm;
8) XORing KS with the corresponding ciphertext voice to obtain the plaintext voice.
Fig. 4 depicts an end-to-end encrypted group call flow, in which MS1 is the mobile station initiating the end-to-end encrypted voice call, MS2 is a mobile station that enters late during the call, and GROUP denotes the talkgroup members already on the network before the call begins.
The existing end-to-end voice encryption technology can degrade call quality in the following two cases:
(1) when a ciphertext speech frame is lost at the receiving-side base station, a plaintext silence frame is sent in its place, because downlink data on the base station's air interface must be sent continuously.
(2) at the receiving-side base station, during continuous voice communication, if the network jitter delay exceeds a certain time, the system treats the corresponding voice frame as lost. To alleviate the resulting loss of multiple consecutive voice frames, the system temporarily increases the receive delay as the network jitter delay changes. To keep the base station's air-interface downlink transmitting continuously, one or more plaintext silence frames are sent to fill the data gap created by the increased delay.
In both cases, the wireless receiving terminal treats the received data as ciphertext speech, decrypts it into a frame of random speech data, and plays it as a frame-long burst of loud noise (a "pop"). This severely degrades voice call quality.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a method for improving the ciphertext voice quality at the receiver of a wireless digital communication system. It solves the degradation of encrypted voice call quality caused by dynamically adjusted delay and missing voice frames: the receiving MS can distinguish whether the voice carried over the air interface has lost frames or had extra voice frames added, and applies the corresponding compensation in each case, thereby improving encrypted voice call quality.
A method for improving the quality of ciphertext speech at a receiver of a wireless digital communication system, comprising the steps of:
Step S1: the sender MS initiates a voice call, and the sending TS sends RTP packets to the receiving TS; the receiving TS receives the RTP packets from the sending TS, orders them in the sequence generated by the sending TS, discards duplicate packets and expired RTP packets that arrive late, and finally places the ordered RTP packets into a receive buffer queue;
Step S2: according to the air-interface voice-frame transmission interval, the receiving TS takes the designated RTP packet from the receive buffer queue; if the fetch succeeds, the voice frame in the RTP packet is sent directly; if it fails, the TS determines whether additional delay needs to be introduced: if so, the air interface sends a plaintext additional frame and the TS continues waiting for the RTP packet; if not, the air interface sends a plaintext silence frame and the awaited RTP packet is skipped;
Step S3: the receiving MS receives the voice frames sent by the receiving TS at fixed times and processes each according to its type: a ciphertext speech frame is decrypted and played directly; for a plaintext additional frame, comfort noise or silence is played; for an invalid speech frame or a plaintext silence frame, comfort noise or silence is played and one frame of the decryption key stream is skipped.
Further, step S1 specifically includes:
S1.1, the receiving TS creates an empty receive buffer queue and an empty packet-loss queue, places the first received RTP packet together with its sequence number into the receive buffer queue, and stores the sequence number and timestamp of the first RTP packet; set SN_Next to the sequence number of the first RTP packet, then start step S2 in parallel;
S1.2, the receiving TS receives the next RTP packet, computes its extended sequence number from the sequence number and timestamp of this packet and those of the previous packet, and judges whether the extended sequence number of this packet is smaller than SN_Next; if so, i.e. this RTP packet has expired, execute step S1.3, otherwise jump to step S1.5;
S1.3, judge whether the extended sequence number of this RTP packet appears in the packet-loss queue; if so, compute the packet's expiration delay and record it in the packet-loss queue;
S1.4, discard the expired RTP packet and jump to step S1.6;
S1.5, using the extended sequence number of this RTP packet, eliminate duplicates and then insert the packet, together with its extended sequence number, into the receive buffer queue;
S1.6, judge whether the call has ended; if so, execute step S1.7, otherwise jump to step S1.2;
S1.7, notify the sending flow of step S2 that the call has ended, then end step S1.
Further, in step S1.2, the extended sequence number is calculated as follows:
S1.2.1, compute utdelta = (timestamp of this RTP packet) - TS_base, where TS_base is the timestamp of the previously received RTP packet; if utdelta is greater than MAX_TS_MISORDER, set utdelta = 4294967296 - utdelta. MAX_TS_MISORDER is a parameter.
S1.2.2, compute the extended sequence number SN = SN_base + utdelta / (timestamp step interval);
S1.2.3, update TS_base and SN_base to the timestamp and extended sequence number SN of this RTP packet, respectively.
Still further, the MAX _ TS _ MISORDER value is 2147483648.
Further, the specific flow of step S2 includes:
S2.1, according to the air-interface voice-frame transmission interval, fetch the RTP packet from the receive buffer queue of step S1 at fixed times, judging whether an RTP packet with extended sequence number SN_Next exists in the receive buffer queue; if so, execute step S2.2, otherwise jump to step S2.3; for the first RTP packet, SN_Next is the sequence number of the first RTP packet;
S2.2, take the RTP packet whose extended sequence number is SN_Next out of the queue, send the voice frame it carries over the air interface, and jump to step S2.5;
S2.3, record SN_Next and the current sending time into the packet-loss queue with the expiration delay initialized to 0, and judge whether additional delay needs to be introduced; if so, execute step S2.4, otherwise jump to step S2.5;
S2.4, clear the packet-loss queue, send a plaintext additional frame over the air interface, and jump to step S2.6;
S2.5, set SN_Next = SN_Next + 1, indicating that a new voice frame is to be fetched next time;
S2.6, judge whether a call-end notification has been received; if so, execute step S2.7, otherwise jump to step S2.1;
S2.7, end step S2.
Further, in step S2.3, whether additional delay needs to be introduced is determined as follows:
count the packets in the packet-loss queue whose extended sequence numbers lie between SN_Next - M and SN_Next and whose expiration delay is greater than 0; if this count is greater than N, additional delay needs to be introduced; otherwise no additional delay is needed. M and N are preset parameters.
Still further, the value of the parameter M is 30, and the value of the parameter N is 3.
Further, the specific flow of step S3 includes:
S3.1, according to the air-interface voice-frame transmission interval, the receiving MS receives a voice frame at fixed times and judges whether it is a valid voice frame; if so, execute step S3.2, otherwise jump to step S3.3;
S3.2, judge whether it is a plaintext silence frame; if so, execute step S3.3, otherwise jump to step S3.4;
S3.3, request one frame of the decryption key stream and discard it, so that subsequent voice decryption stays synchronized, then jump to step S3.5;
S3.4, judge whether it is a plaintext additional frame; if so, execute step S3.5, otherwise jump to step S3.6;
S3.5, play comfort noise or silence, and jump to step S3.7;
S3.6, request the decryption key stream, decrypt the received ciphertext voice frame, and play the decrypted voice;
s3.7, judging whether the call is ended, if so, executing a step S3.8, otherwise, jumping to the step S3.1;
s3.8 ends step S3.
The beneficial effects of the invention are: the degradation of encrypted voice call quality caused by dynamically adjusted delay and lost voice frames is resolved. Through cooperation between the receiving-side base station (or system) and the mobile station, different abnormal scenarios are distinguished and targeted compensation is applied, solving the voice decryption failures caused by loss of encryption/decryption synchronization between the two call parties.
Drawings
FIG. 1 is a schematic diagram illustrating the definition of RTP standard header structure;
FIG. 2 is a schematic diagram of the interaction of a mobile station with a security module;
FIG. 3 is a schematic diagram of an end-to-end voice encryption synchronization mechanism;
FIG. 4 is a schematic diagram of a group call flow with end-to-end encryption;
FIG. 5 is a schematic diagram of a system for implementing the embodiment of the present invention;
FIG. 6 is a flowchart illustrating an implementation of step S1 according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating an implementation of step S2 in an embodiment of the present invention;
FIG. 8 is a flowchart illustrating an implementation of step S3 according to an embodiment of the present invention;
fig. 9 is a timing chart of receiving and transmitting in embodiment 1 of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings. It should be noted that this embodiment is based on the technical solution and provides a detailed implementation and specific operation process, but the protection scope of the present invention is not limited to this embodiment.
As shown in fig. 5-8, a method for improving the quality of ciphertext speech at a receiving party of a wireless digital communication system, comprises the steps of:
Step S1: the sender MS initiates a voice call, and the sending TS sends RTP packets to the receiving TS; the receiving TS receives the RTP packets from the sending TS, orders them in the sequence generated by the sending TS, discards duplicate packets and expired RTP packets that arrive late, and finally places the ordered RTP packets into a receive buffer queue;
Step S2: according to the air-interface voice-frame transmission interval, the receiving TS takes the designated RTP packet from the receive buffer queue; if the fetch succeeds, the voice frame in the RTP packet is sent directly; if it fails, the TS determines whether additional delay needs to be introduced: if so, the air interface sends a plaintext additional frame and the TS continues waiting for the RTP packet; if not, the air interface sends a plaintext silence frame and the awaited RTP packet is skipped;
Step S3: the receiving MS receives the voice frames sent by the receiving TS at fixed times and processes each according to its type: a ciphertext speech frame is decrypted and played directly; for a plaintext additional frame, comfort noise or silence is played; for an invalid speech frame or a plaintext silence frame, comfort noise or silence is played and one frame of the decryption key stream is skipped.
Further, step S1 specifically includes:
S1.1, the receiving TS creates an empty receive buffer queue and an empty packet-loss queue, places the first received RTP packet together with its sequence number into the receive buffer queue, and stores the sequence number and timestamp of the first RTP packet; set SN_Next to the sequence number of the first RTP packet, then start step S2 in parallel;
S1.2, the receiving TS receives the next RTP packet, computes its extended sequence number from the sequence number and timestamp of this packet and those of the previous packet, and judges whether the extended sequence number of this packet is smaller than SN_Next; if so, i.e. this RTP packet has expired, execute step S1.3, otherwise jump to step S1.5;
S1.3, judge whether the extended sequence number of this RTP packet appears in the packet-loss queue; if so, compute the packet's expiration delay and record it in the packet-loss queue;
S1.4, discard the expired RTP packet and jump to step S1.6;
S1.5, using the extended sequence number of this RTP packet, eliminate duplicates and then insert the packet, together with its extended sequence number, into the receive buffer queue;
S1.6, judge whether the call has ended; if so, execute step S1.7, otherwise jump to step S1.2;
S1.7, notify the sending flow of step S2 that the call has ended, then end step S1.
Further, in step S1.2, the extended sequence number is calculated as follows:
S1.2.1, compute utdelta = (timestamp of this RTP packet) - TS_base, where TS_base is the timestamp of the previously received RTP packet; if utdelta is greater than MAX_TS_MISORDER, set utdelta = 4294967296 - utdelta. MAX_TS_MISORDER is a parameter that in practice can be adjusted according to actual needs and experience; in this embodiment its value is 2147483648.
S1.2.2, compute the extended sequence number SN = SN_base + utdelta / (timestamp step interval);
S1.2.3, update TS_base and SN_base to the timestamp and extended sequence number SN of this RTP packet, respectively.
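Steps S1.2.1 and S1.2.2 can be sketched as follows. One interpretation note: for a packet whose timestamp difference exceeds MAX_TS_MISORDER (a reordered, older packet), this sketch subtracts the magnitude 4294967296 - utdelta computed in S1.2.1, on the assumption that such a packet must map to a smaller extended sequence number:

```python
TS_WRAP = 1 << 32          # RTP timestamps are 32-bit unsigned
MAX_TS_MISORDER = 1 << 31  # embodiment value: 2147483648
TS_STEP = 480              # timestamp increment per 60 ms PDT frame

def extend_sequence(ts: int, ts_base: int, sn_base: int) -> int:
    """Derive the extended sequence number of the packet received now
    from its 32-bit timestamp and the previous packet's (TS_base, SN_base),
    following steps S1.2.1-S1.2.2."""
    utdelta = (ts - ts_base) % TS_WRAP
    if utdelta > MAX_TS_MISORDER:
        # Older than the base: step backwards by the wrapped magnitude.
        return sn_base - (TS_WRAP - utdelta) // TS_STEP
    return sn_base + utdelta // TS_STEP
```

The modulo arithmetic also handles a 32-bit timestamp wrap-around transparently: a small forward step across the wrap still yields a small positive utdelta.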
Further, the specific flow of step S2 includes:
S2.1, according to the air-interface voice-frame transmission interval, fetch the RTP packet from the receive buffer queue of step S1 at fixed times, judging whether an RTP packet with extended sequence number SN_Next exists in the receive buffer queue; if so, execute step S2.2, otherwise jump to step S2.3; for the first RTP packet, SN_Next is the sequence number of the first RTP packet;
S2.2, take the RTP packet whose extended sequence number is SN_Next out of the queue, send the voice frame it carries over the air interface, and jump to step S2.5;
S2.3, record SN_Next and the current sending time into the packet-loss queue with the expiration delay initialized to 0, and judge whether additional delay needs to be introduced; if so, execute step S2.4, otherwise jump to step S2.5;
S2.4, clear the packet-loss queue, send a plaintext additional frame over the air interface, and jump to step S2.6;
S2.5, set SN_Next = SN_Next + 1, indicating that a new voice frame is to be fetched next time;
S2.6, judge whether a call-end notification has been received; if so, execute step S2.7, otherwise jump to step S2.1;
S2.7, end step S2.
Further, in step S2.3, whether additional delay needs to be introduced is determined as follows:
count the packets in the packet-loss queue whose extended sequence numbers lie between SN_Next - M and SN_Next and whose expiration delay is greater than 0; if this count is greater than N, additional delay needs to be introduced; otherwise no additional delay is needed. M and N are preset parameters that in practice can be adjusted according to actual needs and experience; in this embodiment, M is 30 and N is 3.
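A minimal sketch of the S2.3 decision, assuming the packet-loss queue is modeled as a mapping from extended sequence number to recorded expiration delay (0 until the late packet actually arrives):

```python
def need_extra_delay(loss_queue: dict[int, float], sn_next: int,
                     m: int = 30, n: int = 3) -> bool:
    """Count recently lost packets in [sn_next - m, sn_next] that later
    arrived expired (delay > 0); more than n of them indicates the
    jitter window is too small and extra delay should be introduced."""
    late = [sn for sn, delay in loss_queue.items()
            if sn_next - m <= sn <= sn_next and delay > 0]
    return len(late) > n
```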
Further, the specific flow of step S3 includes:
S3.1, according to the air-interface voice-frame transmission interval, the receiving MS receives a voice frame at fixed times and judges whether it is a valid voice frame; if so, execute step S3.2, otherwise jump to step S3.3;
S3.2, judge whether it is a plaintext silence frame; if so, execute step S3.3, otherwise jump to step S3.4;
S3.3, request one frame of the decryption key stream and discard it, so that subsequent voice decryption stays synchronized, then jump to step S3.5;
S3.4, judge whether it is a plaintext additional frame; if so, execute step S3.5, otherwise jump to step S3.6;
S3.5, play comfort noise or silence, and jump to step S3.7;
S3.6, request the decryption key stream, decrypt the received ciphertext voice frame, and play the decrypted voice;
s3.7, judging whether the call is ended, if so, executing a step S3.8, otherwise, jumping to the step S3.1;
s3.8 ends step S3.
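The frame-type dispatch of step S3 can be sketched as below; the callback names on the `ms` object are assumptions for illustration, not PDT interfaces:

```python
from enum import Enum, auto

class FrameType(Enum):
    CIPHERTEXT = auto()      # normal ciphertext speech frame
    PLAIN_EXTRA = auto()     # plaintext additional frame (delay filler)
    PLAIN_SILENCE = auto()   # plaintext silence frame (replaces a loss)
    INVALID = auto()         # frame that failed air-interface reception

def handle_frame(ftype, ms):
    """Step S3 dispatch; `ms` is any object providing decrypt_play(),
    play_comfort_noise() and skip_keystream() (assumed callback names)."""
    if ftype is FrameType.CIPHERTEXT:
        ms.decrypt_play()                # S3.6: decrypt and play
    elif ftype is FrameType.PLAIN_EXTRA:
        ms.play_comfort_noise()          # S3.5: no key stream consumed
    else:  # invalid frame or plaintext silence frame
        ms.skip_keystream()              # S3.3: drop one key-stream frame
        ms.play_comfort_noise()          # S3.5
```

The key design point is visible in the branches: a plaintext additional frame fills delay without consuming key stream, while a silence or invalid frame replaces a lost ciphertext frame and therefore must burn one key-stream frame to stay synchronized.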
In the above method for improving the quality of ciphertext voice at the receiving end of a wireless digital communication system, for the sake of simplicity of description, the following assumptions are made:
(1) when the sending TS generates RTP packets from the sequence of voice frames, one voice frame (60 ms in PDT) corresponds to one RTP packet; each RTP packet carries a sequence number that increments continuously by 1, and the timestamp step interval between packets is fixed (480 in PDT). For an invalid voice frame (a frame the TS fails to receive because the sender MS's uplink transmission is disturbed), the sending TS must still reserve a sequence number for it even though it need not send the corresponding RTP packet. The receiving TS can therefore reconstruct the timing of the sender's voice over the air interface from the sequence numbers alone. For a sending TS not designed this way, a virtual sequence number satisfying this assumption can be constructed from the sequence number and timestamp in each RTP packet.
(2) The RTP packet sequence number is a 16-bit unsigned integer, so sequence-number wraparound (the sequence number increasing to 65535 and returning to 0) can occur in the above method. While executing the flow, the method described in sections A.1 and A.3 of Appendix A of RFC 3550 may be used to extend the 16-bit sequence number to a longer one (for example 32 or 64 bits, so that the extended sequence number will not overflow in practice). For example, the judgment in step S1.2 of whether a received RTP packet has expired is performed by comparing extended sequence numbers.
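As a rough illustration of such an extension, the following sketch implements the timestamp-based computation described later in claim 3 (steps S1.2.1 to S1.2.3). All names are invented here, and treating a backward timestamp step as a subtraction from the stored base is one reading of those steps, not the patent's normative code:

```python
# Illustrative sketch of the sequence-number extension (names invented).
# A 16-bit RTP sequence number wraps at 65535; extending it to a Python
# integer keeps ordering comparisons unambiguous.

MAX_TS_MISORDER = 2147483648   # parameter value suggested in claim 4
TS_STEP = 480                  # PDT timestamp step per 60 ms voice frame
TS_MOD = 4294967296            # 32-bit RTP timestamp modulus

class SeqExtender:
    def __init__(self, first_seq, first_ts):
        self.sn_base = first_seq   # extended sequence number of last packet
        self.ts_base = first_ts    # timestamp of last packet

    def extend(self, ts):
        """Return the extended sequence number for RTP timestamp ts."""
        utdelta = (ts - self.ts_base) % TS_MOD       # unsigned 32-bit delta
        if utdelta > MAX_TS_MISORDER:                # timestamp went backwards
            sn = self.sn_base - (TS_MOD - utdelta) // TS_STEP
        else:                                        # normal forward step
            sn = self.sn_base + utdelta // TS_STEP
        self.ts_base, self.sn_base = ts, sn          # S1.2.3: update the bases
        return sn
```

With the numbers of Example 1 below, a stream starting at sequence number 65530 and timestamp 1000480 crosses the 16-bit wrap cleanly: the packet whose raw sequence number is 0 (timestamp 1003360) is extended to 65536.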
Example 1
In the following embodiments, the PDT system is taken as an example, as shown in fig. 9:
First, the air-interface receiving and RTP sending flow of the sender TS:
Every 60 ms, the sender TS generates a corresponding RTP packet for the voice frame received over the air interface; each RTP packet is assigned a sequence number that increments continuously by 1, and the timestamp step interval of each packet is 480.
After receiving the V1 voice frame, the sender packs it into the first RTP packet with sequence number 65530 and timestamp 1000480; the V2 voice frame is packed into the second RTP packet with sequence number 65531 and timestamp 1000960; the remaining frames follow in the same way, as shown in Table 1.
TABLE 1

Voice frame   RTP sequence number                       Timestamp
V1            65530                                     1000480
V2            65531                                     1000960
V3            65532                                     1001440
V4            65533                                     1001920
V5            65534                                     1002400
V6            65535                                     1002880
V7            0                                         1003360
V8            1                                         1003840
V9            2                                         1004320
V10           3                                         1004800
V11           4                                         1005280
V12           5 (invalid frame; no RTP packet sent)     1005760
V13           6                                         1006240
V14           7                                         1006720
V15           8                                         1007200
V16           9                                         1007680
V17           10                                        1008160
V18           11                                        1008640
The V12 voice frame failed to be received because the sender MS uplink transmission was disturbed; the TS treats it as an invalid frame but still reserves sequence number 5 for it.
Thus, the receiver TS can determine the order of the voice frames and the time interval between successive voice frames simply from the RTP packet sequence numbers.
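The packing rule above can be sketched as follows. This helper is invented for illustration (it is not from the patent); it shows how one sequence number and one 480-tick timestamp step are consumed per air-interface frame, including by invalid frames for which no packet is sent:

```python
# Illustrative sketch of the sender-TS packing rule (names invented):
# every 60 ms frame consumes one RTP sequence number and one timestamp
# step, and an invalid frame still consumes both without producing a packet.

def pack_frames(frames, first_seq, first_ts):
    """frames: list of (frame_id, valid) in air-interface order.
    Returns the RTP packets actually sent, as (frame_id, seq, timestamp)."""
    packets = []
    seq, ts = first_seq, first_ts
    for frame_id, valid in frames:
        if valid:                        # normal frame: emit an RTP packet
            packets.append((frame_id, seq, ts))
        # invalid frame: its sequence number and timestamp are reserved
        seq = (seq + 1) % 65536          # 16-bit sequence number wraps
        ts = (ts + 480) % 4294967296     # 32-bit timestamp, PDT step 480
    return packets
```

Running this for V1 to V18 with V12 invalid reproduces Table 1: V1 maps to (65530, 1000480), V7 to (0, 1003360), V13 to (6, 1006240), and V12 produces no packet.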
Second, the RTP receiving and air-interface sending flow of the receiver TS:
1) After receiving the RTP packet containing V1, the receiver TS puts it (together with its sequence number) into the receiving buffer queue, sets SN_Next = 65530 (the RTP sequence number of V1), and starts the sending flow;
2) When the V1 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65530) is obtained, the V1 voice frame is transmitted, and SN_Next is set to 65531;
3) After receiving the RTP packet containing V2, the receiver calculates its extended sequence number as 65531; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
4) When the V2 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65531) is obtained, the V2 voice frame is transmitted, and SN_Next is set to 65532;
5) When the V3 air-interface transmission timing arrives, obtaining the packet with extended sequence number SN_Next (65532) fails because the V3 packet has not yet arrived; SN_Next (65532) and the current sending time are recorded in the packet loss queue, the expiration delays in the packet loss queue are analyzed, it is confirmed that no extra delay needs to be introduced for now, a plaintext mute frame is sent, and SN_Next is set to 65533.
6) After receiving the RTP packet containing V3, the receiver calculates its extended sequence number as 65532; since this is less than SN_Next, the packet has expired. Because its extended sequence number appears in the packet loss queue, its expiration delay (the current receiving time minus the sending time recorded in the packet loss queue), assumed here to be 30 ms, is calculated and recorded in the packet loss queue, and the expired RTP packet is discarded;
7) After receiving the RTP packet containing V4, the receiver calculates its extended sequence number as 65533; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
8) When the V4 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65533) is obtained, the V4 voice frame is transmitted, and SN_Next is set to 65534;
9) After receiving the RTP packet containing V5, the receiver calculates its extended sequence number as 65534; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
10) When the V5 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65534) is obtained, the V5 voice frame is transmitted, and SN_Next is set to 65535;
11) After receiving the RTP packet containing V6, the receiver calculates its extended sequence number as 65535; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
12) When the V6 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65535) is obtained, the V6 voice frame is transmitted, and SN_Next is set to 65536;
13) When the V7 air-interface transmission timing arrives, obtaining the packet with extended sequence number SN_Next (65536) fails because the V7 packet has not yet arrived; SN_Next (65536) and the current sending time are recorded in the packet loss queue, the expiration delays in the packet loss queue are analyzed, it is confirmed that no extra delay needs to be introduced for now, a plaintext mute frame is sent, and SN_Next is set to 65537.
14) After receiving the RTP packet containing V7, the receiver calculates its extended sequence number as 65536 from sequence number 0 and timestamp 1003360; since this is less than SN_Next, the packet has expired. Because its extended sequence number appears in the packet loss queue, its expiration delay (the current receiving time minus the recorded sending time), assumed here to be 30 ms, is calculated and recorded in the packet loss queue, and the expired RTP packet is discarded;
15) When the V8 air-interface transmission timing arrives, obtaining the packet with extended sequence number SN_Next (65537) fails; SN_Next (65537) and the current sending time are recorded in the packet loss queue, the expiration delays are analyzed, it is confirmed that no extra delay needs to be introduced for now, a plaintext mute frame is sent, and SN_Next is set to 65538.
16) After receiving the RTP packet containing V8, the receiver calculates its extended sequence number as 65537 from sequence number 1 and timestamp 1003840; since this is less than SN_Next, the packet has expired. Because its extended sequence number appears in the packet loss queue, its expiration delay, assumed here to be 30 ms, is calculated and recorded in the packet loss queue, and the expired RTP packet is discarded;
17) When the V9 air-interface transmission timing arrives, obtaining the packet with extended sequence number SN_Next (65538) fails; SN_Next (65538) and the current sending time are recorded in the packet loss queue, the expiration delays are analyzed, and it is confirmed that extra delay must be introduced; the packet loss queue is emptied, a plaintext additional frame is sent, and SN_Next remains 65538.
18) After receiving the RTP packet containing V9, the receiver calculates its extended sequence number as 65538 from sequence number 2 and timestamp 1004320; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
19) When the delayed V9 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65538) is obtained, the V9 voice frame is transmitted, and SN_Next is set to 65539;
20) After receiving the RTP packet containing V10, the receiver calculates its extended sequence number as 65539; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
21) When the V10 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65539) is obtained, the V10 voice frame is transmitted, and SN_Next is set to 65540;
22) After receiving the RTP packet containing V11, the receiver calculates its extended sequence number as 65540; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
23) When the V11 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65540) is obtained, the V11 voice frame is transmitted, and SN_Next is set to 65541;
24) When the V12 air-interface transmission timing arrives, obtaining the packet with extended sequence number SN_Next (65541) fails (V12 was an invalid frame, so no RTP packet was sent for it); SN_Next (65541) and the current sending time are recorded in the packet loss queue, the expiration delays are analyzed, it is confirmed that no extra delay needs to be introduced for now, a plaintext mute frame is sent, and SN_Next is set to 65542.
25) After receiving the RTP packet containing V13, the receiver calculates its extended sequence number as 65542; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
26) When the V13 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65542) is obtained, the V13 voice frame is transmitted, and SN_Next is set to 65543;
27) After receiving the RTP packet containing V14, the receiver calculates its extended sequence number as 65543; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
28) After receiving the RTP packet containing V15, the receiver calculates its extended sequence number as 65544; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
29) When the V14 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65543) is obtained, the V14 voice frame is transmitted, and SN_Next is set to 65544;
30) After receiving the RTP packet containing V16, the receiver calculates its extended sequence number as 65545; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
31) When the V15 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65544) is obtained, the V15 voice frame is transmitted, and SN_Next is set to 65545;
32) After receiving the RTP packet containing V17, the receiver calculates its extended sequence number as 65546; since this is not less than SN_Next, the packet has not expired and is put into the receiving buffer queue;
33) When the V16 air-interface transmission timing arrives, the packet with extended sequence number SN_Next (65545) is obtained, the V16 voice frame is transmitted, and SN_Next is set to 65546;
34) After receiving the RTP packet containing V18, the receiver calculates its extended sequence number as 65547; since this is not less than SN_Next, the packet has not expired, and it (together with its extended sequence number) is put into the receiving buffer queue.
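The per-tick decision traced in steps 1) to 34) above can be condensed into a small state machine. The sketch below is purely illustrative: the class, method names, and queue representations are invented, the sending-time bookkeeping is simplified so the expiration delay of a late packet is supplied directly, and M and N follow the values suggested in claims 6 and 7:

```python
# Condensed sketch of the receiver-TS sending decision (names invented).
# buffer: receiving buffer queue keyed by extended sequence number.
# lost:   packet loss queue mapping extended seq -> expiration delay (ms).

M, N = 30, 3   # preset parameters from the claims

class AirSender:
    def __init__(self, first_sn):
        self.sn_next = first_sn   # extended seq expected at the next tick
        self.buffer = {}
        self.lost = {}

    def on_expired_arrival(self, sn, delay_ms):
        """An already-skipped packet arrived late; record how late (S1.3)."""
        if sn in self.lost:
            self.lost[sn] = delay_ms

    def on_tick(self):
        """One air-interface frame interval; returns what is sent."""
        frame = self.buffer.pop(self.sn_next, None)
        if frame is not None:                     # S2.2: send the voice frame
            self.sn_next += 1
            return ("voice", frame)
        self.lost[self.sn_next] = 0               # S2.3: record the miss
        late = [sn for sn, d in self.lost.items()
                if self.sn_next - M <= sn <= self.sn_next and d > 0]
        if len(late) > N:                         # claim 6: too many recent
            self.lost.clear()                     # late packets, so wait:
            return ("additional", None)           # S2.4: plaintext additional
        self.sn_next += 1                         # S2.5: skip this frame
        return ("mute", None)                     # plaintext mute frame
```

Note that an additional frame leaves sn_next unchanged, so the sender keeps waiting for the same frame, exactly as in step 17) above where SN_Next stays at 65538.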
Third, the receiving and decryption flow of the receiver MS:
1) After receiving the V1 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
2) After receiving the V2 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
3) After receiving the V3 voice frame, the terminal judges it to be a plaintext mute frame, requests one frame of decryption keystream and discards it so that subsequent voice decryption remains synchronized, and then plays comfort noise or silence;
4) After receiving the V4 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
5) After receiving the V5 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
6) After receiving the V6 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
7) After receiving the V7 voice frame, the terminal judges it to be a plaintext mute frame, requests one frame of decryption keystream and discards it so that subsequent voice decryption remains synchronized, and then plays comfort noise or silence;
8) After receiving the V8 voice frame, the terminal judges it to be a plaintext mute frame, requests one frame of decryption keystream and discards it so that subsequent voice decryption remains synchronized, and then plays comfort noise or silence;
9) After receiving the V9 voice frame, the terminal judges it to be a plaintext additional frame, and plays comfort noise or silence;
10) After receiving the V9 voice frame sent with delay, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
11) After receiving the V10 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
12) After receiving the V11 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
13) After receiving the V12 voice frame, the terminal judges it to be a plaintext mute frame, requests one frame of decryption keystream and discards it so that subsequent voice decryption remains synchronized, and then plays comfort noise or silence;
14) After receiving the V13 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
15) After receiving the V14 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice;
16) After receiving the V15 voice frame, the terminal judges it to be an invalid voice frame (reception by the MS failed because the receiver TS downlink transmission was disturbed), requests one frame of decryption keystream and discards it so that subsequent voice decryption remains synchronized, and then plays comfort noise or silence;
17) After receiving the V16 voice frame, the terminal judges it to be a valid ciphertext voice frame, and directly decrypts and plays the ciphertext voice.
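The terminal-side handling traced above reduces to a dispatch on the frame type, whose essential point is which frame types consume a keystream block. The sketch below is illustrative only: the XOR stream cipher, the keystream iterator, and all names are assumptions for the example, not taken from the patent:

```python
# Illustrative sketch of the receiver-MS dispatch (steps S3.1 to S3.8).
# keystream yields one key block per 60 ms frame; consuming a block on
# mute/invalid frames is what keeps encryption and decryption in sync.

def handle_frame(frame_type, payload, keystream):
    """Return what the terminal plays for one received air-interface frame."""
    if frame_type == "cipher":              # valid ciphertext voice frame
        key = next(keystream)               # consume one key block
        return bytes(b ^ k for b, k in zip(payload, key))  # decrypt, play
    if frame_type in ("invalid", "mute"):   # invalid frame / plaintext mute
        next(keystream)                     # consume and discard a block
        return "comfort-noise"
    if frame_type == "additional":          # plaintext additional frame:
        return "comfort-noise"              # no key block is consumed
    raise ValueError(frame_type)
```

The asymmetry between "mute" and "additional" mirrors the flows above: a mute frame stands in for a frame that was skipped (its key block must be burned), while an additional frame stands in for a frame that is still coming (its key block must be kept).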
The method solves the degradation of encrypted voice quality caused by dynamic delay adjustment and missing voice frames. Through coordinated cooperation between the receiver base station (or system) and the mobile station, different abnormal scenarios are distinguished and compensated for in a targeted manner, which resolves the voice decryption failures caused by loss of encryption/decryption synchronization between the two sides.
Various changes and modifications can be made by those skilled in the art based on the above technical solutions and concepts, and all such changes and modifications should be included in the scope of the present invention.

Claims (8)

1. A method for improving the quality of ciphertext speech at a receiver of a wireless digital communication system, comprising the steps of:
step S1, the sender MS initiates a voice call, and the sender TS sends RTP packets to the receiver TS; the receiver TS receives the RTP packets from the sender TS, sorts them into the order in which the sender TS generated them, then discards duplicate packets and expired RTP packets that arrive late, and finally puts the sorted RTP packets into a receiving buffer queue;
step S2, the receiver TS fetches the designated RTP packet from the receiving buffer queue according to the air-interface voice frame sending time interval; if the fetch succeeds, the voice frame in the RTP packet is sent directly; if the fetch fails, it is determined whether extra delay needs to be introduced: if so, the air interface sends a plaintext additional frame and continues to wait for the RTP packet to be fetched; if not, the air interface sends a plaintext mute frame and skips the RTP packet to be fetched;
step S3, the receiver MS receives, at regular time, the voice frames sent by the receiver TS, and processes them according to frame type: a ciphertext voice frame is directly decrypted and played; for a plaintext additional frame, comfort noise or silence is played; for an invalid voice frame or a plaintext mute frame, comfort noise or silence is played and one frame of decryption keystream is skipped.
2. The method according to claim 1, wherein the step S1 specifically includes:
s1.1, the receiver TS sets up an empty receiving buffer queue and an empty packet loss queue, places the first received RTP packet (together with its sequence number) into the receiving buffer queue, and stores the sequence number and timestamp of the first RTP packet; SN_Next is set to the sequence number of the first RTP packet, and step S2 is then started in parallel;
s1.2, the receiver TS receives the next RTP packet, calculates an extended sequence number from the sequence number and timestamp in the RTP packet received this time and the sequence number and timestamp of the previous RTP packet, and judges whether the extended sequence number of the RTP packet received this time is less than SN_Next; if so, i.e., the RTP packet received this time has expired, step S1.3 is executed, otherwise jump to step S1.5;
s1.3, judging whether the extended sequence number of the RTP packet received this time appears in the packet loss queue; if so, calculating the packet's expiration delay and recording it in the packet loss queue;
s1.4, discarding the expired RTP packet received this time, and jumping to step S1.6;
s1.5, after eliminating duplicates according to the extended sequence number, inserting the RTP packet received this time (together with its extended sequence number) into the receiving buffer queue;
s1.6, judging whether the call has ended; if so, step S1.7 is executed, otherwise jump to step S1.2;
s1.7, notifying the sending flow corresponding to step S2 that the call has ended, and ending step S1.
3. The method for improving the quality of ciphertext voice at the receiver of a wireless digital communication system according to claim 2, wherein in step S1.2, the extended sequence number is calculated as follows:
s1.2.1, calculating utdelta = (timestamp in the RTP packet received this time) - TS_base, where TS_base is the timestamp of the last received RTP packet; if utdelta is greater than MAX_TS_MISORDER, setting utdelta = 4294967296 - utdelta; MAX_TS_MISORDER is a parameter;
s1.2.2, calculating the extended sequence number SN = SN_base + utdelta / (timestamp step interval);
s1.2.3, updating TS_base and SN_base to the timestamp and the extended sequence number SN of the RTP packet received this time, respectively.
4. The method of claim 3, wherein the value of MAX_TS_MISORDER is 2147483648.
5. The method as claimed in claim 1, wherein the specific process of step S2 includes:
s2.1, according to the air-interface voice frame sending time interval, fetching the RTP packet in the receiving buffer queue of step S1 at regular time, and judging whether an RTP packet with extended sequence number SN_Next exists in the receiving buffer queue; if so, step S2.2 is executed, otherwise jump to step S2.3; for the first RTP packet, SN_Next is the sequence number of the first RTP packet;
s2.2, taking the RTP packet whose extended sequence number is SN_Next out of the queue, sending the voice frame in the RTP packet over the air interface, and jumping to step S2.5;
s2.3, recording SN_Next and the current sending time into the packet loss queue with the expiration delay initialized to 0, and judging whether extra delay needs to be introduced; if so, step S2.4 is executed, otherwise jump to step S2.5;
s2.4, emptying the packet loss queue, then sending a plaintext additional frame over the air interface, and jumping to step S2.6;
s2.5, setting SN_Next = SN_Next + 1, indicating that a new voice frame is to be acquired next time;
s2.6, judging whether a notification that the call has ended is received; if so, step S2.7 is executed, otherwise jump to step S2.1;
s2.7, ending step S2.
6. The method of claim 5 for improving the quality of ciphertext voice at the receiver of a wireless digital communication system, wherein in step S2.3, whether extra delay needs to be introduced is determined as follows:
counting the number of packets in the packet loss queue whose extended sequence number lies between SN_Next - M and SN_Next and whose expiration delay is greater than 0; if this number is greater than N, extra delay needs to be introduced; otherwise, no extra delay needs to be introduced; M and N are preset parameters.
7. The method of claim 6, wherein the value of M is 30 and the value of N is 3.
8. The method as claimed in claim 1, wherein the specific process of step S3 includes:
s3.1, the receiver MS receives a voice frame at regular time according to the air-interface voice frame sending time interval, and judges whether it is a valid voice frame; if so, step S3.2 is executed, otherwise jump to step S3.3;
s3.2, judging whether the frame is a plaintext mute frame; if so, step S3.3 is executed, otherwise jump to step S3.4;
s3.3, requesting one frame of decryption keystream and discarding it, so that subsequent voice decryption remains synchronized, and jumping to step S3.5;
s3.4, judging whether the frame is a plaintext additional frame; if so, step S3.5 is executed, otherwise jump to step S3.6;
s3.5, playing comfort noise or silence, and jumping to step S3.7;
s3.6, requesting the decryption keystream, decrypting the received ciphertext voice frame, and playing the decrypted voice;
s3.7, judging whether the call has ended; if so, step S3.8 is executed, otherwise jump to step S3.1;
s3.8, ending step S3.
CN201810710872.5A 2018-07-03 2018-07-03 Method for improving cipher text voice quality of receiver of wireless digital communication system Active CN108933786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810710872.5A CN108933786B (en) 2018-07-03 2018-07-03 Method for improving cipher text voice quality of receiver of wireless digital communication system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810710872.5A CN108933786B (en) 2018-07-03 2018-07-03 Method for improving cipher text voice quality of receiver of wireless digital communication system

Publications (2)

Publication Number Publication Date
CN108933786A CN108933786A (en) 2018-12-04
CN108933786B true CN108933786B (en) 2021-04-09

Family

ID=64447298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810710872.5A Active CN108933786B (en) 2018-07-03 2018-07-03 Method for improving cipher text voice quality of receiver of wireless digital communication system

Country Status (1)

Country Link
CN (1) CN108933786B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217734B (en) * 2019-07-10 2022-11-18 海能达通信股份有限公司 Voice information synchronization method and communication system
CN112564995B (en) * 2019-09-25 2022-04-01 大唐移动通信设备有限公司 Method and base station for reducing voice packet loss statistics
CN111836214B (en) * 2020-07-08 2022-03-01 公安部第一研究所 Method for evaluating and improving voice quality of wireless digital communication system receiving party
CN112422370B (en) * 2020-11-20 2023-02-03 维沃移动通信有限公司 Method and device for determining voice call quality
CN114448954A (en) * 2021-12-30 2022-05-06 普强时代(珠海横琴)信息技术有限公司 Mute processing method and device, storage medium and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1549536A (zh) * 2003-05-09 2004-11-24 华为技术有限公司 Method for ordering to eliminate its jitter time delay by time stamp of RTP data pocket
CN1627747A (en) * 2003-12-09 2005-06-15 华为技术有限公司 Method of realizing dynamic adjusting dithered buffer in procedure of voice transmission
CN105516090A (en) * 2015-11-27 2016-04-20 刘军 Media play method, device and music teaching system
CN105939289A (en) * 2015-12-21 2016-09-14 小米科技有限责任公司 Network jitter processing method, network jitter processing device and terminal equipment
KR20160123562A (en) * 2015-04-16 2016-10-26 주식회사래피드정보통신 Receiver for processing data packet and data packet processing method of receiver
KR101677376B1 (en) * 2014-08-29 2016-11-17 영남대학교 산학협력단 APPARATUS FOR CONTROLLING SIZE OF VoIP PACKET AND METHOD THEREOF
CN107534589A (en) * 2015-04-14 2018-01-02 高通股份有限公司 De-jitter buffer updates


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research and Implementation of a Quality-of-Service Improvement Scheme for IP Telephony"; Liu Jianhua; China Master's Theses Full-text Database, Information Science and Technology; 2010-03-15; full text *

Also Published As

Publication number Publication date
CN108933786A (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN108933786B (en) Method for improving cipher text voice quality of receiver of wireless digital communication system
TWI419565B (en) Method for buffering packets of a media stream, system for buffering a media stream, device and chipset for transmitting, server and computer program product
KR100840146B1 (en) Method and apparatus for achieving crypto-synchronization in a packet data communication system
RU2423009C1 (en) Method and device to measure synchronisation of talk spurts reproduction within sentence without impact at audibility
RU2369040C2 (en) Buffering during data streaming
US7369662B2 (en) Maintaining end-to-end synchronization on a telecommunications connection
JP2007529967A (en) Efficient transmission of cryptographic information in a secure real-time protocol
UA76407C2 (en) Method and device (variants) for encrypting transmissions in a communication system
WO2009122831A1 (en) Concealment processing device, concealment processing method, and concealment processing program
US20040008844A1 (en) Changing a codec or MAC size without affecting the encryption key in packetcable communication
JP2008527899A (en) Apparatus and method for signal encryption / decryption in communication system
JP4600513B2 (en) Data transmission apparatus, transmission rate control method, and program
CN109714295B (en) Voice encryption and decryption synchronous processing method and device
US8594075B2 (en) Method and system for wireless VoIP communications
JP3838511B2 (en) Video compression encoding transmission / reception device
EP3185505B1 (en) Data packet transmission processing method and device
JP6333969B2 (en) Broadcast transmission / reception apparatus and broadcast transmission / reception method
US8306069B2 (en) Interleaved cryptographic synchronization
JP4655870B2 (en) Packet transmission / reception system and elapsed time measurement method
CN112217734B (en) Voice information synchronization method and communication system
CN106788959B (en) encryption voice synchronization method for PDT cluster system
Miyamoto et al. Mobile backhaul uplink jitter reduction techniques with optical-wireless cooperative control
Samarakoon et al. Encrypted video over TETRA
EP1627490B1 (en) Processor and method for end-to-end encryption synchronisation
EP1634406B1 (en) Processor, method, transmitter and terminal for use in communications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant