CN112217734A - Voice information synchronization method and communication system - Google Patents

Voice information synchronization method and communication system

Info

Publication number
CN112217734A
Authority
CN
China
Prior art keywords
frame
voice
sequence number
speech
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910621704.3A
Other languages
Chinese (zh)
Other versions
CN112217734B (en)
Inventor
罗正华
左银丽
祝志威
马琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hytera Communications Corp Ltd
Original Assignee
Hytera Communications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hytera Communications Corp Ltd filed Critical Hytera Communications Corp Ltd
Priority to CN201910621704.3A priority Critical patent/CN112217734B/en
Publication of CN112217734A publication Critical patent/CN112217734A/en
Application granted granted Critical
Publication of CN112217734B publication Critical patent/CN112217734B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/34 Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a voice information synchronization method and a communication system. The communication system comprises a transmitting device and a receiving device, and the synchronization method comprises the following steps: acquiring a first voice frame; setting voice frame information, which comprises a frame sequence number, for the first voice frame to form a second voice frame; and sending a plurality of second voice frames to the receiving device so that the receiving device plays the plurality of second voice frames according to the frame sequence numbers. By this method, voice frame disorder can be reduced.

Description

Voice information synchronization method and communication system
Technical Field
The present application relates to the field of wireless communication technologies, and in particular, to a method and a system for synchronizing voice information.
Background
In a wireless communication system, when a transmitting device sends voice according to an industry standard protocol (DMR, PDT, etc.), the mobile terminal sends a plurality of voice superframes, each of which comprises a plurality of voice frames that are delivered to the receiving device in a fixed timing relationship. In a cross-station encrypted voice call, network jitter and similar conditions can occur; to keep the voice frames delivered by the base station continuous and in order, a voice buffer is configured and frames are sent with a delay sized to the network jitter.
The inventors of the application have found in long-term research and development that existing encrypted voice calls add extra system access time to every call, and that when network jitter exceeds the voice buffering time, the voice frames delivered by the base station still arrive out of order, so that the terminal hears harsh noise in the received voice; during cross-station and cross-zone calls, word dropping is severe.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a voice information synchronization method and a communication system, which can solve the problem of voice frame disorder in the prior art.
In order to solve the technical problem, the application adopts a technical scheme that: a method for synchronizing voice information is provided, and the method comprises the following steps: acquiring a first voice frame; setting voice frame information for the first voice frame to form a second voice frame, wherein the voice frame information comprises a frame sequence number; and sending a plurality of second voice frames to a receiving device so that the receiving device plays the second voice frames according to the frame sequence number.
In order to solve the above technical problem, another technical solution adopted by the present application is: a method for synchronizing voice information is provided, and the method comprises the following steps: receiving a plurality of second voice frames and acquiring voice frame information from the second voice frames; acquiring a frame sequence number corresponding to the second voice frame from the voice frame information; and playing the plurality of second voice frames according to the frame sequence number.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a communication system including a transmitting device, a base station, and a receiving device, the communication system being configured to implement the steps of the above-described method for synchronizing voice information.
The beneficial effect of this application is: the application provides a synchronization method and a communication system of voice information, wherein the synchronization method of the voice information comprises the following steps: acquiring a first voice frame; setting voice frame information for the first voice frame to form a second voice frame, wherein the voice frame information comprises a frame sequence number; and sending the plurality of second voice frames to the receiving equipment so that the receiving equipment plays the plurality of second voice frames according to the frame sequence number. By acquiring the frame sequence number of the voice frame information in the voice frame, the playing sequence of the second voice frame can be obtained according to the frame sequence number, and the condition of voice frame disorder can be further reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart illustrating an embodiment of a method for synchronizing voice messages according to the present application;
FIG. 2 is a schematic diagram of the structure of the speech frame shown in FIG. 1;
FIG. 3 is a detailed flowchart of step S12 in the embodiment of FIG. 1;
FIG. 4 is a flow chart illustrating a method for synchronizing voice messages according to another embodiment of the present application;
FIG. 5 is a detailed flowchart of step S42 in the embodiment of FIG. 4;
FIG. 6 is a flowchart illustrating a specific process of step S43 in the embodiment of FIG. 4;
FIG. 7 is a schematic diagram of a plurality of second speech frames transmitted by the transmitting device in the embodiment of FIG. 4;
FIG. 8 is a schematic structural diagram of a plurality of second speech frames after being interfered in the embodiment of FIG. 7;
FIG. 9 is a schematic diagram of a plurality of second speech frames received by the receiving device in the embodiment of FIG. 4;
FIG. 10 is another specific flowchart of step S43 in the embodiment of FIG. 4;
FIG. 11 is a schematic diagram illustrating a detailed flow chart of step S43 in the embodiment of FIG. 4;
FIG. 12 is a diagram illustrating the structure of a plurality of second speech frames received by the receiving device before the handover in the embodiment of FIG. 11;
FIG. 13 is a diagram illustrating the structure of a plurality of second speech frames received by the receiving device after a handover in the embodiment of FIG. 11;
FIG. 14 is a schematic structural diagram of an embodiment of a communication system according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. Non-conflicting ones of the following embodiments may be combined with each other.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a method for synchronizing voice information according to the present application. The method disclosed in the present application is applied to a communication system that includes a transmitting device and a receiving device; an embodiment of the communication system is described in detail later. The execution body of the synchronization method of this embodiment is the transmitting device, and the synchronization method may specifically include the following steps:
step S11: a first speech frame is obtained.
A transmitting device acquires a first speech frame.
In a wireless communication system, according to an industry standard protocol, such as DMR (Digital Mobile Radio standard), PDT (Police Digital Trunking communication standard), etc., when a transmitting device transmits voice to a receiving device, the transmitting device may forward the voice to the receiving device through a base station, that is, the transmitting device transmits a voice superframe to the base station, and the base station transmits the voice superframe to the receiving device.
Of course, in other communication systems, the transmitting device may also send the voice superframe directly to the receiving device.
A voice superframe is 360 ms long and comprises a voice A frame, a voice B frame, a voice C frame, a voice D frame, a voice E frame and a voice F frame; the voice A frame carries voice synchronization information and is used to achieve timing synchronization for each superframe.
After receiving the voice A frame containing the voice synchronization information sent by the base station, the receiving device receives the voice B frame, the voice C frame, the voice D frame, the voice E frame and the voice F frame at intervals of 60 ms. Therefore, in this embodiment, the air interface timing sequence of the speech frames is A frame, B frame, C frame, D frame, E frame, F frame.
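For illustration only, the superframe layout described above can be sketched as follows in Python; the data structure and helper names are assumptions of this sketch and are not part of the claimed method.

from dataclasses import dataclass
from typing import List

# Air interface order of the six voice frames in one 360 ms superframe,
# as described above; each frame follows the previous one by 60 ms.
FRAME_TYPES = ["A", "B", "C", "D", "E", "F"]
FRAME_INTERVAL_MS = 60
SUPERFRAME_MS = len(FRAME_TYPES) * FRAME_INTERVAL_MS  # 360 ms

@dataclass
class VoiceFrame:
    frame_type: str   # "A".."F"; the A frame carries the voice synchronization information
    payload: bytes    # vocoder bits of the frame
    offset_ms: int    # position of the frame within the superframe

def build_superframe(payloads: List[bytes]) -> List[VoiceFrame]:
    """Lay six voice frame payloads out on the superframe timing grid."""
    assert len(payloads) == len(FRAME_TYPES)
    return [VoiceFrame(t, p, i * FRAME_INTERVAL_MS)
            for i, (t, p) in enumerate(zip(FRAME_TYPES, payloads))]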
Step S12: and setting voice frame information for the first voice frame to form a second voice frame, wherein the voice frame information comprises a frame sequence number.
The transmitting device sets speech frame information for the first speech frame to form a second speech frame.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the speech frame shown in fig. 1. The speech frame 100 includes two groups of 108 bits 12 and voice synchronization or embedded signaling 14. In this embodiment, a certain number of bits are freed within the two groups of 108 bits 12 of the first speech frame to carry the speech frame information, which includes at least a frame sequence number, thereby forming the second speech frame.
Step S13: and sending the plurality of second voice frames to the receiving equipment so that the receiving equipment plays the plurality of second voice frames according to the frame sequence number.
The transmitting device transmits a plurality of second speech frames to the receiving device.
When a transmitting device needs to send a plurality of first voice frames to a receiving device, a plurality of second voice frames are sent to a base station first, wherein the second voice frames include voice frame information, such as frame sequence numbers of the second voice frames.
While the second speech frames are being received, network jitter, packet loss and similar conditions may occur, so that the second speech frames received by the receiving device differ from the second speech frames transmitted by the transmitting device; for example, the air interface timing of the speech frames may differ, or other speech frames may be mixed in.
And the receiving equipment acquires the voice frame information from the second voice frame and acquires the frame sequence number corresponding to the second voice frame from the voice frame information.
In an encrypted voice call, the voice of each voice frame is encrypted with a different encryption parameter before transmission, so after receiving each encrypted voice frame the receiving device can decrypt it correctly, and thus play the correct voice, only by using the same encryption parameter as the transmitting side.
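The dependence of decryption on frame order can be made concrete with a toy sketch: here the per-frame encryption parameter is derived from the frame sequence number, and the keystream is a SHA-256-based construction chosen only for illustration; the patent does not specify the cipher or how its parameters are derived, so everything below is an assumption of the sketch.

import hashlib

def toy_keystream(key: bytes, frame_seq: int, length: int) -> bytes:
    """Toy keystream parameterized by the frame sequence number
    (illustration only; not the cipher used in this application)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + frame_seq.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt_frame(key: bytes, frame_seq: int, voice_bits: bytes) -> bytes:
    """XOR with the per-frame keystream; decrypting with the wrong frame
    sequence number (i.e. out of order) yields unusable audio."""
    keystream = toy_keystream(key, frame_seq, len(voice_bits))
    return bytes(b ^ k for b, k in zip(voice_bits, keystream))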
In this embodiment, after receiving a second speech frame sent by the transmitting device, the receiving device obtains the carried speech frame information from the second speech frame, thereby obtaining a frame sequence number corresponding to the second speech frame. And when the receiving equipment receives the second voice frames, acquiring a plurality of frame sequence numbers corresponding to the second voice frames.
After the receiving device obtains the plurality of second voice frames, if network jitter occurs, the second air interface timing sequences of the plurality of second voice frames may change, that is, the second air interface timing sequences of the plurality of second voice frames received by the receiving device are different from the first air interface timing sequences of the plurality of second voice frames sent by the transmitting device.
Because the frame sequence number of the second speech frame received by the receiving device is the same as the frame sequence number of the second speech frame sent by the transmitting device, after receiving the plurality of second speech frames, the receiving device can play the second speech frames without being influenced by the second air interface time sequence, and the plurality of second speech frames are played according to the sequence of the frame sequence numbers, so that disorder of the plurality of second speech frames caused by change of the air interface time sequence can be avoided.
The method for synchronizing the voice information in this embodiment can enable the receiving device to know the playing sequence of the second voice frame according to the frame sequence number by sending the second voice frame containing the frame sequence number to the receiving device, and can reduce the situation of voice frame disorder.
Alternatively, the present embodiment may implement step S12 by the method as described in fig. 3, and the method of the present embodiment includes steps S21 to S24.
Step S21: a frame number is set for the first speech frame.
The transmitting device fills the speech frame information into the first speech frame.
Specifically, the transmitting device sets a corresponding frame sequence number for each first speech frame, for example, sequentially sets the frame sequence numbers according to the sequence of the plurality of speech frames.
Step S22: and calculating the check code of the first voice frame according to the frame sequence number.
The transmitting device calculates a check code, such as a Cyclic Redundancy Check (CRC) code, for the first speech frame based on the frame number. The cyclic redundancy check code is a commonly used check code with error detection and correction capabilities, and is used for data check during synchronous communication.
Step S23: and cascading the frame serial number and the check code, and calculating the cascaded frame serial number and the check code to obtain the error correcting code.
The transmitting device concatenates the frame sequence number and the check code, and calculates the concatenated frame sequence number and the check code to obtain a Correction Error code, such as Forward Error Correction (FEC). The error detection of the voice information in the transmission process is verified by the receiving device, in the FEC mode, through forward error correction, the receiving device can not only find the error, but also determine the position of the binary symbol where the error occurs, so as to correct it, in this embodiment, the forward error correction is used to correct the transmission error code.
Step S24: and concatenating the frame sequence number, the check code and the error correction code into voice frame information.
The transmitting device concatenates the frame sequence number, the check code and the error correction code into the voice frame information, and then fills the voice frame information into the first voice frame to form the second voice frame.
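A minimal sketch of steps S21 to S24 follows. The application does not fix the width of the frame sequence number, the CRC polynomial or the FEC scheme, so the one-byte sequence number, the CRC-8 and the repetition-code "error correction code" below are placeholders chosen for illustration only.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Placeholder check code: a plain CRC-8 over the frame sequence number."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def fec_parity(data: bytes) -> bytes:
    """Placeholder error correction code: two extra copies of the data
    (a systematic repetition code the receiver can majority-vote over)."""
    return data * 2

def build_voice_frame_info(frame_seq: int) -> bytes:
    seq = frame_seq.to_bytes(1, "big")   # S21: frame sequence number
    check = bytes([crc8(seq)])           # S22: check code calculated from the sequence number
    ecc = fec_parity(seq + check)        # S23: error correction code over seq || check
    return seq + check + ecc             # S24: concatenated voice frame information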
The present application further proposes a synchronization method of voice information of another embodiment, where an execution subject of the embodiment is a receiving device, as shown in fig. 4, the synchronization method of the embodiment includes the following steps:
step S41: and receiving a plurality of second voice frames and acquiring voice frame information from the second voice frames.
And the receiving equipment receives a plurality of second voice frames and acquires voice frame information from the second voice frames.
Step S42: and acquiring the frame sequence number corresponding to the second voice frame from the voice frame information.
Specifically, the present embodiment may implement step S42 by the method as shown in fig. 5. The present embodiment includes step S51 and step S52.
Step S51: and carrying out error correction processing on the voice frame information to obtain a first frame sequence number and a check code.
Step S52: and carrying out check code error detection processing on the first frame sequence number according to the check code so as to obtain a frame sequence number corresponding to the second voice frame.
Steps S51 to S52 are also described below:
after receiving a plurality of second voice frames sent by the transmitting equipment or forwarded by the base station, the receiving equipment acquires voice frame information from the second voice frames, and performs error correction processing and error detection processing on the voice frame information, so as to obtain frame sequence numbers corresponding to the second voice frames.
Specifically, the receiving device performs FEC error correction on the voice frame information to obtain the first frame sequence number and the check code corresponding to the second voice frame, and then performs CRC error detection on the first frame sequence number to obtain the correct frame sequence number. From the plurality of frame sequence numbers obtained in this way, the timing relationship among the plurality of second air interface timings can be determined, so that situations in which the plurality of second voice frames arrive out of order, and the like, are handled correctly.
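A matching receiver-side sketch of steps S51 and S52 is given below; it reuses the placeholder crc8() and the seq || check || parity layout from the transmitter sketch above, which are assumptions of the sketch rather than the actual coding of this application.

from typing import Optional

def fec_correct(info: bytes) -> bytes:
    """Placeholder FEC decoder: byte-wise majority vote over the three repeated copies."""
    block = len(info) // 3
    copies = [info[i * block:(i + 1) * block] for i in range(3)]
    return bytes(max(set(col), key=col.count) for col in zip(*copies))

def recover_frame_seq(info: bytes) -> Optional[int]:
    corrected = fec_correct(info)   # S51: error correction processing
    seq, check = corrected[:-1], corrected[-1]
    if crc8(seq) != check:          # S52: check code error detection processing
        return None                 # voice frame information too damaged to trust
    return int.from_bytes(seq, "big")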
Step S43: and playing the plurality of second voice frames according to the frame sequence number.
Specifically, the present embodiment may implement step S43 by the method as shown in fig. 6. The present embodiment includes steps S61 to S63.
Step S61: and judging whether the first frame sequence number of the current second voice frame and the first frame sequence number of the previous second voice frame are continuous or not. If not, step S62 is executed, and if yes, step S63 is executed.
Referring to fig. 7 and fig. 8 together, fig. 7 is a schematic structural diagram of a plurality of second speech frames sent by the transmitting device in the embodiment of fig. 4, and fig. 8 is a schematic structural diagram of a plurality of second speech frames after being interfered in the embodiment of fig. 7; fig. 9 is a schematic diagram of a structure of a plurality of second speech frames received by the receiving device in the embodiment of fig. 4.
In this embodiment, the transmitting device sends 18 first voice frames; accordingly, the frame sequence numbers are 1 to 18, and the first air interface timing sequence is three groups of frames A to F. Among the second voice frames sent by the base station, due to network jitter, at least two second voice frames arrive out of order, for example the two second voice frames with frame sequence numbers 3 and 4.
The receiving device has received two second speech frames sent by the transmitting device or the base station; their second air interface timings are the A frame and the B frame, and the corresponding second frame sequence numbers are 1 and 2. Due to network jitter, the two second speech frames at second air interface timings C and D sent to the receiving device by the transmitting device or the base station are out of order: their second frame sequence numbers are 4 and 3 respectively, and the second frame sequence number currently acquired by the receiving device is 4.
And after receiving the current second frame sequence number, the receiving device judges whether the second frame sequence number of the current second voice frame and the second frame sequence number of the previous second voice frame are continuous or not. If not, step S62 is executed.
In this embodiment, the second frame sequence number of the current second speech frame is 4 and the second frame sequence number of the previous second speech frame is 2, so the receiving device determines that the current second speech frame sent by the transmitting device or the base station is not consecutive with the previous second speech frame; at this time, step S62 is executed.
Step S62: the current second speech frame is buffered and the replacement frame is played.
When the current second voice frame is not continuous with the previous second voice frame, the receiving device caches the current second voice frame in the receiving device and plays the replacement frame, so that the playing continuity of the plurality of second voice frames is ensured. In this embodiment, the replacement frame may be a comfort noise frame, and the replacement frame does not occupy the position of the air interface timing, so that it can be ensured that the frame number of the air interface timing does not change.
Step S63: and playing the buffered second voice frame.
When the receiving device then receives the second speech frame at second air interface timing D sent by the transmitting device or the base station, whose frame sequence number is 3, this second frame sequence number is consecutive with the second frame sequence number 2 of the B frame sent earlier, so the second speech frame at air interface timing D is played.
The frame sequence number of the second speech frame buffered by the receiving device is 4, which is consecutive with the second frame sequence number 3 of the speech frame just played, so the buffered second speech frame is then played. In effect, the receiving device swaps the two second speech frames at second air interface timings C and D, so that the decryption order of the second speech frames received by the receiving device matches the encryption order of the first speech frames sent by the transmitting device, ensuring correct decryption.
And the receiving equipment performs check code error detection processing on the first frame sequence number to obtain a second frame sequence number, and plays a plurality of second voice frames according to the second frame sequence number.
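The buffering behaviour of steps S61 to S63 can be sketched as follows; play_voice() and play_comfort_noise() are hypothetical playback hooks, and the frame sequence numbers are assumed to start at 1 as in the example above.

class ReorderPlayer:
    def __init__(self, play_voice, play_comfort_noise):
        self.play_voice = play_voice
        self.play_comfort_noise = play_comfort_noise
        self.buffer = {}        # frame sequence number -> buffered second voice frame
        self.last_played = 0    # sequence number of the last frame played

    def on_frame(self, frame_seq: int, payload: bytes) -> None:
        if frame_seq != self.last_played + 1:   # S61: not consecutive with the previous frame
            self.buffer[frame_seq] = payload    # S62: cache the current frame ...
            self.play_comfort_noise()           # ... and play a replacement frame
            return
        self.play_voice(payload)                # consecutive: play it directly
        self.last_played = frame_seq
        # S63: a buffered frame (e.g. number 4 cached while 3 was missing) now follows in order
        while self.last_played + 1 in self.buffer:
            self.last_played += 1
            self.play_voice(self.buffer.pop(self.last_played))

For the example above, feeding frames with sequence numbers 1, 2, 4, 3 into on_frame() plays 1, 2, a comfort noise frame, 3 and then the buffered 4, which matches the decryption order required by the transmitting side.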
In another embodiment, step S43 may also be implemented by a method as shown in fig. 10. The present embodiment includes steps S101 to S104.
Step S101: and judging whether the first frame sequence number of the current second voice frame and the first frame sequence number of the previous second voice frame are continuous or not. If not, step S102 is executed, and if yes, step S105 is executed.
Step S101 is similar to step S61 and will not be described here.
Step S102: and judging whether the current second voice frame is a mute frame or not. If yes, step S103 is executed, and if no, step S104 is executed.
Step S103: the replacement frame is played.
Step S104: the current second speech frame is buffered and the replacement frame is played.
Step S104 is similar to step S62 and will not be described here.
Step S105: and playing the buffered second voice frame.
Step S105 is similar to step S63 and is not described in detail here.
Steps S102 to S103 will be explained together as follows:
During transmission of the second speech frames by the transmitting device or the base station, network jitter may cause the base station to issue additional silence frames S, so that the second air interface timing of the second speech frames subsequently issued by the base station is shifted backward. For example, if the base station issues two extra silence frames S, the second air interface timing is shifted back by two positions: the second air interface timing then spans 20 frames, with an extra A frame and B frame issued, but the second frame sequence numbers are still 1 to 18.
When the receiving device receives a second speech frame, it determines whether the current second speech frame is a mute frame. If so, it plays a replacement frame, for example a comfort noise frame, in place of the current mute frame, so as to keep playback continuous, reduce noise, and reduce the system access time.
Meanwhile, because the replacement frame does not occupy a second air interface timing position, the number of second air interface timing positions of the second voice frames sent by the transmitting device or the base station can be determined from the subsequently received second frame sequence numbers, so that the received second voice frames can be adjusted and the decryption order of the second voice frames received by the receiving device remains consistent with the encryption order of the first voice frames sent by the transmitting device, ensuring correct decryption by the receiving device.
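The additional branch of steps S102 and S103 can be sketched on top of the hypothetical ReorderPlayer above; is_mute_frame() is an assumed helper for recognizing the mute frames issued by the base station.

def on_frame_with_mute_check(player: ReorderPlayer, frame_seq: int,
                             payload: bytes, is_mute_frame) -> None:
    if frame_seq != player.last_played + 1 and is_mute_frame(payload):
        player.play_comfort_noise()          # S103: replace the mute frame, keep playback continuous
        return                               # the replacement does not consume an air interface position
    player.on_frame(frame_seq, payload)      # otherwise fall back to steps S61 to S63 (S104, S105)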
In another embodiment, when the receiving device acquires the speech frame information from the second speech frames across a handover, step S43 can also be implemented by the method shown in fig. 11. The present embodiment includes step S111 and step S112.
S111: and acquiring first voice frame information from a second voice frame before handoff, acquiring second voice frame information from the second voice frame after handoff, and acquiring a frame sequence number before handoff according to the first voice frame information and a frame sequence number after handoff according to the second voice frame information.
Referring to fig. 12 and fig. 13 together, fig. 12 is a schematic structural diagram of a plurality of second speech frames received by the receiving device before the handover in the embodiment of fig. 11, and fig. 13 is a schematic structural diagram of a plurality of second speech frames received by the receiving device after the handover in the embodiment of fig. 11.
The receiving device receives the voice frame information of the encrypted call across a handover zone, acquires the first voice frame information from the second speech frames before the handover, and obtains the frame sequence numbers before the handover from the first voice frame information. In this embodiment, the receiving device acquires ten second speech frames before the handover; accordingly, their second air interface timings are A frame to F frame followed by A frame to D frame. The receiving device acquires the corresponding ten pieces of first voice frame information from the ten second speech frames and obtains the frame sequence numbers 1 to 10 before the handover from them.
Similarly, the receiving device acquires second voice frame information from the second speech frames after the handover and obtains the frame sequence numbers after the handover from the second voice frame information. In this embodiment, the receiving device acquires six second speech frames after the handover; accordingly, their second air interface timings are A frame to F frame. The receiving device acquires the corresponding six pieces of second voice frame information from the six second speech frames and obtains the frame sequence numbers 13 to 18 after the handover from them.
Specifically, the receiving device obtains the frame sequence number of the last second speech frame before the handover and the frame sequence number of the first second speech frame after the handover, and obtains the number of speech frames different between the second speech frame before the handover and the second speech frame after the handover according to the frame sequence number of the last second speech frame before the handover and the frame sequence number of the first second speech frame after the handover.
Specifically, the receiving device obtains the frame sequence number of the last second speech frame before the handover, 10, and the frame sequence number of the first second speech frame after the handover, 13. Two frame sequence numbers (11 and 12) lie between them, so the number of speech frames missing between the second speech frame before the handover and the second speech frame after the handover is 2.
S112: and playing the second voice frame according to the frame sequence number before the handoff and the frame sequence number after the handoff.
Specifically, the receiving device obtains the number of speech frames missing between the frames before and after the handover, offsets the frame sequence number of the last second speech frame before the handover by that number, and plays the second speech frame corresponding to the resulting frame sequence number.
In this embodiment, the frame sequence number of the last second speech frame before the handover is 10 and the number of missing speech frames is 2; offsetting past the two missing frame sequence numbers gives 13. After the second speech frame with frame sequence number 10 is played, the second speech frame with frame sequence number 13 is played directly.
When the second speech frames are received across a handover, obtaining their frame sequence numbers makes the timing relationship between the last second speech frame before the handover and the first second speech frame after the handover known, and by skipping the lost second speech frames the second speech frames after the handover can be decrypted correctly and directly.
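Using the same hypothetical ReorderPlayer, the cross-zone handling of steps S111 and S112 reduces to skipping the lost sequence numbers before resuming playback; the helper names are assumptions of this sketch.

def handover_offset(last_seq_before: int, first_seq_after: int) -> int:
    """Number of voice frames lost during the handover, e.g. 10 -> 13 gives 2."""
    return first_seq_after - last_seq_before - 1

def resume_after_handover(player: ReorderPlayer,
                          first_seq_after: int, payload: bytes) -> None:
    lost = handover_offset(player.last_played, first_seq_after)
    player.last_played += lost                 # skip the frame sequence numbers lost in the handover
    player.on_frame(first_seq_after, payload)  # frame 13 now follows frame 10 directly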
According to the method described above, by carrying voice frame information containing a frame sequence number, a check code and an error correction code in each first voice frame, the receiving device performs error correction and error detection on the acquired voice frame information to obtain the second frame sequence number, and determines the playing order of the second voice frames from the second frame sequence number, so that even when a plurality of second voice frames arrive out of order they can be synchronized and correctly decrypted without relying on the second air interface timing of the second voice frames. By replacing mute frames with replacement frames, the receiving device keeps the playback of the plurality of second voice frames continuous, reduces noise, and reduces the access time of the communication system. And when voice frames are lost during cross-zone handover, the second voice frames can still be decrypted and played correctly, avoiding the cross-zone word dropping caused by encryption.
Corresponding to the method for synchronizing voice information in the foregoing embodiments, the present application provides a communication system. Specifically, please refer to fig. 14, which is a schematic structural diagram of an embodiment of a communication system of the present application. The communication system 200 disclosed herein includes a transmitting device 22 and a receiving device 26.
The transmitting device 22 acquires a first voice frame and sets voice frame information, which includes a frame sequence number, for the first voice frame to form a second voice frame; the transmitting device 22 sends a plurality of second voice frames to the receiving device 26; the receiving device 26 receives the plurality of second voice frames and obtains the voice frame information from the second voice frames; the receiving device 26 obtains the frame sequence number corresponding to each second voice frame from the voice frame information, and plays the plurality of second voice frames according to the frame sequence numbers.
The communication system 200 is further configured to implement the steps of the synchronization method according to any of the above embodiments.
The communication system 200 according to this embodiment can reduce the occurrence of speech frame misordering.
In other communication systems, when transmitting voice to a receiving device, a transmitting device may forward the voice through a base station, that is, the transmitting device transmits a voice superframe to the base station, and the base station transmits the voice superframe to the receiving device.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for synchronizing voice information, the method comprising:
acquiring a first voice frame;
setting voice frame information for the first voice frame to form a second voice frame, wherein the voice frame information comprises a frame sequence number;
and sending a plurality of second voice frames to a receiving device so that the receiving device plays the second voice frames according to the frame sequence number.
2. The synchronization method of claim 1, wherein the voice frame information further comprises a check code and an error correction code, and the step of setting the voice frame information for the first voice frame to form a second voice frame comprises:
setting the frame sequence number for the first voice frame;
calculating the check code of the first voice frame according to the frame sequence number;
concatenating the frame sequence number and the check code, and performing a calculation on the concatenated frame sequence number and check code to obtain an error correction code;
and concatenating the frame sequence number, the check code and the error correction code into the voice frame information.
3. A method for synchronizing voice information, the method comprising:
receiving a plurality of second voice frames and acquiring voice frame information from the second voice frames;
acquiring a frame sequence number corresponding to the second voice frame from the voice frame information;
and playing the plurality of second voice frames according to the frame sequence number.
4. The synchronization method according to claim 3, wherein the step of obtaining the frame sequence number corresponding to the second speech frame from the speech frame information comprises:
carrying out error correction processing on the voice frame information to obtain a first frame sequence number and a check code;
and carrying out check code error detection processing on the first frame sequence number according to the check code so as to obtain a frame sequence number corresponding to the second voice frame.
5. The synchronization method according to claim 3, wherein the step of playing the second speech frames according to the frame sequence number comprises:
judging whether the first frame sequence number of the current second voice frame and the first frame sequence number of the previous second voice frame are continuous or not;
if not, caching the current second voice frame and playing a replacement frame;
and if so, playing the second voice frame which is cached.
6. The synchronization method of claim 5, further comprising:
if the first frame sequence number of the current second voice frame is judged to be discontinuous with the first frame sequence number of the previous second voice frame, judging whether the current second voice frame is a mute frame;
if the current second voice frame is a mute frame, playing a replacement frame;
and if the current second speech frame is not a mute frame, executing the steps of caching the current second speech frame and playing a replacement frame.
7. The synchronization method according to claim 3, wherein the step of playing the second speech frames according to the frame sequence number comprises:
when the voice frame information is acquired from the second voice frame in a cross-region mode, acquiring first voice frame information from the second voice frame before the cross-region mode, acquiring second voice frame information from the second voice frame after the cross-region mode, and acquiring a frame sequence number before the cross-region mode according to the first voice frame information and a frame sequence number after the cross-region mode according to the second voice frame information;
and playing the second voice frame according to the frame sequence number before the handover and the frame sequence number after the handover.
8. The method of claim 7, wherein the step of obtaining the speech frame sequence number before and after the handover based on the speech frame information comprises:
obtaining the frame sequence number of the last second voice frame before the handover and the frame sequence number of the first second voice frame after the handover;
and obtaining the number of the voice frames different between the second voice frame before the handoff and the second voice frame after the handoff according to the frame sequence number of the last second voice frame before the handoff and the frame sequence number of the first second voice frame after the handoff.
9. The method of claim 8, wherein the step of playing the second speech frame based on the frame sequence number before the handoff and the frame sequence number after the handoff comprises:
acquiring the frame sequence number obtained after the frame sequence number of the last second voice frame before the handover is offset by the number of voice frames; and
playing the second voice frame corresponding to the frame sequence number obtained after the offset.
10. A communication system, characterized in that the communication system comprises a transmitting device, a base station and a receiving device, the communication system being configured to implement the steps of the method for synchronizing speech information according to any of claims 1-9.
CN201910621704.3A 2019-07-10 2019-07-10 Voice information synchronization method and communication system Active CN112217734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910621704.3A CN112217734B (en) 2019-07-10 2019-07-10 Voice information synchronization method and communication system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910621704.3A CN112217734B (en) 2019-07-10 2019-07-10 Voice information synchronization method and communication system

Publications (2)

Publication Number Publication Date
CN112217734A true CN112217734A (en) 2021-01-12
CN112217734B CN112217734B (en) 2022-11-18

Family

ID=74048165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621704.3A Active CN112217734B (en) 2019-07-10 2019-07-10 Voice information synchronization method and communication system

Country Status (1)

Country Link
CN (1) CN112217734B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010016494A1 (en) * 2000-02-23 2001-08-23 Takanori Hayashi System for avoiding congestion in mobile communication and method of doing the same
CN202050421U (en) * 2010-09-21 2011-11-23 公安部第一研究所 End-to-end encrypted speech processing device
CN102006593A (en) * 2010-10-29 2011-04-06 公安部第一研究所 End-to-end voice encrypting method for low-speed narrowband wireless digital communication
US20140146695A1 (en) * 2012-11-26 2014-05-29 Kwangwoon University Industry-Academic Collaboration Foundation Signal processing apparatus and signal processing method thereof
CN108933786A (en) * 2018-07-03 2018-12-04 公安部第研究所 Method for improving radio digital communication system recipient's ciphertext voice quality

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115602171A (en) * 2022-12-13 2023-01-13 广州小鹏汽车科技有限公司(Cn) Voice interaction method, server and computer readable storage medium

Also Published As

Publication number Publication date
CN112217734B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
RU2242095C2 (en) Effective in-band signal transfer for discontinuous transmission and change in configuration of communication systems for variable-speed adaptive signal transfer
US10887048B2 (en) Bluetooth transmission using low density parity check
WO2021057461A1 (en) Method for polar code segment encoding, and communication device
WO2006039001A2 (en) Crypto-synchronization for secure communication
CN108933786B (en) Method for improving cipher text voice quality of receiver of wireless digital communication system
KR20100085925A (en) Extraction of values from partially-corrupted data packets
KR101331431B1 (en) Communication terminal, method for receiving data and computer program product
JP2001520841A (en) Method and apparatus for encrypting information transmission
US11785502B2 (en) Selective relay of data packets
US10805045B2 (en) Polar code encoding method and device and polar code decoding method and device
KR100603909B1 (en) A method and a system for transferring AMR signaling frames on halfrate channels
CN112217734B (en) Voice information synchronization method and communication system
US8102857B2 (en) System and method for processing data and control messages in a communication system
WO2002021728A2 (en) Methods and systems for multiplexing and decoding variable length messages in digital communications systems
US20060039325A1 (en) System and method for decoding signalling messages on FLO HR channels
ES2715273T3 (en) Decryption method and apparatus for a packet data convergence protocol layer in a wireless communication system
JPH11177527A (en) Method and device for data transmission for cdma
GB2358562A (en) Radio communications system for use with two different standards/protocols
WO2011151805A1 (en) Methods and apparatus for controlling location for starting decoding of sub-packets of a communication packet
CN106788959B (en) encryption voice synchronization method for PDT cluster system
WO2021003707A1 (en) Synchronization method for voice information and communication system
KR20080053230A (en) Method and apparatus for handling reordering in a wireless communications system
WO2018202062A1 (en) Method and device for downlink synchronization
CN108811076B (en) Downlink synchronization method and device
EP4351048A1 (en) Communication method and communication apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant