CN113037456A - Voice synchronization method, device and system and related equipment - Google Patents

Voice synchronization method, device and system and related equipment

Info

Publication number
CN113037456A
CN113037456A (application CN201911349463.8A)
Authority
CN
China
Prior art keywords
voice
frame
transmission channel
target
target type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911349463.8A
Other languages
Chinese (zh)
Other versions
CN113037456B (en)
Inventor
陈卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hytera Communications Corp Ltd
Original Assignee
Hytera Communications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hytera Communications Corp Ltd
Priority to CN201911349463.8A priority Critical patent/CN113037456B/en
Publication of CN113037456A publication Critical patent/CN113037456A/en
Application granted granted Critical
Publication of CN113037456B publication Critical patent/CN113037456B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 7/00 Arrangements for synchronising receiver with transmitter
    • H04L 7/0008 Synchronisation information channels, e.g. clock distribution lines

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Synchronisation In Digital Transmission Systems (AREA)

Abstract

The application provides a voice synchronization method, device, system and related equipment, based on a transmitter sending a voice superframe at preset time intervals, where the voice superframe includes a target type voice frame and a plurality of non-target type voice frames, the target type voice frame includes a synchronization word, and at least one voice frame of the plurality of non-target type voice frames includes link control signaling LC. The method is applied to a receiver and includes the following steps: searching whether a portion identical to the synchronization word exists in the voice frames received from a transmission channel; if so, extracting a voice frame from the voice frames received on the transmission channel based on the portion identical to the synchronization word; if not, searching whether a portion identical to the LC exists in the voice frames received from the transmission channel; and if so, extracting a voice frame from the voice frames received on the transmission channel based on the portion identical to the LC. In this way, the reliability of voice synchronization can be improved.

Description

Voice synchronization method, device and system and related equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method, an apparatus, a system, and a related device for voice synchronization.
Background
In a call service of a DMR (Digital Mobile Radio) system or a PDT (Police Digital Trunking) system, a sender needs to pack voice data into voice frames and then send a voice frame every preset time. In order to maintain a reliable call with the sender, the receiver needs to perform voice frame synchronization.
However, how to perform speech frame synchronization reliably remains a problem.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present application provide a method, an apparatus, a system, and related devices for voice synchronization, so as to achieve the purpose of improving reliability of voice synchronization, and the technical solution is as follows:
A voice synchronization method is based on a transmitter sending a voice superframe every preset time, the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronization word, at least one voice frame in the plurality of non-target type voice frames comprises a link control signaling LC, the method is applied to a receiver, and the method comprises the following steps:
searching for whether there is a portion identical to the sync word in a speech frame received from a transmission channel;
if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word;
if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel;
and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
Preferably, if M non-target type voice frames in a voice super frame transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, and M is smaller than the total number of non-target type voice frames in a voice super frame transmitted by the transmitter, searching whether a part identical to the LC exists in the voice frames received from the transmission channel, includes:
determining a part LC which is in the front of the voice superframe and is not used as a target LC;
and searching whether a part identical to the target LC exists in the voice frames received from the transmission channel, and if the part identical to the target LC does not exist, returning to the step of determining that the part LC which is in the front of the voice superframe and is not used is the target LC until the part identical to the target LC is searched.
Preferably, if N non-target type voice frames in a voice super frame transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, where N is equal to the total number of non-target type voice frames in a voice super frame transmitted by the transmitter, searching whether there is a part identical to the LC in the voice frames received from the transmission channel, including:
determining a part LC which is in the front of the voice superframe and is not used as a target LC;
and searching whether a part identical to the target LC exists in the voice frames received from the transmission channel, and if the part identical to the target LC does not exist, returning to the step of determining that the part LC which is in the front of the voice superframe and is not used is the target LC until the part identical to the target LC is searched.
A method for voice synchronization applied to a transmitter, the method comprising:
sending a voice superframe at intervals of preset time, wherein the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronous word, and at least one voice frame in the plurality of non-target type voice frames comprises an LC (Link Control), so that a receiver searches whether a part same as the synchronous word exists in the voice frames received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
A voice synchronization device, based on that a transmitter sends a voice superframe every preset time, the voice superframe includes a target type voice frame and a plurality of non-target type voice frames, the target type voice frame includes a synchronization word, at least one voice frame in the plurality of non-target type voice frames includes a link control signaling LC, the device is applied to a receiver, and the device includes:
a first searching module, configured to search, in a speech frame received from a transmission channel, whether a portion identical to the sync word exists;
a first extraction module, configured to, if there is a portion identical to the sync word in a speech frame received from a transmission channel, extract the speech frame from the speech frame received from the transmission channel based on the portion identical to the sync word;
a second searching module, configured to search whether a part identical to the LC exists in a speech frame received from a transmission channel if the part identical to the sync word does not exist in the speech frame received from the transmission channel;
and the second extraction module is used for extracting the voice frame from the voice frame received on the transmission channel based on the part which is the same as the LC if the part which is the same as the LC exists in the voice frame received from the transmission channel.
Preferably, if M non-target type voice frames in a voice superframe transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, and M is smaller than the total number of non-target type voice frames in a voice superframe transmitted by the transmitter, the second searching module includes:
a first determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and the first searching sub-module is used for searching whether a part which is the same as the target LC exists in the voice frame received from the transmission channel, and if the part which is the same as the target LC does not exist in the voice frame, the step of determining the part LC which is in the front of the voice superframe and is not used as the target LC is returned to be executed until the part which is the same as the target LC is searched.
Preferably, if N non-target type voice frames in a voice super frame transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, where N is equal to the total number of non-target type voice frames in a voice super frame transmitted by the transmitter, the second searching module includes:
a second determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and a second searching sub-module, configured to search whether a portion identical to the target LC exists in the voice frame received over the transmission channel, and if the portion identical to the target LC does not exist, return to the step of determining that the portion LC that is in the preceding sequence and is not used in the voice superframe is the target LC until the portion identical to the target LC is found.
A voice synchronization apparatus applied to a transmitter, the apparatus comprising:
a sending module, configured to send a voice superframe every preset time, where the voice superframe includes a target type voice frame and multiple non-target type voice frames, the target type voice frame includes a sync word, and at least one of the multiple non-target type voice frames includes an LC, so that a receiver searches for a portion, which is the same as the sync word, in a voice frame received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
A receiver, based on a transmitter sending a voice superframe every preset time, the voice superframe including a target type voice frame and a plurality of non-target type voice frames, the target type voice frame including a synchronization word, at least one of the plurality of non-target type voice frames including a link control signaling LC, the receiver comprising: a processor, a memory, and a data bus through which the processor and the memory communicate;
the memory is used for storing programs;
the processor is used for executing the program;
the program is specifically for:
searching for whether there is a portion identical to the sync word in a speech frame received from a transmission channel;
if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word;
if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel;
and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
A speech synchronization system comprising: a transmitter and a receiver as described above;
the transmitter is configured to send a voice superframe once every preset time, where the voice superframe includes a target type voice frame and multiple non-target type voice frames, the target type voice frame includes a synchronization word, and at least one of the multiple non-target type voice frames includes an LC.
Compared with the prior art, the beneficial effects of the present application are as follows:
In the present application, based on a frame structure in which a speech superframe includes a target type speech frame and a plurality of non-target type speech frames, the target type speech frame includes a synchronization word, and at least one of the non-target type speech frames includes link control signaling LC, the receiver can search the received speech frames for an identical portion using different contents (such as the synchronization word and the LC). Speech synchronization can therefore still be performed quickly even when the speech frame to which the synchronization word belongs is lost, which improves the reliability of speech synchronization.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a method for voice synchronization provided herein;
FIG. 2 is a schematic diagram of one result of the speech synchronization provided herein;
FIG. 3 is a flow chart of another method of voice synchronization provided herein;
FIG. 4 is a flow chart of yet another method of voice synchronization provided herein;
FIG. 5 is a schematic diagram of a logic structure of a voice synchronization apparatus provided in the present application;
fig. 6 is a schematic structural diagram of a receiver provided in the present application;
fig. 7 is a schematic structural diagram of a speech synchronization system provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a voice synchronization method, which is based on a transmitter sending a voice superframe at preset time intervals, where the voice superframe includes a target type voice frame and a plurality of non-target type voice frames, the target type voice frame includes a synchronization word, at least one voice frame of the plurality of non-target type voice frames includes link control signaling LC, and the method is applied to a receiver. The method includes the following steps: searching whether a portion identical to the sync word exists in the voice frames received from a transmission channel; if so, extracting a voice frame from the voice frames received on the transmission channel based on the portion identical to the sync word; if not, searching whether a portion identical to the LC exists in the voice frames received from the transmission channel; and if so, extracting a voice frame from the voice frames received on the transmission channel based on the portion identical to the LC. In this way, the reliability of voice synchronization can be improved.
Next, a speech synchronization method disclosed in an embodiment of the present application is introduced. A transmitter sends a speech superframe at intervals of a preset time, where the speech superframe includes a target type speech frame and a plurality of non-target type speech frames, the target type speech frame includes a synchronization word, and at least one speech frame of the plurality of non-target type speech frames includes an LC (Link Control). A voice superframe can be understood as a group of speech frames consisting of a plurality of speech frames; for example, a voice superframe in the DMR (Digital Mobile Radio) protocol includes an A frame, a B frame, a C frame, a D frame, an E frame, and an F frame.
It is understood that, in the case that a certain non-target type voice frame in a voice superframe includes LC, the LC included in the non-target type voice frame is complete LC.
Under the condition that a plurality of non-target type voice frames in a voice superframe respectively comprise partial LCs, information formed by the partial LCs respectively comprised by each non-target type voice frame is complete LC, and the partial LCs respectively comprised by each non-target type voice frame are different from each other. Partial LC can be understood as: a portion of a complete LC. Preferably, the length of the partial LC included in each non-target type speech frame is the same, for example, if 4 non-target type speech frames in a speech super-frame include LC, the complete LC is 72 bits, and the partial LC included in each of the four non-target type speech frames is 18 bits.
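For concreteness, the frame layout described above can be sketched as follows. This is a minimal illustration only: the 72-bit complete LC and 18-bit fragments come from the example just given, the 48-bit sync word from the PDT example mentioned later, while the 264-bit frame length and the field names are assumptions introduced here for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

SYNC_WORD_BITS = 48      # sync word length (PDT example given below)
FULL_LC_BITS = 72        # complete link control signaling LC
PARTIAL_LC_BITS = 18     # 72 bits spread over four non-target type frames
FRAME_BITS = 264         # assumed on-air length of one voice frame

@dataclass
class VoiceFrame:
    name: str                      # "A" (target type) or "B".."F" (non-target type)
    voice_bits: str                # vocoder payload (placeholder)
    embedded_bits: Optional[str]   # sync word, a partial LC fragment, or None

@dataclass
class VoiceSuperframe:
    frames: List[VoiceFrame]       # ordered A, B, C, D, E, F

    def complete_lc(self) -> str:
        """Reassemble the full LC from the partial fragments, in superframe order."""
        return "".join(f.embedded_bits for f in self.frames[1:] if f.embedded_bits)
```

Keeping the embedded field (sync word or LC fragment) separate from the vocoder payload mirrors the text: it is exactly this embedded field that the receiver later searches for.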
Based on the above description, fig. 1 shows a flowchart of embodiment 1 of a speech synchronization method provided by the present application. The method is applied to a receiver and may include the following steps:
step S11, searching whether there is a part identical to the sync word in the speech frame received from the transmission channel.
If yes, go to step S12; if not, go to step S13.
It will be appreciated that the sync word is known to both the transmitter and the receiver, as it follows the specification of the communication protocol, and the specific location of the sync word within a speech frame is likewise known. For example, in a PDT system, the sync word is located in the middle of the A frame and is 48 bits long.
The sync word is typically carried in the first speech frame of a speech superframe, so the receiver preferably first searches the speech frames received from the transmission channel for a portion identical to the sync word.
Step S12, extracting the speech frame from the speech frame received on the transmission channel based on the same part as the sync word.
In the case that the part identical to the sync word is found in the speech frames received on the transmission channel in step S11, based on the part identical to the sync word, the position of the speech frame to which the sync word belongs may be determined from the speech frames received on the transmission channel, and based on the relative position relationship between the position of the speech frame to which the sync word belongs and other speech frames, the speech frames may be sequentially extracted from the speech frames received on the transmission channel. If the voice superframe comprises an A frame, a B frame, a C frame, a D frame, an E frame and an F frame, and the voice frame to which the synchronous word belongs is the A frame, firstly extracting the A frame from the voice frame received on the transmission channel, and then extracting the B frame adjacent to the A frame based on the A frame; extracting a C frame adjacent to the B frame based on the B frame; extracting a D frame adjacent to the C frame based on the C frame; extracting an E frame adjacent to the D frame based on the D frame; based on the E frame, F frames adjacent to the E frame are extracted.
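A hedged sketch of steps S11-S12 (not the patent's literal implementation): scan the received bit stream for the known sync word, derive the start of the A frame from the sync word's fixed in-frame position, and slice out the remaining frames at fixed relative offsets. The frame length, the mid-frame offset and the function name are assumptions.

```python
FRAME_BITS = 264                                    # assumed frame length
SYNC_WORD_BITS = 48                                 # e.g. PDT, carried mid-frame
SYNC_OFFSET = (FRAME_BITS - SYNC_WORD_BITS) // 2    # assumed in-frame position
FRAME_NAMES = ["A", "B", "C", "D", "E", "F"]

def extract_superframe_by_sync(bits: str, sync_word: str) -> dict:
    """Steps S11-S12: locate the sync word, then slice A..F at fixed offsets."""
    pos = bits.find(sync_word)            # S11: search for an identical portion
    if pos < 0:
        return {}                         # not found: fall through to step S13
    a_start = pos - SYNC_OFFSET           # S12: start of the A frame
    frames = {}
    for i, name in enumerate(FRAME_NAMES):
        start, end = a_start + i * FRAME_BITS, a_start + (i + 1) * FRAME_BITS
        if start < 0 or end > len(bits):
            continue                      # that frame is missing or truncated
        frames[name] = bits[start:end]
    return frames
```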
Step S13, searching whether there is the same part as the LC in the voice frame received from the transmission channel.
If so, go to step S14.
If no portion identical to the sync word is found in the speech frames received on the transmission channel in step S11, which indicates that the A frame to which the sync word belongs has been lost, the receiver continues to check whether a portion identical to the LC exists in the speech frames received on the transmission channel.
The receiver needs to know the LC in advance, before receiving speech frames from the transmission channel. There are two ways for the receiver to determine the LC. First, the transmitter sends the LC at the beginning of the speech frames, so the receiver can acquire the LC directly. Second, because the LC is formed by combining information such as the address of the transmitting party and the address of the receiving party, and a receiving party participating in the service already knows this information when the service flow starts, the receiver can calculate the LC from the address of the transmitting party, the address of the receiving party, and the like.
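A minimal sketch of these two options, assuming a DMR-style 72-bit full LC assembled from service and address fields; the field names and widths below are illustrative assumptions, not taken from the text.

```python
def lc_from_header(header_lc_bits: str) -> str:
    """Way 1: the transmitter sends the LC at the beginning of the speech frames,
    so the receiver simply caches what it received."""
    return header_lc_bits

def lc_from_call_info(flco: int, fid: int, service_opts: int,
                      dst_addr: int, src_addr: int) -> str:
    """Way 2: rebuild the 72-bit LC from information the receiver already knows
    when the service starts (addresses of both parties, service options)."""
    value = ((flco & 0xFF) << 64) | ((fid & 0xFF) << 56) | \
            ((service_opts & 0xFF) << 48) | \
            ((dst_addr & 0xFFFFFF) << 24) | (src_addr & 0xFFFFFF)
    return format(value, "072b")          # 72-bit string, same form as the fragments
```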
Step S14, based on the same part as the LC, extracting a speech frame from the speech frames received on the transmission channel.
In the case that the part identical to the LC is found in the voice frames received on the transmission channel in step S13, based on the part identical to the LC, the position of the voice frame to which the LC belongs may be determined from the voice frames received on the transmission channel, and based on the relative position relationship between the position of the voice frame to which the LC belongs and other voice frames, the voice frames may be sequentially extracted from the voice frames received on the transmission channel. If the voice superframe comprises an A frame, a B frame, a C frame, a D frame, an E frame and an F frame, and the voice frame to which the synchronous word belongs is the A frame, under the condition that the A frame is lost and the LC belongs to the B frame, firstly extracting the B frame from the voice frame received on a transmission channel, and then extracting the C frame adjacent to the B frame based on the B frame; extracting a D frame adjacent to the C frame based on the C frame; extracting an E frame adjacent to the D frame based on the D frame; based on the E frame, F frames adjacent to the E frame are extracted.
In the present application, based on a frame structure in which a speech superframe includes a target type speech frame and a plurality of non-target type speech frames, the target type speech frame includes a synchronization word, and at least one of the non-target type speech frames includes link control signaling LC, the receiver can search the received speech frames for an identical portion using different contents (such as the synchronization word and the LC). Speech synchronization can therefore still be performed quickly even when the speech frame to which the synchronization word belongs is lost, which improves the reliability of speech synchronization. As shown in fig. 2, without the speech synchronization method of this solution, only the synchronization word is used for speech synchronization: if the speech frame to which the synchronization word belongs is lost, no speech frame can be synchronized among the speech frames received from the transmission channel, and for the receiving party the time period corresponding to the A frame to the F frame is silent. With the speech synchronization method of this solution, if the speech frame to which the synchronization word belongs, namely the A frame, is lost, the B frame can be used for speech synchronization; once the B frame is synchronized, the receiving party hears voice in the time period corresponding to the B frame to the F frame.
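Putting the two stages together, the receiver behaviour of fig. 1 and fig. 2 can be sketched as follows: try the sync word first and, only if it cannot be found (the A frame is lost), fall back to the LC fragments in superframe order. The return value says which frame was locked onto and where it starts; the offsets, constants and function name are assumptions.

```python
FRAME_BITS = 264      # assumed frame length
SYNC_OFFSET = 108     # assumed in-frame position of the 48-bit sync word
LC_OFFSET = 123       # assumed in-frame position of an 18-bit LC fragment

def synchronize(bits: str, sync_word: str, lc_fragments: list):
    """Return (index_of_first_recovered_frame, its_start_in_bits), or None.

    lc_fragments is ordered as in the superframe (B-frame fragment first).
    Index 0 means the A frame was recovered; index 1 means only B..F (fig. 2).
    """
    pos = bits.find(sync_word)                                 # steps S11-S12
    if pos >= 0:
        return 0, pos - SYNC_OFFSET
    for index, fragment in enumerate(lc_fragments, start=1):   # steps S13-S14
        pos = bits.find(fragment)
        if pos >= 0:
            return index, pos - LC_OFFSET
    return None                           # nothing found: no voice this superframe
```

Returning the alignment rather than the sliced frames keeps this sketch independent of the exact frame layout; the frames themselves can then be cut out as in the sketch after step S12.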
As another optional embodiment 2 of the present application, the frame structure of the voice superframe introduced in embodiment 1 is refined, which specifically may be: M non-target type voice frames in a voice superframe transmitted by the transmitter respectively include partial LCs, the information formed by all the partial LCs is a complete LC, all the partial LCs are different from each other, and M is smaller than the total number of non-target type voice frames in the voice superframe transmitted by the transmitter. For example, a voice superframe includes an A frame, a B frame, a C frame, a D frame, an E frame, and an F frame; the A frame is a target type voice frame, and the B frame, the C frame, the D frame, the E frame, and the F frame are all non-target type voice frames; the middle position of the A frame carries the sync word, the middle positions of the B frame, the C frame, the D frame, and the E frame carry different partial LCs respectively, and the middle position of the F frame may be a null value.
Based on the frame structure of the voice superframe described in this embodiment, this embodiment describes a refinement of the voice synchronization method described in embodiment 1 above, and as shown in fig. 3, the method may include, but is not limited to, the following steps:
step S21, searching whether there is a part identical to the sync word in the speech frame received from the transmission channel.
If yes, go to step S22; if not, go to step S23.
Step S22, extracting the speech frame from the speech frame received on the transmission channel based on the same part as the sync word.
The detailed procedures of steps S21-S22 can be found in the related descriptions of steps S11-S12 in embodiment 1, and are not repeated herein.
Step S23, determining, as the target LC, the partial LC which is earliest in the voice superframe and has not been used.
According to the frame structure of the voice superframe, the position and order of each partial LC within the voice superframe can be determined. To avoid losing voice frames, synchronization preferably follows the order of the voice frames in the voice superframe; therefore, the partial LC that is earliest in the voice superframe and has not yet been used needs to be determined as the target LC.
The partial LC that is earliest in order and unused in the voice superframe is now illustrated with an example. Suppose the target type voice frame included in the voice superframe is an A frame and the non-target type voice frames are a B frame, a C frame, a D frame, an E frame and an F frame, where the tail of the B frame is adjacent to the head of the C frame, the tail of the C frame is adjacent to the head of the D frame, the tail of the D frame is adjacent to the head of the E frame, and the tail of the E frame is adjacent to the head of the F frame; the partial LC in the B frame is denoted partial LC1, the partial LC in the C frame is denoted partial LC2, the partial LC in the D frame is denoted partial LC3, the partial LC in the E frame is denoted partial LC4, and the partial LC in the F frame is denoted partial LC5. If none of partial LC1, LC2, LC3, LC4 and LC5 has yet been used to search the received voice frames for an identical portion, the partial LC that is earliest in order and unused is partial LC1; if partial LC1 has been used to search the received voice frames for an identical portion, the partial LC that is earliest in order and unused is partial LC2; if partial LC2 has been so used, it is partial LC3; if partial LC3 has been so used, it is partial LC4; and if partial LC4 has been so used, it is partial LC5.
Step S24, searching whether there is the same part as the target LC in the voice frame received from the transmission channel.
If not, the process returns to step S23 until the same part as the target LC is found in the voice frame received from the transmission channel.
Still taking the example in step S23 in which the target type voice frame included in the voice superframe is an A frame and the non-target type voice frames are a B frame, a C frame, a D frame, an E frame and an F frame: search the voice frames received from the transmission channel for a portion identical to the target LC; if no such portion exists, return to step S23 until a portion identical to the target LC is found in the voice frames received from the transmission channel. For example, if no portion identical to partial LC1 in the B frame is found in the voice frames received from the transmission channel, partial LC2 in the voice superframe is determined as the target LC, and the received voice frames are searched for a portion identical to partial LC2; if no portion identical to partial LC2 in the C frame is found in the voice frames received from the transmission channel, partial LC3 in the voice superframe is determined as the target LC, and the received voice frames are searched for a portion identical to partial LC3; if a portion identical to partial LC3 is found in the voice frames received from the transmission channel, the search stops.
If it is found that the same portion as the target LC exists in the voice frame received from the transmission channel, step S25 is performed.
Steps S23-S24 are a specific implementation of step S13 in embodiment 1.
Step S25, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the target LC.
The detailed process of step S25 can be referred to the related description of step S14 in embodiment 1, and is not repeated here.
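Before moving on, the "return to step S23" loop of steps S23-S24 above can be sketched more literally as follows, assuming the fragments are tried strictly in superframe order and are marked as used once tried; the function name and return convention are illustrative.

```python
def find_target_lc(bits: str, partial_lcs: list):
    """Steps S23-S24: take the earliest unused partial LC as the target LC, search
    the received voice frames for an identical portion, and move on to the next
    fragment only when the current one is not found.

    Returns (frame_index, position_in_bits) of the first hit, or None if every
    fragment (partial LC1 in the B frame onwards) has been tried without success.
    """
    unused = list(enumerate(partial_lcs, start=1))   # (frame index, fragment), B = 1
    while unused:
        frame_index, target_lc = unused.pop(0)       # S23: earliest unused fragment
        position = bits.find(target_lc)              # S24: search received frames
        if position >= 0:
            return frame_index, position
        # not found: this fragment now counts as used; loop back to step S23
    return None
```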
As another optional embodiment 3 of the present application, the frame structure of the voice superframe introduced in embodiment 1 is refined, which specifically may be: N non-target type voice frames in a voice superframe transmitted by the transmitter respectively include partial LCs, the information formed by all the partial LCs is a complete LC, all the partial LCs are different from each other, and N is equal to the total number of non-target type voice frames in the voice superframe transmitted by the transmitter. For example, a voice superframe includes an A frame, a B frame, a C frame, a D frame, an E frame, and an F frame; the A frame is a target type voice frame, and the B frame, the C frame, the D frame, the E frame, and the F frame are all non-target type voice frames; the middle position of the A frame carries the sync word, and the middle positions of the B frame, the C frame, the D frame, the E frame, and the F frame carry different partial LCs respectively.
Based on the frame structure of the voice superframe described in this embodiment, this embodiment describes a refinement of the voice synchronization method described in embodiment 1 above, and as shown in fig. 4, the method may include, but is not limited to, the following steps:
step S31, searching whether there is a part identical to the sync word in the speech frame received from the transmission channel.
If yes, go to step S32; if not, go to step S33.
Step S32, extracting the speech frame from the speech frame received on the transmission channel based on the same part as the sync word.
Step S33, determining, as the target LC, the partial LC which is earliest in the voice superframe and has not been used.
Step S34, searching whether there is the same part as the target LC in the voice frame received from the transmission channel.
If not, the process returns to step S33 until the same part as the target LC is found in the voice frame received from the transmission channel.
If it is found that the same portion as the target LC exists in the voice frame received from the transmission channel, step S35 is performed.
Steps S33-S34 are a specific implementation of step S13 in embodiment 1.
Step S35, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the target LC.
The detailed procedures of steps S31-S35 can be referred to the related descriptions of steps S21-S25 in embodiment 2, and are not described herein again.
In another embodiment of the present application, another method for voice synchronization is presented, applied to a transmitter, which may include:
sending a voice superframe at intervals of preset time, wherein the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronous word, and at least one voice frame in the plurality of non-target type voice frames comprises an LC (Link Control), so that a receiver searches whether a part same as the synchronous word exists in the voice frames received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
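On the transmitter side, this sending step can be sketched as follows, following the embodiment in which the sync word is embedded in the A frame and a 72-bit LC is split into 18-bit fragments carried by the B to E frames. The 360 ms superframe period, the frame assembly and the send()/voice_source() interfaces are assumptions for illustration, and the timing is simplified.

```python
import time

PARTIAL_LC_BITS = 18
SUPERFRAME_PERIOD_S = 0.360          # assumed: six 60 ms voice frames per superframe

def split_lc(full_lc_bits: str, n_fragments: int = 4) -> list:
    """Split the 72-bit LC into equal fragments, one per carrying frame (B..E)."""
    return [full_lc_bits[i * PARTIAL_LC_BITS:(i + 1) * PARTIAL_LC_BITS]
            for i in range(n_fragments)]

def build_superframe(voice_payloads: list, sync_word: str, full_lc_bits: str) -> list:
    """Return six (embedded_field, voice_payload) pairs for the A..F frames."""
    embedded = [sync_word] + split_lc(full_lc_bits) + [""]   # A: sync, B-E: LC, F: empty
    return list(zip(embedded, voice_payloads))

def run_transmitter(send, voice_source, sync_word: str, full_lc_bits: str) -> None:
    """send() and voice_source() stand in for the radio's real interfaces."""
    while True:
        superframe = build_superframe(voice_source(), sync_word, full_lc_bits)
        for frame in superframe:
            send(frame)                                      # one frame after another
        time.sleep(SUPERFRAME_PERIOD_S)                      # one superframe per period
```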
Next, a voice synchronization apparatus provided in the present application will be described, and the voice synchronization apparatus described below and the voice synchronization method described above may be referred to correspondingly.
Referring to fig. 5, a voice synchronization apparatus is provided based on a transmitter sending a voice superframe at preset time intervals, where the voice superframe includes a target type voice frame and a plurality of non-target type voice frames, the target type voice frame includes a synchronization word, and at least one of the plurality of non-target type voice frames includes link control signaling LC. The apparatus is applied to a receiver and includes: a first search module 11, a first extraction module 12, a second search module 13 and a second extraction module 14.
A first searching module, configured to search, in a speech frame received from a transmission channel, whether a portion identical to the sync word exists;
a first extraction module, configured to, if there is a portion identical to the sync word in a speech frame received from a transmission channel, extract the speech frame from the speech frame received from the transmission channel based on the portion identical to the sync word;
a second searching module, configured to search whether a part identical to the LC exists in a speech frame received from a transmission channel if the part identical to the sync word does not exist in the speech frame received from the transmission channel;
and the second extraction module is used for extracting the voice frame from the voice frame received on the transmission channel based on the part which is the same as the LC if the part which is the same as the LC exists in the voice frame received from the transmission channel.
In this embodiment, if M non-target type voice frames in a voice super frame transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, and M is smaller than the total number of non-target type voice frames in a voice super frame transmitted by the transmitter, the second searching module may include:
a first determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and the first searching sub-module is used for searching whether a part which is the same as the target LC exists in the voice frame received from the transmission channel, and if the part which is the same as the target LC does not exist in the voice frame, the step of determining the part LC which is in the front of the voice superframe and is not used as the target LC is returned to be executed until the part which is the same as the target LC is searched.
In this embodiment, if N non-target type voice frames in a voice super frame transmitted by the transmitter respectively include partial LCs, and information formed by each partial LC is a complete LC, where N is equal to the total number of non-target type voice frames in a voice super frame transmitted by the transmitter, the second searching module may include:
a second determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and a second searching sub-module, configured to search whether a portion identical to the target LC exists in the voice frame received over the transmission channel, and if the portion identical to the target LC does not exist, return to the step of determining that the portion LC that is in the preceding sequence and is not used in the voice superframe is the target LC until the portion identical to the target LC is found.
In another embodiment of the present application, a receiver is provided, based on a transmitter sending a voice superframe at preset time intervals, where the voice superframe includes a target type voice frame and a plurality of non-target type voice frames, the target type voice frame includes a synchronization word, and at least one of the plurality of non-target type voice frames includes link control signaling LC. Referring to fig. 6, the receiver may include: a processor 100, a memory 200, and a data bus 300, the processor 100 and the memory 200 communicating over the data bus 300;
the memory 200 is used for storing programs;
the processor 100 is configured to execute the program;
the program is specifically for:
searching for whether there is a portion identical to the sync word in a speech frame received from a transmission channel;
if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word;
if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel;
and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
In another embodiment of the present application, a speech synchronization apparatus is introduced, which is applied to a transmitter, and the speech synchronization apparatus may include:
a sending module, configured to send a voice superframe every preset time, where the voice superframe includes a target type voice frame and multiple non-target type voice frames, the target type voice frame includes a sync word, and at least one of the multiple non-target type voice frames includes an LC, so that a receiver searches for a portion, which is the same as the sync word, in a voice frame received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
In another embodiment of the present application, a voice synchronization system is provided, please refer to fig. 7, the voice synchronization system includes: a transmitter 21 and a receiver 22.
The transmitter 21 may include: a processor 400, a memory 500 and a data bus 600, said processor 400 and said memory 500 communicating via said data bus 600;
the memory 500 is used for storing programs;
the processor 400 is configured to execute the program;
the program is specifically for:
and sending a voice superframe once every preset time, wherein the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises synchronous words, and at least one voice frame in the plurality of non-target type voice frames comprises LC.
The specific structure and related functions of the receiver 22 can be referred to the receiver described in the foregoing embodiments, and will not be described in detail herein.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The voice synchronization method, apparatus and system provided by the present application have been described in detail above. The principle and implementation of the present application are explained herein by applying specific examples, and the description of the above embodiments is only used to help understand the method and core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A voice synchronization method is characterized in that a transmitter sends a voice superframe every preset time, the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronization word, at least one voice frame in the plurality of non-target type voice frames comprises a link control signaling LC, the method is applied to a receiver, and the method comprises the following steps:
searching for whether there is a portion identical to the sync word in a speech frame received from a transmission channel;
if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word;
if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel;
and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
2. The method of claim 1, wherein if M said non-target type speech frames in a voice superframe transmitted by said transmitter include partial LCs respectively, and the information formed by each of said partial LCs is complete LCs, and M is less than the total number of non-target type speech frames in a voice superframe transmitted by said transmitter, searching for the presence of the same portion as said LCs in the speech frames received from said transmission channel, comprises:
determining a part LC which is in the front of the voice superframe and is not used as a target LC;
and searching whether a part identical to the target LC exists in the voice frames received from the transmission channel, and if the part identical to the target LC does not exist, returning to the step of determining that the part LC which is in the front of the voice superframe and is not used is the target LC until the part identical to the target LC is searched.
3. The method of claim 1, wherein if N said non-target type speech frames in a voice superframe transmitted by said transmitter include partial LCs, respectively, and the information formed by each of said partial LCs is a complete LC, N being equal to the total number of non-target type speech frames in a voice superframe transmitted by said transmitter, searching for the presence of the same portion as said LC in the speech frames received from said transmission channel, comprises:
determining a part LC which is in the front of the voice superframe and is not used as a target LC;
and searching whether a part identical to the target LC exists in the voice frames received from the transmission channel, and if the part identical to the target LC does not exist, returning to the step of determining that the part LC which is in the front of the voice superframe and is not used is the target LC until the part identical to the target LC is searched.
4. A method for voice synchronization, applied to a transmitter, comprising:
sending a voice superframe at intervals of preset time, wherein the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronous word, and at least one voice frame in the plurality of non-target type voice frames comprises an LC (Link Control), so that a receiver searches whether a part same as the synchronous word exists in the voice frames received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
5. A voice synchronization device is characterized in that a transmitter sends a voice superframe every preset time, the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronization word, at least one voice frame in the plurality of non-target type voice frames comprises a link control signaling LC, the device is applied to a receiver, and the device comprises:
a first searching module, configured to search, in a speech frame received from a transmission channel, whether a portion identical to the sync word exists;
a first extraction module, configured to, if there is a portion identical to the sync word in a speech frame received from a transmission channel, extract the speech frame from the speech frame received from the transmission channel based on the portion identical to the sync word;
a second searching module, configured to search whether a part identical to the LC exists in a speech frame received from a transmission channel if the part identical to the sync word does not exist in the speech frame received from the transmission channel;
and the second extraction module is used for extracting the voice frame from the voice frame received on the transmission channel based on the part which is the same as the LC if the part which is the same as the LC exists in the voice frame received from the transmission channel.
6. The apparatus of claim 5, wherein if M voice frames of non-target type in a voice superframe transmitted by the transmitter include partial LCs, and the information formed by each partial LC is a complete LC, and M is smaller than the total number of voice frames of non-target type in a voice superframe transmitted by the transmitter, the second lookup module comprises:
a first determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and the first searching sub-module is used for searching whether a part which is the same as the target LC exists in the voice frame received from the transmission channel, and if the part which is the same as the target LC does not exist in the voice frame, the step of determining the part LC which is in the front of the voice superframe and is not used as the target LC is returned to be executed until the part which is the same as the target LC is searched.
7. The apparatus of claim 5, wherein if N frames of said non-target type speech in a voice superframe transmitted by said transmitter include partial LCs, and the information formed by each of said partial LCs is a complete LC, N being equal to the total number of non-target type speech frames in a voice superframe transmitted by said transmitter, said second lookup module comprises:
a second determining submodule, configured to determine, as a target LC, a portion LC that is sequentially previous and unused in the voice superframe;
and a second searching sub-module, configured to search whether a portion identical to the target LC exists in the voice frame received over the transmission channel, and if the portion identical to the target LC does not exist, return to the step of determining that the portion LC that is in the preceding sequence and is not used in the voice superframe is the target LC until the portion identical to the target LC is found.
8. A speech synchronization apparatus, for use in a transmitter, the apparatus comprising:
a sending module, configured to send a voice superframe every preset time, where the voice superframe includes a target type voice frame and multiple non-target type voice frames, the target type voice frame includes a sync word, and at least one of the multiple non-target type voice frames includes an LC, so that a receiver searches for a portion, which is the same as the sync word, in a voice frame received from a transmission channel; if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word; if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel; and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
9. A receiver, wherein a transmitter sends a voice superframe every preset time, the voice superframe comprises a target type voice frame and a plurality of non-target type voice frames, the target type voice frame comprises a synchronization word, and at least one of the plurality of non-target type voice frames comprises a link control signaling LC, the receiver comprises: a processor, a memory, and a data bus through which the processor and the memory communicate;
the memory is used for storing programs;
the processor is used for executing the program;
the program is specifically for:
searching for whether there is a portion identical to the sync word in a speech frame received from a transmission channel;
if the synchronous word exists, extracting a voice frame from the voice frame received on the transmission channel based on the part which is the same as the synchronous word;
if not, searching whether a part same as the LC exists in a voice frame received from the transmission channel;
and if so, extracting the voice frame from the voice frame received on the transmission channel based on the same part as the LC.
10. A speech synchronization system, comprising: a transmitter and a receiver as claimed in claim 9;
the transmitter is configured to send a voice superframe once every preset time, where the voice superframe includes a target type voice frame and multiple non-target type voice frames, the target type voice frame includes a synchronization word, and at least one of the multiple non-target type voice frames includes an LC.
CN201911349463.8A 2019-12-24 2019-12-24 Voice synchronization method, device and system and related equipment Active CN113037456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911349463.8A CN113037456B (en) 2019-12-24 2019-12-24 Voice synchronization method, device and system and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911349463.8A CN113037456B (en) 2019-12-24 2019-12-24 Voice synchronization method, device and system and related equipment

Publications (2)

Publication Number Publication Date
CN113037456A true CN113037456A (en) 2021-06-25
CN113037456B CN113037456B (en) 2022-06-24

Family

ID=76451872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911349463.8A Active CN113037456B (en) 2019-12-24 2019-12-24 Voice synchronization method, device and system and related equipment

Country Status (1)

Country Link
CN (1) CN113037456B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040240480A1 (en) * 2003-05-30 2004-12-02 Hiben Bradley M. Method for selecting an operating mode based on a detected synchronization pattern
US20090081997A1 (en) * 2007-09-20 2009-03-26 Motorola, Inc. System and method for minimizing undesired audio in a communication system utilizing distributed signaling
CN104954995A (en) * 2015-04-23 2015-09-30 河北远东通信系统工程有限公司 Synchronization word free speech frame synchronization method in DMR/PDT system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
时小乔 (Shi Xiaoqiao): "Research and Implementation of Testing Technology for PDT Digital Walkie-Talkies", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN113037456B (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN108234410B (en) A kind of virtual terminal distribution method and device
CN108040367A (en) A kind of UE bands of position update method, access network entity, UE and core network entity
CN104754536A (en) Method and system for realizing communication between different languages
CN106973050A (en) A kind of method and device of inter-network lock information sharing
CN105451300B (en) A kind of method for connecting network and mobile device
CN107534914B (en) Network switching method and device
CN104572952A (en) Identification method and device for live multi-media files
CN108712506A (en) block chain node communication method, device and block chain node
CN105120528A (en) A method, apparatus and system for carrying out configuration setting between devices
CN104509060A (en) Method and device for transmitting streaming media data
CN104348859A (en) File synchronizing method, device, server, terminal and system
CN104079711A (en) Calling method based on speech recognition
CN113037456B (en) Voice synchronization method, device and system and related equipment
TW507458B (en) Uplink synchronization signal transmission in TDD systems
MX2022004204A (en) Blockchain data search method.
EP3355551B1 (en) Data access method and device
CN100502391C (en) Reorganizing method of slicing message
CN105071895A (en) Method of transmitting and receiving data capable of penetrating various vocoders and system
CN103457840A (en) Information sharing system and information sharing method
CN105049638B (en) The method and device conversed in the terminal device of multiple operating system
CN107105425A (en) Network access method and network access device
CN103916892A (en) Method and device for obtaining BSIC of neighborhood
CN104065673B (en) A kind of implementation method and device by address list synchronization to server
CN113448755A (en) Transaction routing method and device for switching between new system and old system
CN109284332B (en) Data processing method, client, server and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant