CN110809163A - Data transmission method and device, computer equipment and storage medium

Data transmission method and device, computer equipment and storage medium

Info

Publication number
CN110809163A
CN110809163A (application CN201911047001.0A; granted as CN110809163B)
Authority
CN
China
Prior art keywords
original audio
filter parameter
video
filtering operation
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911047001.0A
Other languages
Chinese (zh)
Other versions
CN110809163B (en)
Inventor
张清
刘杉
刘海军
金飞剑
王诗涛
郭靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911047001.0A priority Critical patent/CN110809163B/en
Publication of CN110809163A publication Critical patent/CN110809163A/en
Application granted granted Critical
Publication of CN110809163B publication Critical patent/CN110809163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/46 Embedding additional information in the video signal during the compression process

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention provides a data transmission method and device, computer equipment and a storage medium. In the process of encoding original audio and video data to obtain an encoded audio and video data stream, an encoding end performs a filtering operation on the prediction block corresponding to an original block in the original audio and video data; determines the filter parameter identifier adopted by each prediction unit that performs the filtering operation in the original audio and video data, the filter parameter identifier being used to instruct a decoding end to obtain the filter parameters adopted by each such prediction unit at the encoding end; and transmits the encoded audio and video data stream and the filter parameter identifiers to the decoding end. Filtering the prediction block reduces the residual between the prediction block and the original block and improves coding efficiency, while transmitting only the filter parameter identifier, which occupies a small number of bits, reduces the number of bits occupied by the information during transmission and improves the transmission efficiency of the audio and video file.

Description

Data transmission method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a data transmission method and apparatus, a computer device, and a storage medium.
Background
In the audio and video transmission process, an encoding and compression operation usually needs to be performed on the audio and video file sent by the sending end, so as to reduce the number of bits occupied during transmission and improve the transmission efficiency of the audio and video file. However, as audio and video content becomes richer, audio and video files grow larger and larger, which places higher requirements on the transmission efficiency of these large audio and video files.
Disclosure of Invention
In view of this, the invention provides a data transmission method and device, a computer device and a storage medium, so as to improve the transmission efficiency of large audio and video files.
To achieve the above purpose, the invention provides the following technical solutions:
in one aspect, the present application provides a data transmission method, where the method is applied to an encoding end, and the method includes:
in the process of coding original audio and video data to obtain a coded audio and video data stream, performing filtering operation on a prediction block corresponding to an original block in the original audio and video data;
determining a filter parameter identifier adopted by each prediction unit which executes filtering operation in original audio and video data, wherein the filter parameter identifier is used for indicating a decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at an encoding end;
and transmitting the coded audio-video data stream and the filter parameter identification adopted by each prediction unit for performing the filtering operation to a decoding end.
On the other hand, the application also discloses a data transmission method, which is applied to a decoding end and comprises the following steps:
receiving an encoded audio-visual data stream from an encoding end and an identification of filter parameters to be used by each prediction unit performing a filtering operation at the encoding end;
acquiring a filter parameter adopted by each prediction unit which executes the filtering operation at the encoding end according to the filter parameter identification adopted by each prediction unit which executes the filtering operation at the encoding end;
and in the process of executing decoding operation on the coded audio and video data stream, utilizing the filter parameters adopted by each prediction unit executing filtering operation at the coding end to execute backward filtering operation on the filtered prediction block to obtain original audio and video data.
On the other hand, the application also discloses a data transmission device, which is applied to the encoding end, and the device comprises:
the filtering unit is used for performing filtering operation on a prediction block corresponding to an original block in original audio and video data in the process of coding the original audio and video data to obtain a coded audio and video data stream;
the device comprises a filter parameter identification determining unit, a decoding end and a prediction end, wherein the filter parameter identification determining unit is used for determining a filter parameter identification adopted by each prediction unit which executes filtering operation in original audio and video data, and the filter parameter identification is used for indicating the decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at the encoding end;
and the transmission unit is used for transmitting the coded audio-video data stream and the filter parameter identification adopted by each prediction unit which performs the filtering operation to the decoding end.
On the other hand, the application also discloses a data transmission device, the device is applied to a decoding end, the device comprises:
a receiving unit, for receiving the encoded audio-visual data stream from the encoding end and the filter parameter identification adopted by each prediction unit performing the filtering operation at the encoding end;
the filter parameter acquiring unit is used for acquiring the filter parameters adopted by each prediction unit which executes the filtering operation at the encoding end according to the filter parameter identification adopted by each prediction unit which executes the filtering operation at the encoding end;
and the backward filtering unit is used for performing backward filtering operation on the filtered prediction block by utilizing the filter parameters adopted by each prediction unit which performs filtering operation at the encoding end in the process of performing decoding operation on the encoded audio and video data stream to obtain the original audio and video data.
In another aspect, the present application further discloses a computer device, comprising:
a processor and a memory;
the processor is used for calling and executing the program stored in the memory;
the memory is configured to store the program, the program at least to:
the data transmission method as described above is performed.
In another aspect, the present application also discloses a computer-readable storage medium, in which computer-executable instructions are stored, and when the computer-executable instructions are loaded and executed by a processor, the data transmission method as described above is implemented.
According to the above technical solutions, compared with the prior art, the invention provides a data transmission method and device, computer equipment and a storage medium. In the process of encoding original audio and video data to obtain an encoded audio and video data stream, the encoding end performs a filtering operation on the prediction block corresponding to an original block in the original audio and video data; determines the filter parameter identifier adopted by each prediction unit that performs the filtering operation in the original audio and video data, the filter parameter identifier being used to instruct the decoding end to obtain the filter parameters adopted by each such prediction unit at the encoding end; and transmits the encoded audio and video data stream and the filter parameter identifiers to the decoding end. Because the embodiment of the invention performs the filtering operation on the prediction block corresponding to the original block during encoding, the prediction accuracy is improved and the residual between the prediction block and the original block is reduced, which reduces the number of bits required to transmit the residual, improves coding efficiency and improves the transmission efficiency of the audio and video file. In addition, only the filter parameter identifier, which occupies a small number of bits, is transmitted to the decoding end, so the encoded audio and video data stream and the filter parameter identifiers can be transmitted to the decoding end quickly, further improving the transmission efficiency of the audio and video file.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from the provided drawings without creative effort.
FIG. 1 is a schematic diagram of a component structure of a computer device according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an embodiment of a data transmission method according to the present invention;
fig. 3 is a flowchart illustrating an encoding process according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating an embodiment of another data transmission method according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an embodiment of a method for transmitting filter parameters corresponding to a prediction unit performing a filtering operation at an encoding end to a decoding end according to the present invention;
FIG. 6 is a flowchart illustrating another embodiment of a method for transmitting filter parameters corresponding to a prediction unit performing a filtering operation at an encoding end to a decoding end according to the present invention;
fig. 7 is a flowchart illustrating an embodiment of another data transmission method according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a data transmission device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another data transmission apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the audio and video transmission process, audio and video files are compressed to reduce their volume. Existing compression algorithms comprise many complex modules and can fully exploit the temporal correlation, spatial correlation, statistical correlation and other correlations in the original sequence from every dimension, so as to reduce the volume of the encoded audio and video code stream. At present, as audio and video content becomes richer, the original volume of audio and video files grows larger and larger and the files are difficult to transmit directly over a network, which places higher requirements on the transmission efficiency of these large audio and video files. To solve this technical problem, the embodiment of the invention discloses a data transmission method and device, computer equipment and a storage medium. In the process of encoding original audio and video data to obtain an encoded audio and video data stream, the encoding end performs a filtering operation on the prediction block corresponding to an original block in the original audio and video data; determines the filter parameter identifier adopted by each prediction unit that performs the filtering operation in the original audio and video data, the filter parameter identifier being used to instruct the decoding end to obtain the filter parameters adopted by each such prediction unit at the encoding end; and transmits the encoded audio and video data stream and the filter parameter identifiers to the decoding end. Because the filtering operation is performed on the prediction block corresponding to the original block during encoding, the prediction accuracy is improved and the residual between the prediction block and the original block is reduced, which reduces the number of bits required to transmit the residual, improves coding efficiency and improves the transmission efficiency of the audio and video file. In addition, only the filter parameter identifier, which occupies a small number of bits, is transmitted to the decoding end, so the encoded audio and video data stream and the filter parameter identifiers can be transmitted to the decoding end quickly, further improving the transmission efficiency of the audio and video file.
In order to improve the transmission efficiency of a large audio/video file, the embodiment of the invention adopts a mode of performing filtering operation on the prediction block corresponding to the original block in the original audio/video data to reduce the residual error between the prediction block and the original block so as to reduce the bit number required by residual error transmission, improve the coding efficiency and simultaneously improve the transmission efficiency of the audio/video file.
After the encoding end performs the filtering operation on the prediction block corresponding to the original block in the original audio and video data, in order to ensure the consistency of encoding and decoding, the decoding end needs to obtain the filter parameters used to perform the filtering operation on each prediction unit in the prediction block, so that it can perform the opposite filtering operation. This ensures that the decoding end can correctly obtain the prediction block filtered by the encoding end and thereby obtain the original audio and video data. Therefore, the filter parameters adopted by each prediction unit that performs a filtering operation at the encoding end must be transmitted to the decoding end. The inventor found during research that, to solve the problem of filter parameter transmission, the filter parameters adopted by each prediction unit that performs the filtering operation at the encoding end could be transmitted to the decoding end one by one. However, as audio and video content becomes richer, audio and video files grow larger and larger; a single video file may contain tens of thousands, hundreds of thousands or even millions of prediction units, and many prediction units adopt the same filter parameters, so transmitting the parameters one by one would carry a large amount of redundant information and occupy many bits.
Furthermore, to solve the above problem, the inventor proposes transmitting to the decoding end only the filter parameter identifier used by each prediction unit that performs the filtering operation, and instructing the decoding end to obtain the filter parameters used by each such prediction unit at the encoding end. Because a filter parameter identifier occupies fewer bits during transmission than the filter parameter content itself, the number of bits occupied by the information during transmission is further reduced compared with transmitting the filter parameter content. In this way, the encoded audio and video data stream and the filter parameter identifier used by each prediction unit that performs the filtering operation can be transmitted to the decoding end more quickly, improving the transmission efficiency of the audio and video file.
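As a rough illustration of why transmitting identifiers saves bits, the sketch below compares the two schemes. The concrete figures (number of prediction units, coefficients per filter, bits per coefficient, number of distinct filters) are assumptions chosen for illustration only and are not specified by the patent.

```python
import math

# Assumed sizes for illustration only; the patent does not specify these values.
num_prediction_units = 100_000   # prediction units that were filtered
coeffs_per_filter = 8            # coefficients in one filter parameter
bits_per_coeff = 16              # bits to code one coefficient
distinct_filters = 16            # distinct filter parameters actually used

# Scheme 1: send the full filter parameters for every prediction unit.
bits_full = num_prediction_units * coeffs_per_filter * bits_per_coeff

# Scheme 2: send each distinct filter parameter once (e.g. in a parameter set)
# plus a short identifier per prediction unit.
bits_per_id = math.ceil(math.log2(distinct_filters))
bits_ids = (distinct_filters * coeffs_per_filter * bits_per_coeff
            + num_prediction_units * bits_per_id)

print(f"full parameters per PU : {bits_full} bits")
print(f"set + identifiers      : {bits_ids} bits")
print(f"ratio                  : {bits_full / bits_ids:.1f}x")
```

Under these assumed sizes the identifier scheme needs roughly thirty times fewer bits, which is the effect the embodiment relies on.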
The data transmission method according to the embodiment of the present application may be applied to a computer device, as shown in fig. 1, which shows a schematic structural diagram of a component of a computer device to which the scheme of the present application is applied. In fig. 1, the computer apparatus may include: a processor 101 and a memory 102.
The computer device 100 may further include: a communication interface 103, an input unit 104, a display 105 and a communication bus 106.
The processor 101, the memory 102, the communication interface 103, the input unit 104, and the display 105 all communicate with each other via the communication bus 106.
In the embodiment of the present application, the processor 101 may be a Central Processing Unit (CPU), a Field-Programmable Gate Array (FPGA) or another programmable logic device.
The processor may call a program stored in the memory 102, and in particular, may execute the operations performed on the computer device side in the following method embodiments.
The memory 102 is used for storing one or more programs, which may include program codes including computer operation instructions, and in this embodiment, the memory stores at least the programs for implementing the following functions:
in the process of coding original audio and video data to obtain a coded audio and video data stream, performing filtering operation on a prediction block corresponding to an original block in the original audio and video data;
determining a filter parameter identifier adopted by each prediction unit which executes filtering operation in original audio and video data, wherein the filter parameter identifier is used for indicating a decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at an encoding end;
and transmitting the coded audio-video data stream and the filter parameter identification adopted by each prediction unit for performing the filtering operation to a decoding end.
Alternatively, in the embodiment of the present application, the memory stores at least a program for implementing the following functions:
receiving an encoded audio-visual data stream from an encoding end and an identification of filter parameters to be used by each prediction unit performing a filtering operation at the encoding end;
acquiring a filter parameter adopted by each prediction unit which executes the filtering operation at the encoding end according to the filter parameter identification adopted by each prediction unit which executes the filtering operation at the encoding end;
and in the process of executing decoding operation on the coded audio and video data stream, utilizing the filter parameters adopted by each prediction unit executing filtering operation at the coding end to execute backward filtering operation on the filtered prediction block to obtain original audio and video data.
As shown in fig. 2, which shows a schematic flow chart of an embodiment of a data transmission method according to the present application, the embodiment is mainly illustrated by applying the method to an encoding end, and referring to fig. 2, the data transmission method specifically includes the following steps:
s100, in the process of coding original audio and video data to obtain a coded audio and video data stream, performing filtering operation on a prediction block corresponding to an original block in the original audio and video data;
the following description will be made of the process of encoding the original audio/video data to obtain the encoded audio/video data stream disclosed in the embodiment of the present invention with reference to the encoding process flow chart shown in fig. 3:
firstly, intra-frame/inter-frame prediction is performed on the original block currently to be predicted, where intra-frame prediction uses decoded information within the current frame for prediction and inter-frame prediction predicts according to the content of frames decoded before the current frame; a prediction block is generated after prediction;
the prediction block is filtered by using a filter, and the filtering can be performed only on the intra-frame prediction block, or only on the inter-frame prediction block, or both the intra-frame prediction block and the inter-frame prediction block, which is not specifically limited in the embodiment of the present invention;
after the filter processing, obtaining a filtered prediction block, and calculating a residual between an original block and the filtered prediction block to obtain a residual block;
the residual block is transformed and quantized to obtain a transformed and quantized residual block, which further removes the correlation within the residual block and reduces the content that needs to be transmitted;
and entropy coding is carried out on the transform coefficient of the transformed and quantized residual block, and statistical redundancy is removed to obtain a coded audio-video data stream.
In addition, in order to ensure the consistency of encoding and decoding, the encoding end also performs decoding-related processing, and the obtained decoded data is placed in a decoded picture buffer to serve as a reference for inter/intra prediction. Specifically, inverse quantization and inverse transformation are performed on the transformed and quantized residual block to obtain an inverse-quantized and inverse-transformed residual block; this result is combined with the filtered prediction block, the decoded picture is buffered, and the decoded picture is fed back to the motion estimation (ME)/motion compensation (MC) unit, so that the ME/MC unit can further use the decoded picture in the prediction process to obtain the prediction block.
The filter further filters the prediction block so as to better predict the original block and thereby improve coding efficiency.
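A minimal sketch of the encoding flow described above is given below in Python. The function names (predict_block, apply_filter, transform_quantize, entropy_encode) and the toy arithmetic inside them are assumptions made purely to make the stages concrete; they stand in for the real prediction, filtering, transform, quantization and entropy-coding modules of the codec.

```python
def predict_block(original_block, reference):
    # Toy intra/inter prediction: simply reuse the reference samples.
    return list(reference)

def apply_filter(prediction_block, filter_params):
    # Toy filtering of the prediction block (here a single gain), standing in
    # for the filter the encoder applies before computing the residual.
    gain = filter_params["gain"]
    return [int(round(p * gain)) for p in prediction_block]

def transform_quantize(residual_block, qstep=2):
    # Toy "transform + quantization": plain scalar quantization.
    return [r // qstep for r in residual_block]

def entropy_encode(symbols):
    # Toy entropy coding: serialize the symbols to a byte string.
    return ",".join(str(s) for s in symbols).encode()

# One original block and its reference, purely illustrative values.
original_block = [10, 12, 11, 13]
reference = [8, 11, 10, 12]
filter_params = {"gain": 1.1}          # assumed filter parameter content

prediction = predict_block(original_block, reference)
filtered_prediction = apply_filter(prediction, filter_params)
residual = [o - p for o, p in zip(original_block, filtered_prediction)]
coded_stream = entropy_encode(transform_quantize(residual))

print(residual)       # filtering the prediction shrinks the residual
print(coded_stream)   # stand-in for the encoded audio/video data stream
```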
S110, determining a filter parameter identifier adopted by each prediction unit which executes filtering operation in original audio and video data, wherein the filter parameter identifier is used for indicating a decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at an encoding end;
it should be noted that a filter parameter identifier uniquely identifies one filter parameter. A filter parameter is specifically the set of parameters used to perform the filtering process on a prediction unit, and its specific content is related to the properties of the filter itself.
Each type of filter parameter uniquely identifies a type of filter, and all filter sub-parameters associated with a type of filter are included in the corresponding type of filter parameter.
And S120, transmitting the coded audio and video data stream and the filter parameter identification adopted by each prediction unit performing the filtering operation to a decoding end.
After the encoded audio and video data stream and the filter parameter identifier adopted by each prediction unit that performs the filtering operation are transmitted to the decoding end, the decoding end can obtain, according to each filter parameter identifier, the filter parameters adopted by the corresponding prediction unit at the encoding end. The decoding end can then use these filter parameters to perform backward filtering processing on the encoded audio and video data stream, thereby ensuring the consistency of encoding and decoding.
It should be noted that, in the embodiment of the present invention, the same filter parameter set may be fixedly set at both the encoding end and the decoding end. This fixed set contains all the filter parameters adopted by each prediction unit that performs a filtering operation. After the filter parameter identifier adopted by each such prediction unit is transmitted to the decoding end, the decoding end can find the corresponding filter parameters in the filter parameter set according to each identifier, and then use the filter parameters adopted by each prediction unit at the encoding end to perform inverse filtering processing on the encoded audio and video data stream.
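A minimal sketch of this fixed, identical filter parameter set: both ends hold the same table, the encoder places only the index in the bitstream, and the decoder looks the parameters up and undoes the toy gain filter used in the sketch above. The table contents, the selection rule and the "gain" parameterization are assumptions for illustration, not values from the patent.

```python
# The same table is assumed to be fixed at both the encoding and decoding end.
FILTER_PARAMETER_SET = {
    0: {"gain": 1.0},
    1: {"gain": 1.1},
    2: {"gain": 0.9},
}

def encoder_choose_filter(prediction_unit_index):
    # Hypothetical selection rule; in practice the encoder would pick the
    # parameters that minimize the residual for this prediction unit.
    return prediction_unit_index % len(FILTER_PARAMETER_SET)

def decoder_inverse_filter(filtered_prediction, filter_param_id):
    # Lookup by identifier, then apply the backward (inverse) filtering.
    params = FILTER_PARAMETER_SET[filter_param_id]
    return [p / params["gain"] for p in filtered_prediction]

# Encoder side: only the identifier is placed in the bitstream.
filter_param_id = encoder_choose_filter(prediction_unit_index=7)

# Decoder side: recover the parameters from the shared table and undo the filter.
filtered_prediction = [11, 13, 12, 14]               # toy decoded values
restored = decoder_inverse_filter(filtered_prediction, filter_param_id)
print(filter_param_id, restored)
```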
Through the above solution, in the process of encoding the original audio and video data to obtain the encoded audio and video data stream, the encoding end performs a filtering operation on the prediction block corresponding to an original block in the original audio and video data; determines the filter parameter identifier adopted by each prediction unit that performs the filtering operation in the original audio and video data, where the filter parameter identifier is used to instruct the decoding end to obtain the filter parameters adopted by each such prediction unit at the encoding end; and transmits the encoded audio and video data stream and the filter parameter identifiers to the decoding end. Because the filtering operation is performed on the prediction block corresponding to the original block during encoding, the prediction accuracy is improved and the residual between the prediction block and the original block is reduced, which reduces the number of bits required to transmit the residual, improves coding efficiency and improves the transmission efficiency of the audio and video file. In addition, only the filter parameter identifier, which occupies a small number of bits, is transmitted to the decoding end, so the encoded audio and video data stream and the filter parameter identifiers can be transmitted to the decoding end quickly, further improving the transmission efficiency of the audio and video file.
It should be noted that, in the embodiment of the present invention, besides fixedly setting the same filter parameter set at the encoding end and the decoding end, the filter parameter set may instead not be fixedly set at the decoding end. In that case, all the filter parameters corresponding to the prediction units that perform the filtering operation at the encoding end need to be transmitted to the decoding end. In some cases, the fixed filter parameters of the filter may be fixedly stored at the decoding end, while the filter parameters adaptively set at the encoding end according to the audio and video image need to be transmitted from the encoding end to the decoding end. Moreover, in the present application, the filter parameters corresponding to the prediction units that perform the filtering operation at the encoding end are not transmitted to the decoding end one by one; instead, the following transmission manners are adopted, so as to improve the transmission efficiency of the audio and video file.
As shown in fig. 4, which shows a schematic flow chart of an embodiment of another data transmission method according to the present application, this embodiment mainly uses the method applied to an encoding end for example, and referring to fig. 4, the data transmission method specifically includes the following steps:
s200, in the process of coding original audio and video data to obtain a coded audio and video data stream, performing filtering operation on a prediction block corresponding to an original block in the original audio and video data;
s210, determining filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data, wherein the filter parameter sets comprise filter parameters required for executing filtering operation on all prediction units in the original audio and video data;
according to the embodiment of the invention, various types of filter parameters required by all the prediction units in the original audio and video data to execute the filtering operation are counted, all the types of filter parameters adopted by all the prediction units in the original audio and video data to execute the filtering operation are placed in the filter parameter sets corresponding to all the prediction units in the original audio and video data to execute the filtering operation, and the filter parameters adopted by all the prediction units in the original audio and video data to execute the filtering operation can be found in the filter parameter sets.
The filter parameter set does not contain repeated filter parameters.
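A minimal sketch of building this deduplicated filter parameter set and the per-prediction-unit identifiers from the parameters each unit actually used. The tuple representation of a filter parameter and the function name are assumptions made only for illustration.

```python
def build_general_filter_set(per_unit_params):
    """Collect the distinct filter parameters used by all prediction units
    and return (general_set, per_unit_ids), with no repeated entries."""
    general_set = []          # ordered list of distinct filter parameters
    index_of = {}             # parameter -> identifier (its index in the set)
    per_unit_ids = []
    for params in per_unit_params:
        if params not in index_of:
            index_of[params] = len(general_set)
            general_set.append(params)
        per_unit_ids.append(index_of[params])
    return general_set, per_unit_ids

# Filter parameters used by four prediction units (tuples of coefficients,
# purely illustrative): two of the units share the same parameters.
per_unit_params = [(1, 2, 1), (2, 4, 2), (1, 2, 1), (0, 1, 0)]
general_set, ids = build_general_filter_set(per_unit_params)
print(general_set)   # [(1, 2, 1), (2, 4, 2), (0, 1, 0)]  -> transmitted once
print(ids)           # [0, 1, 0, 2]                        -> one identifier per unit
```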
S220, taking filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data as general filter parameter sets corresponding to the original audio and video data;
s230, determining a filter parameter identifier adopted by each prediction unit which executes filtering operation in original audio and video data, wherein the filter parameter identifier is used for indicating a decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at an encoding end;
and S240, transmitting the general filter parameter set corresponding to the original audio and video data, the coded audio and video data stream and the filter parameter identifier adopted by each prediction unit executing the filtering operation to a decoding end.
It should be noted that, in the embodiment of the present invention, according to the sequence of the code stream, the general filter parameter set corresponding to the original audio/video data, the encoded audio/video data stream, and the filter parameter identifier adopted by each prediction unit that performs the filtering operation are transmitted to the decoding end.
After receiving the general filter parameter set corresponding to the original audio/video data, the encoded audio/video data stream, and the filter parameter identifier adopted by each prediction unit performing the filtering operation, the decoding end may find the filter parameter corresponding to each prediction unit performing the filtering operation in the general filter parameter set corresponding to the original audio/video data according to the filter parameter identifier adopted by each prediction unit performing the filtering operation, so as to perform the inverse filtering processing on the encoded audio/video data stream by using the filter parameter adopted by each prediction unit performing the filtering operation at the encoding end.
The filter parameter set corresponding to all prediction units that perform the filtering operation in the original audio and video data and the filter parameter identifier adopted by each such prediction unit are transmitted to the decoding end. Because the filter parameter set does not contain repeated filter parameters, the embodiment of the invention does not need to transmit repeated parameters when multiple prediction units adopt the same filter parameters, and a transmitted filter parameter identifier occupies fewer bits than the transmitted filter parameter content, which reduces the redundancy of the transmitted information and improves the transmission efficiency of the audio and video file.
An item of original audio and video data is composed of a plurality of original audio and video sequences; an original audio and video sequence is composed of a plurality of original audio and video image frames; an original audio and video image frame is composed of a plurality of original audio and video strips (Slices); an original audio and video strip is composed of a plurality of Largest Coding Units (LCUs); one LCU is composed of a plurality of Coding Units (CUs); and one CU in turn contains a plurality of Prediction Units (PUs).
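The nesting just described can be pictured with the following minimal sketch; the class and field names are assumptions used only to make the hierarchy concrete (sequence > image frame > strip/slice > LCU > CU > PU), with the prediction unit carrying its filter parameter identifier.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PredictionUnit:            # PU: carries the filter parameter identifier
    filter_param_id: int = 0

@dataclass
class CodingUnit:                # CU: contains one or more PUs
    prediction_units: List[PredictionUnit] = field(default_factory=list)

@dataclass
class LargestCodingUnit:         # LCU: contains several CUs
    coding_units: List[CodingUnit] = field(default_factory=list)

@dataclass
class Slice:                     # original audio/video strip (slice)
    lcus: List[LargestCodingUnit] = field(default_factory=list)

@dataclass
class ImageFrame:                # original audio/video image frame
    slices: List[Slice] = field(default_factory=list)

@dataclass
class Sequence:                  # original audio/video sequence
    frames: List[ImageFrame] = field(default_factory=list)

# One tiny sequence: 1 frame -> 1 slice -> 1 LCU -> 1 CU -> 2 PUs.
seq = Sequence(frames=[ImageFrame(slices=[Slice(lcus=[LargestCodingUnit(
    coding_units=[CodingUnit(prediction_units=[PredictionUnit(0),
                                               PredictionUnit(1)])])])])])
print(len(seq.frames[0].slices[0].lcus[0].coding_units[0].prediction_units))
```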
Based on the above structural description, several specific implementations of the transmission filter parameters disclosed in the above embodiments are introduced below:
the first method is as follows: determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data; and taking the general filter parameter sets corresponding to all the prediction units executing the filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter set corresponding to the original audio and video sequence comprises filter parameters required for executing the filtering operation on all the prediction units in the original audio and video sequence.
Because an original audio and video data is composed of a plurality of original audio and video sequences, in the embodiment of the invention, a general filter parameter set is set for each original audio and video sequence in the original audio and video data and is used as a general filter parameter set corresponding to the original audio and video sequence, each original audio and video sequence has a one-to-one correspondence relationship with the general filter parameter set, and each prediction unit executing filtering operation in each original audio and video sequence can find a filter parameter corresponding to the prediction unit in the general filter parameter set corresponding to the original audio and video sequence.
In the embodiment of the invention, the general filter Parameter Set corresponding to each original audio and video Sequence is loaded in the SPS (Sequence Parameter Set) corresponding to each original audio and video Sequence and is transmitted to the decoding end.
The SPS of each original audio-video sequence bears filter parameters adopted by all prediction units which execute the filtering operation in the original audio-video sequence, and after receiving a general filter parameter set corresponding to the original audio-video sequence, a coded audio-video data stream and a filter parameter identifier adopted by each prediction unit which executes the filtering operation, the decoding end can find the filter parameter corresponding to each prediction unit which executes the filtering operation in the general filter parameter set corresponding to the original audio-video sequence according to the filter parameter identifier adopted by each prediction unit which executes the filtering operation, so that the filter parameter adopted by each prediction unit which executes the filtering operation at the coding end is utilized to execute backward filtering processing on the coded audio-video data stream.
Because the general filter parameter set corresponding to the original audio and video sequence does not contain repeated filter parameters, the embodiment of the invention does not need to transmit repeated parameters under the condition that a plurality of prediction units adopt the same filter parameters, and the transmitted filter parameter identifier occupies less bit number than the transmitted filter parameter content, thereby reducing the redundancy of transmitted information and improving the transmission efficiency of audio and video files.
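A minimal sketch of the first method described above: the general filter parameter set for a sequence is carried once in a structure standing in for the SPS, and each prediction unit in the coded stream carries only its identifier. The dict-based "SPS" layout and the field names are assumptions for illustration, not real SPS syntax.

```python
def build_sequence_payload(general_filter_set, per_unit_ids, coded_stream):
    """Package what the first method transmits for one original audio/video sequence."""
    sps = {
        "sps_id": 0,                                 # hypothetical identifier
        "filter_parameter_set": general_filter_set,  # distinct parameters, carried once
    }
    return {
        "sps": sps,
        "coded_stream": coded_stream,                # encoded audio/video data stream
        "pu_filter_param_ids": per_unit_ids,         # one small identifier per PU
    }

def decoder_resolve(payload, pu_index):
    """Decoder side: map a PU's identifier back to its filter parameters."""
    param_id = payload["pu_filter_param_ids"][pu_index]
    return payload["sps"]["filter_parameter_set"][param_id]

payload = build_sequence_payload(
    general_filter_set=[(1, 2, 1), (2, 4, 2), (0, 1, 0)],
    per_unit_ids=[0, 1, 0, 2],
    coded_stream=b"...",
)
print(decoder_resolve(payload, pu_index=3))   # -> (0, 1, 0)
```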
The second method comprises the following steps: determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video image frame in an original audio and video sequence; and taking the general filter parameter sets corresponding to all the prediction units executing the filtering operation in any original audio and video image frame as the general filter parameter sets corresponding to the original audio and video image frame, bearing the general filter parameter sets in the image parameter sets corresponding to the original audio and video image frame, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video image frame comprise filter parameters required for executing the filtering operation on all the prediction units in the original audio and video image frame.
Because an original audio and video sequence is composed of a plurality of original audio and video image frames, in the embodiment of the invention, a general filter parameter set is set for each original audio and video image frame in the original audio and video sequence and is used as a general filter parameter set corresponding to the original audio and video image frame, each original audio and video image frame and the general filter parameter set have a one-to-one correspondence relationship, and each prediction unit executing filtering operation in each original audio and video image frame can find a filter parameter corresponding to the prediction unit in the general filter parameter set corresponding to the original audio and video image frame.
In the embodiment of the present invention, the general filter Parameter Set corresponding to each original audio/video image frame is loaded in a PPS (Picture Parameter Set) corresponding to each original audio/video image frame, and is transmitted to the decoding end.
The PPS of each original audio and video image frame bears filter parameters adopted by all prediction units which execute filtering operation in the original audio and video image frame, and after receiving a general filter parameter set corresponding to the original audio and video image frame, a coded audio and video data stream and a filter parameter identifier adopted by each prediction unit which executes filtering operation, a decoding end can find the filter parameter corresponding to each prediction unit which executes filtering operation in the general filter parameter set corresponding to the original audio and video image frame according to the filter parameter identifier adopted by each prediction unit which executes filtering operation, so that the filter parameter adopted by each prediction unit which executes filtering operation at a coding end is utilized to execute backward filtering processing on the coded audio and video data stream.
Because the general filter parameter set corresponding to the original audio and video image frame does not contain repeated filter parameters, the embodiment of the invention does not need to transmit repeated parameters under the condition that a plurality of prediction units adopt the same filter parameters, and the transmitted filter parameter identifier occupies less bit number than the transmitted filter parameter content, thereby reducing the redundancy of transmitted information and improving the transmission efficiency of audio and video files.
The third method comprises the following steps: determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video strip in an original audio and video image frame; and taking the general filter parameter sets corresponding to all the prediction units executing the filtering operation in any original audio and video strip as the general filter parameter sets corresponding to the original audio and video strip, loading the general filter parameter sets in a strip Header (Slice Header) of the original audio and video strip, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video strip comprise filter parameters required for executing the filtering operation on all the prediction units in the original audio and video strip.
Because an original audio and video image frame is composed of a plurality of original audio and video strips, a general filter parameter set is set for each original audio and video strip in the original audio and video image frame in the embodiment of the invention and is used as the general filter parameter set corresponding to the original audio and video strip, each original audio and video strip has a one-to-one correspondence relation with the general filter parameter set, and each prediction unit executing filtering operation in each original audio and video strip can find the corresponding filter parameter in the general filter parameter set corresponding to the original audio and video strip.
The embodiment of the invention loads the general filter parameter set corresponding to each original audio and video strip in the strip head of each original audio and video strip and transmits the general filter parameter set to the decoding end.
The method comprises the steps that the head of each original audio and video strip bears filter parameters adopted by all prediction units which execute filtering operation in the original audio and video strip, and after a decoding end receives a general filter parameter set corresponding to the original audio and video strip, a coded audio and video data stream and a filter parameter identifier adopted by each prediction unit which executes filtering operation, the filter parameter corresponding to each prediction unit which executes filtering operation can be found in the general filter parameter set corresponding to the original audio and video strip according to the filter parameter identifier adopted by each prediction unit which executes filtering operation, so that the coded audio and video data stream is subjected to backward filtering processing by utilizing the filter parameter adopted by each prediction unit which executes filtering operation at a coding end.
Because the general filter parameter set corresponding to the original audio and video strip does not contain repeated filter parameters, the embodiment of the invention does not need to transmit repeated parameters under the condition that a plurality of prediction units adopt the same filter parameters, and the transmitted filter parameter identifier occupies less bit number than the transmitted filter parameter content, thereby reducing the redundancy of transmitted information and improving the transmission efficiency of audio and video files.
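The three methods differ only in the level at which the general filter parameter set is carried (sequence parameter set, image parameter set, or strip header). The sketch below shows how a decoder could resolve which carried set governs a given strip; the precedence shown (strip header first, then the frame-level set, then the sequence-level set) is an illustrative assumption, since the patent presents the three levels as alternative carriers.

```python
def resolve_filter_set(slice_header, pps, sps):
    """Pick the general filter parameter set that governs a given strip (slice).
    Assumed precedence for illustration: strip header > image parameter set >
    sequence parameter set."""
    for carrier in (slice_header, pps, sps):
        if carrier.get("filter_parameter_set"):
            return carrier["filter_parameter_set"]
    raise ValueError("no filter parameter set carried at any level")

sps = {"filter_parameter_set": [(1, 2, 1), (2, 4, 2)]}
pps = {"filter_parameter_set": None}          # frame reuses the sequence-level set
slice_header = {"filter_parameter_set": None}  # strip reuses it as well
print(resolve_filter_set(slice_header, pps, sps))
```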
It should be noted that when the general filter parameter set corresponding to an original audio and video sequence does not include the filter parameters required by some prediction units that perform the filtering operation in that sequence, or when it does not contain the most suitable filter parameters required by some of those prediction units, the encoding end needs to transmit the most suitable filter parameters required by those prediction units to the decoding end separately. A specific manner of transmitting the filter parameters corresponding to a prediction unit that performs the filtering operation at the encoding end to the decoding end is described in detail below with a specific embodiment.
Referring to fig. 5, which shows a flowchart of an embodiment of a method for transmitting filter parameters corresponding to a prediction unit performing a filtering operation at an encoding end to a decoding end according to the present application, the embodiment is mainly illustrated by applying the method to the encoding end, and referring to fig. 5, the method specifically includes the following steps:
s300, determining general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data;
s310, taking the general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end;
the general filter parameter set corresponding to the original audio-video sequence contains filter parameters required for performing filtering operation on all prediction units in the original audio-video sequence.
S320, judging whether each original audio/video image frame in any original audio/video sequence shares a general filter parameter set corresponding to the original audio/video sequence; executing step S330 when an original audio/video image frame in an original audio/video sequence shares a general filter parameter set corresponding to the original audio/video sequence; executing step S360 when an original audio/video image frame in an original audio/video sequence does not share a general filter parameter set corresponding to the original audio/video sequence;
specifically, the embodiment of the present invention determines whether a prediction unit in any original audio/video image frame in any original audio/video sequence uses a filter parameter in a general filter parameter set corresponding to the original audio/video sequence.
S330, generating sequence parameter sharing information;
in the embodiment of the present invention, the information share_sps_filter_flag is used to represent the sequence parameter sharing information; the content of the sequence parameter sharing information may be represented by a 1-bit flag, and the specific content of the flag may be 1.
The sequence parameter sharing information is used for indicating that an original audio and video image frame in an original audio and video sequence shares a general filter parameter set corresponding to the original audio and video sequence.
S340, taking the original audio and video image frame sharing the general filter parameter set corresponding to the original audio and video sequence as a sharing parameter image frame;
a shared parameter image frame is a frame in which the prediction units that perform the filtering operation use the filter parameters in the general filter parameter set corresponding to the original audio and video sequence in which the frame is located.
S350, carrying the sequence parameter sharing information in an image parameter set corresponding to the shared parameter image frame, and transmitting the sequence parameter sharing information to a decoding end;
specifically, in the embodiment of the present invention, the information whose share_sps_filter_flag content is 1 may be carried in the image parameter set corresponding to the shared parameter image frame and transmitted to the decoding end.
Optionally, when an original audio/video image frame in the original audio/video sequence does not use all of the filter parameters in the general filter parameter set corresponding to that sequence, the embodiment of the present invention may carry the information whose share_sps_filter_flag content is 1 in the image parameter set corresponding to the shared parameter image frame and, at the same time, also carry in that image parameter set the filter parameter identifiers (e.g., indexes) corresponding to the filter parameters actually used by that image frame, and transmit them to the decoding end.
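A minimal sketch of this per-frame signalling around share_sps_filter_flag: a frame that shares the sequence-level set carries the 1-bit flag, and when it uses only part of that set it also carries the identifiers it actually uses. The dict-based "image parameter set" layout and field names are assumptions for illustration.

```python
def build_pps_for_shared_frame(used_filter_param_ids, uses_whole_set):
    """Image parameter set fields for a shared parameter image frame (illustrative layout)."""
    pps = {"share_sps_filter_flag": 1}       # 1-bit flag: share the sequence-level set
    if not uses_whole_set:
        # The frame uses only part of the sequence-level set, so the
        # identifiers (indexes) it actually uses are carried as well.
        pps["used_filter_param_ids"] = sorted(set(used_filter_param_ids))
    return pps

print(build_pps_for_shared_frame([0, 2, 0], uses_whole_set=False))
# {'share_sps_filter_flag': 1, 'used_filter_param_ids': [0, 2]}
print(build_pps_for_shared_frame([0, 1, 2], uses_whole_set=True))
# {'share_sps_filter_flag': 1}
```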
S360, taking an original audio and video image frame which does not share a general filter parameter set corresponding to the original audio and video sequence as a non-shared parameter image frame, and determining a special filter parameter set adopted by the non-shared parameter image frame;
the unshared parameter image frame is a filter parameter in a general filter parameter set corresponding to an original audio-video sequence in which the frame is not used by a prediction unit performing a filtering operation in the frame.
And S370, carrying the special filter parameter set adopted by the unshared parameter image frame in the image parameter set corresponding to the unshared parameter image frame, and transmitting the special filter parameter set to a decoding end.
Since the prediction units that perform the filtering operation in a non-shared parameter image frame do not use the filter parameters in the general filter parameter set corresponding to the original audio and video sequence in which the frame is located, the special filter parameter set adopted by the non-shared parameter image frame needs to be carried in the image parameter set corresponding to that frame and transmitted to the decoding end.
Optionally, in the embodiment of the present invention, the fixed filter parameters of the filter may be fixedly stored at the decoding end. Correspondingly, the method in the embodiment of the present invention further includes: judging whether each original audio and video image frame in any original audio and video sequence shares the fixed filter parameters fixedly stored at the decoding end, that is, judging whether the prediction units in any original audio and video image frame in any original audio and video sequence use the fixed filter parameters fixedly stored at the decoding end. When the prediction units in an original audio and video image frame in an original audio and video sequence use the fixed filter parameters fixedly stored at the decoding end, fixed parameter sharing information is generated, carried in the image parameter set corresponding to the original audio and video image frame that uses the fixed filter parameters, and transmitted to the decoding end. When the prediction units in an original audio and video image frame in an original audio and video sequence do not use the fixed filter parameters fixedly stored at the decoding end, non-fixed parameter sharing information is generated, carried in the image parameter set corresponding to the original audio and video image frame that does not use the fixed filter parameters, and transmitted to the decoding end.
In the embodiment of the present invention, the value of the information share_fixed_filter_flag is used to distinguish fixed parameter sharing information from non-fixed parameter sharing information. The value of share_fixed_filter_flag may be represented by a 1-bit flag: a flag content of 1 indicates fixed parameter sharing information, and a flag content of 0 indicates non-fixed parameter sharing information.
Optionally, when an original audio/video image frame in the original audio/video sequence does not use all of the fixed filter parameters, in the embodiment of the present invention, on the premise that the flag content of share_fixed_filter_flag is set to 1, the filter parameter identifiers (e.g., indexes) corresponding to the fixed filter parameters actually used by that image frame also need to be carried in the image parameter set corresponding to the shared parameter image frame and transmitted to the decoding end.
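The fixed-filter signalling can be sketched the same way: a 1-bit share_fixed_filter_flag set to 1 when the frame's prediction units use the filter parameters fixed at the decoding end and 0 otherwise, with the used indexes added when only part of the fixed set is used. Again, the dict layout and field names are assumptions for illustration.

```python
def build_fixed_filter_signalling(uses_fixed_filters, used_fixed_ids=None):
    """Per-frame signalling for fixed filter parameters (illustrative layout)."""
    pps_fields = {"share_fixed_filter_flag": 1 if uses_fixed_filters else 0}
    if uses_fixed_filters and used_fixed_ids is not None:
        # Only part of the fixed filters is used: carry their identifiers too.
        pps_fields["used_fixed_filter_ids"] = sorted(set(used_fixed_ids))
    return pps_fields

print(build_fixed_filter_signalling(True, used_fixed_ids=[1, 3]))
# {'share_fixed_filter_flag': 1, 'used_fixed_filter_ids': [1, 3]}
print(build_fixed_filter_signalling(False))
# {'share_fixed_filter_flag': 0}
```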
It should be noted that the step of judging whether each original audio and video image frame in any original audio and video sequence shares the fixed filter parameters fixedly stored at the decoding end and the step of judging whether each such image frame shares the general filter parameter set corresponding to the original audio and video sequence, as disclosed in the embodiment of the present invention, may be executed in any order or simultaneously.
In the embodiment of the invention, the general filter parameter set corresponding to all prediction units that perform the filtering operation in any original audio and video sequence is carried in the sequence parameter set corresponding to that sequence and transmitted to the decoding end; indication information of whether each original audio and video image frame in the sequence shares the general filter parameter set corresponding to the sequence is transmitted to the decoding end; and the special filter parameter sets adopted by the non-shared parameter image frames are also transmitted to the decoding end. This method occupies fewer bits, reduces the redundancy of the transmitted information and improves the transmission efficiency of the audio and video file.
Referring to fig. 6, which shows a flowchart of an embodiment of a method for transmitting filter parameters corresponding to a prediction unit performing a filtering operation at an encoding end to a decoding end according to the present application, the embodiment is mainly illustrated by applying the method to the encoding end, and referring to fig. 6, the method specifically includes the following steps:
S400, determining, for any original audio/video sequence in the original audio/video data, the general filter parameter set corresponding to all prediction units that perform the filtering operation in that sequence;
S410, taking the general filter parameter set corresponding to all prediction units that perform the filtering operation in the original audio/video sequence as the general filter parameter set corresponding to that sequence, carrying it in the sequence parameter set corresponding to the sequence, and transmitting it to the decoding end;
The general filter parameter set corresponding to the original audio/video sequence contains the filter parameters required to perform the filtering operation on all prediction units in that sequence.
S420, judging whether each original audio/video image frame in the original audio/video sequence shares the general filter parameter set corresponding to that sequence; if an original audio/video image frame shares the general filter parameter set corresponding to the sequence, executing step S430; if an original audio/video image frame does not share the general filter parameter set corresponding to the sequence, executing step S460;
specifically, the embodiment of the present invention determines whether a prediction unit in any original audio/video image frame in any original audio/video sequence uses a filter parameter in a general filter parameter set corresponding to the original audio/video sequence.
S430, generating sequence parameter sharing information;
the first sequence parameter sharing information is used for indicating that an original audio and video image frame in an original audio and video sequence shares a common filter parameter set corresponding to the original audio and video sequence.
S440, taking the original audio and video image frame sharing the general filter parameter set corresponding to the original audio and video sequence as a sharing parameter image frame;
A shared parameter image frame is an image frame in which the prediction units that perform the filtering operation use the filter parameters in the general filter parameter set corresponding to the original audio/video sequence in which the frame is located.
S450, carrying the sequence parameter sharing information in an image parameter set corresponding to the shared parameter image frame, and transmitting the sequence parameter sharing information to a decoding end;
S460, taking an original audio/video image frame that does not share the general filter parameter set corresponding to the original audio/video sequence as a non-shared parameter image frame, and determining the special filter parameter set adopted by the non-shared parameter image frame;
A non-shared parameter image frame is an image frame in which the prediction units that perform the filtering operation do not use the filter parameters in the general filter parameter set corresponding to the original audio/video sequence in which the frame is located.
S470, carrying the special filter parameter set adopted by the non-shared parameter image frame in the image parameter set corresponding to the non-shared parameter image frame, and transmitting the image parameter set to a decoding end;
The specific execution process of steps S400 to S470 in the embodiment of the present invention is the same as that of steps S300 to S370 in the above embodiment; the two may be referred to each other, and the embodiment of the present invention is not specifically limited in this respect.
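A minimal sketch of the per-frame decision in steps S400 to S470 is given below; the container names SequenceParamInfo and FramePpsInfo and the tuple-valued filter parameters are illustrative assumptions, not the actual codec data structures.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SequenceParamInfo:
    general_filter_params: List[tuple]            # carried in the sequence parameter set (S410)

@dataclass
class FramePpsInfo:
    sequence_param_sharing: bool = False          # sequence parameter sharing information (S430)
    special_filter_params: Optional[List[tuple]] = None  # only for non-shared parameter frames (S460)

def build_frame_pps(sps: SequenceParamInfo, frame_params: List[tuple]) -> FramePpsInfo:
    """Decide whether a frame is a shared parameter or non-shared parameter image frame (S420)."""
    if all(p in sps.general_filter_params for p in frame_params):
        # S430/S450: only the sharing information is carried in the frame's image parameter set.
        return FramePpsInfo(sequence_param_sharing=True)
    # S460/S470: the frame's own special filter parameter set is carried instead.
    return FramePpsInfo(special_filter_params=frame_params)

sps = SequenceParamInfo(general_filter_params=[(1, 2, 1), (1, 4, 1)])
print(build_frame_pps(sps, [(1, 2, 1)]))      # shared parameter image frame
print(build_frame_pps(sps, [(3, 10, 3)]))     # non-shared parameter image frame
```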
If there is only one original audio/video strip in the current original audio/video image frame, no additional parameters need to be transmitted in the strip header corresponding to that strip. However, if there are multiple original audio/video strips in the current original audio/video image frame, the following steps are required.
S480, judging whether each original audio/video strip in any original audio/video image frame shares the filter parameter set corresponding to that image frame; if an original audio/video strip shares the filter parameter set corresponding to the image frame, executing step S490; if an original audio/video strip does not share the filter parameter set corresponding to the image frame, executing step S493;
Specifically, the embodiment of the present invention determines whether a prediction unit in any original audio/video strip in any original audio/video image frame uses a filter parameter in the filter parameter set corresponding to that image frame.
Specifically, the step of judging whether each original audio/video strip in any original audio/video image frame shares the filter parameter set corresponding to that image frame includes: judging whether each strip in any shared parameter image frame shares the general filter parameter set corresponding to the shared parameter image frame, and judging whether each strip in any non-shared parameter image frame shares the special filter parameter set corresponding to the non-shared parameter image frame.
S490, generating image frame parameter sharing information;
In the embodiment of the present invention, the information select_pps_filter_flag is used to represent the image frame parameter sharing information; its content may be represented by a 1-bit flag, and the specific value of the flag may be 1.
The image frame parameter sharing information is used for indicating that an original audio and video strip in an original audio and video image frame shares a general filter parameter set corresponding to the original audio and video image frame.
S491, taking the original audio/video strip sharing the filter parameter set corresponding to the original audio/video image frame as a shared parameter strip;
A shared parameter strip is a strip in which the prediction units that perform the filtering operation use the filter parameters in the filter parameter set corresponding to the original audio/video image frame in which the strip is located.
S492, carrying the image frame parameter sharing information in the strip header corresponding to the shared parameter strip, and transmitting it to the decoding end;
Specifically, in the embodiment of the present invention, the information whose select_pps_filter_flag value is 1 may be carried in the strip header corresponding to the shared parameter strip and transmitted to the decoding end.
Optionally, when an original audio/video strip in the original audio/video image frame uses only some of the filter parameters in the filter parameter set corresponding to that image frame rather than all of them, the embodiment of the present invention may still carry the information whose select_pps_filter_flag value is 1 in the strip header corresponding to the shared parameter strip; at the same time, the filter parameter identifier (e.g., an index) corresponding to each filter parameter actually used by the strip needs to be carried in the strip header and transmitted to the decoding end.
S493, taking an original audio/video strip that does not share the filter parameter set corresponding to the original audio/video image frame as a non-shared parameter strip, and determining the special filter parameter set adopted by the non-shared parameter strip;
A non-shared parameter strip is a strip in which the prediction units that perform the filtering operation do not use the filter parameters in the filter parameter set corresponding to the original audio/video image frame in which the strip is located.
S494, carrying the special filter parameter set adopted by the non-shared parameter strip in the strip header corresponding to the non-shared parameter strip, and transmitting it to the decoding end.
Since the prediction units that perform the filtering operation in a non-shared parameter strip do not use the filter parameters in the filter parameter set corresponding to the original audio/video image frame in which the strip is located, the special filter parameter set adopted by the non-shared parameter strip needs to be carried in the strip header of that strip and transmitted to the decoding end.
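The strip-level signaling of steps S480 to S494 can be sketched in the same style; StripHeader and every field name other than select_pps_filter_flag are assumptions made for this illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StripHeader:
    select_pps_filter_flag: int = 0                               # 1: strip shares the frame-level set
    used_filter_indices: List[int] = field(default_factory=list)  # identifiers, if only a subset is used
    special_filter_params: Optional[List[tuple]] = None           # carried only for non-shared parameter strips

def build_strip_header(frame_params, strip_params):
    """Build the strip-header signaling for one original audio/video strip (S480-S494)."""
    header = StripHeader()
    if all(p in frame_params for p in strip_params):
        header.select_pps_filter_flag = 1                 # S490/S492: shared parameter strip
        if set(strip_params) != set(frame_params):
            # Only part of the frame-level set is used: carry the identifiers as well.
            header.used_filter_indices = [frame_params.index(p) for p in strip_params]
    else:
        # S493/S494: non-shared parameter strip carries its own special filter parameter set.
        header.special_filter_params = strip_params
    return header

print(build_strip_header([(1, 2, 1), (1, 4, 1)], [(1, 4, 1)]))   # shared parameter strip, subset used
print(build_strip_header([(1, 2, 1), (1, 4, 1)], [(2, 4, 2)]))   # non-shared parameter strip
```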
Optionally, in the embodiment of the present invention, the general filter parameter set corresponding to all prediction units that perform the filtering operation in any original audio/video sequence is carried in the sequence parameter set corresponding to that sequence and transmitted to the decoding end; the indication information of whether each original audio/video image frame in the sequence shares the general filter parameter set corresponding to the sequence, and the indication information of whether each original audio/video strip in any image frame shares the filter parameter set corresponding to that image frame, are transmitted to the decoding end; and the special filter parameter sets adopted by the non-shared parameter image frames and the non-shared parameter strips are also transmitted to the decoding end. Compared with transmitting, one by one, the filter parameters used by each prediction unit that performs the filtering operation at the encoding end, the embodiment of the invention does not need to transmit repeated parameters, and the transmitted filter parameter identifiers and indication information occupy fewer bits than the filter parameter contents themselves, thereby reducing the redundancy of the transmitted information and improving the transmission efficiency of the audio/video file.
Optionally, since not every prediction unit in the original audio/video data needs to be filtered, in the embodiment of the present invention the information on which prediction units perform the filtering operation also needs to be transmitted to the decoding end, so that the decoding end performs the filtering operation only on the prediction units that require it.
The embodiment of the invention further includes the following steps: determining the filtering decision information corresponding to each maximum coding unit in the original audio/video data, where the filtering decision information corresponding to a maximum coding unit is used to indicate whether that maximum coding unit needs to be filtered, that is, whether it contains a prediction unit that needs to be filtered; transmitting the filtering decision information corresponding to each maximum coding unit in the original audio/video data to the decoding end; determining the filtering decision information corresponding to the coding units in a maximum coding unit that needs to be filtered, where the filtering decision information corresponding to a coding unit is used to indicate whether that coding unit needs to be filtered, that is, whether it contains a prediction unit that needs to be filtered; and transmitting the filtering decision information corresponding to the coding units in the maximum coding unit that needs to be filtered to the decoding end.
Specifically, the filtering decision information corresponding to a maximum coding unit may be carried in the largest coding unit (LCU) itself: a 1-bit flag in the LCU indicates whether the LCU needs to be filtered, where a flag value of 1 indicates that filtering is needed and a flag value of 0 indicates that it is not. If the flag in an LCU indicates that filtering is needed, a 1-bit flag is transmitted in each CU of that LCU to indicate whether the current CU needs to be filtered, again with 1 indicating that filtering is needed and 0 indicating that it is not.
If the filtering decision information transmitted in a CU indicates that the current CU needs to be filtered, the filter parameter identifier (index) transmitted in each PU of the CU indicates which filter parameter of the strip in which the current PU is located is selected for filtering.
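A minimal sketch of this hierarchical signaling is shown below; the nested dictionary layout of an LCU is an assumption made for the example, and the sketch only collects the flag and index values rather than writing an actual bitstream.

```python
def write_filter_decisions(lcu, bits):
    """Append the LCU flag, CU flags, and PU filter indices for one maximum coding unit.

    lcu is assumed to look like: {"cus": [{"pus": [{"filter_index": 2}, {"filter_index": None}]}]}
    where filter_index is None for a PU that is not filtered.
    """
    lcu_needs_filter = any(pu["filter_index"] is not None
                           for cu in lcu["cus"] for pu in cu["pus"])
    bits.append(1 if lcu_needs_filter else 0)        # 1-bit flag in the LCU
    if not lcu_needs_filter:
        return
    for cu in lcu["cus"]:
        cu_needs_filter = any(pu["filter_index"] is not None for pu in cu["pus"])
        bits.append(1 if cu_needs_filter else 0)     # 1-bit flag in each CU of the LCU
        if not cu_needs_filter:
            continue
        for pu in cu["pus"]:
            if pu["filter_index"] is not None:
                bits.append(pu["filter_index"])      # filter parameter index in the PU

bits = []
write_filter_decisions({"cus": [{"pus": [{"filter_index": 2}, {"filter_index": None}]}]}, bits)
print(bits)   # [1, 1, 2]
```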
Having disclosed the data transmission method for the encoding end above, a data transmission method applied to the decoding end is introduced below. The data transmission method applied to the decoding end corresponds to the data transmission method for the encoding end disclosed in the above embodiments of the present invention, and the specific contents may be referred to correspondingly.
As shown in fig. 7, which shows a schematic flowchart of an embodiment of another data transmission method according to the present application. This embodiment is mainly described with the method applied to the decoding end, and the data transmission method specifically includes the following steps:
S500, receiving, from the encoding end, the encoded audio/video data stream and the filter parameter identifier adopted by each prediction unit that performs the filtering operation at the encoding end;
S510, obtaining, according to the filter parameter identifier adopted by each prediction unit that performs the filtering operation at the encoding end, the filter parameter adopted by each such prediction unit;
Specifically, in the embodiment of the present invention, according to the filter parameter identifier adopted by each prediction unit that performs the filtering operation at the encoding end, the corresponding filter parameter may be obtained either from the parameters transmitted by the encoding end or from the parameters stored locally at the decoding end.
S520, in the process of performing the decoding operation on the encoded audio/video data stream, performing a reverse filtering operation on the filtered prediction block by using the filter parameter adopted by each prediction unit that performs the filtering operation at the encoding end, to obtain the original audio/video data.
In the embodiment of the invention, the decoding end receives the encoded audio/video data stream from the encoding end together with the filter parameter identifier adopted by each prediction unit that performs the filtering operation at the encoding end, and then obtains the corresponding filter parameters according to those identifiers. In the process of performing the decoding operation on the encoded audio/video data stream, a reverse filtering operation is performed on the filtered prediction blocks by using these filter parameters, so that the original audio/video data is obtained and the consistency of encoding and decoding is ensured.
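As a rough sketch of step S510, the lookup below maps a filter parameter identifier either to a parameter transmitted by the encoding end or to a parameter stored locally at the decoding end; the layout of the identifier space and the tuple-valued parameters are assumptions made for this example.

```python
def resolve_filter_param(identifier, transmitted_params, fixed_params):
    """Map a received filter parameter identifier to concrete filter parameters (S510)."""
    if identifier < len(transmitted_params):
        return transmitted_params[identifier]          # parameter received from the encoding end
    return fixed_params[identifier - len(transmitted_params)]  # parameter stored at the decoding end

transmitted = [(1, 2, 1), (1, 4, 1)]   # e.g. received in the sequence/image parameter sets
fixed = [(2, 4, 2)]                    # assumed to be stored locally at the decoding end
for pu_identifier in (0, 2):
    print(resolve_filter_param(pu_identifier, transmitted, fixed))
```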
The application also provides a data transmission device corresponding to the data transmission method. Fig. 8 shows a schematic structural diagram of a data transmission device according to the present application; the data transmission device, applied to the encoding end, may include:
the filtering unit 100 is configured to perform a filtering operation on a prediction block corresponding to an original block in original audio and video data in a process of encoding the original audio and video data to obtain an encoded audio and video data stream;
a filter parameter identifier determining unit 110, configured to determine a filter parameter identifier used by each prediction unit that performs a filtering operation in original audio/video data, where the filter parameter identifier is used to instruct a decoding end to obtain a filter parameter used by each prediction unit that performs a filtering operation at an encoding end;
a transmission unit 120, configured to transmit the encoded audio-video data stream and the filter parameter identifier adopted by each prediction unit performing the filtering operation to the decoding end.
Further comprising:
and a filter parameter transmission unit for transmitting the filter parameter corresponding to the prediction unit performing the filtering operation at the encoding end to the decoding end.
The filter parameter transmission unit includes: a first filter parameter transmission subunit to:
determining filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data, wherein the filter parameter sets comprise filter parameters required for executing filtering operation on all prediction units in the original audio and video data; taking filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data as general filter parameter sets corresponding to the original audio and video data; and transmitting the general filter parameter set corresponding to the original audio and video data to a decoding end.
The first filter parameter transmission subunit is specifically configured to:
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data; taking the general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter set corresponding to the original audio and video sequence comprises filter parameters required for executing filtering operation on all the prediction units in the original audio and video sequence;
or;
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video image frame in an original audio and video sequence; taking general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video image frame as general filter parameter sets corresponding to the original audio and video image frame, bearing the general filter parameter sets in image parameter sets corresponding to the original audio and video image frame, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video image frame comprise filter parameters required for executing filtering operation on all the prediction units in the original audio and video image frame;
or;
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video strip in an original audio and video image frame; and taking the general filter parameter sets corresponding to all the prediction units executing the filtering operation in any original audio and video strip as the general filter parameter sets corresponding to the original audio and video strip, carrying the general filter parameter sets in a strip head of the original audio and video strip, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video strip comprise filter parameters required for executing the filtering operation on all the prediction units in the original audio and video strip.
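As a rough sketch of these three carriage granularities, the helper below places a general filter parameter set at the chosen level; the container names and level strings are assumptions made for illustration only.

```python
def carry_general_filter_params(level, params):
    """Place a general filter parameter set at one of the three carriage granularities."""
    containers = {
        "sequence": "sequence_parameter_set",   # per original audio/video sequence
        "frame": "image_parameter_set",         # per original audio/video image frame
        "strip": "strip_header",                # per original audio/video strip
    }
    return {containers[level]: {"general_filter_params": params}}

print(carry_general_filter_params("frame", [(1, 2, 1), (1, 4, 1)]))
```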
The filter parameter transmission unit includes: a second filter parameter transmission subunit to:
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data; taking the general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter set corresponding to the original audio and video sequence comprises filter parameters required for executing filtering operation on all the prediction units in the original audio and video sequence;
judging whether each original audio and video image frame in any original audio and video sequence shares a general filter parameter set corresponding to the original audio and video sequence;
under the condition that an original audio and video image frame in an original audio and video sequence shares a general filter parameter set corresponding to the original audio and video sequence, sequence parameter sharing information is generated;
taking the original audio and video image frame sharing the general filter parameter set corresponding to the original audio and video sequence as a sharing parameter image frame;
the sequence parameter sharing information is carried in an image parameter set corresponding to the shared parameter image frame and is transmitted to a decoding end;
under the condition that an original audio and video image frame in an original audio and video sequence does not share a general filter parameter set corresponding to the original audio and video sequence, taking the original audio and video image frame which does not share the general filter parameter set corresponding to the original audio and video sequence as a non-shared parameter image frame, and determining a special filter parameter set adopted by the non-shared parameter image frame;
and carrying the special filter parameter set adopted by the non-shared parameter image frame in the image parameter set corresponding to the non-shared parameter image frame, and transmitting the image parameter set to a decoding end.
Further comprising: a third filter parameter transmission subunit to:
judging whether each original audio and video strip in any original audio and video image frame shares a filter parameter set corresponding to the original audio and video image frame;
generating image frame parameter sharing information under the condition that an original audio and video strip in an original audio and video image frame shares a filter parameter set corresponding to the original audio and video image frame;
taking an original audio and video strip sharing a filter parameter set corresponding to the original audio and video image frame as a shared parameter strip;
the image frame parameter sharing information is carried in a strip head corresponding to the sharing parameter strip and is transmitted to a decoding end;
under the condition that an original audio and video strip in an original audio and video image frame does not share a filter parameter set corresponding to the original audio and video image frame, taking the original audio and video strip which does not share the filter parameter set corresponding to the original audio and video image frame as a non-shared parameter strip, and determining a special filter parameter set adopted by the non-shared parameter strip;
and carrying the specific filter parameter set adopted by the unshared parameter strip in a strip header corresponding to the unshared parameter strip, and transmitting the special filter parameter set to a decoding end.
Further comprising:
the first filtering decision information determining unit is used for determining filtering decision information corresponding to each maximum coding unit in original audio and video data, and the filtering decision information corresponding to the maximum coding unit is used for indicating whether the maximum coding unit needs to be filtered or not;
and the first filtering decision information transmission unit is used for transmitting the filtering decision information corresponding to each maximum coding unit in the original audio and video data to the decoding end.
Further comprising:
the second filtering decision information determining unit is used for determining filtering decision information corresponding to a coding unit in a maximum coding unit which needs to be filtered, and the filtering decision information corresponding to the coding unit is used for indicating whether the coding unit needs to be filtered or not;
and the second filtering decision information transmission unit is used for transmitting the filtering decision information corresponding to the coding unit in the maximum coding unit which needs to be filtered to the decoding end.
The application also provides another data transmission device corresponding to the data transmission method. Fig. 9 shows a schematic structural diagram of a data transmission device according to the present application; the data transmission device, applied to the decoding end, may include:
a receiving unit 200, configured to receive, from an encoding end, an encoded audio-visual data stream and an identification of filter parameters used by each prediction unit performing a filtering operation at the encoding end;
a filter parameter obtaining unit 210, configured to obtain a filter parameter used by each prediction unit that performs a filtering operation at the encoding end according to a filter parameter identifier used by each prediction unit that performs a filtering operation at the encoding end;
and a backward filtering unit 220, configured to, in the process of performing a decoding operation on the encoded audio/video data stream, perform a backward filtering operation on the filtered prediction block by using the filter parameter adopted by each prediction unit that performs a filtering operation at the encoding end, so as to obtain original audio/video data.
The embodiment of the invention also discloses a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are loaded and executed by a processor, the data transmission method is realized.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A data transmission method, wherein the method is applied to an encoding end, and the method comprises:
in the process of coding original audio and video data to obtain a coded audio and video data stream, performing filtering operation on a prediction block corresponding to an original block in the original audio and video data;
determining a filter parameter identifier adopted by each prediction unit which executes filtering operation in original audio and video data, wherein the filter parameter identifier is used for indicating a decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at an encoding end;
and transmitting the coded audio-video data stream and the filter parameter identification adopted by each prediction unit for performing the filtering operation to a decoding end.
2. The method of claim 1, further comprising:
and transmitting filter parameters corresponding to the prediction unit which performs the filtering operation at the encoding end to the decoding end.
3. The method of claim 2, wherein transmitting filter parameters corresponding to the prediction unit performing the filtering operation at the encoding end to the decoding end comprises:
determining filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data, wherein the filter parameter sets comprise filter parameters required for executing filtering operation on all prediction units in the original audio and video data; taking filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data as general filter parameter sets corresponding to the original audio and video data; and transmitting the general filter parameter set corresponding to the original audio and video data to a decoding end.
4. The method according to claim 3, wherein the filter parameter sets corresponding to all prediction units performing the filtering operation in the original audio/video data are determined, and the filter parameter sets include filter parameters required for performing the filtering operation on all prediction units in the original audio/video data; taking filter parameter sets corresponding to all prediction units executing filtering operation in original audio and video data as general filter parameter sets corresponding to the original audio and video data; transmitting the general filter parameter set corresponding to the original audio and video data to a decoding end comprises the following steps:
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data; taking the general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter set corresponding to the original audio and video sequence comprises filter parameters required for executing filtering operation on all the prediction units in the original audio and video sequence;
or;
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video image frame in an original audio and video sequence; taking general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video image frame as general filter parameter sets corresponding to the original audio and video image frame, bearing the general filter parameter sets in image parameter sets corresponding to the original audio and video image frame, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video image frame comprise filter parameters required for executing filtering operation on all the prediction units in the original audio and video image frame;
or;
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video strip in an original audio and video image frame; and taking the general filter parameter sets corresponding to all the prediction units executing the filtering operation in any original audio and video strip as the general filter parameter sets corresponding to the original audio and video strip, carrying the general filter parameter sets in a strip head of the original audio and video strip, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter sets corresponding to the original audio and video strip comprise filter parameters required for executing the filtering operation on all the prediction units in the original audio and video strip.
5. The method of claim 2, wherein transmitting filter parameters corresponding to the prediction unit performing the filtering operation at the encoding end to the decoding end comprises:
determining a general filter parameter set corresponding to all prediction units executing filtering operation in any original audio and video sequence in original audio and video data; taking the general filter parameter sets corresponding to all prediction units executing filtering operation in any original audio and video sequence as the general filter parameter sets corresponding to the original audio and video sequence, loading the general filter parameter sets in the sequence parameter set corresponding to the original audio and video sequence, and transmitting the general filter parameter sets to a decoding end, wherein the general filter parameter set corresponding to the original audio and video sequence comprises filter parameters required for executing filtering operation on all the prediction units in the original audio and video sequence;
judging whether each original audio and video image frame in any original audio and video sequence shares a general filter parameter set corresponding to the original audio and video sequence;
under the condition that an original audio and video image frame in an original audio and video sequence shares a general filter parameter set corresponding to the original audio and video sequence, sequence parameter sharing information is generated;
taking the original audio and video image frame sharing the general filter parameter set corresponding to the original audio and video sequence as a sharing parameter image frame;
the sequence parameter sharing information is carried in an image parameter set corresponding to the shared parameter image frame and is transmitted to a decoding end;
under the condition that an original audio and video image frame in an original audio and video sequence does not share a general filter parameter set corresponding to the original audio and video sequence, taking the original audio and video image frame which does not share the general filter parameter set corresponding to the original audio and video sequence as a non-shared parameter image frame, and determining a special filter parameter set adopted by the non-shared parameter image frame;
and carrying the special filter parameter set adopted by the non-shared parameter image frame in the image parameter set corresponding to the non-shared parameter image frame, and transmitting the image parameter set to a decoding end.
6. The method of claim 5, further comprising:
judging whether each original audio and video strip in any original audio and video image frame shares a filter parameter set corresponding to the original audio and video image frame;
generating image frame parameter sharing information under the condition that an original audio and video strip in an original audio and video image frame shares a filter parameter set corresponding to the original audio and video image frame;
taking an original audio and video strip sharing a filter parameter set corresponding to the original audio and video image frame as a shared parameter strip;
the image frame parameter sharing information is carried in a strip head corresponding to the sharing parameter strip and is transmitted to a decoding end;
under the condition that an original audio and video strip in an original audio and video image frame does not share a filter parameter set corresponding to the original audio and video image frame, taking the original audio and video strip which does not share the filter parameter set corresponding to the original audio and video image frame as a non-shared parameter strip, and determining a special filter parameter set adopted by the non-shared parameter strip;
and carrying the specific filter parameter set adopted by the unshared parameter strip in a strip header corresponding to the unshared parameter strip, and transmitting the special filter parameter set to a decoding end.
7. The method of any one of claims 1-6, further comprising:
determining filtering decision information corresponding to each maximum coding unit in original audio and video data, wherein the filtering decision information corresponding to the maximum coding unit is used for indicating whether the maximum coding unit needs to be filtered or not;
and transmitting the filtering decision information corresponding to each maximum coding unit in the original audio and video data to a decoding end.
8. The method according to claim 7, wherein after transmitting the filtering decision information corresponding to each maximum coding unit in the original audio-video data to the decoding end, the method further comprises:
determining filtering decision information corresponding to a coding unit in a maximum coding unit needing filtering, wherein the filtering decision information corresponding to the coding unit is used for indicating whether the coding unit needs filtering or not;
and transmitting the filtering decision information corresponding to the coding unit in the maximum coding unit needing filtering to a decoding end.
9. A data transmission method, wherein the method is applied to a decoding end, and the method comprises:
receiving an encoded audio-visual data stream from an encoding end and an identification of filter parameters to be used by each prediction unit performing a filtering operation at the encoding end;
acquiring a filter parameter adopted by each prediction unit which executes the filtering operation at the encoding end according to the filter parameter identification adopted by each prediction unit which executes the filtering operation at the encoding end;
and in the process of executing decoding operation on the coded audio and video data stream, utilizing the filter parameters adopted by each prediction unit executing filtering operation at the coding end to execute backward filtering operation on the filtered prediction block to obtain original audio and video data.
10. A data transmission apparatus, wherein the apparatus is applied to an encoding end, the apparatus comprising:
the filtering unit is used for performing filtering operation on a prediction block corresponding to an original block in original audio and video data in the process of coding the original audio and video data to obtain a coded audio and video data stream;
the device comprises a filter parameter identification determining unit, a decoding end and a prediction end, wherein the filter parameter identification determining unit is used for determining a filter parameter identification adopted by each prediction unit which executes filtering operation in original audio and video data, and the filter parameter identification is used for indicating the decoding end to obtain a filter parameter adopted by each prediction unit which executes filtering operation at the encoding end;
and the transmission unit is used for transmitting the coded audio-video data stream and the filter parameter identification adopted by each prediction unit which performs the filtering operation to the decoding end.
11. A data transmission apparatus, wherein the apparatus is applied to a decoding end, the apparatus comprising:
a receiving unit, for receiving the encoded audio-visual data stream from the encoding end and the filter parameter identification adopted by each prediction unit performing the filtering operation at the encoding end;
the filter parameter acquiring unit is used for acquiring the filter parameters adopted by each prediction unit which executes the filtering operation at the encoding end according to the filter parameter identification adopted by each prediction unit which executes the filtering operation at the encoding end;
and the backward filtering unit is used for performing backward filtering operation on the filtered prediction block by utilizing the filter parameters adopted by each prediction unit which performs filtering operation at the encoding end in the process of performing decoding operation on the encoded audio and video data stream to obtain the original audio and video data.
12. A computer device, characterized in that the computer device comprises:
a processor and a memory;
the processor is used for calling and executing the program stored in the memory;
the memory is configured to store the program, the program at least to:
performing the data transmission method according to any of the preceding claims 1-8; or; the data transmission method according to claim 9 is performed.
13. A computer-readable storage medium having stored thereon computer-executable instructions that, when loaded and executed by a processor, carry out a data transmission method according to any one of claims 1 to 8; or; the data transmission method according to claim 9 is performed.
CN201911047001.0A 2019-10-30 2019-10-30 Data transmission method, device, computer equipment and storage medium Active CN110809163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911047001.0A CN110809163B (en) 2019-10-30 2019-10-30 Data transmission method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110809163A true CN110809163A (en) 2020-02-18
CN110809163B CN110809163B (en) 2023-05-12

Family

ID=69489694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911047001.0A Active CN110809163B (en) 2019-10-30 2019-10-30 Data transmission method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110809163B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102484713A (en) * 2009-09-17 2012-05-30 株式会社东芝 Dynamic image encoding method and dynamic image decoding method
CN105979272A (en) * 2011-02-16 2016-09-28 联发科技股份有限公司 Method and apparatus for sharing information in video codecs system
WO2012167927A1 (en) * 2011-06-09 2012-12-13 Panasonic Corporation Coding of control data for adaptive loop filters
CN104735450A (en) * 2015-02-26 2015-06-24 北京大学 Method and device for carrying out adaptive loop filtering during video coding and decoding
CN108141602A (en) * 2015-09-25 2018-06-08 华为技术有限公司 video motion compensation device and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727106A (en) * 2020-05-25 2021-11-30 腾讯科技(深圳)有限公司 Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium
CN113727106B (en) * 2020-05-25 2024-03-26 腾讯科技(深圳)有限公司 Video encoding and decoding methods, devices, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110809163B (en) 2023-05-12

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40017680

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant