WO2021134654A1 - Method and device for video encoding - Google Patents

Method and device for video encoding

Info

Publication number
WO2021134654A1
WO2021134654A1 (PCT/CN2019/130875, CN2019130875W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
encoding
image block
preset
boundary
Prior art date
Application number
PCT/CN2019/130875
Other languages
English (en)
French (fr)
Inventor
王悦名
郑萧桢
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980048568.1A (CN112534824B)
Priority to PCT/CN2019/130875
Publication of WO2021134654A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Definitions

  • This application relates to the field of image processing, and more specifically, to a method and device for video encoding.
  • Multi-core hardware encoders usually divide an image or video into multiple tiles or slice segments (SS, "slices" for short), with each core responsible for encoding one or more tiles or slices.
  • the embodiments of the present application provide a video encoding method and device, which can eliminate the boundary blocking effect caused by different cores encoding different images in the same video, improve the image display quality, and further improve the user's experience of watching the video.
  • a video encoding method is provided, including: encoding, by a first processor, a first image and a first boundary image block in an image to be encoded, where the first boundary image block is a boundary image block of a second image in the image to be encoded, and the first image is adjacent to the second image; and encoding, by a second processor, the second image and a second boundary image block, where the second boundary image block is a boundary image block in the first image, and the first boundary image block is adjacent to the second boundary image block. The encoding information used when the first processor encodes the first boundary image block is the same as the encoding information used when the second processor encodes the first boundary image block, and the encoding information used when the second processor encodes the second boundary image block is the same as the encoding information used when the first processor encodes the second boundary image block. The encoded first image and first boundary image block are used to filter the adjacent boundary between the first image and the second image, and/or the encoded second image and second boundary image block are used to filter that adjacent boundary.
  • a video encoding device is provided, including a first processor and a second processor. The first processor is configured to encode a first image and a first boundary image block in an image to be encoded, where the first boundary image block is a boundary image block of a second image in the image to be encoded, and the first image is adjacent to the second image. The second processor is configured to encode the second image and a second boundary image block, where the second boundary image block is a boundary image block in the first image, and the first boundary image block is adjacent to the second boundary image block. The encoding information used by the first processor when encoding the first boundary image block is the same as the encoding information used by the second processor when encoding the first boundary image block, and the encoding information used by the second processor when encoding the second boundary image block is the same as the encoding information used by the first processor when encoding the second boundary image block. The first processor is further configured to use the encoded first image and first boundary image block to filter the adjacent boundary between the first image and the second image.
  • a video encoding device including a processor and a memory.
  • the memory is used to store a computer program, and the processor is used to call and run the computer program stored in the memory to execute the method in the above-mentioned first aspect or each of its implementation modes.
  • a chip is provided for implementing the method in the first aspect or its implementation manners.
  • the chip includes: a processor, configured to call and run a computer program from the memory, so that the device installed with the chip executes the method in the first aspect or its implementation manners.
  • a computer-readable storage medium for storing a computer program.
  • the computer program includes instructions for executing the first aspect or any possible implementation of the first aspect.
  • a computer program product including computer program instructions that cause a computer to execute the above-mentioned first aspect or the method in each implementation manner of the first aspect.
  • in the embodiments of the present application, the first boundary image block is encoded by the first processor and the second boundary image block is encoded by the second processor, where the encoding information used by the first processor to encode the first boundary image block is the same as the encoding information used by the second processor to encode that boundary image block of the second image, and the encoding information used by the second processor to encode the second boundary image block is the same as the encoding information used by the first processor to encode that boundary image block of the first image.
  • the adjacent boundary between the first image and the second image can then be filtered using the encoded first image and first boundary image block, and/or using the encoded second image and second boundary image block, so that the boundary blocking effect between images caused by encoding with different processors can be eliminated, the image display quality can be improved, and the user's viewing experience can be further improved.
  • Fig. 1 is an architecture diagram of a technical solution applying an embodiment of the present application
  • Fig. 2 is a schematic diagram of a video coding framework 2 according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a video encoding method provided by an embodiment of the present application.
  • Figure 4a is a schematic diagram of a video to be coded divided according to an embodiment of the present application.
  • FIG. 4b is a schematic diagram of a video to be coded divided according to another embodiment of the present application.
  • FIG. 4c is a schematic diagram of a video to be coded divided according to still another embodiment of the present application.
  • FIG. 4d is a schematic diagram of a video to be coded divided according to another embodiment of the present application.
  • FIG. 4e is a schematic diagram of a video to be coded divided according to another embodiment of the present application.
  • FIG. 5a is a schematic diagram of a video to be coded divided according to another embodiment of the present application.
  • FIG. 5b is a schematic diagram of a video to be coded divided according to another embodiment of the present application.
  • FIG. 6 is a schematic diagram of dividing a video to be coded according to another embodiment of the present application.
  • FIG. 7 is a schematic diagram of dividing a video to be coded according to another embodiment of the present application.
  • FIG. 8 is a schematic diagram of dividing a video to be coded according to another embodiment of the present application.
  • FIG. 9 is a schematic diagram of dividing a video to be coded according to another embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a video encoding device provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • Fig. 1 is a structural diagram of a technical solution applying an embodiment of the present application.
  • the system 100 can receive the data 102 to be processed, process the data 102 to be processed, and generate processed data 108.
  • the system 100 may receive the data to be encoded and encode the data to be encoded to generate encoded data, or the system 100 may receive the data to be decoded and decode the data to be decoded to generate decoded data.
  • the components in the system 100 may be implemented by one or more processors.
  • the processor may be a processor in a computing device or a processor in a mobile device (such as a drone).
  • the processor may be any type of processor, which is not limited in the embodiment of the present invention.
  • the processor may include an encoder, a decoder, or a codec, etc.
  • One or more memories may also be included in the system 100.
  • the memory can be used to store instructions and data, for example, computer-executable instructions that implement the technical solutions of the embodiments of the present invention, to-be-processed data 102, processed data 108, and so on.
  • the memory may be any type of memory, which is not limited in the embodiment of the present invention.
  • the data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded.
  • the data to be encoded may include sensor data from sensors, which may be vision sensors (for example, cameras, infrared sensors), microphones, near-field sensors (for example, ultrasonic sensors, radars), position sensors, temperature sensors, touch sensors, etc.
  • the data to be encoded may include information from the user, for example, biological information, which may include facial features, fingerprint scans, retinal scans, voice recordings, DNA sampling, and the like.
  • Fig. 2 is a schematic diagram of a video coding framework 2 according to an embodiment of the present application.
  • each frame in the video to be coded is coded in sequence.
  • the current coded frame mainly undergoes processing such as prediction (Prediction), transformation (Transform), quantization (Quantization), and entropy coding (Entropy Coding), and finally the bit stream of the current coded frame is output.
  • the decoding process usually decodes the received code stream according to the inverse process of the above process to recover the video frame information before decoding.
  • the video encoding framework 2 includes an encoding control module 201, which is used to perform decision-making control actions and parameter selection in the encoding process.
  • the encoding control module 201 controls the parameters used in transformation, quantization, inverse quantization, and inverse transformation, controls the selection of intra-frame or inter-frame modes, and controls the parameters of motion estimation and filtering; the control parameters of the encoding control module 201 are also input to the entropy encoding module and encoded to form a part of the encoded bitstream.
  • the encoded frame is partitioned 202, specifically, firstly, slice partition is performed, and then block partition is performed.
  • the coded frame is divided into a plurality of non-overlapping largest coding tree units (Coding Tree Units, CTUs), and each CTU can be iteratively divided, in a quadtree, binary tree, or ternary tree manner, into a series of smaller coding units (Coding Unit, CU).
  • the CU may also include a prediction unit (Prediction Unit, PU) and a transformation unit (Transform Unit, TU) associated with it.
  • the PU is the basic unit of prediction;
  • TU is the basic unit of transformation and quantization.
  • the PU and TU are respectively obtained by dividing into one or more blocks on the basis of the CU, where one PU includes multiple prediction blocks (PB) and related syntax elements.
  • the PU and TU may be the same, or obtained by the CU through different division methods.
  • at least two of the CU, PU, and TU are the same.
  • CU, PU, and TU are not distinguished, and prediction, quantization, and transformation are all performed in units of CU.
  • the CTU, CU, or other data units formed are all referred to as coding blocks in the following.
  • the data unit for video encoding may be a frame, a slice, a coding tree unit, a coding unit, a coding block, or any group of the above.
  • the size of the data unit can vary.
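The quadtree division of a CTU into CUs described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the `should_split` callback stands in for the encoder's actual split decision (e.g., rate-distortion optimization) and is an assumption here.

```python
# Illustrative sketch: recursively split a CTU into CUs with a quadtree,
# stopping at a minimum size or when the split decision says not to split.
def quadtree_split(x, y, size, min_size, should_split):
    """Return a list of (x, y, size) coding units covering the CTU."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dy in (0, half):
        for dx in (0, half):
            cus += quadtree_split(x + dx, y + dy, half, min_size, should_split)
    return cus

# Example: split a 64x64 CTU once at the top level only.
cus = quadtree_split(0, 0, 64, 8, lambda x, y, s: s == 64)
# yields four 32x32 CUs
```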
  • a prediction process is performed to remove the spatial and temporal redundant information of the current coded frame.
  • the commonly used prediction methods include intra-frame prediction and inter-frame prediction. Intra-frame prediction uses only the reconstructed information in the current frame to predict the current coding block, while inter-frame prediction uses information in other previously reconstructed frames (also called reference frames) to predict the current coding block.
  • the encoding control module 201 is used to make a decision to select intra prediction or inter prediction.
  • the process of intra-frame prediction 203 includes: obtaining the reconstructed blocks of the coded neighboring blocks around the current coding block as reference blocks; calculating a predicted value based on the pixel values of the reference blocks using a prediction mode to generate the prediction block; and subtracting the corresponding pixel values of the prediction block from those of the current coding block to obtain the residual of the current coding block. The residual of the current coding block is then transformed 204, quantized 205, and entropy coded 210 to form the code stream of the current coding block. Further, after all coded blocks of the current coded frame undergo the above coding process, they form a part of the coded stream of the frame. In addition, the control and reference data generated in the intra-frame prediction 203 are also encoded by the entropy encoding 210 to form a part of the encoded bitstream.
  • the transform 204 is used to remove the correlation of the residual of the image block, so as to improve the coding efficiency.
  • the transformation of the residual data of the current coding block usually adopts a two-dimensional discrete cosine transform (DCT) or a two-dimensional discrete sine transform (DST); for example, the residual information of the coded block is multiplied by an N×M transformation matrix and its transposed matrix, and the transformation coefficients of the current coding block are obtained after the multiplication.
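The matrix form of the transform described above can be sketched in Python as follows. This uses the floating-point orthonormal DCT-II matrix for illustration; real codecs use integer approximations of these matrices, so the exact values here are an assumption.

```python
import math

# Illustrative sketch: 2D transform of an NxN residual block as T @ R @ T^T,
# where T is the orthonormal DCT-II matrix.
def dct_matrix(n):
    t = [[0.0] * n for _ in range(n)]
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        for i in range(n):
            t[k][i] = scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
    return t

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transform_2d(residual):
    t = dct_matrix(len(residual))
    t_transposed = [list(row) for row in zip(*t)]
    return matmul(matmul(t, residual), t_transposed)  # coefficients = T R T^T

# A flat residual block concentrates all its energy in the DC coefficient.
coeffs = transform_2d([[4.0] * 4 for _ in range(4)])
# coeffs[0][0] == 16.0 (DC); all other coefficients are ~0
```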
  • the quantization 205 is used to further improve the compression efficiency.
  • the transform coefficients can be quantized to obtain the quantized coefficients, and then the quantized coefficients are entropy-encoded 210 to obtain the residual code stream of the current coding block.
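The quantization step described above can be sketched as uniform scalar quantization. Real codecs derive the quantization step from a quantization parameter (QP) and use integer arithmetic throughout, so treating `qstep` as a plain float is a simplifying assumption for illustration.

```python
# Illustrative sketch: uniform scalar quantization of transform coefficients
# with quantization step qstep, and the matching dequantization.
def quantize(coeffs, qstep):
    return [[int(round(c / qstep)) for c in row] for row in coeffs]

def dequantize(levels, qstep):
    return [[level * qstep for level in row] for row in levels]

coeffs = [[16.0, 2.4], [-3.1, 0.2]]
levels = quantize(coeffs, 2.0)    # quantized coefficients to entropy-code
recon = dequantize(levels, 2.0)   # reconstructed coefficients (lossy)
```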
  • the entropy coding method includes, but is not limited to, context adaptive binary arithmetic coding (Context Adaptive Binary Arithmetic Coding, CABAC).
  • the bit stream obtained by entropy coding and the coding mode information after coding are stored or sent to the decoding end.
  • the quantization result is also inversely quantized 206, and the inverse quantization result is inversely transformed 207.
  • the reconstructed pixels are obtained by using the inverse transform result and the motion compensation result.
  • filtering (i.e., loop filtering) is then performed on the reconstructed pixels;
  • the filtered reconstructed image (belonging to the reconstructed video frame) is output.
  • the reconstructed image can be used as a reference frame image of other frame images for inter-frame prediction.
  • the reconstructed image may also be referred to as a reconstruction image.
  • the coded neighboring blocks in the intra prediction 203 process are the neighboring blocks that were coded before the current coding block; the residual generated when coding a neighboring block is transformed 204, quantized 205, inversely quantized 206, and inversely transformed 207, and then added to the prediction block of that neighboring block to obtain the reconstructed block.
  • the inverse quantization 206 and the inverse transformation 207 are the inverse processes of the quantization 205 and the transformation 204, and are used to restore the residual data before quantization and transformation.
  • the intra prediction mode may include a direct current (DC) prediction mode, a flat (Planar) prediction mode, and different angle prediction modes (for example, it may include 33 angle prediction modes).
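As an illustration of one of the modes listed above, the following is a minimal sketch of DC intra prediction, which predicts every pixel of the current block as the mean of the reconstructed reference pixels above and to the left. The exact reference-pixel set and rounding rule are assumptions here, not taken from the patent.

```python
# Illustrative sketch: DC intra prediction fills the whole block with the
# mean of the neighboring reconstructed reference pixels.
def dc_predict(top, left, size):
    refs = list(top) + list(left)
    dc = int(round(sum(refs) / len(refs)))
    return [[dc] * size for _ in range(size)]

# Hypothetical reconstructed reference pixels around a 4x4 block.
pred = dc_predict(top=[100, 102, 98, 100], left=[101, 99, 100, 100], size=4)
# every predicted pixel is 100
```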
  • the inter prediction process includes motion estimation (ME) 208 and motion compensation (MC) 209.
  • the motion estimation is performed 208 according to the reference frame image in the reconstructed video frame, and the image block most similar to the current encoding block is searched for in one or more reference frame images according to a certain matching criterion as a matching block.
  • the relative displacement between the matching block and the current coding block is the motion vector (Motion Vector, MV) of the current coding block.
  • motion compensation is performed 209 on the current coding block to obtain the prediction block of the current coding block.
  • the original value of the pixel of the coding block is subtracted from the pixel value of the corresponding prediction block to obtain the residual of the coding block.
  • the residual of the current coding block undergoes transformation 204, quantization 205 and entropy coding 210 to form a part of the code stream of the coded frame.
  • the control and reference data generated in the motion compensation 209 are also encoded by the entropy encoding 210 to form a part of the encoded bitstream.
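The motion estimation described above can be sketched as a full search with the sum of absolute differences (SAD) as the matching criterion. The search pattern, block size, and test frames below are illustrative assumptions, not the patent's method.

```python
# Illustrative sketch: full-search motion estimation over a small window in a
# reference frame, using SAD as the matching criterion.
def sad(block_a, block_b):
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def motion_search(cur, ref, bx, by, bs, search_range):
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + bs > len(ref) or x + bs > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cand = [row[x:x + bs] for row in ref[y:y + bs]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

# Synthetic frames: cur is ref shifted left by one pixel, so the block at
# (2, 2) in cur matches the reference block at (3, 2), i.e. MV (1, 0).
ref = [[(3 * x + 5 * y) % 17 for x in range(8)] for y in range(8)]
cur = [[(3 * (x + 1) + 5 * y) % 17 for x in range(8)] for y in range(8)]
mv, cost = motion_search(cur, ref, 2, 2, 2, 2)
# mv == (1, 0), cost == 0
```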
  • the reconstructed video frame is a video frame obtained after filtering 211.
  • the reconstructed video frame includes one or more reconstructed images.
  • Filtering 211 is used to reduce compression distortions such as blocking effects and ringing effects generated in the encoding process.
  • the reconstructed video frame is used to provide reference frames for inter-frame prediction during the encoding process.
  • at the decoding end, the reconstructed video frame is output as the final decoded video after post-processing.
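The blocking-artifact reduction performed by filtering 211 can be sketched as a toy one-dimensional filter across a vertical block boundary. Real codec deblocking uses boundary-strength decisions and longer filter taps, so the tap weights here are an assumption chosen only to show the idea.

```python
# Illustrative sketch: smooth the two pixels adjacent to a vertical block
# boundary by pulling them toward their average.
def deblock_vertical(row, boundary):
    p, q = row[boundary - 1], row[boundary]
    row = list(row)
    row[boundary - 1] = (3 * p + q + 2) // 4   # weighted toward p
    row[boundary] = (p + 3 * q + 2) // 4       # weighted toward q
    return row

# A sharp 10->50 step at the block boundary is softened to 10,20,40,50.
filtered = deblock_vertical([10, 10, 10, 10, 50, 50, 50, 50], boundary=4)
```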
  • the inter prediction mode may include an advanced motion vector prediction (Advanced Motion Vector Prediction, AMVP) mode, a merge mode or a skip mode.
  • for the AMVP mode, the motion vector prediction (Motion Vector Prediction, MVP) of the current block is determined first.
  • the starting point of motion estimation can be determined according to the MVP; a motion search is performed near the starting point, and the optimal MV is obtained after the search is completed;
  • the position of the reference block in the reference image is determined by the MV
  • the reference block is subtracted from the current block to obtain the residual block
  • the MVP is subtracted from the MV to obtain the Motion Vector Difference (MVD)
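The MV/MVP/MVD relationship described above in a minimal sketch: the encoder signals MVD = MV - MVP instead of the MV itself, and the decoder recovers MV = MVP + MVD.

```python
# Illustrative sketch of AMVP-style motion vector signaling.
def mvd(mv, mvp):
    """Motion Vector Difference written to the bitstream."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def recover_mv(mvp, d):
    """Decoder side: MV = MVP + MVD."""
    return (mvp[0] + d[0], mvp[1] + d[1])

mv, mvp = (5, -3), (4, -1)   # hypothetical searched MV and its predictor
d = mvd(mv, mvp)             # (1, -2) is what gets signaled
assert recover_mv(mvp, d) == mv
```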
  • for the Merge mode, the MVP can be determined first and directly used as the MV of the current block. To obtain the MVP, an MVP candidate list (merge candidate list) can be built first; the MVP candidate list can include at least one candidate MVP, and each candidate MVP can correspond to an index. After the encoding end selects the MVP from the MVP candidate list, the MVP index can be written into the code stream, and the decoding end can find the MVP corresponding to the index in the MVP candidate list according to the index, so as to realize the decoding of the image block.
  • Merge mode can also have other implementations.
  • Skip mode is a special case of Merge mode: after obtaining the MV according to the Merge mode, if the encoding end determines that the current block is basically the same as the reference block, there is no need to transmit residual data, only the index of the MVP; further, a flag can be passed to indicate that the current block can be obtained directly from the reference block.
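The index-only signaling of the Merge mode described above can be sketched as follows; the candidate list contents are hypothetical, and real candidate-list construction (spatial/temporal candidates, pruning) is omitted.

```python
# Illustrative sketch: in Merge mode the encoder writes only an index into
# the MVP candidate list; the decoder looks the MV up from the same list.
merge_candidates = [(0, 0), (2, -1), (-3, 4)]  # hypothetical candidate MVs

def encode_merge(mv, candidates):
    return candidates.index(mv)      # index written to the bitstream

def decode_merge(index, candidates):
    return candidates[index]         # MV recovered at the decoder

idx = encode_merge((2, -1), merge_candidates)
assert decode_merge(idx, merge_candidates) == (2, -1)
```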
  • Merge mode can be applied to triangle prediction technology.
  • the image block to be coded can be divided into two sub-image blocks with a triangular shape.
  • a motion vector can be determined for each sub-image block from the motion information candidate list;
  • the prediction sub-block corresponding to each sub-image block is determined based on its motion vector, and the prediction block of the current image block is constructed from the prediction sub-blocks corresponding to the sub-image blocks, thereby realizing the coding of the current image block.
  • for the decoding end, operations corresponding to the encoding end are performed. First, entropy decoding, inverse quantization, and inverse transformation are used to obtain the residual information, and whether the current image block uses intra-frame or inter-frame prediction is determined from the decoded bitstream. If intra-frame prediction is used, the prediction information is constructed from the reconstructed image blocks in the current frame according to the intra-frame prediction method; if inter-frame prediction is used, the motion information is parsed out, and the reference block is determined in the reconstructed image using the parsed motion information to obtain the prediction information. Then, the prediction information and the residual information are superimposed, and a filtering operation is applied to obtain the reconstruction information.
  • Multi-core hardware encoders usually divide an image or video into multiple tiles and/or multiple slices, and each core is responsible for the encoding of one or more tiles or slices.
  • the embodiment of the present application provides a video encoding method, which can eliminate the boundary blocking effect caused by encoding with different cores and can further improve the user's viewing experience.
  • the video encoding method 300 provided by the embodiment of the present application will be described in detail below with reference to FIG. 3.
  • FIG. 3 shows a video encoding method 300 provided by an embodiment of this application.
  • the method 300 may include steps 310-330.
  • in step 310, a first processor is used to encode a first image and a first boundary image block in an image to be encoded, where the first boundary image block is a boundary image block of a second image in the image to be encoded, and the first image is adjacent to the second image.
  • the first image and the second image are the images in the image to be encoded.
  • the image to be encoded can be divided into at least two images, and the first processor can then be used to encode one of the images.
  • the first image and the second image in the embodiments of the present application may be images of the same size, that is, when the image to be coded is divided, it may be divided vertically or horizontally through the center of the image; the first image and the second image may also be images of different sizes, that is, the division may not pass vertically or horizontally through the center of the image to be coded.
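The division into a first image, a second image, and the two boundary regions that each processor additionally encodes can be sketched as follows. Representing each region as a range of pixel rows and each boundary region as a single row is a simplifying assumption; in the patent the boundary regions are image blocks.

```python
# Illustrative sketch: split a frame at row `split`; each processor also
# covers the neighbor's boundary row so the shared boundary can be filtered
# consistently on both sides.
def assign_regions(height, split):
    first = list(range(0, split))        # rows encoded by the first processor
    second = list(range(split, height))  # rows encoded by the second processor
    first_boundary = [split]        # boundary row of the second image,
                                    # additionally encoded by processor 1
    second_boundary = [split - 1]   # boundary row of the first image,
                                    # additionally encoded by processor 2
    return first, second, first_boundary, second_boundary

f, s, fb, sb = assign_regions(height=8, split=4)
# processor 1 encodes rows 0-3 plus boundary row 4;
# processor 2 encodes rows 4-7 plus boundary row 3
```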
  • FIG. 4a is a schematic diagram of the division of a certain frame of image in the video to be encoded according to an embodiment of the present application.
  • two slice segments can be obtained by dividing the image to be coded, slice segment 1 and slice segment 2, and each slice segment has a strip shape.
  • the first processor may be used to encode slice segment 1
  • the second processor may be used to encode slice segment 2.
  • FIG. 4b is a schematic diagram of the division of a certain frame of image in a video to be coded according to another embodiment of this application.
  • Two tiles can be obtained by horizontally dividing the image to be coded, namely tile 1 and tile 2, and each tile is rectangular.
  • each tile may include an integer number of CTUs.
  • the first processor may be used to encode tile 1
  • the second processor may be used to encode tile 2.
  • the image to be coded can also be divided vertically.
  • the horizontal division in the embodiments of the present application may refer to dividing the image to be coded along the horizontal direction;
  • the vertical division may refer to dividing the image to be coded along the vertical direction;
  • FIG. 4c is a schematic diagram of the division of a certain frame of image in the video to be coded according to still another embodiment of this application.
  • in FIG. 4c, all CTUs in the same slice segment belong to the same tile; for example, all CTUs in slice segment 4 belong to tile 2, which meets this condition.
  • FIG. 4d is a schematic diagram of the division of a certain frame of image in a video to be coded according to another embodiment of this application.
  • all CTUs in the same slice segment belong to the same tile: all CTUs in slice segment 1 belong to tile 1, all CTUs in slice segment 2 belong to tile 1, all CTUs in slice segment 3 belong to tile 2, and all CTUs in slice segment 4 also belong to tile 2, which meets the above condition.
  • the coding order of the CTUs in the divided slice segments may be continuous.
  • the division can be performed along the thicker black solid line in FIG. 4e to obtain slice segment 3 and slice segment 4.
  • slice segment 3 includes the CTUs with coding order 1 to 8;
  • slice segment 4 includes the CTUs with coding order 9 to 35.
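The consecutive-coding-order constraint on slice segments described above can be sketched as follows, using the CTU numbering of FIG. 4e; the helper and its boundary representation are assumptions for illustration.

```python
# Illustrative sketch: split CTUs (numbered 1..num_ctus in coding order)
# into slice segments, each covering a consecutive run of coding order.
def split_slices(num_ctus, boundaries):
    """boundaries: 1-based CTU indices where a new slice segment starts."""
    starts = [1] + list(boundaries)
    ends = list(boundaries) + [num_ctus + 1]
    return [list(range(s, e)) for s, e in zip(starts, ends)]

# 35 CTUs split into two segments: CTUs 1-8 and CTUs 9-35, matching the
# coding-order ranges of slice segments 3 and 4 in the text.
segments = split_slices(35, [9])
# segments[0] == [1, ..., 8], segments[1] == [9, ..., 35]
```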
  • in step 320, a second processor is used to encode the second image and the second boundary image block, where the second boundary image block is a boundary image block in the first image, and the first boundary image block is adjacent to the second boundary image block. The encoding information used by the first processor when encoding the first boundary image block is the same as the encoding information used by the second processor when encoding the first boundary image block, and the encoding information used by the second processor when encoding the second boundary image block is the same as the encoding information used by the first processor when encoding the second boundary image block.
  • the first boundary image block may be a boundary image block in the second image
  • the second boundary image block may be a boundary image block in the first image
  • when the first processor is used to encode the first boundary image block, the preset image encoding information used can be the same as the encoding information used when the second processor encodes the first boundary image block; when the second processor is used to encode the second boundary image block, the image encoding information used can be the same as the encoding information used when the first processor encodes the second boundary image block.
  • the first boundary image block in the embodiments of the present application is adjacent to the second boundary image block, that is, the boundary image block in the second image is adjacent to the boundary image block in the first image.
  • the boundary image block in the embodiments of the present application may be an image block adjacent to the horizontal center line or the vertical center line of the image to be coded.
  • as shown in FIG. 5a, the image to be encoded is horizontally divided to obtain the first image and the second image in the embodiments of the present application.
  • tile 1 and tile 2 in the figure are respectively the first image and the second image in the embodiments of this application, that is, 5a-1 and 5a-2 in the figure are respectively the first image and the second image in the embodiments of this application;
  • 5a-3 is the first boundary image block in the embodiments of this application;
  • 5a-4 is the second boundary image block in the embodiments of this application.
  • the first image 5a-1 in the figure is an image including image block A, displayed in a thicker black solid line,
  • and the second image 5a-2 in the figure is an image including image block D, displayed in a thicker black solid line.
  • image block 5a-3 in the figure is an image block including image block B adjacent to the dividing line
  • image block 5a-4 in the figure is an image block including image block C adjacent to the dividing line.
  • the encoding information used by the first processor when encoding the first boundary image block 5a-3 may be the same as the encoding information used by the second processor when encoding the first boundary image block 5a-3;
  • the coding information used by the second processor when encoding the second boundary image block 5a-4 may be the same as the coding information used by the first processor when encoding the second boundary image block 5a-4.
  • the first boundary image block 5a-3 in the embodiment of the present application includes multiple image blocks B.
  • each image block B can be further divided into multiple small image blocks.
  • for example, dividing image block B yields four small image blocks, namely B1, B2, B3, and B4.
  • the coding information of these four small image blocks may be different from each other.
  • the image block D included in the second image 5a-2 can also be further divided into multiple small image blocks.
  • corresponding to the division of image block B described above, image block D can be divided to obtain four small image blocks, namely D1, D2, D3, and D4.
  • the coding information of these four small image blocks may be different from each other.
  • it is worth noting that the coding information used when image block B1 is coded by the first processor may be the same as the coding information used when image block D1 is coded by the second processor;
  • the encoding information used by the first processor when encoding image block B2 may be the same as the encoding information used by the second processor when encoding image block D2;
  • the encoding information used by the first processor when encoding image block B3 may be the same as the encoding information used by the second processor when encoding image block D3;
  • and the encoding information used by the first processor when encoding image block B4 may be the same as the encoding information used by the second processor when encoding image block D4.
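The matched coding information for the sub-block pairs B1/D1 through B4/D4 can be illustrated with a short sketch (the table contents, field names, and function name are hypothetical; the point is only that both processors consult the same preset information, so the pairs match while the entries may differ from each other):

```python
# Hypothetical sketch: both processors consult one pre-agreed table of
# coding information, keyed by boundary sub-block index, so sub-blocks
# B1..B4 (first processor) and D1..D4 (second processor) are encoded
# with pairwise-identical information and reconstruct identically.

PRESET_INFO = {
    "1": {"mode": "intra_vertical", "qp": 30},
    "2": {"mode": "intra_vertical", "qp": 32},
    "3": {"mode": "inter_merge", "qp": 30},
    "4": {"mode": "inter_merge", "qp": 34},
}

def coding_info(sub_block):
    """Look up preset coding information for a boundary sub-block;
    'B2' and 'D2' map to the same entry because only the index matters."""
    return PRESET_INFO[sub_block[1:]]

# The four B/D pairs use pairwise-identical information, even though the
# four entries themselves differ from each other.
for i in "1234":
    assert coding_info("B" + i) == coding_info("D" + i)
```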
  • the first image and the second image in the embodiment of this application can also be obtained by irregular division of the image to be coded.
  • 5b-1 and 5b-2 in the figure are the first image and the second image in the embodiment of this application respectively.
  • 5b-3 is the first boundary image block in the embodiment of this application
  • 5b-4 is the second boundary image block in the embodiment of this application.
  • the first image 5b-1 in the figure is an image including image block A, displayed in a thicker black solid line,
  • and the second image 5b-2 in the figure is an image including image block D, displayed in a thicker black solid line.
  • image block 5b-3 in the figure is an image block including image block B adjacent to the dividing line
  • image block 5b-4 in the figure is an image block including image block C adjacent to the dividing line.
  • the encoding information used by the first processor when encoding the first boundary image block 5b-3 may be the same as the encoding information used by the second processor when encoding the first boundary image block 5b-3;
  • the coding information used by the second processor when encoding the second boundary image block 5b-4 may be the same as the coding information used by the first processor when encoding the second boundary image block 5b-4.
  • the image block A0, the image block B0, the image block C0, and the image block D0 in FIG. 5b may also belong to the boundary image blocks.
  • the first boundary image block 5b-3 may include the image block B and the image block B0
  • the second boundary image block 5b-4 may include the image block C and the image block C0.
  • the encoding information and parameter information of the image block B0 may be the same as the encoding information and parameter information of the image block D0 included in the second image 5b-2
  • the encoding information and parameter information of the image block C0 may be the same as the encoding information and parameter information of the image block A0 included in the second image 5b-2.
  • the number of processors and the number of divided images of the image to be coded may also be other values.
  • for example, the number of processors may also be 4, and the four divided images are coded separately; this application does not specifically limit this.
  • the filtering in the embodiments of the present application may include deblocking filtering, sample adaptive offset (Sample Adaptive Offset, SAO), or adaptive loop filtering (Adaptive Loop Filter, ALF).
  • deblocking filtering is mainly used to eliminate blocking effects between images caused by encoding with different processors; SAO and ALF are mainly used to compensate for the distortion between original pixels and reconstructed pixels caused by encoding.
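As an illustrative sketch only (not the actual HEVC deblocking filter, which uses boundary-strength decisions and multi-tap filters), a one-dimensional deblocking step could smooth a block edge when the discontinuity is small enough to be a coding artifact rather than a real image edge:

```python
def deblock_edge(left, right, threshold=10):
    """Toy deblocking: 'left' and 'right' are rows of pixels on either
    side of a block edge. If the jump across the edge is below the
    threshold it is treated as a coding artifact and the two edge
    pixels are pulled toward their average; large jumps are treated
    as real image edges and left unmodified."""
    step = right[0] - left[-1]
    if abs(step) < threshold:
        avg = (left[-1] + right[0]) / 2
        left = left[:-1] + [(left[-1] + avg) / 2]
        right = [(right[0] + avg) / 2] + right[1:]
    return left, right

# A small 6-step edge (artifact) is smoothed; a 50-step edge is kept.
smoothed = deblock_edge([100, 100, 104], [110, 110, 110])
kept = deblock_edge([0, 0, 0], [50, 50, 50])
```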
  • the first boundary image block and the first image that are encoded by the first processor in the embodiment of the present application can be used to eliminate the blocking effect between the images caused by encoding the first image and the second image with different processors;
  • similarly, the second boundary image block and the second image that are encoded by the second processor can also be used to eliminate the blocking effect between the images caused by encoding the first image and the second image with different processors.
  • the first processor can then discard the multi-encoded first boundary image block,
  • and the second processor can discard the multi-encoded second boundary image block.
  • after the first processor and the second processor are used to obtain the processed reconstructed pixels, the first processor can discard the encoding information of the first boundary image block and the second processor can discard the encoding information of the second boundary image block, so as to ensure the integrity of the image to be encoded.
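The discard step above can be sketched as a small assembly routine (a hypothetical structure; a real encoder drops the redundant boundary blocks at the bitstream or reconstruction level rather than as column lists):

```python
# Hypothetical sketch: each processor encodes extra boundary columns for
# filtering, then discards them so every column appears exactly once in
# the assembled output, preserving the integrity of the image.

def assemble(first_cols, second_cols, overlap):
    """first_cols / second_cols are the column indices each processor
    encoded (including the multi-encoded boundary columns); 'overlap'
    lists the redundant columns each side must drop after filtering."""
    kept_first = [c for c in first_cols if c not in overlap["first_drops"]]
    kept_second = [c for c in second_cols if c not in overlap["second_drops"]]
    return kept_first + kept_second

# Vertical split of an 8-column image: processor 1 owns columns 0-3 and
# additionally encodes column 4 (first boundary block); processor 2 owns
# columns 4-7 and additionally encodes column 3 (second boundary block).
cols = assemble(
    first_cols=[0, 1, 2, 3, 4],
    second_cols=[3, 4, 5, 6, 7],
    overlap={"first_drops": [4], "second_drops": [3]},
)
assert cols == [0, 1, 2, 3, 4, 5, 6, 7]
```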
  • in the embodiment of the present application, the first processor is used to encode the first boundary image block and the second processor is used to encode the second boundary image block, where the encoding information used by the first processor when encoding the first boundary image block is the same as the encoding information used by the second processor when encoding the first boundary image block, and likewise for the second boundary image block.
  • in this way, the adjacent boundary between the first image and the second image can be filtered using the encoded first image and first boundary image block, and/or using the encoded second image and second boundary image block, so that the boundary blocking effect between the images caused by encoding with different processors can be eliminated, the image display quality can be improved, and the user's viewing experience can be further improved.
  • the coding information used when encoding the first boundary image block by the first processor is the same as the coding information used when encoding the first boundary image block by the second processor,
  • and the coding information used when encoding the second boundary image block by the second processor is the same as the coding information used when encoding the second boundary image block by the first processor.
  • the encoding information may include a variety of information, which will be described in detail below.
  • the coding information includes coding mode and coding parameters.
  • the coding mode includes one or more of an inter prediction mode, an intra prediction mode, or a lossless coding mode.
  • the coding information in the embodiments of the present application may include coding mode and coding parameters.
  • for example, the encoding mode in the encoding information used when the first processor encodes the first boundary image block and the encoding mode used when the second processor encodes the first boundary image block may both be the inter prediction mode; or both may be the intra prediction mode; or both may be the lossless coding mode, and so on.
  • similarly, the encoding mode in the encoding information used when the second processor encodes the second boundary image block and the encoding mode used when the first processor encodes the second boundary image block may both be the inter prediction mode; or both may be the intra prediction mode; or both may be the lossless coding mode, and so on.
  • the encoding mode in the encoding information used when the first processor is used to encode the first boundary image block and the encoding mode used when the second processor is used to encode the first boundary image block may be any two or three of inter prediction mode, intra prediction mode, or lossless coding mode.
  • the coding mode in the coding information used when the first processor encodes the first boundary image block and the coding mode used when the first processor encodes the second boundary image block may be the same or different;
  • the coding mode in the coding information used when the second processor encodes the first boundary image block and the coding mode used when the second processor encodes the second boundary image block may also be the same or different.
  • optionally, the coding parameters can be determined according to the boundary image blocks included in the first image and the second image.
  • for example, the coding mode and coding parameters of the adjacent boundary image blocks of the first image and the second image can be preset in a memory,
  • and the encoding mode and encoding parameters used when the first boundary image block is encoded by the first processor can be obtained from the memory;
  • at the same time, it must be ensured that the encoding mode and encoding parameters used when the first processor encodes the first boundary image block are the same as the coding mode and coding parameters used when the second processor encodes the first boundary image block.
  • the coding parameters used when the second processor encodes the second boundary image block can also be obtained from the memory, and it must likewise be ensured that the coding parameters used when the second processor encodes the second boundary image block are the same as the coding parameters used when the first processor encodes the second boundary image block.
  • the lossless coding mode in the embodiment of the present application may be a pulse code modulation (Pulse Code Modulation, PCM) mode, or a transform quantization bypass (transquant bypass) mode, which is not specifically limited in this application.
  • PCM: Pulse Code Modulation
  • Transquant bypass: transform and quantization bypass
  • the using the first processor to encode the first image in the image to be encoded includes: using the first processor and first preset encoding information to encode the second boundary image block; the using the second processor to encode the second boundary image block includes: using the second processor and the same first preset encoding information to encode the second boundary image block.
  • the using the first processor to encode the first boundary image block includes: using the first processor and second preset encoding information to encode the first boundary image block; the using the second processor to encode the second image includes: using the second processor and the same second preset encoding information to encode the first boundary image block.
  • since the second boundary image block is the boundary image block in the first image, when the first processor encodes the first image, the first preset encoding information can be used to encode the second boundary image block in the first image;
  • similarly, when the second boundary image block is encoded by the second processor, the same first preset encoding information can be used, so that when the two processors each encode the second boundary image block, the coding information they use is the same and the reconstructed pixels of the second boundary image block finally obtained are the same.
  • likewise, when the second processor encodes the second image, the second preset encoding information can be used to encode the boundary image block in the second image, that is, the first boundary image block;
  • similarly, when the first boundary image block is encoded by the first processor, the same second preset encoding information can be used, ensuring that the two processors use the same encoding information when each encodes the first boundary image block.
  • if the encoding mode in the preset encoding information includes the intra prediction mode, the encoding parameters in the preset encoding information include: a first preset encoding mode, information of a preset reference image block, and preset transform and quantization parameters;
  • if the encoding mode in the preset encoding information includes the inter prediction mode, the encoding parameters in the preset encoding information include: a second preset encoding mode, information of a preset reference frame, a preset motion vector, and preset transform and quantization parameters;
  • if the encoding mode in the preset encoding information includes the transform and quantization bypass mode in the lossless encoding mode, the encoding parameters in the preset encoding information include: the intra prediction mode and the information of the preset reference image block, or the inter prediction mode, the information of the preset reference frame, and the preset motion vector;
  • if the encoding mode in the preset encoding information includes the pulse code modulation mode in the lossless encoding mode, the encoding parameters in the preset encoding information may include the pixel bit depth.
  • the encoding parameters in the first preset encoding information may be determined by the encoding mode.
  • for example, if the encoding mode in the first preset encoding information includes the intra prediction mode, the encoding parameters in the first preset encoding information may include: the first preset encoding mode, the information of the preset reference image block, and the preset transform and quantization parameters.
  • the first preset encoding mode may be the vertical prediction mode in the intra prediction modes (including but not limited to mode 26 of the 35 intra prediction modes defined in the HEVC standard) or the horizontal prediction mode (including but not limited to mode 10 of the 35 intra prediction modes defined in the HEVC standard), and the information of the preset reference image block may be the information of the image block located above or to the left of the current image block.
  • the coding parameters in the second preset coding information may include: the second preset coding mode, the information of the preset reference frame, the preset motion vector, and the preset transform quantization parameter.
  • the second preset encoding mode may be AMVP mode, merge mode or skip mode in the inter-frame prediction mode, and the information of the preset reference frame may be a forward frame or a backward frame of the current frame.
  • the encoding parameters in the preset encoding information may include the intra prediction mode or the inter prediction mode;
  • for the parameters of the intra prediction mode or the inter prediction mode, reference may be made to the corresponding parameters described above, which are not repeated here.
  • the transform and quantization bypass mode in the embodiment of the present application may mean skipping the transform and quantization process during encoding, that is, the residual obtained after predicting the current image block may not undergo transform and quantization processing.
  • the encoding parameter may include pixel bit depth.
  • the pulse code modulation mode in the embodiment of the present application may refer to encoding the original pixels of the current image block directly, without going through the processes of prediction, transform, and quantization.
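The difference between the normal coding path, the transform and quantization bypass mode, and the pulse code modulation mode described above can be sketched as follows (a simplified toy model with assumed names; a real codec operates on residual blocks and entropy-codes the payload):

```python
def encode_block(pixels, predicted, mode):
    """Simplified view of the three paths described above.
    'normal'           : residual -> (transform) -> quantize (lossy here)
    'transquant_bypass': prediction still runs, but the residual skips
                         transform and quantization (lossless)
    'pcm'              : original pixels are coded directly, with no
                         prediction, transform, or quantization
    """
    if mode == "pcm":
        return {"payload": pixels}          # raw samples at pixel bit depth
    residual = [p - q for p, q in zip(pixels, predicted)]
    if mode == "transquant_bypass":
        return {"payload": residual}        # exact residual, no quantization
    qstep = 4  # toy quantization step; real codecs derive it from QP
    return {"payload": [round(r / qstep) for r in residual]}

# PCM keeps the originals, bypass keeps the exact residual, and the
# normal path loses precision to quantization.
pcm = encode_block([10, 12], [8, 8], "pcm")
bypass = encode_block([10, 12], [8, 8], "transquant_bypass")
normal = encode_block([10, 12], [8, 8], "normal")
```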
  • the first image and the second image in the embodiment of the present application can be obtained by dividing the image to be coded.
  • depending on how the image to be coded is divided, the coding information of the boundary image blocks in the embodiment of the present application may differ; this is introduced in detail below.
  • if the first image and the second image are obtained by vertically dividing the image to be coded, the first preset encoding mode adopted by the first boundary image block satisfies a first preset condition, and the first preset encoding mode adopted by the second boundary image block satisfies a second preset condition;
  • the first preset condition is that the intra prediction reconstruction pixels adopted by the first boundary image block are obtained according to the adjacent blocks on the left, bottom left, top left, or above of the first boundary image block, or according to a direct current (DC) prediction mode;
  • the second preset condition is that the intra prediction reconstruction pixels adopted by the second boundary image block are obtained according to the adjacent blocks above or on the upper right of the second boundary image block.
  • or, if the first image and the second image are obtained by horizontally dividing the image to be coded, the first preset encoding mode adopted by the first boundary image block satisfies a third preset condition, and the first preset encoding mode adopted by the second boundary image block satisfies a fourth preset condition;
  • the third preset condition is that the intra prediction reconstruction pixels adopted by the first boundary image block are obtained according to the adjacent blocks above, upper left, upper right, or on the left of the first boundary image block, or according to the direct current prediction mode, or according to a planar prediction mode;
  • the fourth preset condition is that the intra prediction reconstruction pixels adopted by the second boundary image block are obtained according to the adjacent blocks on the left or bottom left of the second boundary image block.
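The four preset conditions can be read as constraints on which of the 35 HEVC intra prediction modes a boundary image block may use; the sketch below encodes the mode-number ranges named later in this description (the function and parameter names are illustrative assumptions):

```python
# Sketch of the mode constraints above, using HEVC mode numbering
# (0 = planar, 1 = DC, 2-34 = angular, 10 = horizontal, 26 = vertical).

def allowed_intra_modes(division, boundary):
    """Return the set of intra mode numbers permitted for a boundary
    image block, given the division direction ('vertical'/'horizontal')
    and which boundary block it is ('first'/'second')."""
    if division == "vertical":
        if boundary == "first":
            # left/bottom-left/top-left/above neighbors (modes 2-26), or DC
            return set(range(2, 27)) | {1}
        # 'second': above or upper-right neighbors (modes 26-34)
        return set(range(26, 35))
    if division == "horizontal":
        if boundary == "first":
            # above/upper-left/upper-right/left (modes 10-34), DC, or planar
            return set(range(10, 35)) | {0, 1}
        # 'second': left or bottom-left neighbors (modes 2-10)
        return set(range(2, 11))

assert 26 in allowed_intra_modes("vertical", "first")    # vertical mode ok
assert 1 in allowed_intra_modes("vertical", "first")     # DC allowed
assert 0 not in allowed_intra_modes("vertical", "first") # planar not listed
```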
  • the first image 6-1 and the second image 6-2 can be obtained by vertically dividing the image to be coded, and different processors are then used to encode them separately:
  • the first processor can be used to encode the first image 6-1 and the first boundary image block 6-3,
  • and the second processor can be used to encode the second image 6-2 and the second boundary image block 6-4.
  • the encoding information used when the first processor encodes the image block B included in the first boundary image block 6-3 can be the same as the encoding information used when the second processor encodes the image block D included in the second image 6-2;
  • the coding information used when the second processor encodes the image block C included in the second boundary image block 6-4 can be the same as the coding information used when the first processor encodes the image block A included in the first image 6-1.
  • the first image 6-1 is an image including image block A displayed in a thicker black solid line
  • the second image 6-2 is an image including image block D displayed in a thicker solid line.
  • when the first processor encodes the first boundary image block 6-3, the coding information it uses can be the same as the coding information used by the second processor when encoding the image block D included in the second image 6-2. If the image to be coded is vertically divided to obtain the first image 6-1 and the second image 6-2, the first preset coding mode included in the second preset coding information adopted by the first boundary image block 6-3 can satisfy the first preset condition: the intra prediction reconstruction pixels adopted by the first boundary image block 6-3 may be obtained according to different angle prediction modes (for example, according to the adjacent blocks on the left, bottom left, top left, or above of the first boundary image block, i.e. mode 2 to mode 26 of the 35 intra prediction modes defined in the HEVC standard) or according to the DC prediction mode (mode 1 of the 35 intra prediction modes defined in the HEVC standard).
  • the first preset encoding mode included in the first preset encoding information adopted by the second boundary image block 6-4 may satisfy the second preset condition: the intra prediction reconstruction pixels adopted by the second boundary image block 6-4 may be obtained according to different angle prediction modes (according to the adjacent blocks above or on the upper right of the second boundary image block, i.e. mode 26 to mode 34 of the 35 intra prediction modes defined in the HEVC standard).
  • when the first processor encodes the first boundary image block 6-3 and the second processor encodes the image block D included in the second image 6-2, the first processor may use the first preset encoding mode included in the second preset encoding information to encode the first boundary image block 6-3, and the second processor may also use the first preset encoding mode included in the second preset encoding information to encode the image block D included in the second image 6-2.
  • similarly, when the second processor encodes the second boundary image block 6-4 and the first processor encodes the image block A included in the first image 6-1, the second processor may use the first preset encoding mode included in the first preset encoding information to encode the second boundary image block 6-4, and the first processor may also use the first preset encoding mode included in the first preset encoding information to encode the image block A included in the first image 6-1.
  • optionally, the first preset encoding information may also include the AMVP mode, merge mode, or skip mode of the inter prediction mode, or the lossless encoding mode; the first processor can use the first preset encoding information to encode the first boundary image block, and the second processor can also use the first preset encoding information to encode the first boundary image block included in the second image.
  • the second preset encoding information may also include the AMVP mode, merge mode, or skip mode of the inter prediction mode, or the lossless encoding mode; the second processor can use the second preset encoding information to encode the second boundary image block, and similarly the first processor may also encode the second boundary image block included in the first image with the second preset encoding information.
  • the reference image block may be the left image block or the right image block of the current image block.
  • the first preset coding information and the second preset coding information in the embodiment of the present application may be different.
  • for example, the first preset encoding information may include the AMVP mode of the inter prediction mode while the second preset encoding information includes the lossless encoding mode;
  • or the first preset encoding information may include the lossless encoding mode while the second preset encoding information includes the merge mode of the inter prediction mode.
  • the image to be coded may be divided in different ways.
  • the first image 7-1 and the second image 7-2 obtained after horizontally dividing the image to be coded are provided in this embodiment of the present application.
  • the first image 7-1 is an image including image block A, displayed in a thicker black solid line,
  • and the second image 7-2 is an image including image block D, displayed in a thicker black solid line.
  • the first preset encoding mode included in the second preset encoding information adopted by the first boundary image block 7-3 may satisfy the third preset condition: the intra prediction reconstruction pixels adopted by the first boundary image block 7-3 may be obtained according to different angle prediction modes (for example, according to the adjacent blocks above, upper left, upper right, or on the left of the first boundary image block, i.e. mode 10 to mode 34 of the 35 intra prediction modes defined in the HEVC standard), according to the DC prediction mode (mode 1 of the 35 intra prediction modes defined in the HEVC standard), or according to the planar prediction mode (mode 0 of the 35 intra prediction modes defined in the HEVC standard).
  • the first preset encoding mode included in the first preset encoding information adopted by the second boundary image block 7-4 may satisfy the fourth preset condition: the intra prediction reconstruction pixels adopted by the second boundary image block 7-4 may be obtained according to different angle prediction modes (according to the adjacent blocks on the left or bottom left of the second boundary image block, i.e. mode 2 to mode 10 of the 35 intra prediction modes defined in the HEVC standard).
  • when the first processor encodes the first boundary image block 7-3, the first boundary image block 7-3 can be encoded using the first preset encoding mode included in the second preset encoding information;
  • when the second processor encodes the image block D included in the second image 7-2, image block D may also be encoded using the first preset encoding mode included in the second preset encoding information.
  • the second boundary image block 7-4 may be encoded using the first preset encoding mode included in the first preset encoding information,
  • and when the first processor encodes the first image 7-1 including image block A, it may also use the first preset encoding mode included in the first preset encoding information to encode image block A.
  • the image to be coded may be divided vertically or horizontally divided multiple times to obtain multiple image blocks.
  • for example, the image to be coded may be divided vertically twice to obtain three images,
  • which are respectively the first image 8-1, the second image 8-2, and the third image 8-3.
  • the first image 8-1 is an image including image block A, displayed in a thicker black solid line,
  • the second image 8-2 is an image including image blocks D and E, displayed in a thicker black solid line,
  • and the third image 8-3 is an image including image block H, displayed in a thicker black solid line.
  • the first image 8-1, the second image 8-2, and the third image 8-3 can be coded separately by different processors; when these three images are coded separately by different processors, the encoding can be performed in a manner similar to that of Figure 6 above.
  • when the first processor is used to encode the first image 8-1, one more column of image blocks can be encoded; this multi-encoded column of image blocks can be called the first boundary image block 8-4, and the coding information used when the first processor encodes the first boundary image block 8-4 may be the same as the coding information used when the second processor encodes the image block D, adjacent to the first image 8-1, included in the second image 8-2.
  • since the second image 8-2 is adjacent to both the first image 8-1 and the third image 8-3, when encoding the second image 8-2, two more columns of image blocks can be encoded: as shown in the figure, the image block C included in the second boundary image block 8-5 and the image block F included in the third boundary image block 8-6.
  • the encoding information used when the second processor encodes the image block C included in the second boundary image block 8-5 may be the same as the encoding information used when the first processor encodes the image block A included in the first image 8-1;
  • the encoding information used when the second processor encodes the image block F included in the third boundary image block 8-6 can be the same as the encoding information used when the third processor encodes the image block H included in the third image 8-3.
  • when encoding the third image 8-3, one more column of image blocks can be encoded; this multi-encoded column of image blocks can be called the fourth boundary image block 8-7, and the coding information used when the third processor encodes the fourth boundary image block 8-7 may be the same as the coding information used when the second processor encodes the image block E included in the second image 8-2.
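For an n-way vertical split like the three-way example above, the sides on which each processor must redundantly encode an extra column of boundary image blocks can be enumerated as follows (an illustrative sketch; the function name and return structure are assumptions):

```python
# Illustrative sketch: for an n-way vertical split, list the sides on
# which each processor additionally (redundantly) encodes a column of
# boundary image blocks, so every internal boundary can be filtered.

def extra_boundary_sides(n_images):
    """Return, per image index, the sides on which extra boundary
    image blocks are encoded: interior images get both sides, the
    first and last images only the side facing a neighbor."""
    sides = []
    for i in range(n_images):
        s = []
        if i > 0:
            s.append("left")   # boundary shared with the previous image
        if i < n_images - 1:
            s.append("right")  # boundary shared with the next image
        sides.append(s)
    return sides

# The three-way split above: the first processor encodes one extra
# column on the right, the second one on each side, the third one on
# the left.
assert extra_boundary_sides(3) == [["right"], ["left", "right"], ["left"]]
```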
  • the multiple images may be encoded based on a method similar to the above. For brevity, details are not repeated here.
  • the image to be coded can be divided vertically or horizontally multiple times to obtain multiple images, combined with the boundary image blocks of each image, so that different processors can be used to encode the multiple images and boundary image blocks obtained;
  • further, the boundaries between the multiple images can be filtered based on the multiple images and the boundary image blocks.
  • optionally, the image to be encoded may also be divided vertically and horizontally multiple times to obtain multiple images; in this manner there may be some differences from the encoding manner described above, which are described in detail below.
  • optionally, the image to be encoded further includes a third image and a fourth image; if the first image, the second image, the third image, and the fourth image are obtained by horizontally and vertically dividing the image to be encoded, the encoding mode in the preset encoding information includes one or more of the vertical prediction mode in the intra prediction mode, the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • if the first image, the second image, the third image, and the fourth image are obtained by vertically and horizontally dividing the image to be coded, and the four images are respectively coded using different processors, the coding mode for an extra coded column or row of image blocks can be determined as the vertical prediction mode or the horizontal prediction mode based on the division mode; the coding mode for the extra coded column or row of image blocks may also be the inter prediction mode or the lossless encoding mode.
  • for the first image, the second image, the third image, and the fourth image, the coding mode of the image block at the intersection position of the four images may adopt the inter-frame prediction mode or the lossless coding mode, which is not specifically limited in this application.
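As an illustration of the mode constraints just described for a one-vertical-plus-one-horizontal division, the following hypothetical helper (all mode labels invented here) maps the kind of duplicated block to the coding modes the text permits:

```python
# Hypothetical helper for a 2x2 division: an extra column sits on a vertical
# division boundary, so intra prediction from above (vertical mode) avoids
# referencing pixels across the boundary; an extra row sits on a horizontal
# boundary, so intra prediction from the left (horizontal mode) avoids it;
# the intersection block may use only inter prediction or lossless coding.

def allowed_modes(duplicated: str) -> set:
    """duplicated: 'column', 'row', or 'intersection'."""
    common = {"inter", "lossless"}            # always permitted alternatives
    if duplicated == "column":                # vertical division boundary
        return common | {"intra_vertical"}
    if duplicated == "row":                   # horizontal division boundary
        return common | {"intra_horizontal"}
    if duplicated == "intersection":          # crossing of the four images
        return common
    raise ValueError(duplicated)
```

The point of the restriction is that a block's prediction references must stay inside the tile its processor owns, so the two processors reconstruct the duplicated block identically.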
  • the using the first processor to encode the first image and the first boundary image block in the image to be coded includes: using the first processor to encode the first image and the first boundary image block based on one or more of the vertical prediction mode or the horizontal prediction mode, the inter-frame prediction mode, and the lossless coding mode; and using the first processor to encode an intersection image included in the image to be encoded based on the inter-frame prediction mode or the lossless encoding mode, where the intersection image is the image at the intersection position of the first image, the second image, the third image, and the fourth image.
  • the using the second processor to encode the second image and the second boundary image block includes: using the second processor to encode the second image and the second boundary image block based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter-frame prediction mode, and the lossless encoding mode; and using the second processor to encode the intersection image included in the image to be encoded based on the inter-frame prediction mode or the lossless encoding mode, where the intersection image is the image at the intersection position of the first image, the second image, the third image, and the fourth image.
  • the first image 9-1 is the image, shown with a thicker black solid line, including image block A, image block C, and image block 9-13;
  • the second image 9-2 is the image, shown with a thicker black solid line, including image block F, image block G, and image block 9-13;
  • the third image 9-3 is the image, shown with a thicker black solid line, including image block J, image block K, and image block 9-13;
  • the fourth image 9-4 is the image, shown with a thicker black solid line, including image block P, image block N, and image block 9-13.
  • when the first processor encodes the first image 9-1, the boundary image blocks included in the second image 9-2 adjacent to the first image 9-1 and the boundary image blocks included in the third image 9-3 can be encoded together with the first image 9-1.
  • the first processor may be used to encode the first image 9-1, the first boundary image block 9-5, and the third boundary image block 9-6.
  • the encoding information used when the first processor encodes the image block B included in the first boundary image block 9-5 may be the same as the encoding information used when the second processor encodes the image block F included in the second image 9-2.
  • the encoding information used when the first processor encodes the image block D included in the third boundary image block 9-6 may be the same as the encoding information used when the third processor encodes the image block J included in the third image 9-3.
  • encoding may be performed based on first preset encoding information: when the first processor encodes the image block B included in the first boundary image block 9-5, the first preset encoding information can be used, and when the second processor encodes the image blocks included in the second image 9-2, the first preset encoding information may also be used.
  • encoding may also be performed based on third preset encoding information: when the first processor encodes the image block D included in the third boundary image block 9-6, the third preset encoding information may be used, and when the third processor encodes the image blocks included in the third image 9-3, the third preset encoding information may also be used.
  • the first preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode; the third preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • when the second processor encodes the second image 9-2, the boundary image blocks included in the first image 9-1 adjacent to the second image 9-2 and the boundary image blocks included in the fourth image 9-4 may also be encoded together with the second image 9-2.
  • the second image 9-2, the second boundary image block 9-7, and the fourth boundary image block 9-8 can be encoded by using the second processor.
  • the encoding information used when the second processor encodes the image block E included in the second boundary image block 9-7 may be the same as the encoding information used when the first processor encodes the image block A included in the first image 9-1; the encoding information used when the second processor encodes the image block H included in the fourth boundary image block 9-8 may be the same as the encoding information used when the fourth processor encodes the image block N included in the fourth image 9-4.
  • encoding may be performed based on second preset encoding information: the second processor can use the second preset encoding information to encode the image block E included in the second boundary image block 9-7, and the first processor may also use the second preset encoding information to encode the image block A included in the first image 9-1.
  • the fourth preset encoding information can be used: the second processor can use the fourth preset encoding information to encode the image block H included in the fourth boundary image block 9-8, and the fourth processor may also use the fourth preset encoding information to encode the image block N included in the fourth image 9-4.
  • the second preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode; the fourth preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • when the third processor encodes the third image 9-3, the boundary image blocks included in the first image 9-1 adjacent to the third image 9-3 and the boundary image blocks included in the fourth image 9-4 may also be encoded together with the third image 9-3.
  • the third image 9-3, the fifth boundary image block 9-9, and the sixth boundary image block 9-10 can be encoded by using the third processor.
  • the encoding information used when the third processor encodes the image block I included in the fifth boundary image block 9-9 may be the same as the encoding information used when the first processor encodes the image block C included in the first image 9-1, for example, the fifth preset encoding information can be used; the encoding information used when the third processor encodes the image block L included in the sixth boundary image block 9-10 may be the same as the encoding information used when the fourth processor encodes the image block P included in the fourth image 9-4, for example, the sixth preset encoding information can be used.
  • the fifth preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode; the sixth preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • when the fourth processor encodes the fourth image 9-4, the boundary image blocks included in the second image 9-2 adjacent to the fourth image 9-4 and the boundary image blocks included in the third image 9-3 may also be encoded together with the fourth image 9-4.
  • the fourth image 9-4, the seventh boundary image block 9-11, and the eighth boundary image block 9-12 can be encoded by using the fourth processor.
  • the encoding information used when the fourth processor encodes the image block O included in the seventh boundary image block 9-11 may be the same as the encoding information used when the third processor encodes the image block K included in the third image 9-3, for example, the seventh preset encoding information can be used; the encoding information used when the fourth processor encodes the image block M included in the eighth boundary image block 9-12 may be the same as the encoding information used when the second processor encodes the image block G included in the second image 9-2, for example, the eighth preset encoding information may be used.
  • the seventh preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode; the eighth preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
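Purely as a reading aid for the FIG. 9 pairings above (the tuple contents are shorthand invented here, not source notation), the eight preset encoding information sets and the block pairs that share them can be tabulated:

```python
# Hypothetical table of FIG. 9's preset encoding information. Each entry
# pairs a boundary image block with its "source" block in the neighbouring
# image; both processors encode that pair of copies with the same preset
# information, so they produce identical reconstructions. "vertical" and
# "horizontal" denote intra prediction modes; the text also always allows
# inter prediction or lossless coding as alternatives.

PRESET_INFO = {
    1: ("B in 9-5",  "F in 9-2", "vertical"),    # first preset
    2: ("E in 9-7",  "A in 9-1", "vertical"),    # second preset
    3: ("D in 9-6",  "J in 9-3", "horizontal"),  # third preset
    4: ("H in 9-8",  "N in 9-4", "horizontal"),  # fourth preset
    5: ("I in 9-9",  "C in 9-1", "horizontal"),  # fifth preset
    6: ("L in 9-10", "P in 9-4", "vertical"),    # sixth preset
    7: ("O in 9-11", "K in 9-3", "vertical"),    # seventh preset
    8: ("M in 9-12", "G in 9-2", "horizontal"),  # eighth preset
}
```

Note the symmetry: presets for blocks duplicated across the vertical division use the vertical intra mode, and presets for blocks duplicated across the horizontal division use the horizontal intra mode.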
  • the image to be coded undergoes one vertical division and one horizontal division to obtain four images; as shown in FIG. 9, they are the first image 9-1, the second image 9-2, the third image 9-3, and the fourth image 9-4 respectively.
  • the first processor may be used to encode the first image 9-1, the first boundary image block 9-5, and the third boundary image block 9-6.
  • the first boundary image block 9-5 can be encoded by the first processor using the first preset encoding information, and the second processor may also encode the image block F using the first preset encoding information, where the first preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • the third boundary image block 9-6 can be encoded by the first processor using the third preset encoding information, and the third processor may also encode the image block J included in the third image using the third preset encoding information, where the third preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • the second processor can be used to encode the second image 9-2, the second boundary image block 9-7, and the fourth boundary image block 9-8.
  • multiple encoding modes can be used for encoding, for example, the horizontal prediction mode in the intra prediction mode, the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode;
  • the second processor can use the second preset encoding information to encode the second boundary image block 9-7, and the first processor can also use the second preset encoding information to encode the image block A, where the second preset encoding information may include the vertical prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • the second processor can use the fourth preset encoding information to encode the fourth boundary image block 9-8, and the fourth processor may also use the fourth preset encoding information to encode the image block N, where the fourth preset encoding information may include the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • the coding modes of the third image, the fourth image, and the boundary image block are similar to the foregoing coding modes, and are not repeated here for brevity.
  • an inter prediction mode or a lossless encoding mode may be adopted for the image blocks located at the intersections of the first image 9-1, the second image 9-2, the third image 9-3, and the fourth image 9-4.
  • the intersection image in the embodiment of the present application may be 9-13 in FIG. 9.
  • the inter-frame prediction mode or the lossless encoding mode may be used for encoding.
  • the preset motion vector is a preset fixed motion vector; or the preset motion vector is obtained by searching in a preset search area and/or based on a preset search method.
  • the target transform and quantization parameter includes a preset quantization parameter (Quantization Parameter, QP) and a preset transform unit (TU) division mode.
  • the preset motion vector in the embodiment of the present application may be a preset fixed motion vector.
  • when the first boundary image block 6-3 and the image blocks included in the second image 6-2 are encoded, the MV in the first preset encoding information used may be a preset fixed motion vector.
  • the MV in the first preset encoding information may be the MV located at the first position in the MV set, or the MV pointing in a certain direction in the MV set, which is not specifically limited in this application.
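A deterministic selection rule of this kind can be sketched as follows. This is a hypothetical illustration: the candidate-list representation and the direction tests are invented, and the point is only that both processors apply the same rule and therefore pick the same MV.

```python
# Hypothetical sketch: choose a preset, deterministic motion vector so that
# two processors encoding the same boundary block select the same MV.
# Rule 1: take the MV at the first position in the candidate set.
# Rule 2: take the first MV pointing in a given direction, falling back
#         to the first candidate if none matches.

def preset_mv(candidates, direction=None):
    """candidates: list of (dx, dy) tuples. direction: None, 'left',
    'right', 'up', or 'down'."""
    if direction is None:
        return candidates[0]                      # first position in MV set
    tests = {"left": lambda v: v[0] < 0, "right": lambda v: v[0] > 0,
             "up":   lambda v: v[1] < 0, "down":  lambda v: v[1] > 0}
    for mv in candidates:
        if tests[direction](mv):
            return mv
    return candidates[0]                          # deterministic fallback

# preset_mv([(2, 0), (-1, 3)])          -> (2, 0)
# preset_mv([(2, 0), (-1, 3)], "left")  -> (-1, 3)
```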
  • the method further includes: if the encoding information includes multiple encoding modes, selecting the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding cost algorithm.
  • the image block D included in the second image 6-2 uses the same encoding information when encoding, which may include the same encoding mode; that is, if the first preset encoding information includes the vertical prediction mode, the encoding mode of the image block D included in the second image 6-2 is the vertical prediction mode, and the encoding mode used when the first processor encodes the image block B included in the first boundary image block 6-3 may also be the vertical prediction mode.
  • the encoding information used when the second processor encodes the image block C included in the second boundary image block 6-4 is the same as the encoding information used when the first processor encodes the image block A included in the first image 6-1, which may include the same encoding mode.
  • for example, the encoding mode used when the second processor encodes the image block C included in the second boundary image block 6-4 may be the vertical prediction mode, and the encoding mode used when the first processor encodes the image block A included in the first image 6-1 may also be the vertical prediction mode; or, the encoding mode used when the second processor encodes the image block C included in the second boundary image block 6-4 may be the horizontal prediction mode, and the encoding mode used when the first processor encodes the image block A included in the first image 6-1 may also be the horizontal prediction mode.
  • the second preset encoding mode may be the vertical prediction mode or the horizontal prediction mode; in some embodiments, the encoding mode of the second boundary image block 6-4 may be determined based on a preset encoding cost algorithm, for example, rate-distortion optimization. For example, if the vertical prediction mode makes the loss cost of the second boundary image block 6-4 smaller, the vertical prediction mode can be used; if the horizontal prediction mode makes the loss cost of the second boundary image block 6-4 smaller, the horizontal prediction mode can be used.
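The cost-based choice between the vertical and horizontal prediction modes can be illustrated with a toy sum-of-absolute-differences cost. This is a crude stand-in for rate-distortion optimization: real encoders also weigh the bit rate, and the data layout here is invented.

```python
# Hypothetical cost-based mode choice for a boundary block: compute the
# sum of absolute differences of vertical vs. horizontal intra prediction
# and keep the cheaper mode. Vertical prediction copies the reference
# pixel above each column; horizontal prediction copies the reference
# pixel to the left of each row.

def choose_intra_mode(block, top_row, left_col):
    """block: 2-D list of pixels; top_row/left_col: reference pixels."""
    v_cost = sum(abs(px - top_row[x])        # predict from the pixel above
                 for y, row in enumerate(block) for x, px in enumerate(row))
    h_cost = sum(abs(px - left_col[y])       # predict from the pixel left
                 for y, row in enumerate(block) for x, px in enumerate(row))
    return ("vertical", v_cost) if v_cost <= h_cost else ("horizontal", h_cost)

block = [[10, 10], [50, 50]]
# left reference matches each row exactly, top reference does not:
mode, cost = choose_intra_mode(block, top_row=[10, 10], left_col=[10, 50])
# mode == "horizontal", cost == 0
```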
  • the first processor and the second processor belong to the same encoder or different encoders.
  • the first processor and the second processor may belong to the same encoder, or may belong to different encoders. If the first processor and the second processor belong to the same encoder, they may be different processing cores in the encoder; if they belong to different encoders, they may be different encoders in the same encoding device.
  • FIG. 10 shows an encoding device 1000 provided by an embodiment of this application.
  • the encoding device 1000 may include a first processor 1010 and a second processor 1020.
  • the first processor 1010 is configured to encode a first image and a first boundary image block in an image to be encoded, where the first boundary image block is a boundary image block in a second image in the image to be encoded, and the first image is adjacent to the second image; the second processor 1020 is configured to encode the second image and a second boundary image block, where the second boundary image block is a boundary image block in the first image, and the first boundary image block is adjacent to the second boundary image block; the encoding information used when the first processor 1010 encodes the first boundary image block is the same as the encoding information used when the second processor 1020 encodes the first boundary image block, and the encoding information used when the second processor 1020 encodes the second boundary image block is the same as the encoding information used when the first processor 1010 encodes the second boundary image block.
  • the first processor 1010 is also configured to filter the adjacent boundary between the first image and the second image by using the encoded first image and the encoded first boundary image block, and/or the second processor 1020 is configured to filter the adjacent boundary between the first image and the second image by using the encoded second image and the encoded second boundary image block.
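The boundary filtering the processors perform can be illustrated with a minimal one-dimensional smoothing step. This is a hypothetical sketch only: actual deblocking filters in video codecs are strength- and QP-dependent and operate on several pixels per side.

```python
# Hypothetical one-dimensional deblocking sketch: because each processor
# holds the reconstructed boundary blocks from both sides of the tile
# boundary (encoded with identical preset information), it can smooth the
# pixels facing each other across the boundary locally, with no
# cross-processor communication.

def filter_boundary(left_px: int, right_px: int, strength: int = 4):
    """Pull the two pixels adjacent to the boundary toward their mean by
    1/strength of their difference (integer arithmetic)."""
    delta = (right_px - left_px) // strength
    return left_px + delta, right_px - delta

# filter_boundary(100, 140) -> (110, 130): the 40-level step across the
# boundary is reduced to 20, softening the visible tile seam.
```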
  • the coding information includes coding mode and coding parameters.
  • the coding mode includes one or more of an inter prediction mode, an intra prediction mode, or a lossless coding mode.
  • the first processor 1010 is further configured to use first preset encoding information to encode the second boundary image block; the second processor 1020 is further configured to use the first preset encoding information to encode the second boundary image block.
  • the first processor 1010 is further configured to use second preset encoding information to encode the first boundary image block; the second processor 1020 is further configured to use the second preset encoding information to encode the first boundary image block.
  • if the encoding mode in the preset encoding information includes an intra prediction mode, the encoding parameters in the preset encoding information include: a first preset encoding mode, information of a preset reference image block, and preset transform and quantization parameters; if the encoding mode in the preset encoding information includes an inter prediction mode, the encoding parameters in the preset encoding information include: a second preset encoding mode, information of a preset reference frame, a preset motion vector, and preset transform and quantization parameters; if the encoding mode in the preset encoding information includes the transform and quantization bypass mode in the lossless encoding mode, the encoding parameters in the preset encoding information include: the intra prediction mode and the information of the preset reference image block, or the inter prediction mode, the information of the preset reference frame, and the preset motion vector; if the encoding mode in the preset encoding information includes the pulse code modulation mode in the lossless encoding mode
  • if the first image and the second image are obtained by vertically dividing the image to be encoded, the first preset encoding mode adopted by the first boundary image block satisfies a first preset condition, and the first preset encoding mode adopted by the second boundary image block satisfies a second preset condition; the first preset condition is that the intra-frame prediction reconstruction pixels used by the first boundary image block are obtained according to the neighboring blocks on the left, lower left, upper left, or above of the first boundary image block or according to the DC prediction mode, and the second preset condition is that the intra-frame prediction reconstruction pixels used by the second boundary image block are obtained according to the neighboring blocks above or on the upper right of the second boundary image block; or, if the first image and the second image are obtained by horizontally dividing the image to be encoded, the first preset encoding mode adopted by the first boundary image block satisfies a third preset condition, and the first preset encoding mode adopted by the second boundary image block satisfies a fourth preset condition; the third preset condition is that the intra-frame prediction reconstruction pixels used by the first boundary image block are obtained according to the neighboring blocks above, on the upper left, on the upper right, or on the left of the first boundary image block or according to the DC prediction mode or the planar prediction mode, and the fourth preset condition is that the intra-frame prediction reconstruction pixels used by the second boundary image block are obtained according to the neighboring blocks on the left or lower left of the second boundary image block.
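The four preset conditions above can be summarized by a hypothetical lookup of the neighbour positions from which intra reference pixels may be taken (the position labels and the "dc"/"planar" shorthand are invented here):

```python
# Hypothetical summary of the four preset conditions: given the division
# direction and which side of the boundary a block sits on, return the
# neighbour positions its intra-frame prediction reconstruction pixels
# may come from (plus non-directional modes where permitted).

def allowed_references(division: str, side: str) -> set:
    if division == "vertical":
        if side == "first":        # first preset condition
            return {"left", "below_left", "above_left", "above", "dc"}
        if side == "second":       # second preset condition
            return {"above", "above_right"}
    if division == "horizontal":
        if side == "first":        # third preset condition
            return {"above", "above_left", "above_right", "left",
                    "dc", "planar"}
        if side == "second":       # fourth preset condition
            return {"left", "below_left"}
    raise ValueError((division, side))
```

The pattern is that each side may only reference neighbours that lie within the image region its own processor has already reconstructed.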
  • if the image to be encoded further includes a third image and a fourth image, and the first image, the second image, the third image, and the fourth image are obtained by horizontally and vertically dividing the image to be encoded, the encoding mode in the preset encoding information includes one or more of the vertical prediction mode in the intra prediction mode, the horizontal prediction mode in the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  • the first processor 1010 is further configured to: encode the first image and the first boundary image block based on one or more of the vertical prediction mode or the horizontal prediction mode, the inter prediction mode, and the lossless coding mode; and encode the intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, where the intersection image is the image at the intersection position of the first image, the second image, the third image, and the fourth image.
  • the second processor 1020 is further configured to: encode the second image and the second boundary image block based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, and the lossless coding mode; and encode the intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, where the intersection image is the image at the intersection position of the first image, the second image, the third image, and the fourth image.
  • the preset motion vector is a preset fixed motion vector; or the preset motion vector is obtained by searching in a preset search area and/or based on a preset search method.
  • the target transform quantization parameter includes a preset quantization parameter QP and a preset transform unit TU division manner.
  • the first processor 1010 or the second processor 1020 is further configured to: if the encoding information includes multiple encoding modes, select the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding cost algorithm.
  • the first processor 1010 and the second processor 1020 belong to the same encoder or different encoders.
  • the encoding apparatus 1000 may further include a memory 1030.
  • the first processor 1010 and the second processor 1020 can call and run computer programs from the memory 1030 to implement the methods in the embodiments of the present application.
  • the memory 1030 may be a separate device independent of the first processor 1010 and/or the second processor 1020, or may be integrated in the first processor 1010 and/or the second processor 1020.
  • the encoding device may be, for example, an encoder or a terminal (including but not limited to mobile phones, cameras, drones, etc.), and the encoding device may implement the corresponding processes in the various methods of the embodiments of the present application. For brevity, details are not repeated here.
  • FIG. 11 is a schematic structural diagram of a chip of an embodiment of the present application.
  • the chip 1100 shown in FIG. 11 includes a first processor 1110 and a second processor 1120.
  • the first processor 1110 and the second processor 1120 can call and run a computer program from a memory to implement the method in the embodiment of the present application. .
  • the chip 1100 may further include a memory 1130.
  • the first processor 1110 and/or the second processor 1120 may call and run a computer program from the memory 1130 to implement the method in the embodiment of the present application.
  • the memory 1130 may be a separate device independent of the first processor 1110 and/or the second processor 1120, or may be integrated in the first processor 1110 and/or the second processor 1120.
  • the chip 1100 may further include an input interface 1140.
  • the first processor 1110 and/or the second processor 1120 can control the input interface 1140 to communicate with other devices or chips, and specifically, can obtain information or data sent by other devices or chips.
  • the chip 1100 may further include an output interface 1150.
  • the first processor 1110 and/or the second processor 1120 can control the output interface 1150 to communicate with other devices or chips, specifically, can output information or data to other devices or chips.
  • the chip mentioned in the embodiment of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the processor of the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • the memory in the embodiment of the present application may also be static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synch link DRAM, SLDRAM), direct rambus random access memory (Direct Rambus RAM, DR RAM), and the like. That is to say, the memory in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
  • the memory in the embodiment of the present application can provide instructions and data to the processor.
  • a part of the memory may also include a non-volatile random access memory.
  • the memory can also store device type information.
  • the processor may be used to execute instructions stored in the memory, and when the processor executes the instructions, the processor may execute each step corresponding to the terminal device in the foregoing method embodiment.
  • each step of the above method can be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly embodied as execution and completion by a hardware processor, or execution and completion by a combination of hardware and software modules in the processor.
  • the software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor executes the instructions in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the pixels in an image can be located in different rows and/or columns, where the length of A can correspond to the number of pixels located in the same row included in A, and the height of A can correspond to the number of pixels located in the same column included in A.
  • the length and height of A may also be referred to as the width and depth of A, which is not limited in the embodiments of the present application.
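The length/height convention above can be stated as a tiny sketch (illustrative only; the coordinate convention and the helper name `region_dimensions` are assumptions, not part of the application, and a rectangular region A is assumed):

```python
def region_dimensions(pixels):
    """pixels: set of (row, col) coordinates belonging to a rectangular region A.

    Length = number of pixels A holds in one row; height = number in one column.
    """
    rows = {r for r, _ in pixels}
    cols = {c for _, c in pixels}
    length = max(cols) - min(cols) + 1  # pixels per row
    height = max(rows) - min(rows) + 1  # pixels per column
    return length, height

# A 2x4 rectangular region: 4 pixels per row, 2 per column
a = {(r, c) for r in range(2) for c in range(4)}
print(region_dimensions(a))  # (4, 2)
```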
  • being spaced apart from the boundary of A means being at least one pixel away from the boundary of A, which may also be described as "not adjacent to the boundary of A" or "not located at the boundary of A"; the embodiments of this application do not limit this, and A may be an image, a rectangular area, a sub-image, and so on.
  • the embodiments of the present application also provide a computer-readable storage medium for storing computer programs.
  • the computer-readable storage medium can be applied to the encoding device in the embodiment of the present application, and the computer program causes the computer to execute the corresponding process implemented by the encoding device in each method of the embodiment of the present application.
  • the embodiments of the present application also provide a computer program product, including computer program instructions.
  • the computer program product can be applied to the encoding device in the embodiment of the present application, and the computer program instructions cause the computer to execute the corresponding process implemented by the encoding device in each method of the embodiment of the present application.
  • the embodiment of the present application also provides a computer program.
  • the computer program can be applied to the encoding device in the embodiment of the present application.
  • when the computer program runs on a computer, it causes the computer to execute the corresponding process implemented by the encoding device in each method of the embodiments of the present application.
  • details are not repeated here.
  • the term "and/or” is merely an association relationship describing an associated object, indicating that there may be three relationships.
  • a and/or B can mean: A alone exists, A and B exist at the same time, and B exists alone.
  • the character "/" in this text generally indicates that the associated objects before and after are in an "or" relationship.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disks, and optical discs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding method and apparatus, comprising: encoding, with a first processor, a first image and a first boundary image block in an image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, the first image being adjacent to the second image; encoding, with a second processor, the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, the first boundary image block being adjacent to the second boundary image block; and filtering the adjacent boundary between the first image and the second image using the first image and the first boundary image block encoded by the first processor, and/or filtering the adjacent boundary between the first image and the second image using the second image and the second boundary image block encoded by the second processor.

Description

Video encoding method and apparatus

Technical Field

This application relates to the field of image processing and, more specifically, to a video encoding method and apparatus.
Background

At present, in practical applications, the demand for video resolution and frame rate keeps rising, and a single-core hardware encoder can no longer meet it; a multi-core hardware encoder provides higher encoding performance and can therefore satisfy higher resolution and frame-rate requirements. A multi-core hardware encoder usually divides the image or video into multiple tiles or slice segments (SS, often simply called slices), with each core responsible for encoding one or more of the tiles or slices.

Because the image is divided among multiple cores for encoding, noticeable blocking artifacts appear at the boundaries where the image was divided, degrading the displayed image quality and the user's experience of watching the video.

Therefore, how to eliminate the boundary blocking artifacts caused by different cores encoding different images of the same video is a problem to be solved urgently.
Summary

The embodiments of this application provide a video encoding method and apparatus that can eliminate the boundary blocking artifacts caused by different cores encoding different images of the same video, improving the displayed image quality and, further, the user's experience of watching the video.

In a first aspect, a video encoding method is provided, comprising: encoding, with a first processor, a first image and a first boundary image block in an image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, the first image being adjacent to the second image; encoding, with a second processor, the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, the first boundary image block being adjacent to the second boundary image block; wherein the encoding information used by the first processor to encode the first boundary image block is the same as the encoding information used by the second processor to encode the first boundary image block, and the encoding information used by the second processor to encode the second boundary image block is the same as the encoding information used by the first processor to encode the second boundary image block; and filtering the adjacent boundary between the first image and the second image using the first image and the first boundary image block encoded by the first processor, and/or filtering the adjacent boundary between the first image and the second image using the second image and the second boundary image block encoded by the second processor.

In a second aspect, a video encoding apparatus is provided, comprising a first processor and a second processor. The first processor is configured to encode a first image and a first boundary image block in an image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, the first image being adjacent to the second image; the second processor is configured to encode the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, the first boundary image block being adjacent to the second boundary image block; the encoding information used by the first processor to encode the first boundary image block is the same as the encoding information used by the second processor to encode the first boundary image block, and the encoding information used by the second processor to encode the second boundary image block is the same as the encoding information used by the first processor to encode the second boundary image block; the first processor is further configured to filter the adjacent boundary between the first image and the second image using the encoded first image and first boundary image block, and/or the second processor is further configured to filter the adjacent boundary between the first image and the second image using the encoded second image and second boundary image block.
In a third aspect, a video encoding apparatus is provided, comprising a processor and a memory. The memory stores a computer program, and the processor calls and runs the computer program stored in the memory to perform the method of the first aspect or any of its implementations.

In a fourth aspect, a chip is provided for implementing the method of the first aspect or any of its implementations.

Specifically, the chip includes a processor configured to call and run a computer program from a memory, causing a device in which the chip is installed to perform the method of the first aspect or any of its implementations.

In a fifth aspect, a computer-readable storage medium is provided for storing a computer program, the computer program including instructions for performing the method of the first aspect or any possible implementation of the first aspect.

In a sixth aspect, a computer program product is provided, including computer program instructions that cause a computer to perform the method of the first aspect or any of its implementations.

In the video encoding method provided by the embodiments of this application, when the first processor and the second processor encode the first image and the second image, the first processor additionally encodes the first boundary image block and the second processor additionally encodes the second boundary image block; the encoding information the first processor uses to encode the first boundary image block is the same as the encoding information of the first boundary image block in the second image encoded by the second processor, and the encoding information the second processor uses to encode the second boundary image block is the same as the encoding information of the corresponding boundary image block in the first image. The encoded first image and first boundary image block can be used to filter the adjacent boundary between the first image and the second image, and/or the encoded second image and second boundary image block can be used to filter that boundary, thereby eliminating the boundary blocking artifacts between images caused by encoding on different processors, improving the displayed image quality and, further, the user's viewing experience.
Brief Description of the Drawings

The drawings used in the embodiments are briefly introduced below.

Fig. 1 is an architecture diagram to which the technical solutions of the embodiments of this application are applied;

Fig. 2 is a schematic diagram of a video encoding framework 2 according to an embodiment of this application;

Fig. 3 is a schematic flowchart of a video encoding method provided by an embodiment of this application;

Figs. 4a to 4e are schematic diagrams of dividing the video to be encoded, provided by different embodiments of this application;

Figs. 5a and 5b are schematic diagrams of dividing the video to be encoded, provided by further embodiments of this application;

Figs. 6 to 9 are schematic diagrams of dividing the video to be encoded, provided by further embodiments of this application;

Fig. 10 is a schematic structural diagram of a video encoding apparatus provided by an embodiment of this application;

Fig. 11 is a schematic structural diagram of a chip provided by an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application are described below.

Unless otherwise specified, all technical and scientific terms used in the embodiments of this application have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used in this application are only for the purpose of describing specific embodiments and are not intended to limit the scope of this application.
Fig. 1 is an architecture diagram to which the technical solutions of the embodiments of this application are applied.

As shown in Fig. 1, the system 100 can receive data to be processed 102, process it, and produce processed data 108. For example, the system 100 may receive data to be encoded and encode it to produce encoded data, or it may receive data to be decoded and decode it to produce decoded data. In some embodiments, the components in the system 100 may be implemented by one or more processors, which may be processors in a computing device or processors in a mobile device (for example, an unmanned aerial vehicle). The processor may be any kind of processor, which is not limited in the embodiments of the present invention. In some possible designs, the processor may include an encoder, a decoder, a codec, or the like. The system 100 may also include one or more memories, which may be used to store instructions and data, for example, computer-executable instructions implementing the technical solutions of the embodiments of the present invention, the data to be processed 102, the processed data 108, and so on. The memory may be any kind of memory, which is likewise not limited in the embodiments of the present invention.

The data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded. In some cases, the data to be encoded may include sensor data from a sensor, such as a vision sensor (for example, a camera or an infrared sensor), a microphone, a near-field sensor (for example, an ultrasonic sensor or radar), a position sensor, a temperature sensor, or a touch sensor. In some cases, the data to be encoded may include information from a user, for example, biometric information, which may include facial features, fingerprint scans, retina scans, voice recordings, DNA samples, and so on.
Fig. 2 is a schematic diagram of a video encoding framework 2 according to an embodiment of this application. As shown in Fig. 2, after the video to be encoded is received, each frame of the video is encoded in turn, starting from the first frame. The current frame mainly undergoes prediction, transform, quantization, and entropy coding, and the bitstream of the current frame is finally output. Correspondingly, the decoding process usually decodes the received bitstream by the inverse of the above process to recover the video frame information before decoding.

Specifically, as shown in Fig. 2, the video encoding framework 2 includes an encoding control module 201 for making decision-control actions and selecting parameters during encoding. For example, as shown in Fig. 2, the encoding control module 201 controls the parameters used in transform, quantization, inverse quantization, and inverse transform, controls the selection of intra or inter mode, and controls the parameters of motion estimation and filtering; the control parameters of the encoding control module 201 are also input into the entropy coding module and encoded to form part of the bitstream.
When encoding of the current frame begins, the frame is partitioned (202): specifically, it is first divided into slices, and then into blocks. Optionally, in one example, the frame is divided into multiple non-overlapping largest coding tree units (CTUs), each of which may be further iteratively split, by quadtree, binary-tree, or ternary-tree splitting, into a series of smaller coding units (CUs). In some examples, a CU may also contain associated prediction units (PUs) and transform units (TUs), where the PU is the basic unit of prediction and the TU the basic unit of transform and quantization. In some examples, the PU and TU are obtained by dividing a CU into one or more blocks, with one PU containing multiple prediction blocks (PBs) and associated syntax elements. In some examples, the PU and TU may be identical, or may be obtained from the CU by different splitting methods. In some examples, at least two of CU, PU, and TU are identical; for example, CU, PU, and TU are not distinguished, and prediction, quantization, and transform all operate in units of CUs. For convenience of description, CTUs, CUs, and other data units formed are all referred to below as coding blocks.
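The iterative quadtree splitting of a CTU into CUs described above can be sketched as follows (a minimal illustration; the `should_split` decision callback is an assumption standing in for the encoder's real split decision, and binary/ternary splits are omitted):

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Return a list of (x, y, size) leaf CUs for a square CTU at (x, y)."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # stop splitting: this node is a leaf CU
    half = size // 2
    cus = []
    for dy in (0, half):       # visit the four quadrants
        for dx in (0, half):
            cus += quadtree_split(x + dx, y + dy, half, min_size, should_split)
    return cus

# Split a 64x64 CTU everywhere until the 32x32 minimum is reached
cus = quadtree_split(0, 0, 64, 32, lambda x, y, s: True)
print(len(cus))  # 4 leaf CUs of size 32
```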
It should be understood that, in the embodiments of this application, the data unit targeted by video encoding may be a frame, a slice, a coding tree unit, a coding unit, a coding block, or a group of any of these. The size of the data unit may vary in different embodiments.
Specifically, as shown in Fig. 2, after the frame is divided into multiple coding blocks, prediction is performed to remove the spatial and temporal redundancy of the current frame. The two most commonly used prediction methods are intra prediction and inter prediction. Intra prediction uses only the reconstructed information in the current frame to predict the current coding block, while inter prediction also uses information in other, previously reconstructed frames (also called reference frames) to predict the current coding block. Specifically, in the embodiments of this application, the encoding control module 201 decides whether to select intra or inter prediction.

When the intra prediction mode is selected, the intra prediction 203 process includes: taking the reconstructed blocks of encoded neighbouring blocks around the current coding block as reference blocks; computing a prediction block from the pixel values of the reference blocks using a prediction-mode method; and subtracting the corresponding pixel values of the prediction block from the current coding block to obtain the residual of the current coding block. The residual of the current coding block passes through transform 204, quantization 205, and entropy coding 210 to form the bitstream of the current coding block. Further, after all coding blocks of the current frame go through the above encoding process, they form part of the frame's bitstream. In addition, the control and reference data produced in intra prediction 203 are also encoded by entropy coding 210 to form part of the bitstream.
Specifically, transform 204 removes the correlation of the residual of the image block so as to improve coding efficiency. The residual data of the current coding block are usually transformed with a two-dimensional discrete cosine transform (DCT) or a two-dimensional discrete sine transform (DST); for example, at the encoder the residual information of the coding block is multiplied by an N×M transform matrix and its transpose, and the transform coefficients of the current coding block are obtained after the multiplication.
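The separable transform described above (residual multiplied by a transform matrix and its transpose, Y = T·X·Tᵀ) can be illustrated with a 2-point orthonormal DCT matrix; the matrix-multiply helpers are assumptions written only for this sketch:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

s = 1 / math.sqrt(2)
T = [[s, s], [s, -s]]          # 2-point orthonormal DCT basis
X = [[1, 1], [1, 1]]           # a flat residual block
Y = matmul(matmul(T, X), transpose(T))
print(Y)  # energy is compacted into the DC coefficient (~2), the rest ~0
```

A flat block concentrates all its energy in the single DC coefficient, which is exactly why the transform helps compression: most coefficients quantize to zero.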
After the transform coefficients are produced, quantization 205 is used to further improve compression efficiency. The quantized coefficients are entropy coded (210) to obtain the residual bitstream of the current coding block, where the entropy coding methods include, but are not limited to, context-adaptive binary arithmetic coding (CABAC). Finally, the entropy-coded bitstream and the encoded mode information are stored or sent to the decoder. At the encoder, the quantized result is also inverse-quantized (206), and the inverse-quantization result is inverse-transformed (207). After inverse transform 207, the reconstructed pixels are obtained from the inverse-transform result and the motion-compensation result. The reconstructed pixels are then filtered (in-loop filtering, 211). After 211, the filtered reconstructed image (belonging to the reconstructed video frame) is output. Subsequently, the reconstructed image can serve as a reference frame image for inter prediction of other frames. In the embodiments of this application, the reconstructed image may also be called the reconstructed picture.

Specifically, an encoded neighbouring block in the intra prediction 203 process is a neighbouring block that was encoded before the current coding block; its reconstructed block is obtained by passing the residual produced during its encoding through transform 204, quantization 205, inverse quantization 206, and inverse transform 207, and adding the result to the prediction block of that neighbouring block. Correspondingly, inverse quantization 206 and inverse transform 207 are the inverse processes of quantization 205 and transform 204, used to recover the residual data before quantization and transform.

The intra prediction modes may include the direct-current (DC) prediction mode, the planar prediction mode, and different angular prediction modes (for example, 33 angular prediction modes).
As shown in Fig. 2, when the inter prediction mode is selected, the inter prediction process includes motion estimation (ME) 208 and motion compensation (MC) 209. Specifically, motion estimation 208 is performed using the reference frame images among the reconstructed video frames: the image block most similar to the current coding block is searched for in one or more reference frame images according to a certain matching criterion; this matching block's displacement relative to the current coding block is the motion vector (MV) of the current coding block. Motion compensation 209 is then performed on the current coding block based on the motion vector and the reference frame to obtain the prediction block of the current coding block; subtracting the corresponding prediction-block pixel values from the original pixel values of the coding block yields the residual of the coding block. The residual of the current coding block passes through transform 204, quantization 205, and entropy coding 210 to form part of the frame's bitstream. In addition, the control and reference data produced in motion compensation 209 are also encoded by entropy coding 210 to form part of the bitstream.

As shown in Fig. 2, the reconstructed video frame is the frame obtained after filtering 211; it includes one or more reconstructed images. Filtering 211 reduces compression distortions, such as blocking and ringing artifacts, produced during encoding. During encoding, the reconstructed video frame provides reference frames for inter prediction; during decoding, the reconstructed video frames are post-processed and output as the final decoded video.

Specifically, the inter prediction modes may include the advanced motion vector prediction (AMVP) mode, the merge mode, or the skip mode.
For the AMVP mode, a motion vector predictor (MVP) can first be determined. After the MVP is obtained, the starting point of motion estimation can be determined from it; a motion search is performed near the starting point, and the optimal MV is obtained when the search is finished. The MV determines the position of the reference block in the reference image; subtracting the current block from the reference block yields the residual block, and subtracting the MVP from the MV yields the motion vector difference (MVD). The MVD and the index of the MVP are transmitted to the decoder in the bitstream.
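The AMVP relation above (MVD = MV − MVP, with only the MVD and the MVP index signalled) can be sketched as follows; the closest-predictor selection rule shown here is a simplification for illustration, not the standard's derivation:

```python
def amvp_encode(mv, mvp_list):
    """Pick the predictor closest to the found MV and signal (index, MVD)."""
    idx = min(range(len(mvp_list)),
              key=lambda i: abs(mv[0] - mvp_list[i][0]) + abs(mv[1] - mvp_list[i][1]))
    mvp = mvp_list[idx]
    mvd = (mv[0] - mvp[0], mv[1] - mvp[1])   # MVD = MV - MVP
    return idx, mvd

def amvp_decode(idx, mvd, mvp_list):
    """Recover MV = MVP + MVD from the signalled index and difference."""
    mvp = mvp_list[idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

candidates = [(4, 2), (-1, 0)]
idx, mvd = amvp_encode((5, 3), candidates)
print(idx, mvd)                           # 0 (1, 1)
print(amvp_decode(idx, mvd, candidates))  # (5, 3)
```

Signalling the small difference instead of the full vector is what makes the predictor worthwhile: a good MVP makes the MVD cheap to entropy-code.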
For the merge mode, the MVP can be determined first and directly used as the MV of the current block. To obtain the MVP, an MVP candidate list (merge candidate list) can first be constructed, containing at least one candidate MVP, each of which may correspond to an index. After the encoder selects an MVP from the candidate list, it can write the MVP index into the bitstream; the decoder can then find the MVP corresponding to that index in the same candidate list, so as to decode the image block.

It should be understood that the above process is only one specific implementation of the merge mode; the merge mode may have other implementations.

For example, the skip mode is a special case of the merge mode. If, after obtaining the MV in merge mode, the encoder determines that the current block is essentially identical to the reference block, no residual data needs to be transmitted; only the MVP index needs to be transferred, and further, a flag can be transferred indicating that the current block can be obtained directly from the reference block.

In other words, the merge mode is characterized by MV = MVP (MVD = 0); the skip mode has one more characteristic: the reconstructed value rec = the predicted value pred (residual value resi = 0).

The merge mode can be applied in the triangle prediction technique, in which the image block to be encoded is divided into two triangular sub-blocks; a motion vector can be determined for each sub-block from the motion-information candidate list, the prediction sub-block corresponding to each sub-block is determined based on its motion vector, and the prediction block of the current image block is constructed from the prediction sub-blocks, thereby encoding the current image block.
The decoder performs operations corresponding to the encoder. It first obtains the residual information using entropy decoding, inverse quantization, and inverse transform, and determines from the decoded bitstream whether the current image block uses intra or inter prediction. If intra prediction is used, the prediction information is constructed from the reconstructed image blocks in the current frame according to the intra prediction method; if inter prediction is used, the motion information is parsed and used to determine the reference block in the reconstructed images, yielding the prediction information. Next, the prediction information is added to the residual information, and the reconstructed information is obtained after the filtering operation.
In practical applications, the demand for video resolution and frame rate keeps rising, and a single-core hardware encoder can no longer meet it; a multi-core hardware encoder provides higher encoding performance and can meet higher resolution and frame-rate requirements. A multi-core hardware encoder usually divides the image or video into multiple tiles and/or multiple slices, with each core responsible for encoding one or more of the tiles or slices.

It should be understood that, in the embodiments of this application, the multiple tiles or slices obtained by dividing an image or video may also be called image blocks, which this application does not specifically limit.

Because the image is divided among multiple cores for encoding, obvious boundaries appear where the image was divided, degrading the user's viewing experience.

The embodiments of this application provide a video encoding method that can eliminate the boundaries caused by such encoding and, further, improve the user's viewing experience.
The video encoding method 300 provided by an embodiment of this application is described in detail below with reference to Fig. 3.

Fig. 3 shows the video encoding method 300 provided by an embodiment of this application; the method 300 may include steps 310 to 330.
310: Encode, with a first processor, a first image and a first boundary image block in the image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, the first image being adjacent to the second image.

In the embodiments of this application, the first image and the second image are images within the image to be encoded. In the specific encoding process, the image to be encoded can first be divided into at least two images, and the first processor can then encode one of them.

The first image and the second image may be of the same size, i.e., when the image to be encoded is divided, it can be divided vertically or horizontally through its centre; they may also be of different sizes, i.e., the division need not pass through the centre of the image to be encoded.
For example, Fig. 4a is a schematic diagram, provided by an embodiment of this application, of dividing a frame image of the video to be encoded. Dividing the video to be encoded yields two slice segments, slice segment 1 and slice segment 2, each strip-shaped. During encoding, the first processor can encode slice segment 1 and the second processor slice segment 2.

Fig. 4b is a schematic diagram, provided by another embodiment of this application, of dividing a frame image of the video to be encoded. Dividing the image to be encoded horizontally yields two tiles, tile 1 and tile 2, each rectangular.

Each tile may include an integer number of CTUs. During encoding, the first processor can encode tile 1 and the second processor tile 2. Similarly, the image to be encoded may also be divided vertically.

It should be understood that, in the embodiments of this application, horizontal division means dividing the image to be encoded along the horizontal direction, and vertical division means dividing it along the vertical direction.
In the embodiments of this application, dividing the image to be encoded must follow some basic principles: each slice segment and tile must satisfy at least one of the following two conditions:

(1) all CTUs in a slice segment belong to the same tile;

(2) all CTUs in a tile belong to the same slice segment.
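The two conditions above can be checked mechanically; the sketch below assumes each CTU is labelled with a (tile id, slice-segment id) pair (these labels and the helper name are illustrative, not part of the application):

```python
def partition_valid(ctus):
    """ctus: list of (tile_id, slice_id) per CTU.

    Valid if every slice segment lies in one tile, or every tile it touches
    lies entirely in one slice segment.
    """
    tiles_of_slice, slices_of_tile = {}, {}
    for tile, sl in ctus:
        tiles_of_slice.setdefault(sl, set()).add(tile)
        slices_of_tile.setdefault(tile, set()).add(sl)
    return all(len(tiles_of_slice[sl]) == 1 or
               all(len(slices_of_tile[t]) == 1 for t in tiles_of_slice[sl])
               for sl in tiles_of_slice)

# Fig. 4c-style layout: slices 1,2 inside tile 1; slices 3,4 inside tile 2
print(partition_valid([(1, 1), (1, 2), (2, 3), (2, 4)]))  # True
# A slice spanning two tiles while those tiles also hold other slices
print(partition_valid([(1, 1), (2, 1), (1, 2), (2, 3)]))  # False
```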
Fig. 4c is a schematic diagram, provided by yet another embodiment of this application, of dividing a frame image of the video to be encoded. As shown in Fig. 4c, the frame image can first be divided horizontally into two tiles, tile 1 and tile 2, and each tile can then be divided to obtain four slice segments: slice segments 1, 2, 3, and 4. It can be seen that all CTUs of a given slice segment belong to the same tile: all CTUs of slice segments 1 and 2 belong to tile 1, and all CTUs of slice segments 3 and 4 belong to tile 2, satisfying the above conditions.

Fig. 4d is a schematic diagram, provided by a further embodiment of this application, of dividing a frame image of the video to be encoded. As shown in Fig. 4d, the frame image can first be divided vertically into two tiles, tile 1 and tile 2, and each tile can then be divided to obtain four slice segments: slice segments 1, 2, 3, and 4. Again, all CTUs of a given slice segment belong to the same tile: all CTUs of slice segments 1 and 2 belong to tile 1, and all CTUs of slice segments 3 and 4 belong to tile 2, satisfying the above conditions.

In the embodiments of this application, when a tile is divided irregularly, it should be noted that the coding order of the CTUs in the resulting slice segments should be continuous. For example, as shown in Fig. 4e, when tile 2 is divided irregularly into slice segments, the division can follow the thicker black solid line in Fig. 4e, yielding slice segment 3 and slice segment 4, where slice segment 3 includes the CTUs with coding order 1 to 8, and slice segment 4 includes the CTUs with coding order 9 to 35.
320: Encode, with a second processor, the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, the first boundary image block being adjacent to the second boundary image block; wherein the encoding information used by the first processor to encode the first boundary image block is the same as the encoding information used by the second processor to encode the first boundary image block, and the encoding information used by the second processor to encode the second boundary image block is the same as the encoding information used by the first processor to encode the second boundary image block.

In the embodiments of this application, the first boundary image block may be a boundary image block in the second image, and the second boundary image block a boundary image block in the first image. When the first processor encodes the first boundary image block, it can reference preset image-encoding information; likewise, when the second processor encodes the second boundary image block, it can also reference preset image-encoding information.

It is worth noting that the preset image-encoding information used by the first processor when encoding the first boundary image block can be the same as the encoding information used by the second processor when encoding the first boundary image block; and the image-encoding information used by the second processor when encoding the second boundary image block can be the same as the encoding information used by the first processor when encoding the second boundary image block.

It should be understood that, in the embodiments of this application, the first boundary image block is adjacent to the second boundary image block, i.e., the boundary image block in the second image is adjacent to the boundary image block in the first image. In other words, if the image to be encoded is divided horizontally or vertically through its centre, the image blocks adjacent to the horizontal or vertical centre line of the image to be encoded are the first boundary image block and the second boundary image block of the embodiments of this application.
As shown in Fig. 5a, dividing the image to be encoded horizontally yields the first image and the second image of the embodiments of this application. Referring to Fig. 5a, tile 1 and tile 2 in the figure are respectively the first image and the second image, i.e., 5a-1 and 5a-2 in the figure are respectively the first image and the second image. 5a-3 in the figure is the first boundary image block, and 5a-4 the second boundary image block. The first image 5a-1 is the image, shown with a thicker black solid line, that includes the image blocks A; the second image 5a-2 is the image, shown with a thicker black solid line, that includes the image blocks D; image block 5a-3, adjacent to the division line, includes the image blocks B; image block 5a-4, adjacent to the division line, includes the image blocks C.

During encoding, the encoding information used by the first processor to encode the first boundary image block 5a-3 can be the same as the encoding information used by the second processor to encode the first boundary image block 5a-3; and the encoding information used by the second processor to encode the second boundary image block 5a-4 can be the same as the encoding information used by the first processor to encode the second boundary image block 5a-4.

It should be understood that the first boundary image block 5a-3 in the embodiments of this application includes multiple image blocks B, each of which can be further divided into multiple smaller blocks. Suppose one image block B is divided into four smaller blocks, B1, B2, B3, and B4; the encoding information of these four smaller blocks may differ from one another. Similarly, an image block D included in the second image 5a-2 can also be further divided into multiple smaller blocks; for example, the image block D corresponding to the above-divided image block B can be divided into four smaller blocks, D1, D2, D3, and D4, whose encoding information may also differ from one another.

It is worth noting that the encoding information used by the first processor to encode image block B1 can be the same as that used by the second processor to encode image block D1; the encoding information used by the first processor for B2 can be the same as that used by the second processor for D2; the encoding information for B3 can be the same as that for D3; and the encoding information for B4 can be the same as that for D4.
As shown in Fig. 5b, dividing the image to be encoded irregularly can also yield the first image and the second image of the embodiments of this application. Referring to Fig. 5b, 5b-1 and 5b-2 in the figure are respectively the first image and the second image; 5b-3 is the first boundary image block and 5b-4 the second boundary image block. The first image 5b-1 is the image, shown with a thicker black solid line, that includes the image blocks A; the second image 5b-2 is the image, shown with a thicker black solid line, that includes the image blocks D; image block 5b-3, adjacent to the division line, includes the image blocks B; image block 5b-4, adjacent to the division line, includes the image blocks C.

During encoding, the encoding information used by the first processor to encode the first boundary image block 5b-3 can be the same as the encoding information used by the second processor to encode the first boundary image block 5b-3; and the encoding information used by the second processor to encode the second boundary image block 5b-4 can be the same as the encoding information used by the first processor to encode the second boundary image block 5b-4.

In some embodiments, the image blocks A0, B0, C0, and D0 in Fig. 5b may also belong to the boundary image blocks. In other words, the first boundary image block 5b-3 may include the image blocks B and B0, and the second boundary image block 5b-4 may include the image blocks C and C0, where the encoding and parameter information of image block B0 can be the same as that of the image block D0 included in the second image 5b-2, and the encoding and parameter information of image block C0 can be the same as that of the image block A0 included in the first image 5b-1.

In the embodiments of this application, the number of processors and the number of images obtained by dividing the image to be encoded may also take other values; for example, if the division yields four images, four processors may be used, each encoding one of the four images. This application does not specifically limit this.
330: Filter the adjacent boundary between the first image and the second image using the first image and the first boundary image block encoded by the first processor, and/or filter the adjacent boundary between the first image and the second image using the second image and the second boundary image block encoded by the second processor.

The filtering in the embodiments of this application may include deblocking filtering, sample adaptive offset (SAO), or adaptive loop filtering (ALF). Deblocking filtering is mainly used to eliminate the blocking artifacts between images caused by encoding on different processors; SAO and ALF are mainly used to compensate for the distortion between the original and reconstructed pixels caused by encoding.
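As a rough illustration of what deblocking does (this is not the HEVC filter; it is a one-dimensional sketch with an assumed threshold): pixels straddling a block boundary are smoothed only when the step across the boundary is small enough to look like a coding artifact rather than a real edge:

```python
def deblock_row(row, boundary, threshold=8):
    """Smooth the two pixels straddling `boundary` if the step is small."""
    p, q = row[boundary - 1], row[boundary]
    if abs(p - q) < threshold:            # small step: likely a blocking artifact
        avg = (p + q) // 2
        row = row[:]
        row[boundary - 1], row[boundary] = avg, avg
    return row                            # large step: a real edge, left intact

print(deblock_row([50, 52, 54, 60, 62, 64], 3))     # small step smoothed
print(deblock_row([50, 52, 54, 200, 202, 204], 3))  # large step preserved
```

The real deblocking filter makes a similar artifact-versus-edge decision, but over more pixels and with boundary-strength and QP-dependent thresholds.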
It should be understood that, in the embodiments of this application, when the adjacent boundary between the first and second images is filtered using the first image and first boundary image block encoded by the first processor, the filtering types included may differ between encoding standards.

It should also be understood that the first boundary image block additionally encoded by the first processor, together with the first image, can be used to eliminate the blocking artifacts between images caused by encoding the first and second images on different processors; similarly, the second boundary image block additionally encoded by the second processor, together with the second image, can also be used to eliminate those blocking artifacts.

Therefore, after the filtering of the adjacent boundary between the first and second images is completed using the first and second boundary image blocks, the first processor can discard the additionally encoded first boundary image block, and the second processor can discard the additionally encoded second boundary image block. In other words, after the processed reconstructed pixels are obtained by the first and second processors, the first processor can discard the encoding information of the first boundary image block and the second processor that of the second boundary image block, so as to preserve the integrity of the image to be encoded.
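The overall flow of steps 310 to 330 (encode with duplicated boundary blocks, filter across the shared boundary, then discard the duplicates) can be sketched as follows; `encode_region` is a hypothetical stand-in for a real encoder, used only to show that identical preset information yields identical results on the shared blocks:

```python
def encode_region(blocks, preset_info):
    # stand-in for real encoding: deterministic given the same preset info
    return [(b, preset_info) for b in blocks]

def process(own_blocks, neighbour_boundary, preset_info):
    """One processor: encode own blocks plus the neighbour's boundary column."""
    coded = encode_region(own_blocks + neighbour_boundary, preset_info)
    filtered = coded  # deblocking across the shared boundary would happen here
    keep = len(own_blocks)
    return filtered[:keep]  # discard the duplicated neighbour blocks

p1 = process(["A", "B"], ["C"], "shared")  # first processor: image 1 + block C
p2 = process(["C", "D"], ["B"], "shared")  # second processor: image 2 + block B
print([b for b, _ in p1] + [b for b, _ in p2])  # ['A', 'B', 'C', 'D']
```

Because both processors encode the shared blocks with the same preset information, each side's filtered boundary matches the other's, and dropping the duplicates leaves exactly one copy of every block.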
In the video encoding method provided by the embodiments of this application, when the first processor and the second processor encode the first image and the second image, the first processor additionally encodes the first boundary image block and the second processor additionally encodes the second boundary image block, and each shared boundary image block is encoded by both processors with the same encoding information. The encoded first image and first boundary image block can be used to filter the adjacent boundary between the first and second images, and/or the encoded second image and second boundary image block can be used to filter that boundary, thereby eliminating the boundary blocking artifacts between images caused by encoding on different processors, improving the displayed image quality and, further, the user's viewing experience.
As noted above, the encoding information used by the first processor to encode the first boundary image block is the same as the encoding information used by the second processor to encode the first boundary image block, and the encoding information used by the second processor to encode the second boundary image block is the same as the encoding information used by the first processor to encode the second boundary image block. This encoding information may include several kinds of information, introduced specifically below.

Optionally, in some embodiments, the encoding information includes an encoding mode and encoding parameters.

Optionally, in some embodiments, the encoding mode includes one or more of an inter prediction mode, an intra prediction mode, or a lossless encoding mode.

The encoding information in the embodiments of this application may include an encoding mode and encoding parameters. Specifically, the encoding mode in the encoding information used by the first processor to encode the first boundary image block and the encoding mode used by the second processor to encode the first boundary image block may both be the inter prediction mode; or they may both be the intra prediction mode; or they may both be the lossless encoding mode, and so on.

Likewise, for the second boundary image block, the encoding mode in the encoding information used by the second processor to encode it and the encoding mode in the encoding information used by the first processor to encode it may both be the inter prediction mode; or they may both be the intra prediction mode; or they may both be the lossless encoding mode, and so on.
In other optional implementations, the encoding mode in the encoding information used by the first processor to encode the first boundary image block and the encoding mode used by the second processor to encode the first boundary image block may be any two or three of the inter prediction mode, the intra prediction mode, or the lossless encoding mode.

The encoding mode the first processor uses for the first boundary image block and the encoding mode it uses for the second boundary image block may be the same or different. Likewise, the encoding mode the second processor uses for the first boundary image block and the encoding mode it uses for the second boundary image block may be the same or different.

The encoding parameters can be determined from the boundary image blocks included in the first and second images. For example, the encoding modes and parameters of the adjacent boundary image blocks of the first and second images can be preset in a memory; the encoding mode and parameters the first processor uses to encode the first boundary image block can be obtained from the memory, and it must be ensured that they are the same as the encoding mode and parameters the second processor uses to encode the first boundary image block. The encoding parameters the second processor uses to encode the second boundary image block can also be obtained from the memory, and it must likewise be ensured that they are the same as the encoding parameters the first processor uses to encode the second boundary image block.

The lossless encoding mode in the embodiments of this application may be a pulse code modulation (PCM) mode or a transquant bypass mode, which this application does not specifically limit.
Optionally, in some embodiments, encoding the first image in the image to be encoded with the first processor includes: encoding the second boundary image block with the first processor using first preset encoding information; and encoding the second boundary image block with the second processor includes: encoding the second boundary image block with the second processor using the same first preset encoding information.

Optionally, in some embodiments, encoding the first boundary image block with the first processor includes: encoding the first boundary image block with the first processor using second preset encoding information; and encoding the second image with the second processor includes: encoding the first boundary image block with the second processor using the same second preset encoding information.

In the embodiments of this application, since the second boundary image block is a boundary image block in the first image, the first processor can use the first preset encoding information to encode the second boundary image block in the first image when encoding the first image; likewise, when the second processor encodes the second boundary image block, it can use the same first preset encoding information, which ensures that the two processors use the same encoding information when each encodes the second boundary image block, and the finally obtained reconstructed pixels of the second boundary image block are identical.

In the embodiments of this application, since the first boundary image block is a boundary image block in the second image, the second processor can use the second preset encoding information to encode the boundary image blocks in the second image, including the first boundary image block, when encoding the second image; likewise, when the first processor encodes the first boundary image block, it can use the same second preset encoding information, which ensures that the two processors use the same encoding information when each encodes the first boundary image block.
Optionally, in some embodiments: if the encoding mode in the preset encoding information includes the intra prediction mode, the encoding parameters in the preset encoding information include a first preset encoding mode, information of a preset reference image block, and preset transform and quantization parameters; if the encoding mode includes the inter prediction mode, the encoding parameters include a second preset encoding mode, information of a preset reference frame, a preset motion vector, and preset transform and quantization parameters; if the encoding mode includes the transquant bypass mode among the lossless encoding modes, the encoding parameters include the intra prediction mode and preset reference image block information, or the inter prediction mode, preset reference frame information, and a preset motion vector; if the encoding mode includes the pulse code modulation mode among the lossless encoding modes, the encoding parameters include the pixel bit depth of the pulse code modulation mode.

In the embodiments of this application, the encoding parameters in the first preset encoding information can be determined by the encoding mode. For example, if the encoding mode includes the intra prediction mode, the encoding parameters in the first preset encoding information may include: a first preset encoding mode, preset reference image block information, and preset transform and quantization parameters, where the first preset encoding mode may be the vertical prediction mode among the intra prediction modes (including, but not limited to, mode 26 of the 35 intra prediction modes defined in the HEVC standard) or the horizontal prediction mode (including, but not limited to, mode 10 of the 35 intra prediction modes defined in the HEVC standard), and the preset reference image block information may be the information of the image block above or to the left of the current image block.

If the encoding mode is the inter prediction mode, the encoding parameters in the second preset encoding information may include: a second preset encoding mode, preset reference frame information, a preset motion vector, and preset transform and quantization parameters, where the second preset encoding mode may be the AMVP, merge, or skip mode among the inter prediction modes, and the preset reference frame may be a forward or backward frame of the current frame.

If the encoding mode includes the transquant bypass mode among the lossless encoding modes, the encoding parameters in the preset encoding information may include the intra prediction mode or the inter prediction mode; for the other specific parameters of the intra or inter prediction mode, refer to the intra- and inter-prediction parameters above, which are not repeated here.

The transquant bypass mode in the embodiments of this application may mean that there is no transform-quantization step in the encoding process, i.e., the residual obtained after predicting the current image block is not transform-quantized.

If the encoding mode includes the pulse code modulation mode, the encoding parameters may include the pixel bit depth. The pulse code modulation mode in the embodiments of this application may mean encoding the original pixels of the current image block directly, without the prediction, transform, or quantization processes.
As noted above, the first and second images of the embodiments of this application can be obtained by dividing the image to be encoded; for different division methods, the encoding information of the boundary image blocks may differ, as introduced specifically below.

Optionally, in some embodiments: if the first image and the second image are obtained by dividing the image to be encoded vertically, the first preset encoding mode used by the first boundary image block satisfies a first preset condition, and the first preset encoding mode used by the second boundary image block satisfies a second preset condition, where the first preset condition is that the intra-predicted reconstructed pixels of the first boundary image block are obtained from neighbouring blocks to its left, below-left, above-left, or above, or according to the direct-current (DC) prediction mode, and the second preset condition is that the intra-predicted reconstructed pixels of the second boundary image block are obtained from neighbouring blocks above or above-right of it. Or, if the first image and the second image are obtained by dividing the image to be encoded horizontally, the first preset encoding mode used by the first boundary image block satisfies a third preset condition, and the first preset encoding mode used by the second boundary image block satisfies a fourth preset condition, where the third preset condition is that the intra-predicted reconstructed pixels of the first boundary image block are obtained from neighbouring blocks above, above-left, above-right, or to the left of it, or according to the DC prediction mode, or according to the planar prediction mode, and the fourth preset condition is that the intra-predicted reconstructed pixels of the second boundary image block are obtained from neighbouring blocks to its left or below-left.
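The mode constraints above can be expressed as allowed sets over HEVC's 35 intra modes (0 = planar, 1 = DC, 2-34 = angular, 10 = horizontal, 26 = vertical); the mapping below is a hedged reading of the conditions, written for illustration and not normative:

```python
def allowed_modes(split, side):
    """Allowed intra modes for a boundary block, per the preset conditions."""
    if split == "vertical":
        # first boundary block: left/below-left/above-left/above neighbours or DC
        return set(range(2, 27)) | {1} if side == "first" else set(range(26, 35))
    if split == "horizontal":
        # first boundary block: above/above-left/above-right/left, DC or planar
        return set(range(10, 35)) | {0, 1} if side == "first" else set(range(2, 11))
    raise ValueError(split)

print(26 in allowed_modes("vertical", "first"))   # vertical mode allowed: True
print(2 in allowed_modes("vertical", "second"))   # left-referencing mode: False
```

The intent is that each boundary block only predicts from neighbours that both processors have encoded identically, so the reconstructed pixels match on both sides.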
In the embodiments of this application, as shown in Fig. 6, dividing the image to be encoded vertically yields the first image 6-1 and the second image 6-2. When different processors encode the first image 6-1 and the second image 6-2, the first processor can encode the first image 6-1 and the first boundary image block 6-3, and the second processor the second image 6-2 and the second boundary image block 6-4. The encoding information the first processor uses to encode the image blocks B included in the first boundary image block 6-3 can be the same as that the second processor uses to encode the image blocks D included in the second image 6-2; the encoding information the second processor uses to encode the image blocks C included in the second boundary image block 6-4 can be the same as that the first processor uses to encode the image blocks A included in the first image 6-1.

In the embodiments of this application, the first image 6-1 is the image, shown with a thicker black solid line, that includes the image blocks A, and the second image 6-2 is the image, shown with a thicker black solid line, that includes the image blocks D.

Specifically, when the first processor encodes the first image 6-1 and the first boundary image block 6-3, the encoding information it uses for the first boundary image block 6-3 can be the same as the encoding information of the image blocks D included in the second image 6-2 encoded by the second processor. If vertical division of the image to be encoded yields the first image 6-1 and second image 6-2, the first preset encoding mode included in the second preset encoding information used by the first boundary image block 6-3 can satisfy the first preset condition: the intra-predicted reconstructed pixels of 6-3 can be obtained according to different angular prediction modes (for example, based on the left, below-left, above-left, or above neighbouring blocks, including modes 2 to 26 of the 35 intra prediction modes defined in the HEVC standard) or according to the DC prediction mode (including mode 1 of the 35 HEVC intra prediction modes). The first preset encoding mode included in the first preset encoding information used by the second boundary image block 6-4 can satisfy the second preset condition: the intra-predicted reconstructed pixels of 6-4 can be obtained according to different angular prediction modes (based on the above or above-right neighbouring blocks, including modes 26 to 34 of the 35 HEVC intra prediction modes).

On this basis, when the first processor encodes the first boundary image block 6-3 and the second processor encodes the image blocks D included in the second image 6-2, the first processor can use the first preset encoding mode included in the above second preset encoding information to encode 6-3, and the second processor can also use that same mode to encode the image blocks D of 6-2.

When the second processor encodes the second boundary image block 6-4 and the first processor encodes the image blocks A included in the first image 6-1, the second processor can use the first preset encoding mode included in the first preset encoding information to encode 6-4, and the first processor can also use that same mode to encode the image blocks A of 6-1.

It should be understood that, in the embodiments of this application, if the first and second images are obtained by dividing the image to be encoded vertically, the first preset encoding information may also include the AMVP, merge, or skip mode among the inter prediction modes, or a lossless encoding mode; the first processor can then use the first preset encoding information to encode the first boundary image block, and likewise the second processor can use the first preset encoding information to encode the first boundary image block included in the second image. The second preset encoding information may also include the AMVP, merge, or skip mode among the inter prediction modes, or a lossless encoding mode; the second processor can then use the second preset encoding information to encode the second boundary image block, and likewise the first processor can use the second preset encoding information to encode the second boundary image block included in the first image.

When the AMVP, merge, or skip mode among the inter prediction modes is used, the reference image block can be the image block to the left or right of the current image block.

It can be understood that the first preset encoding information and the second preset encoding information in the embodiments of this application may differ. For example, the first preset encoding information may include the AMVP mode of inter prediction while the second includes a lossless encoding mode; or the first may include a lossless encoding mode while the second includes the merge mode of inter prediction.
In the embodiments of this application, the image to be encoded can be divided in different ways. For example, Fig. 7 shows the first image 7-1 and second image 7-2 obtained, in an embodiment of this application, by dividing the image to be encoded horizontally. In this embodiment, the first image 7-1 is the image, shown with a thicker black solid line, that includes the image blocks A, and the second image 7-2 is the image, shown with a thicker black solid line, that includes the image blocks D.

Under this division, the first preset encoding mode included in the second preset encoding information used by the first boundary image block 7-3 can satisfy the third preset condition: the intra-predicted reconstructed pixels of 7-3 can be obtained according to different angular prediction modes (for example, based on the above, above-left, above-right, or left neighbouring blocks, including modes 10 to 34 of the 35 intra prediction modes defined in the HEVC standard), or according to the DC prediction mode (including mode 1 of the 35 HEVC intra prediction modes), or according to the planar prediction mode (including mode 0 of the 35 HEVC intra prediction modes). The first preset encoding mode included in the first preset encoding information used by the second boundary image block 7-4 can satisfy the fourth preset condition: the intra-predicted reconstructed pixels of 7-4 can be obtained according to different angular prediction modes (based on the left or below-left neighbouring blocks, including modes 2 to 10 of the 35 HEVC intra prediction modes).

On this basis, when the first processor encodes the first boundary image block 7-3, it can use the first encoding mode included in the above second preset encoding information to encode 7-3, and when the second processor encodes the image blocks D included in the second image 7-2, it can also use that same first encoding mode to encode the image blocks D.

Similarly, when the second processor encodes the second boundary image block 7-4, it can use the first encoding mode included in the first preset encoding information to encode 7-4, and when the first processor encodes the image blocks A included in the first image 7-1, it can also use that same first encoding mode to encode the image blocks A.
It should also be understood that, in the embodiments of this application, the image to be encoded can be divided vertically or horizontally several times to obtain multiple image blocks. For example, as shown in Fig. 8, two vertical divisions of the image to be encoded yield three images: the first image 8-1, the second image 8-2, and the third image 8-3. In this embodiment, the first image 8-1 is the image, shown with a thicker black solid line, that includes the image blocks A; the second image 8-2 the image that includes the image blocks D and E; and the third image 8-3 the image that includes the image blocks H.

The first image 8-1, second image 8-2, and third image 8-3 can be encoded by different processors, in a manner similar to that of Fig. 6 above. For example, when the first processor encodes the first image 8-1, it can additionally encode one column of image blocks, called the first boundary image block 8-4; the encoding information the first processor uses to encode 8-4 can be the same as that the second processor uses to encode the image blocks D of the second image 8-2 adjacent to the first image 8-1.

Similarly, since the second image 8-2 is adjacent to both the first image 8-1 and the third image 8-3, two extra columns of image blocks can be encoded when encoding it: the image blocks C included in the second boundary image block 8-5 and the image blocks F included in the third boundary image block 8-6. The encoding information the second processor uses to encode the image blocks C of 8-5 can be the same as that the first processor uses to encode the image blocks A of 8-1; the encoding information the second processor uses to encode the image blocks F of 8-6 can be the same as that the third processor uses to encode the image blocks H of 8-3.

For the third image 8-3, one extra column of image blocks, called the fourth boundary image block 8-7, can be encoded when encoding it; the encoding information the third processor uses to encode 8-7 can be the same as that the second processor uses to encode the image blocks E of the second image 8-2.

It can be understood that, in the embodiments of this application, if the image to be encoded is divided horizontally several times into multiple images, they can be encoded in a manner similar to the above; for brevity, details are not repeated here.
The above describes dividing the image to be encoded vertically or horizontally several times into multiple images, so that different processors can encode the resulting images together with each image's boundary image blocks, and the boundaries between the images can then be filtered based on those images and boundary image blocks. In some implementations, the image to be encoded may be divided both vertically and horizontally several times into multiple images; in this case the encoding differs somewhat from that described above, as detailed below.

Optionally, in some embodiments, the image to be encoded further includes a third image and a fourth image; if the first, second, third, and fourth images are obtained by dividing the image to be encoded both horizontally and vertically, the encoding mode in the preset encoding information includes one or more of the vertical prediction mode among the intra prediction modes, the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or the lossless encoding mode.

In the embodiments of this application, suppose vertical and horizontal division of the image to be encoded yields the first, second, third, and fourth images. When different processors encode these four images, each can additionally encode a column or row of image blocks adjacent to a neighbouring image. The encoding mode of such an additionally encoded column or row of image blocks can be determined, based on the division method, as the vertical or the horizontal prediction mode.

In the embodiments of this application, the encoding mode of an additionally encoded column or row of image blocks may also be the inter prediction mode or a lossless encoding mode; the image block at the intersection of the first, second, third, and fourth images may use the inter prediction mode or a lossless encoding mode, which this application does not specifically limit.
Optionally, in some embodiments, encoding the first image and the first boundary image block in the image to be encoded with the first processor includes: encoding the first image and the first boundary image block with the first processor based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and encoding, with the first processor, the intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being the image at the intersection of the first, second, third, and fourth images.

Optionally, in some embodiments, encoding the second image and the second boundary image block with the second processor includes: encoding the second image and the second boundary image block with the second processor based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and encoding, with the second processor, the intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being the image at the intersection of the first, second, third, and fourth images.
Fig. 9 is a schematic diagram, provided by an embodiment of this application, of dividing the image to be encoded once vertically and once horizontally. In this embodiment, the first image 9-1 is the image, shown with a thicker black solid line, that includes the image blocks A and C and image block 9-13; the second image 9-2 the image that includes the image blocks F and G and 9-13; the third image 9-3 the image that includes the image blocks J and K and 9-13; and the fourth image 9-4 the image that includes the image blocks P and N and 9-13.

In the embodiments of this application, when the first processor encodes the first image 9-1, it can also encode the boundary image blocks included in the adjacent second image 9-2 and third image 9-3. Specifically, the first processor can encode the first image 9-1, the first boundary image block 9-5, and the third boundary image block 9-6. The encoding information the first processor uses to encode the image blocks B included in 9-5 can be the same as that the second processor uses to encode the image blocks F included in the second image 9-2, and the encoding information the first processor uses to encode the image blocks D included in 9-6 can be the same as that the third processor uses to encode the image blocks J included in the third image 9-3.

The first boundary image block 9-5 and the image blocks F of the second image 9-2 can be encoded based on preset first preset encoding information: the first processor can use it to encode 9-5, and the second processor can use it to encode the image blocks F of 9-2. The third boundary image block 9-6 and the image blocks J of the third image can be encoded based on preset third preset encoding information: the first processor can use it to encode 9-6, and the third processor can use it to encode the image blocks J of the third image 9-3.

In the embodiments of this application, the first preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode; the third preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.

Similarly, when the second processor encodes the second image 9-2, it can also encode the image blocks included in the adjacent first image 9-1 and fourth image 9-4. Specifically, the second processor can encode the second image 9-2, the second boundary image block 9-7, and the fourth boundary image block 9-8. The encoding information the second processor uses to encode the image blocks E included in 9-7 can be the same as that the first processor uses to encode the image blocks A included in the first image 9-1; the encoding information the second processor uses to encode the image blocks H included in 9-8 can be the same as that the fourth processor uses to encode the image blocks N included in the fourth image 9-4.

The second boundary image block 9-7 and the image blocks A of the first image 9-1 can be encoded based on preset second preset encoding information: the second processor can use it to encode 9-7, and the first processor can use it to encode the image blocks A of 9-1. For the fourth boundary image block 9-8 and the image blocks N of the fourth image 9-4, the second processor can use preset fourth preset encoding information to encode 9-8, and the fourth processor can use that same information to encode the image blocks N of 9-4.

In the embodiments of this application, the second preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode; the fourth preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.

Similarly, when the third processor encodes the third image 9-3, it can also encode the image blocks included in the adjacent first image 9-1 and fourth image 9-4. Specifically, the third processor can encode the third image 9-3, the fifth boundary image block 9-9, and the sixth boundary image block 9-10. The encoding information the third processor uses to encode the image blocks I included in 9-9 can be the same as that the first processor uses to encode the image blocks C included in the first image 9-1; for example, both can use fifth preset encoding information. The encoding information the third processor uses to encode the image blocks L included in 9-10 can be the same as that the fourth processor uses to encode the image blocks P included in the fourth image 9-4; for example, both can use sixth preset encoding information.

In the embodiments of this application, the fifth preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode; the sixth preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.

Similarly, when the fourth processor encodes the fourth image 9-4, it can also encode the image blocks included in the adjacent second image 9-2 and third image 9-3. Specifically, the fourth processor can encode the fourth image 9-4, the seventh boundary image block 9-11, and the eighth boundary image block 9-12. The encoding information the fourth processor uses to encode the image blocks O included in 9-11 can be the same as that the third processor uses to encode the image blocks K included in the third image 9-3; for example, both can use seventh preset encoding information. The encoding information the fourth processor uses to encode the image blocks M included in 9-12 can be the same as that the second processor uses to encode the image blocks G included in the second image 9-2; for example, both can use eighth preset encoding information.

In the embodiments of this application, the seventh preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode; the eighth preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.
In the embodiments of this application, if one vertical division and one horizontal division of the image to be encoded yield four images, as shown in Fig. 9 (the first image 9-1, second image 9-2, third image 9-3, and fourth image 9-4), the first processor can encode the first image 9-1, the first boundary image block 9-5, and the third boundary image block 9-6. During encoding, the first image 9-1 can be encoded in several ways, for example, the horizontal prediction mode or the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode. For the first boundary image block 9-5 and the image blocks F included in the second image, the first processor can use first preset encoding information to encode 9-5, and the second processor can use that same first preset encoding information to encode the image blocks F, where the first preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode. For the third boundary image block 9-6 and the image blocks J included in the third image, the first processor can use third preset encoding information to encode 9-6, and the third processor can use that same third preset encoding information to encode the image blocks J, where the third preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.

Similarly, the second processor can encode the second image 9-2, the second boundary image block 9-7, and the fourth boundary image block 9-8. During encoding, the second image 9-2 can be encoded in several ways, for example, the horizontal or vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode. For the second boundary image block 9-7 and the image blocks A included in the first image 9-1, the second processor can use second preset encoding information to encode 9-7, and the first processor can use that same second preset encoding information to encode the image blocks A, where the second preset encoding information may include the vertical prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode. For the fourth boundary image block 9-8 and the image blocks N included in the fourth image 9-4, the second processor can use fourth preset encoding information to encode 9-8, and the fourth processor can use that same fourth preset encoding information to encode the image blocks N, where the fourth preset encoding information may include the horizontal prediction mode among the intra prediction modes, the inter prediction mode, or a lossless encoding mode.

Similarly, the encoding modes of the third image, the fourth image, and their boundary image blocks are similar to the above; for brevity, details are not repeated here.

In addition, the image block located at the intersection of the first image 9-1, second image 9-2, third image 9-3, and fourth image 9-4 can use the inter prediction mode or a lossless encoding mode. As shown in Fig. 9, the intersection image in the embodiments of this application can be 9-13; when encoding image block 9-13, the inter prediction mode or a lossless encoding mode can be used.
Optionally, in some embodiments, the preset motion vector is a preset fixed motion vector; or the preset motion vector is used for searching within a preset search area and/or based on a preset search method.

Optionally, in some embodiments, the preset transform and quantization parameters include a preset quantization parameter (QP) and a preset transform unit (TU) partitioning scheme.

In one implementation, the preset motion vector in the embodiments of this application can be a preset fixed motion vector. As shown in Fig. 6, for the first boundary image block 6-3 and the image blocks D included in the second image 6-2, the MV in the first preset encoding information they use can be a preset fixed motion vector. For example, assuming an MV set containing multiple MVs, the MV in the first preset encoding information can be the MV in the first position of the set, or an MV in the set pointing in a certain direction; this application does not specifically limit this.
Optionally, in some embodiments, the method further includes: if the encoding information includes multiple encoding methods, selecting the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding-cost algorithm.

In the embodiments of this application, as shown in Fig. 6, the encoding information the first processor uses to encode the image blocks B included in the first boundary image block 6-3 is the same as that the second processor uses to encode the image blocks D included in the second image 6-2, which may include the encoding mode being the same. That is, if the first preset encoding information includes the vertical prediction mode, so that the image blocks D of 6-2 are encoded in the vertical prediction mode, then the first processor can also use the vertical prediction mode when encoding the image blocks B of 6-3.

In the embodiments of this application, the encoding information the second processor uses to encode the image blocks C included in the second boundary image block 6-4 is the same as that the first processor uses to encode the image blocks A included in the first image 6-1, which may include the encoding mode being the same. For example, if the second preset encoding mode is the vertical prediction mode, the second processor can use the vertical prediction mode when encoding the image blocks C of 6-4 and the first processor can use the vertical prediction mode when encoding the image blocks A of 6-1; if the second preset encoding mode is the horizontal prediction mode, both can use the horizontal prediction mode for those blocks.

For the second boundary image block 6-4, since the second preset encoding mode may be either the vertical or the horizontal prediction mode, in some embodiments the encoding mode of 6-4 can be determined based on a preset encoding-cost algorithm, for example, rate-distortion optimization: if the vertical prediction mode yields the smaller loss cost for 6-4, it can be used; if the horizontal prediction mode yields the smaller loss cost, it can be used instead.
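The rate-distortion selection mentioned above can be sketched as picking the mode that minimizes D + λ·R; the cost numbers and λ below are made up for illustration, not taken from any encoder:

```python
def rd_select(modes, lam=0.5):
    """modes: {name: (distortion, rate_bits)} -> name with minimal RD cost."""
    return min(modes, key=lambda m: modes[m][0] + lam * modes[m][1])

costs = {"vertical": (10.0, 6), "horizontal": (9.0, 10)}
print(rd_select(costs))  # vertical: 10 + 3 = 13 beats horizontal: 9 + 5 = 14
```

The Lagrange multiplier λ trades distortion against rate: raising it makes cheaper-to-signal modes win even when they distort slightly more.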
Optionally, in some embodiments, the first processor and the second processor belong to the same encoder or to different encoders.

In the embodiments of this application, the first and second processors may belong to the same encoder or to different encoders. If they belong to the same encoder, they can be different processing cores of that encoder; if they belong to different encoders, they can be different encoders in the same encoding apparatus.

The method embodiments of this application have been described in detail above with reference to Figs. 1 to 9; the apparatus embodiments are described below with reference to Figs. 10 and 11. The apparatus embodiments correspond to the method embodiments, so for parts not described in detail, refer to the preceding method embodiments.
Fig. 10 shows an encoding apparatus 1000 provided by an embodiment of this application; the encoding apparatus 1000 may include a first processor 1010 and a second processor 1020.

The first processor 1010 is configured to encode a first image and a first boundary image block in the image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, the first image being adjacent to the second image. The second processor 1020 encodes the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, the first boundary image block being adjacent to the second boundary image block; the encoding information used by the first processor 1010 to encode the first boundary image block is the same as that used by the second processor 1020 to encode the first boundary image block, and the encoding information used by the second processor 1020 to encode the second boundary image block is the same as that used by the first processor 1010 to encode the second boundary image block. The first processor 1010 is further configured to filter the adjacent boundary between the first image and the second image using the encoded first image and first boundary image block, and/or the second processor 1020 filters the adjacent boundary between the first image and the second image using the encoded second image and second boundary image block.
可选地,在一些实施例中,所述编码信息包括编码模式和编码参数。
可选地,在一些实施例中,所述编码模式包括帧间预测模式、帧内预测模式或无损编码模式中的一种或多种。
可选地,在一些实施例中,所述第一处理器1010进一步用于:利用所述第一处理器采用第一预设编码信息对所述第二边界图像块进行编码;所述第二处理器1020进一步用于:采用所述第一预设编码信息对所述第二边界图像块进行编码。
可选地,在一些实施例中,所述第一处理器1010进一步用于:利用所述第一处理器采用第二预设编码信息对所述第一边界图像块进行编码;所述第二处理器1020进一步用于:采用所述第二预设编码信息对所述第一边界图像块进行编码。
可选地,在一些实施例中,若所述预设编码信息中的编码模式包括帧内预测模式,则所述预设编码信息中的编码参数包括:第一预设编码模式、预设参考图像块的信息以及预设变换量化参数;若所述预设编码信息中的编码模式包括帧间预测模式,则所述预设编码信息中的编码参数包括:第二预设编码模式、预设参考帧的信息、预设运动矢量以及预设变换量化参数;若所述预设编码信息中的编码模式包括无损编码模式中的变换量化旁路模式,则所述预设编码信息中的编码参数包括:所述帧内预测模式和预设参考图像块的信息,或所述帧间预测模式、预设参考帧的信息以及预设运动矢量;若所述预设编码信息中的编码模式包括所述无损编码模式中的脉冲编码调制模式,则所述预设编码信息中的编码参数包括:所述脉冲编码调制模式的像素比特深度。
可选地,在一些实施例中,若所述第一图像和所述第二图像是通过对所述待编码图像进行垂直划分得到的,所述第一边界图像块采用的所述第一预设编码模式满足第一预设条件,所述第二边界图像块采用的所述第一预设编码模式满足第二预设条件,所述第一预设条件为所述第一边界图像块采用的帧内预测重建像素是根据所述第一边界图像块的左边、左下、左上或上方的相邻块或根据直流预测模式获得的,所述第二预设条件为所述第二边界图像块采用的帧内预测重建像素是根据所述第二边界图像块的上方或右上的相邻块获得的;或若所述第一图像和所述第二图像是通过对所述待编码图像进行水平划分得到的,则所述第一边界图像块采用的所述第一预设编码模式满足第三预设条件,所述第二边界图像块采用的所述第一预设编码模式满足第 四预设条件,所述第三预设条件为所述第一边界图像块采用的帧内预测重建像素是根据所述第一边界图像块的上方、左上、右上或左边的相邻块或根据直流预测模式或根据平坦预测模式获得的,所述第四预设条件为所述第二边界图像块采用的帧内预测重建像素是根据所述第二边界图像块的左边或左下的相邻块获得的。
Optionally, in some embodiments, the image to be encoded further includes a third image and a fourth image. If the first image, the second image, the third image, and the fourth image are obtained by horizontally and vertically dividing the image to be encoded, the encoding mode in the preset encoding information includes one or more of a vertical prediction mode of the intra prediction mode, a horizontal prediction mode of the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
Optionally, in some embodiments, the first processor 1010 is further configured to: encode the first image and the first boundary image block based on one or more of the vertical prediction mode or the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and encode, based on the inter prediction mode or the lossless encoding mode, an intersection image included in the image to be encoded, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
Optionally, in some embodiments, the second processor 1020 is further configured to: encode the second image and the second boundary image block based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and encode, based on the inter prediction mode or the lossless encoding mode, an intersection image included in the image to be encoded, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
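Under a combined horizontal and vertical split, blocks can be classified by their position relative to the two split lines. The following sketch is a hypothetical illustration (the region names, overlap width, and coordinates are assumptions, not part of the disclosure):

```python
def region_of_block(x: int, y: int, half_w: int, half_h: int, overlap: int) -> str:
    """Classify a block in a horizontally and vertically divided frame.
    Blocks straddling both split lines belong to the intersection image,
    which the text above restricts to inter or lossless coding."""
    near_vertical_split = abs(x - half_w) < overlap
    near_horizontal_split = abs(y - half_h) < overlap
    if near_vertical_split and near_horizontal_split:
        return "intersection"          # inter prediction or lossless mode only
    if near_vertical_split:
        return "vertical_boundary"     # vertical/horizontal intra candidates apply
    if near_horizontal_split:
        return "horizontal_boundary"
    return "interior"
```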
Optionally, in some embodiments, the preset motion vector is a preset fixed motion vector; or the preset motion vector is searched for within a preset search region and/or according to a preset search method.
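Either variant keeps both processors' motion estimation deterministic at the boundary. As an illustrative sketch only (the region bounds and helper name are assumptions), constraining candidate vectors to a preset search region could look like:

```python
def clamp_mv(mv, region):
    """Clamp a candidate motion vector to the preset search region so that
    the search never leaves it; region = (min_x, max_x, min_y, max_y)."""
    min_x, max_x, min_y, max_y = region
    x = min(max(mv[0], min_x), max_x)
    y = min(max(mv[1], min_y), max_y)
    return (x, y)

# Either a fixed preset vector is used directly...
fixed_mv = (4, -2)
# ...or candidates from a preset search pattern are kept inside the region:
constrained = clamp_mv((100, -50), (-16, 16, -16, 16))
```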
Optionally, in some embodiments, the preset transform quantization parameters include a preset quantization parameter (QP) and a preset transform unit (TU) partitioning scheme.
Optionally, in some embodiments, the first processor 1010 or the second processor 1020 is further configured to: if the encoding information includes multiple encoding modes, select the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding-cost algorithm.
Optionally, in some embodiments, the first processor 1010 and the second processor 1020 belong to the same encoder or to different encoders.
Optionally, the encoding apparatus 1000 may further include a memory 1030, from which the first processor 1010 and the second processor 1020 may call and run a computer program to implement the methods in the embodiments of the present application.
The memory 1030 may be a separate device independent of the first processor 1010 and/or the second processor 1020, or may be integrated in the first processor 1010 and/or the second processor 1020.
Optionally, the encoding apparatus may be, for example, an encoder or a terminal (including but not limited to a mobile phone, a camera, a drone, and the like), and the encoding apparatus can implement the corresponding processes of the methods in the embodiments of the present application; for brevity, details are not repeated here.
Fig. 11 is a schematic structural diagram of a chip according to an embodiment of the present application. The chip 1100 shown in Fig. 11 includes a first processor 1110 and a second processor 1120, which may call and run a computer program from a memory to implement the methods in the embodiments of the present application.
Optionally, as shown in Fig. 11, the chip 1100 may further include a memory 1130, from which the first processor 1110 and/or the second processor 1120 may call and run a computer program to implement the methods in the embodiments of the present application.
The memory 1130 may be a separate device independent of the first processor 1110 and/or the second processor 1120, or may be integrated in the first processor 1110 and/or the second processor 1120.
Optionally, the chip 1100 may further include an input interface 1140. The first processor 1110 and/or the second processor 1120 may control the input interface 1140 to communicate with other apparatuses or chips; specifically, it may obtain information or data sent by other apparatuses or chips.
Optionally, the chip 1100 may further include an output interface 1150. The first processor 1110 and/or the second processor 1120 may control the output interface 1150 to communicate with other apparatuses or chips; specifically, it may output information or data to other apparatuses or chips.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be understood that the processor in the embodiments of the present application may be an integrated-circuit image processing system with signal processing capability. In implementation, the steps of the foregoing method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware.
It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example rather than limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, but not be limited to, these and any other suitable types of memory.
It should be understood that the foregoing memories are exemplary rather than limiting; for example, the memory in the embodiments of the present application may also be SRAM, DRAM, SDRAM, DDR SDRAM, ESDRAM, SLDRAM, DR RAM, and the like. That is, the memory in the embodiments of the present application is intended to include, but not be limited to, these and any other suitable types of memory.
The memory in the embodiments of the present application may provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory; for example, the memory may also store information on the device type. The processor may be configured to execute the instructions stored in the memory, and when executing the instructions, the processor may perform the steps corresponding to the terminal device in the foregoing method embodiments.
In implementation, the steps of the foregoing methods may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may reside in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor executes the instructions in the memory and completes the steps of the foregoing methods in combination with its hardware. To avoid repetition, details are not described here.
It should also be understood that, in the embodiments of the present application, the pixels of an image may be located in different rows and/or columns, where the length of A may correspond to the number of pixels of A located in the same row, and the height of A may correspond to the number of pixels of A located in the same column. In addition, the length and height of A may also be referred to as the width and depth of A, respectively, which is not limited in the embodiments of the present application.
It should also be understood that, in the embodiments of the present application, "spaced apart from the boundary of A" may mean spaced at least one pixel from the boundary of A, and may also be described as "not adjacent to the boundary of A" or "not located on the boundary of A", which is not limited in the embodiments of the present application, where A may be an image, a rectangular region, a sub-image, or the like.
It should also be understood that the above description of the embodiments of the present application emphasizes the differences between the embodiments; for the same or similar parts that are not mentioned, reference may be made between the embodiments, and details are not repeated here for brevity.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program.
Optionally, the computer-readable storage medium may be applied to the encoding apparatus in the embodiments of the present application, and the computer program causes a computer to perform the corresponding processes implemented by the encoding apparatus in the methods of the embodiments of the present application; for brevity, details are not repeated here.
An embodiment of the present application further provides a computer program product including computer program instructions.
Optionally, the computer program product may be applied to the encoding apparatus in the embodiments of the present application, and the computer program instructions cause a computer to perform the corresponding processes implemented by the encoding apparatus in the methods of the embodiments of the present application; for brevity, details are not repeated here.
An embodiment of the present application further provides a computer program.
Optionally, the computer program may be applied to the encoding apparatus in the embodiments of the present application; when the computer program runs on a computer, it causes the computer to perform the corresponding processes implemented by the encoding apparatus in the methods of the embodiments of the present application; for brevity, details are not repeated here.
It should be understood that, in the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects before and after it.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and such modifications or substitutions shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (29)

  1. A video encoding method, characterized by comprising:
    encoding, by a first processor, a first image and a first boundary image block in an image to be encoded, wherein the first boundary image block is a boundary image block of a second image in the image to be encoded, and the first image is adjacent to the second image;
    encoding, by a second processor, the second image and a second boundary image block, wherein the second boundary image block is a boundary image block in the first image, and the first boundary image block is adjacent to the second boundary image block; wherein encoding information used by the first processor when encoding the first boundary image block is the same as encoding information used by the second processor when encoding the first boundary image block, and encoding information used by the second processor when encoding the second boundary image block is the same as encoding information used by the first processor when encoding the second boundary image block; and
    filtering an adjacent boundary between the first image and the second image using the first image and the first boundary image block encoded by the first processor, and/or filtering the adjacent boundary between the first image and the second image using the second image and the second boundary image block encoded by the second processor.
  2. The method according to claim 1, characterized in that the encoding information includes an encoding mode and encoding parameters.
  3. The method according to claim 2, characterized in that the encoding mode includes one or more of an inter prediction mode, an intra prediction mode, or a lossless encoding mode.
  4. The method according to any one of claims 1 to 3, characterized in that encoding, by the first processor, the first image in the image to be encoded includes:
    encoding, by the first processor, the second boundary image block using first preset encoding information; and
    encoding, by the second processor, the second boundary image block includes:
    encoding, by the second processor, the second boundary image block using the first preset encoding information.
  5. The method according to any one of claims 1 to 4, characterized in that encoding, by the first processor, the first boundary image block includes:
    encoding, by the first processor, the first boundary image block using second preset encoding information; and
    encoding, by the second processor, the second image includes:
    encoding, by the second processor, the first boundary image block using the second preset encoding information.
  6. The method according to claim 4 or 5, characterized in that:
    if the encoding mode in the preset encoding information includes the intra prediction mode, the encoding parameters in the preset encoding information include a first preset encoding mode, information of a preset reference image block, and preset transform quantization parameters;
    if the encoding mode in the preset encoding information includes the inter prediction mode, the encoding parameters in the preset encoding information include a second preset encoding mode, information of a preset reference frame, a preset motion vector, and preset transform quantization parameters;
    if the encoding mode in the preset encoding information includes a transform-quantization bypass mode of the lossless encoding mode, the encoding parameters in the preset encoding information include the intra prediction mode and information of a preset reference image block, or the inter prediction mode, information of a preset reference frame, and a preset motion vector; and
    if the encoding mode in the preset encoding information includes a pulse code modulation mode of the lossless encoding mode, the encoding parameters in the preset encoding information include a pixel bit depth of the pulse code modulation mode.
  7. The method according to claim 6, characterized in that, if the first image and the second image are obtained by vertically dividing the image to be encoded, the first preset encoding mode used by the first boundary image block satisfies a first preset condition and the first preset encoding mode used by the second boundary image block satisfies a second preset condition, the first preset condition being that the intra-prediction reconstructed pixels used by the first boundary image block are obtained from a neighboring block to the left of, below-left of, above-left of, or above the first boundary image block, or according to a DC prediction mode, and the second preset condition being that the intra-prediction reconstructed pixels used by the second boundary image block are obtained from a neighboring block above or above-right of the second boundary image block; or
    if the first image and the second image are obtained by horizontally dividing the image to be encoded, the first preset encoding mode used by the first boundary image block satisfies a third preset condition and the first preset encoding mode used by the second boundary image block satisfies a fourth preset condition, the third preset condition being that the intra-prediction reconstructed pixels used by the first boundary image block are obtained from a neighboring block above, above-left of, above-right of, or to the left of the first boundary image block, or according to a DC prediction mode or a planar prediction mode, and the fourth preset condition being that the intra-prediction reconstructed pixels used by the second boundary image block are obtained from a neighboring block to the left of or below-left of the second boundary image block.
  8. The method according to any one of claims 4 to 6, characterized in that the image to be encoded further includes a third image and a fourth image, and
    if the first image, the second image, the third image, and the fourth image are obtained by horizontally and vertically dividing the image to be encoded, the encoding mode in the preset encoding information includes one or more of a vertical prediction mode of the intra prediction mode, a horizontal prediction mode of the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  9. The method according to claim 8, characterized in that encoding, by the first processor, the first image and the first boundary image block in the image to be encoded includes:
    encoding, by the first processor, the first image and the first boundary image block based on one or more of the vertical prediction mode or the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and
    encoding, by the first processor, an intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
  10. The method according to claim 8, characterized in that encoding, by the second processor, the second image and the second boundary image block includes:
    encoding, by the second processor, the second image and the second boundary image block based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and
    encoding, by the second processor, an intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
  11. The method according to any one of claims 6 to 10, characterized in that the preset motion vector is a preset fixed motion vector; or
    the preset motion vector is searched for within a preset search region and/or according to a preset search method.
  12. The method according to any one of claims 6 to 11, characterized in that the preset transform quantization parameters include a preset quantization parameter (QP) and a preset transform unit (TU) partitioning scheme.
  13. The method according to any one of claims 1 to 12, characterized in that the method further includes:
    if the encoding information includes multiple encoding modes, selecting the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding-cost algorithm.
  14. The method according to any one of claims 1 to 13, characterized in that the first processor and the second processor belong to the same encoder or to different encoders.
  15. A video encoding apparatus, characterized by comprising a first processor and a second processor, wherein:
    the first processor is configured to encode a first image and a first boundary image block in an image to be encoded, the first boundary image block being a boundary image block of a second image in the image to be encoded, and the first image being adjacent to the second image;
    the second processor is configured to encode the second image and a second boundary image block, the second boundary image block being a boundary image block in the first image, and the first boundary image block being adjacent to the second boundary image block; wherein encoding information used by the first processor when encoding the first boundary image block is the same as encoding information used by the second processor when encoding the first boundary image block, and encoding information used by the second processor when encoding the second boundary image block is the same as encoding information used by the first processor when encoding the second boundary image block;
    the first processor is further configured to filter an adjacent boundary between the first image and the second image using the encoded first image and the encoded first boundary image block; and/or
    the second processor is further configured to filter the adjacent boundary between the first image and the second image using the encoded second image and the encoded second boundary image block.
  16. The apparatus according to claim 15, characterized in that the encoding information includes an encoding mode and encoding parameters.
  17. The apparatus according to claim 16, characterized in that the encoding mode includes one or more of an inter prediction mode, an intra prediction mode, or a lossless encoding mode.
  18. The apparatus according to any one of claims 15 to 17, characterized in that the first processor is further configured to encode the second boundary image block using first preset encoding information; and
    the second processor is further configured to encode the second boundary image block using the first preset encoding information.
  19. The apparatus according to any one of claims 15 to 18, characterized in that the first processor is further configured to encode the first boundary image block using second preset encoding information; and
    the second processor is further configured to encode the first boundary image block using the second preset encoding information.
  20. The apparatus according to claim 18 or 19, characterized in that:
    if the encoding mode in the preset encoding information includes the intra prediction mode, the encoding parameters in the preset encoding information include a first preset encoding mode, information of a preset reference image block, and preset transform quantization parameters;
    if the encoding mode in the preset encoding information includes the inter prediction mode, the encoding parameters in the preset encoding information include a second preset encoding mode, information of a preset reference frame, a preset motion vector, and preset transform quantization parameters;
    if the encoding mode in the preset encoding information includes a transform-quantization bypass mode of the lossless encoding mode, the encoding parameters in the preset encoding information include the intra prediction mode and information of a preset reference image block, or the inter prediction mode, information of a preset reference frame, and a preset motion vector; and
    if the encoding mode in the preset encoding information includes a pulse code modulation mode of the lossless encoding mode, the encoding parameters in the preset encoding information include a pixel bit depth of the pulse code modulation mode.
  21. The apparatus according to claim 20, characterized in that, if the first image and the second image are obtained by vertically dividing the image to be encoded, the first preset encoding mode used by the first boundary image block satisfies a first preset condition and the first preset encoding mode used by the second boundary image block satisfies a second preset condition, the first preset condition being that the intra-prediction reconstructed pixels used by the first boundary image block are obtained from a neighboring block to the left of, below-left of, above-left of, or above the first boundary image block, or according to a DC prediction mode, and the second preset condition being that the intra-prediction reconstructed pixels used by the second boundary image block are obtained from a neighboring block above or above-right of the second boundary image block; or
    if the first image and the second image are obtained by horizontally dividing the image to be encoded, the first preset encoding mode used by the first boundary image block satisfies a third preset condition and the first preset encoding mode used by the second boundary image block satisfies a fourth preset condition, the third preset condition being that the intra-prediction reconstructed pixels used by the first boundary image block are obtained from a neighboring block above, above-left of, above-right of, or to the left of the first boundary image block, or according to a DC prediction mode or a planar prediction mode, and the fourth preset condition being that the intra-prediction reconstructed pixels used by the second boundary image block are obtained from a neighboring block to the left of or below-left of the second boundary image block.
  22. The apparatus according to any one of claims 18 to 20, characterized in that the image to be encoded further includes a third image and a fourth image, and
    if the first image, the second image, the third image, and the fourth image are obtained by horizontally and vertically dividing the image to be encoded, the encoding mode in the preset encoding information includes one or more of a vertical prediction mode of the intra prediction mode, a horizontal prediction mode of the intra prediction mode, the inter prediction mode, or the lossless encoding mode.
  23. The apparatus according to claim 22, characterized in that the first processor is further configured to:
    encode the first image and the first boundary image block based on one or more of the vertical prediction mode or the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and
    encode an intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
  24. The apparatus according to claim 22, characterized in that the second processor is further configured to:
    encode the second image and the second boundary image block based on one or more of the vertical prediction mode, the horizontal prediction mode, the inter prediction mode, or the lossless encoding mode; and
    encode an intersection image included in the image to be encoded based on the inter prediction mode or the lossless encoding mode, the intersection image being an image at the position of intersection of the first image, the second image, the third image, and the fourth image.
  25. The apparatus according to any one of claims 20 to 24, characterized in that the preset motion vector is a preset fixed motion vector; or
    the preset motion vector is searched for within a preset search region and/or according to a preset search method.
  26. The apparatus according to any one of claims 20 to 25, characterized in that the preset transform quantization parameters include a preset quantization parameter (QP) and a preset transform unit (TU) partitioning scheme.
  27. The apparatus according to any one of claims 15 to 26, characterized in that the first processor or the second processor is further configured to:
    select, if the encoding information includes multiple encoding modes, the encoding mode of the first boundary image block or the second boundary image block based on a preset encoding-cost algorithm.
  28. The apparatus according to any one of claims 15 to 27, characterized in that the first processor and the second processor belong to the same encoder or to different encoders.
  29. A computer-readable storage medium, characterized by comprising program instructions which, when run by a computer, cause the computer to perform the method according to any one of claims 1 to 14.
PCT/CN2019/130875, filed 2019-12-31: Method and apparatus for video encoding, published as WO2021134654A1; priority application CN201980048568.1A, published as CN112534824B.


Also published as: CN112534824A (published 2021-03-19); CN112534824B (granted 2022-09-23).
