CN114501029B - Image encoding method, image decoding method, image encoding device, image decoding device, computer device, and storage medium


Info

Publication number
CN114501029B
Authority
CN
China
Prior art keywords
pixel
point
current
decoded
points
Prior art date
Legal status
Active
Application number
CN202210031316.1A
Other languages
Chinese (zh)
Other versions
CN114501029A (en)
Inventor
沈凌翔
黄斌
赵多
李永杰
Current Assignee
Shenzhen Zhouming Technology Co Ltd
Original Assignee
Shenzhen Zhouming Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhouming Technology Co Ltd
Priority to CN202210031316.1A
Publication of CN114501029A
Application granted
Publication of CN114501029B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application relates to an image encoding method, an image decoding method, an apparatus, a computer device, and a storage medium. The image encoding method includes: determining a current coding block in an image frame, wherein the current coding block comprises pixel points to be coded that are coded in parallel; determining corresponding coded pixel points based on the position of the current coding block, taking a plurality of coded pixel points as a plurality of reference points of the pixel points to be coded, and obtaining difference values between the plurality of reference points and the pixel points to be coded; determining a current pixel point to be coded from the pixel points to be coded, and, if the adjacent forward pixel point of the current pixel point to be coded is located in the current coding block, determining a target reference point corresponding to the current pixel point to be coded from the plurality of reference points; and fusing the difference value with the coded data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding based on the predicted value. By adopting the method, all pixel points to be coded in the current coding block can be coded truly in parallel.

Description

Image encoding method, image decoding method, image encoding device, image decoding device, computer device, and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image encoding method, an image decoding method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of display technology, the requirements on video display resolution and frame rate keep increasing. However, ultra-high frame rate and ultra-high resolution video display necessarily places great strain on the display control system and the data transmission system. This requires the display control system to use compression, higher-speed transmission paths, and higher-performance display interfaces.
However, in the conventional method, within one pixel group, the data of a forward pixel in the pixel group is calculated first and then applied to the encoding of the other pixels in the current encoding block. This serializes the work and yields a low encoding rate, so the transmission efficiency is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image encoding/decoding method, apparatus, computer device, computer readable storage medium, and computer program product capable of high-rate transmission.
In a first aspect, the present application provides an image encoding method, including:
Determining a current coding block in an image frame, wherein the current coding block comprises pixel points to be coded which are coded in parallel;
determining corresponding encoded pixel points based on the position of a current encoding block, taking a plurality of encoded pixel points as a plurality of reference points of the pixel points to be encoded, and acquiring difference values of the plurality of reference points and the pixel points to be encoded, wherein the difference values are used for representing difference information of the pixel points to be encoded and the plurality of reference points;
determining a current pixel point to be coded from the pixel points to be coded, and determining a target reference point corresponding to the current pixel point to be coded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be coded are positioned in the current coding block;
and fusing the difference value with the coding data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding based on the predicted value.
In one embodiment, the determining the corresponding encoded pixel point based on the position of the current encoded block includes:
determining a plurality of encoded pixel points corresponding to the current encoding block position from a first line of a previous frame when the current encoding block is located in the first line of the image frame;
After encoding the first row of pixels of the image frame, a plurality of encoded pixels adjacent to the current encoding block position are determined from the previous row.
In one embodiment, the current encoding block includes a second pixel point to be encoded, and the determining, from the plurality of reference points, a target reference point corresponding to the current pixel point to be encoded includes:
determining the reference points of the second pixel points to be coded, which have the same positions in the current row and are positioned in different rows, as target reference points of the second pixel points to be coded;
fusing the difference value with the encoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of a second pixel point to be encoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel point to be encoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the second pixel point to be coded to obtain an inverse quantized residual value of the target reference point of the second pixel point to be coded;
and fusing the difference value of the second pixel point to be coded and the inverse quantization residual value of the target reference point of the second pixel point to be coded.
In one embodiment, the current encoding block includes a third pixel point to be encoded, and the determining, from the plurality of reference points, a target reference point corresponding to the current pixel point to be encoded includes:
determining the reference points of the third pixel points to be coded, which have the same positions in the current row and are positioned in different rows, as target reference points of the third pixel points to be coded;
the fusing the difference value with the encoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of the third pixel point to be encoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the third pixel point to be encoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the third pixel point to be coded to obtain an inverse quantized residual value of the target reference point of the third pixel point to be coded;
and fusing the inverse quantization residual value of the target reference point of the second pixel point to be encoded, the difference value of the third pixel point to be encoded and the inverse quantization residual value of the target reference point of the third pixel point to be encoded.
The application provides an image decoding method, which comprises the following steps:
determining a current decoding block in an image frame, wherein the current decoding block comprises pixel points to be decoded which are decoded in parallel;
determining corresponding decoded pixel points based on the position of a current decoding block, taking the decoded pixel points as a plurality of reference points of the pixel points to be decoded, and acquiring difference values of the reference points and the pixel points to be decoded, wherein the difference values are used for representing difference information of the pixel points to be decoded and the reference points;
determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are positioned in the current decoding block;
and fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding based on the predicted value.
In one embodiment, the determining the corresponding decoded pixel based on the position of the current decoded block includes:
determining a plurality of decoded pixel points corresponding to the current decoding block position from a first line of a previous frame when the current decoding block is positioned in the first line of the image frame;
After the first row of pixels of the image frame are decoded, a plurality of decoded pixels adjacent to the current decoding block position are determined from the previous row.
In one embodiment, the current decoding block includes a second pixel point to be decoded, and the determining, from the plurality of reference points, a target reference point corresponding to the current pixel point to be decoded includes:
determining the reference points of the second pixel point to be decoded, which have the same position in the current row and are positioned in different rows, as target reference points of the second pixel point to be decoded;
fusing the difference value with the decoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of a second pixel to be decoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel to be decoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the second pixel point to be decoded to obtain an inverse quantized residual value of the target reference point of the second pixel point to be decoded;
and fusing the difference value of the second pixel point to be decoded and the inverse quantization residual value of the target reference point of the second pixel point to be decoded.
In one embodiment, the current decoding block includes a third pixel point to be decoded, and the determining, from the plurality of reference points, a target reference point corresponding to the current pixel point to be decoded includes:
determining the reference points of the third pixel point to be decoded, which have the same position in the current row and are positioned in different rows, as target reference points of the third pixel point to be decoded;
the fusing the difference value with the decoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of the third pixel point to be decoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the third pixel point to be decoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the third pixel point to be decoded to obtain an inverse quantized residual value of the target reference point of the third pixel point to be decoded;
and fusing the inverse quantization residual value of the target reference point of the second pixel point to be decoded, the difference value of the third pixel point to be decoded and the inverse quantization residual value of the target reference point of the third pixel point to be decoded.
In a second aspect, the present application further provides an image encoding apparatus, the apparatus including:
the pixel point to be encoded determining module is used for determining a current encoding block in an image frame, wherein the current encoding block comprises pixel points to be encoded which are encoded in parallel;
the reference point position determining module is used for determining corresponding encoded pixel points based on the position of the current encoding block, taking a plurality of encoded pixel points as a plurality of reference points of the pixel points to be encoded, and obtaining difference values of the plurality of reference points and the pixel points to be encoded, wherein the difference values are used for representing difference information of the pixel points to be encoded and the plurality of reference points;
the target reference point determining module is used for determining a current pixel point to be encoded from the pixel points to be encoded, and determining a target reference point corresponding to the current pixel point to be encoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be encoded are positioned in the current encoding block;
and the prediction coding module is used for fusing the difference value with the coding data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding is carried out based on the predicted value.
The present application also provides an image decoding apparatus, the apparatus including:
the pixel point to be decoded determining module is used for determining a current decoding block in the image frame, wherein the current decoding block comprises pixel points to be decoded which are decoded in parallel;
the reference point position determining module is used for determining corresponding decoded pixel points based on the position of the current decoding block, taking the decoded pixel points as a plurality of reference points of the pixel points to be decoded, and obtaining difference values of the reference points and the pixel points to be decoded, wherein the difference values are used for representing difference information of the pixel points to be decoded and the reference points;
the target reference point determining module is used for determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are positioned in the current decoding block;
and the prediction coding module is used for fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding is carried out based on the predicted value.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that implements the method of the first aspect described above when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, performs the method of the first aspect described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect described above.
The image coding and decoding methods, the image coding and decoding devices, the computer equipment, the storage medium, and the computer program product process the pixel points in a coding block or decoding block in parallel: corresponding reference points are determined based on positional correspondence, a target reference point is determined for each position from those reference points, and the coded data of the target reference point participates in the computation for every pixel in the coding block or decoding block. Parallel encoding and/or parallel decoding is thereby achieved, which improves transmission efficiency.
Drawings
FIG. 1 is a diagram of an application environment of an image encoding method in one embodiment;
FIG. 2 is a flow chart of an image encoding method in one embodiment;
FIG. 3 is a schematic diagram of a reference point in one embodiment;
FIG. 4 is a flow chart of fusing the difference value with the encoded data of the target reference point according to one embodiment;
FIG. 5 is a flow chart of fusing the difference value with the encoded data of the target reference point according to another embodiment;
FIG. 6 is a flow chart of an image decoding method according to an embodiment;
FIG. 7 is a flow chart of fusing the difference value with the decoded data of the target reference point in one embodiment;
FIG. 8 is a flow chart of fusing the difference value with the decoded data of the target reference point in one embodiment;
FIG. 9 is a schematic diagram of image tiling in one embodiment;
FIG. 10 is a schematic diagram of a coding block in one embodiment;
FIG. 11 is a schematic diagram of pixel locations involved in prediction in one embodiment;
FIG. 12 is a schematic diagram of pixel locations involved in prediction in one embodiment;
FIG. 13 is a diagram of a filter limit range in one embodiment;
FIG. 14 is a simplified apparatus schematic diagram of image encoding in one embodiment;
FIG. 15 is a schematic diagram of an apparatus for image encoding in one embodiment;
FIG. 16 is a flow chart of image encoding in one embodiment;
FIG. 17 is a diagram of an application environment of an image encoding method and an image decoding method in one embodiment;
FIG. 18 is a block diagram showing the structure of an image encoding apparatus in one embodiment;
fig. 19 is an internal structural view of the computer device in one embodiment.
Detailed Description
In video coding, a plurality of image frames are processed in slices, and then the sliced image frames are grouped to obtain pixel groups, and one pixel group can be regarded as a coding block, and pixels in the coding block should be coded in parallel so as to improve coding efficiency. However, if the conventional method is used, in one encoding block, the encoding process of the pixel to be encoded needs to rely on the encoded data of the previous pixel, and the present application uses the encoded pixel as a reference point, and replaces the encoded data of the previous pixel by some data of the reference point, so as to improve the encoding efficiency.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The image coding and decoding method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers. The scheme provided by the embodiment of the application can be implemented by the terminal 102; or may be cooperatively implemented by the terminal 102 and the server 104, for example: the terminal 102 may provide certain information to the server 104, so that the server 104 performs a related calculation process, and after the calculation by the server 104, the calculation result is fed back to the terminal 102 and then implemented by the terminal 102.
In one embodiment, as shown in fig. 2, an image encoding method is provided, and the method is applied to the server 104 in fig. 1 for illustration, and includes the following steps:
in step 202, a current coding block in the image frame is determined, where the current coding block includes pixels to be coded that are coded in parallel.
The coding block is a certain pixel point group or a plurality of pixel point groups in the image frame, and comprises at least two parallel coding pixel points. When a certain coding block is coded according to the pixel point coding sequence, the current coding block in the image frame is determined. In the current coding block, the pixels to be coded are pixels coded in parallel in the coding block.
Optionally, a video image is formed by one or more image frames; an image frame may be divided into a plurality of image blocks, and each image block may be divided into a plurality of encoding blocks. An encoding block consists of pixels in the same row that are encoded in parallel; the parallel-encoded pixels may be two or three pixels, and such a block may also be referred to as a pixel group, where the position of each pixel to be encoded and of its reference points is determined by coordinates.
Step 204, determining corresponding encoded pixel points based on the position of the current encoding block, taking the encoded pixel points as a plurality of reference points of the pixel points to be encoded, and obtaining difference values between the reference points and the pixel points to be encoded, wherein the difference values are used for representing difference information between the pixel points to be encoded and the reference points.
In the process of coding the image frame, the position of the current coding block changes gradually, and when coding blocks at different positions are coded, the coded pixel points corresponding to the current coding block are also located at different positions. For example: in a certain image frame, if the current coding block is located in the first row of the image frame, the corresponding coded pixel points comprise the forward pixel points of the adjacent forward coding block in the same row, together with the pixel point located at the same position in the previous image frame and its adjacent forward and backward pixel points; if the current coding block is located outside the first line of the image frame, the corresponding coded pixel points comprise the forward pixel points of the adjacent forward coding block in the same line, together with the pixel point located at the same position in the previous line of the image frame and its adjacent forward and backward pixel points.
Optionally, in the process of obtaining the difference values between the plurality of reference points and the pixel point to be encoded, the parameters to be combined include two parts: the quantized residual value of the forward reference point of the adjacent forward coding block in the same line as the current coding block, and the difference of constraint filter values of the different-line reference points located in a different line from the current coding block. To generate the difference of constraint filter values, the different-line reference point at the same current-line position as the forward reference point of the coding block is determined first, then the different-line reference point at the same current-line position as the pixel point to be coded is determined; the difference between the constraint filter values of these two different-line reference points is the required difference of constraint filter values. To obtain the constraint filter value of a different-line reference point, the different-line reference point is determined from the plurality of reference points, the encoded data of the different-line reference point is low-pass filtered to obtain its filter value, and the filter value is then constrained according to the quantization level.
And 206, determining a current pixel point to be encoded from the pixel points to be encoded, and determining a target reference point corresponding to the current pixel point to be encoded from a plurality of reference points if the adjacent forward pixel points of the current pixel point to be encoded are positioned in the current encoding block.
If the adjacent forward pixel point of a pixel point to be encoded is located in the current encoding block, that pixel point is not the first pixel point in the current encoding block, and it can be determined as the current pixel point to be encoded. The target reference point corresponding to the current pixel point to be encoded may be determined as the point at the same position in the previous line, or as the point at the same position in the previous frame, based on the current-line position of the current pixel point to be encoded.
The target reference point is a reference point corresponding to a preceding pixel point to be coded, and is used for generating the predicted coding data of the adjacent forward pixel point of the pixel point to be coded. The coding data of the target reference point is the inverse quantization residual value of the target reference point and can be obtained directly from the coded data.
The adjacent forward pixel point of the current pixel point to be coded is in the same row as the current pixel point to be coded and is located immediately before it; the adjacent forward pixel points correspond to the pixel points to be coded in the current coding group. If the adjacent forward pixel point of a pixel point to be encoded is located in the previous encoding block, that pixel point is the first pixel point to be encoded of the current encoding block, and no calculation with the coding data of a target reference point is required; if the adjacent forward pixel point is located in the current coding block, the pixel point is not the first pixel point of the current coding block, the coding data of a target reference point is used for calculation, and the predicted coding data of the first pixel point of the current coding block is thereby estimated, so that the predicted values of a plurality of pixel points in a coding group are calculated in parallel, realizing parallel coding.
And step 208, fusing the difference value of the current pixel point to be coded with the coded data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding based on the predicted value.
For the current pixel point to be coded in the current coding block: if it is the first pixel point in the current coding block, its predicted value is simply the difference value, computed without using the coding data of any target reference point. The predicted value of the first pixel point may have a corresponding dynamic threshold, and if the difference value of the first pixel point exceeds a boundary value of that dynamic threshold, the boundary value is determined as the predicted value of the first pixel point. The boundary value of the first pixel point depends, on the one hand, on the inverse quantization residual value of the forward reference point of the coding block and, on the other hand, on the constraint filter value of the different-line reference point, i.e. the reference point in a different row corresponding to the current-row position of the first pixel point. If the inverse quantization residual value of the same-line reference point is smaller, the lower limit of the boundary value is the inverse quantization residual value of the same-line reference point and the upper limit is the constraint filter value of the different-line reference point; if the inverse quantization residual value of the same-line reference point is larger, the upper limit of the dynamic threshold is the inverse quantization residual value of the same-line reference point and the lower limit is the constraint filter value of the different-line reference point.
The predicted value of the current pixel point to be coded in the current coding block is calculated by fusing the difference value corresponding to the current pixel point to be coded with the coded data of the target reference point, and then comparing the fused data with a dynamic threshold. If the fused data exceeds the dynamic threshold upper limit of the current pixel point to be coded, the dynamic threshold upper limit is taken as the predicted value of the current pixel point to be coded; if the fused data is within the range of the dynamic threshold, the fused data is the predicted value of the current pixel point to be coded; and if the combination of the difference value and the predicted coding data of the adjacent forward pixel point is lower than the dynamic threshold lower limit of the current pixel point to be coded, the dynamic threshold lower limit is taken as the predicted value. The dynamic threshold upper limit is the maximum value among the inverse quantization residual value of the same-line reference point corresponding to the current coding block and the constraint filter values of the plurality of different-line reference points corresponding to the current pixel point to be coded; the dynamic threshold lower limit is the minimum value among the same set of values.
Having calculated the predicted value of the current pixel point to be encoded in the current encoding block, encoding can be performed based on the predicted value. It can be appreciated that after the predicted value is obtained, each pixel point to be encoded in the current encoding block can be encoded; in the process of coding based on the predicted value, first the pixel value of each pixel point to be coded is obtained, then the difference between the pixel value and the predicted value is calculated to obtain the residual value of the pixel point to be coded, and finally quantization is performed on the residual value to obtain a quantized residual value, which is the coded value of each pixel point to be coded in the current coding block.
After the coding is finished, transmitting the quantized residual values of the pixel points reduces the data transmission bandwidth; the decompression end only needs to calculate the predicted value of each pixel point with the same prediction model as the compression end and then add the inverse-quantized residual value to obtain the reconstructed value of each pixel point, thereby restoring the pixel points.
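As a minimal sketch of this compress/reconstruct round trip (Python; the uniform quantizer with step 2^M and all function names are illustrative assumptions, not definitions from this application):

```python
def quantize(residual: int, m: int) -> int:
    # Assumed sign-symmetric uniform quantizer with step 2**m.
    return residual >> m if residual >= 0 else -((-residual) >> m)

def dequantize(q_residual: int, m: int) -> int:
    # Inverse quantization: scale the quantized residual back up.
    return q_residual * (1 << m)

def encode_pixel(pixel_value: int, predicted: int, m: int) -> int:
    # Encoder: only the quantized residual is transmitted.
    return quantize(pixel_value - predicted, m)

def decode_pixel(q_residual: int, predicted: int, m: int) -> int:
    # Decoder: same prediction model plus the dequantized residual
    # reconstructs (approximately) the original pixel value.
    return predicted + dequantize(q_residual, m)
```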
In the image coding method, a current coding block in an image frame is determined, and the current coding block comprises pixel points to be coded which are coded in parallel, laying the foundation for parallel coding; corresponding encoded pixel points are determined based on the position of the current encoding block, a plurality of encoded pixel points are taken as a plurality of reference points of the pixel points to be encoded, and difference values between the plurality of reference points and the pixel points to be encoded are acquired, wherein the difference values represent the difference information between the pixel points to be encoded and the plurality of reference points; a current pixel point to be coded is determined from the pixel points to be coded, and, if the adjacent forward pixel point of the current pixel point to be coded is located in the current coding block, a target reference point corresponding to the current pixel point to be coded is determined from the plurality of reference points; and the difference value is fused with the coding data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding is performed based on the predicted value. Therefore, the inverse quantization residual value of the reference pixel point is used directly, without referring to the inverse quantization residual value or the real coded value of the first pixel point of the current group; the predicted value of the current pixel point to be coded is calculated by combining the difference value with the predicted coding data of the adjacent forward pixel point, and coding is carried out based on the predicted value, so that all the pixel points to be coded in the current coding block can be processed truly in parallel, which improves the data throughput of the prediction calculation module.
In one embodiment, as shown in FIG. 3, encoded pixel points that generate difference values are described. In determining a corresponding encoded pixel point based on the position of the current encoding block, if the current encoding block is located in a first line of the image frame, determining a plurality of encoded pixel points corresponding to the position of the current encoding block in the first line of the previous frame; after encoding the first row of pixels of the image frame, a plurality of encoded pixels adjacent to the current encoding block position are determined from the previous row.
Specifically, the encoded pixel point corresponding to the current encoding block includes: the current coding block is adjacent to the coded pixel points in the same row, the coded pixel points in different rows are the same as the reference points in the same row, and the coded pixel points in different rows are the same as or adjacent to the pixel points to be coded in the current coding block.
For the first line of pixels of the first frame image of the video stream, a conventional DSC-style compression algorithm may be used, while adjacent pixels of a previous line are missing from the prediction calculation when predicting the first line of pixels of the second frame. Exploiting the correlation between consecutive images, the image information of the previous frame is introduced into the prediction calculation of the current frame. After the first-line pixel points are calculated, the inverse-quantized residual values of the first line of each frame can be written into RAM, and from the second frame of the video stream onwards, the first-line pixel reconstruction values of the previous frame are used to predict the first-line pixel points of the current frame.
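A sketch of this first-line reference buffering, assuming a simple per-frame line store (the class and its methods are hypothetical):

```python
class FirstLineBuffer:
    """Holds the reconstructed first line of the previous frame so the
    current frame's first line has reference points to predict from."""

    def __init__(self, width: int):
        self.recon = [0] * width

    def update(self, reconstructed_line: list[int]) -> None:
        # Called after the first line of each frame is reconstructed
        # (in hardware, a write into RAM).
        self.recon = list(reconstructed_line)

    def reference(self, x: int) -> int:
        # Reference value at column x for predicting the next frame's
        # first line.
        return self.recon[x]
```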
In this embodiment, the image information corresponding to the position of the current coding block in the previous frame is used as a reference point, so that the reference point of the first row of pixel prediction in the image prediction calculation is complemented, so that the whole coding is convenient; by taking the image information of the previous line corresponding to the current coding block position as a reference point, the required predicted value can be calculated more conveniently, and the predicted result is more accurate.
In an alternative embodiment, after the determination of the encoded pixel points is discussed, the process of obtaining the difference value from the encoded pixel points is described in three steps. The coding block shown in fig. 3 includes 3 pixel points, in order Pixel0, Pixel1 and Pixel2, and the reference points are neighboring points of the current coding block, including the different-line reference points c, b, d, e, f and the coding-block forward reference point a, where a is the point to the left of the 1st pixel point of the current group. In the first step of obtaining the difference value, whether the reference point is a different-line reference point in the same image frame or a reference point in the previous image frame, the low-pass filtering of the different-line reference points and the generation of the low-pass filtered values may be expressed by the following formulas:
filtB=(c+2×b+d+2)÷4;
filtC=(2^Datawidth)÷2;
filtD=(b+2×d+e+2)÷4;
filtE=(d+2×e+f+2)÷4;
Wherein b, c, d, e, f respectively represent the inverse quantized residual values of the different-line reference points b, c, d, e, f, and Datawidth represents the bit width of the input image data.
In the second step of obtaining the difference value, after the filtered value of c, b, d, e is quantized, the quantized filtered value is constrained according to the quantized data M, and a corresponding range is determined, so that constrained quantized filtered values blendC, blendB, blendD, blendE corresponding to the reference points c, b, d, e of the different pixel points are obtained.
In the third step of obtaining the difference value, the difference value between the pixel point to be encoded and the plurality of reference points can be calculated by combining the reconstruction value of the same-line reference point, the constrained quantization filter value of the different-line reference point at the same current-line position as the pixel point to be encoded, and the constrained quantization filter value of the different-line reference point at the same current-line position as the same-line reference point.
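Putting the three steps together in one sketch: the low-pass filters below follow the formulas above; the constraint step is not fully specified here, so it is modelled, as an assumption, as clamping each filtered value to within one quantization step 2^M of the corresponding reconstructed reference value.

```python
def clamp(value: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, value))

def constrained_filter_values(b, c, d, e, f, datawidth, m):
    # Low-pass filtered values of the different-line reference points,
    # per the formulas above (integer arithmetic with rounding).
    filt_b = (c + 2 * b + d + 2) // 4
    filt_c = (2 ** datawidth) // 2      # mid-scale value used for c
    filt_d = (b + 2 * d + e + 2) // 4
    filt_e = (d + 2 * e + f + 2) // 4
    # Assumed constraint: keep each filtered value within one
    # quantization step of the reconstructed reference value.
    q = 1 << m
    blend_b = clamp(filt_b, b - q, b + q)
    blend_c = clamp(filt_c, c - q, c + q)
    blend_d = clamp(filt_d, d - q, d + q)
    blend_e = clamp(filt_e, e - q, e + q)
    return blend_b, blend_c, blend_d, blend_e
```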
Optionally, for the first pixel point to be encoded (Pixel0) in the current encoding block, the adjacent forward pixel point of the first pixel point to be encoded is a same-line reference point in the previous encoding block, and the difference value between the plurality of reference points corresponding to the first pixel point to be encoded and the first pixel point to be encoded may serve as the predicted value of the first pixel point to be encoded, with the expression:
a+blendB–blendC;
Wherein a represents the quantized residual value of the same-line reference point of the current coding block, blendB is the constraint filter value of the different-line reference point at the same current-line position as the first pixel point to be coded, and blendC is the constraint filter value of the different-line reference point at the same current-line position as the same-line reference point of the current coding block.
If the predicted value of the first pixel point to be encoded is within the range of the dynamic threshold, it is used directly for encoding; if it exceeds the dynamic threshold, the boundary value of the dynamic threshold is used as the predicted value of the first pixel point to be encoded, and the quantized residual value is then generated to realize encoding. The method for obtaining the dynamic threshold is described above and is not repeated here.
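A sketch of the first-pixel prediction under the expression above; per the earlier description, the dynamic threshold is taken here as [MIN(a, blendB), MAX(a, blendB)], which is an assumption about its exact form.

```python
def clamp(value: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, value))

def predict_pixel0(a: int, blend_b: int, blend_c: int) -> int:
    # Difference value a + blendB - blendC, clamped to the assumed
    # dynamic threshold [MIN(a, blendB), MAX(a, blendB)].
    t0 = a + blend_b - blend_c
    return clamp(t0, min(a, blend_b), max(a, blend_b))
```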
In one embodiment, as shown in fig. 4, the current encoding block includes a second pixel point to be encoded, and determining a target reference point corresponding to the current pixel point to be encoded from a plurality of reference points includes: and determining the reference points of the second pixel point to be encoded, which are positioned in the same position in the current row and are positioned in different rows, as target reference points of the second pixel point to be encoded.
Correspondingly, fusing the difference value with the encoded data of the target reference point includes:
Step 402, obtaining a quantized residual value of the target reference point of the second pixel to be encoded, where the quantized residual value is generated by quantizing a difference between a predicted value of the target reference point of the second pixel to be encoded and the pixel value.
Step 404, dequantizing the quantized residual value of the target reference point of the second pixel to be encoded to obtain the dequantized residual value of the target reference point of the second pixel to be encoded.
Step 406, fusing the difference value of the second pixel to be encoded and the inverse quantization residual value of the target reference point of the second pixel to be encoded.
After the inverse quantized residual value (R0last) of the target reference point is applied in place of the predicted encoded data of the adjacent forward pixel point of the second pixel point to be encoded, the first pixel point to be encoded and the second pixel point to be encoded can be encoded in parallel.
Alternatively, for the second pixel point to be encoded (Pixel1) in the current encoding block, the difference value expression of the second pixel point to be encoded may be: a+blendD-blendC; wherein a represents the quantized residual value of the forward reference point of the coding block, blendD is the constraint filter value of the different-line reference point at the same current-line position as the second pixel point to be coded, and blendC is the constraint filter value of the different-line reference point at the same current-line position as the forward reference point of the coding block.
And combining the predicted coding data of the adjacent forward pixel point of the second pixel point to be coded with the difference value of the second pixel point to be coded to obtain an initial prediction result of the second pixel point to be coded, wherein the initial prediction result of the second pixel point to be coded can be the prediction value of the second pixel point to be coded, and the prediction result is expressed as:
T1(x)=a+blendD–blendC+R0last;
wherein T1(x) represents the initial prediction result of the second pixel point to be encoded, a represents the quantized residual value of the forward reference point of the encoding block, blendD is the constraint filter value of the different-line reference point at the same current-line position as the second pixel point to be encoded, blendC is the constraint filter value of the different-line reference point at the same current-line position as the forward reference point of the encoding block, and R0last is the inverse quantization residual value of the target reference point of the second pixel point to be encoded.
Specifically, if the initial prediction result of the second pixel to be encoded is within the threshold, the initial prediction result of the second pixel to be encoded is the prediction value of the second pixel to be encoded, and if the initial prediction result exceeds the threshold, the boundary value of the threshold is used as the prediction value of the second pixel to be encoded, which is expressed as:
P1=CLAMP(T1(x),T1min,T1max);
wherein P1 is the predicted value of the second pixel point to be encoded, T1(x) is the initial prediction result of the second pixel point to be encoded, T1min is the threshold lower limit of the predicted value of the second pixel point to be encoded, and T1max is the threshold upper limit of the predicted value of the second pixel point to be encoded.
Further, the threshold value of the second pixel to be encoded is dynamic, the upper threshold value is the maximum value of the following data, and the lower threshold value is the minimum value of the following data: the method comprises the steps of quantizing residual values of same-line reference points of a current coding block, constraint filtering values of different-line reference points with the same positions of a first pixel point to be coded in the current line, and constraint filtering values of different-line reference points with the same positions of a second pixel point to be coded in the current line; the expression is as follows:
T1min=MIN(a,blendB,blendD);
T1max=MAX(a,blendB,blendD);
wherein a represents the quantized residual value of the forward reference point of the coding block, blendB is the constraint filter value of the different-line reference point at the same current-line position as the first pixel point to be coded, and blendD is the constraint filter value of the different-line reference point at the same current-line position as the second pixel point to be coded.
In this embodiment, the target reference point of the second pixel to be encoded is determined by the reference points of the second pixel to be encoded, which are located in the same and different rows, in the current row, so as to estimate the expected encoded data of the adjacent forward pixel, so that the predicted values of the first pixel to be encoded and the second pixel to be encoded can be calculated in parallel without using the inverse quantization residual value of the first pixel, thereby improving the encoding speed.
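The same prediction written out for the second pixel, directly transcribing T1(x), T1min and T1max above (a sketch; r0_last stands for R0last):

```python
def clamp(value: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, value))

def predict_pixel1(a: int, blend_b: int, blend_d: int, blend_c: int,
                   r0_last: int) -> int:
    # T1(x) = a + blendD - blendC + R0last, clamped to
    # [MIN(a, blendB, blendD), MAX(a, blendB, blendD)].
    t1 = a + blend_d - blend_c + r0_last
    return clamp(t1, min(a, blend_b, blend_d), max(a, blend_b, blend_d))
```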
In one embodiment, as shown in fig. 5, the current encoding block includes a third pixel point to be encoded, and determining a target reference point corresponding to the pixel point to be encoded from a plurality of reference points includes: determining a reference point of the third pixel point to be encoded, which is the same in position in the current row and is located in a different row, as a target reference point of the third pixel point to be encoded;
correspondingly, fusing the difference value with the encoded data of the target reference point includes:
step 502, obtaining a quantized residual value of the target reference point of the third pixel to be encoded, where the quantized residual value is generated by quantizing a difference between a predicted value of the target reference point of the third pixel to be encoded and the pixel value.
And 504, performing inverse quantization on the quantized residual value of the target reference point of the third pixel to be encoded to obtain an inverse quantized residual value of the target reference point of the third pixel to be encoded.
Step 506, fusing the inverse quantization residual value of the target reference point of the second pixel to be encoded, the difference value of the third pixel to be encoded, and the inverse quantization residual value of the target reference point of the third pixel to be encoded.
Thereby, the sum of the inverse quantized residual value (R0last) of the target reference point of the second pixel point to be encoded and the inverse quantized residual value (R1last) of the target reference point of the third pixel point to be encoded replaces the predicted encoded data of the adjacent forward pixel point of the third pixel point to be encoded; that predicted encoded data is represented as R0last+R1last.
For the third pixel point to be encoded (Pixel2) in the current encoding block, the corresponding difference value expression may be: a+blendE-blendC; wherein a represents the quantized residual value of the forward reference point of the coding block, blendE is the constraint filter value of the different-line reference point at the same current-line position as the third pixel point to be coded, and blendC is the constraint filter value of the different-line reference point at the same current-line position as the forward reference point of the coding block.
And fusing the inverse quantization residual value of the target reference point of the second pixel to be encoded and the difference value of the third pixel to be encoded, and the inverse quantization residual value of the target reference point of the third pixel to be encoded to obtain an initial prediction result of the third pixel to be encoded, where the initial prediction result of the third pixel to be encoded may be a prediction value of the third pixel to be encoded, and the method is expressed as:
T2(x)=a+blendE–blendC+R0last+R1last;
wherein T2(x) represents the initial prediction result of the third pixel point to be encoded, a represents the quantized residual value of the forward reference point of the encoding block, blendE is the constraint filter value of the different-line reference point at the same current-line position as the third pixel point to be encoded, blendC is the constraint filter value of the different-line reference point at the same current-line position as the forward reference point of the encoding block, R0last represents the inverse quantization residual value of the target reference point of the second pixel point to be encoded, and R1last represents the inverse quantization residual value of the target reference point of the third pixel point to be encoded.
Specifically, if the initial prediction result of the third pixel to be encoded is within the threshold, the initial prediction result of the third pixel to be encoded is the prediction value of the third pixel to be encoded, and if the initial prediction result exceeds the threshold, the boundary value of the threshold is used as the prediction value of the third pixel to be encoded, which is expressed as:
P2=CLAMP(T2(x),T2min,T2max);
wherein P2 is the predicted value of the third pixel point to be encoded, T2(x) is the initial prediction result of the third pixel point to be encoded, T2min is the threshold lower limit of the predicted value of the third pixel point to be encoded, and T2max is the threshold upper limit of the predicted value of the third pixel point to be encoded.
Further, the threshold of the third pixel point to be encoded is dynamic; the upper threshold limit is the maximum value, and the lower threshold limit the minimum value, of the following data: the quantized residual value of the same-line reference point of the current coding block, the constraint filter value of the different-line reference point at the same current-line position as the first pixel point to be coded, the constraint filter value of the different-line reference point at the same current-line position as the second pixel point to be coded, and the constraint filter value of the different-line reference point at the same current-line position as the third pixel point to be coded. The expressions are as follows:
T2min=MIN(a,blendB,blendD,blendE);
T2max=MAX(a,blendB,blendD,blendE);
wherein a represents the quantized residual value of the forward reference point of the coding block, blendB is the constraint filter value of the different-line reference point at the same current-line position as the first pixel point to be coded, blendD is the constraint filter value of the different-line reference point at the same current-line position as the second pixel point to be coded, and blendE is the constraint filter value of the different-line reference point at the same current-line position as the third pixel point to be coded.
In this embodiment, the target reference point of the third pixel to be encoded is determined by the reference points of the third pixel to be encoded, which are located in the same and different rows, in the current row, the target reference point of the second pixel to be encoded replaces the inverse quantization residual value of the first pixel, and the target reference point of the third pixel to be encoded replaces the inverse quantization residual value of the second pixel, so as to directly determine the difference information between the third pixel to be encoded and the first pixel to be encoded, thereby calculating the predicted values of the first pixel to be encoded, the second pixel to be encoded and the third pixel to be encoded in parallel, and improving the encoding speed.
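Collecting the three predictors in one sketch shows why the group is truly parallel: no pixel's prediction reads another group member's own coded output (the Pixel0 threshold form is an assumption, as noted earlier).

```python
def clamp(value: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, value))

def predict_group(a, blend_b, blend_d, blend_e, blend_c,
                  r0_last, r1_last):
    # All three predictions depend only on already-coded reference
    # data (a, blendB/C/D/E, R0last, R1last), never on each other,
    # so they can be evaluated in the same clock cycle.
    t0 = a + blend_b - blend_c
    t1 = a + blend_d - blend_c + r0_last
    t2 = a + blend_e - blend_c + r0_last + r1_last
    p0 = clamp(t0, min(a, blend_b), max(a, blend_b))
    p1 = clamp(t1, min(a, blend_b, blend_d), max(a, blend_b, blend_d))
    p2 = clamp(t2, min(a, blend_b, blend_d, blend_e),
               max(a, blend_b, blend_d, blend_e))
    return p0, p1, p2
```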
In one embodiment, the present application further provides an image decoding method, illustrated here as applied to the terminal 102 in fig. 1; as shown in fig. 6, the method includes:
In step 602, a current decoding block in the image frame is determined, where the current decoding block includes pixels to be decoded that are decoded in parallel.
The decoding block is one or more pixel point groups in the image frame and includes at least two pixel points decoded in parallel. When a decoding block is decoded according to the pixel point decoding order, it is determined as the current decoding block in the image frame. In the current decoding block, the pixel points to be decoded are the pixel points decoded in parallel, which may be two or three pixel points and may also be called a pixel group; each pixel point to be decoded determines its own position and that of its adjacent pixel points through coordinates.
Step 604, determining a corresponding decoded pixel point based on the position of the current decoding block, taking the decoded pixel points as a plurality of reference points of the pixel point to be decoded, and obtaining difference values between the reference points and the pixel point to be decoded, wherein the difference values are used for representing difference information between the pixel point to be decoded and the reference points.
In the process of decoding an image frame, the position of the current decoding block changes continuously, and when decoding blocks at different positions are processed, the decoded pixel points corresponding to the current decoding block are also located at different positions. Optionally, in obtaining the difference values between the multiple reference points and the pixel points to be decoded, the parameters to be combined include two parts: the quantized residual values of reference points located in the same pixel row as the current decoding block, and the differences between the constraint filter values of reference points located in pixel rows different from that of the current decoding block.
Step 606, determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from a plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are located in the current decoding block.
The target reference point is a reference point of a preceding pixel point to be decoded and is used to generate the predicted decoded data of the adjacent forward pixel point of the pixel point to be decoded; the decoded data of the target reference point is the inverse quantization residual value of the target reference point and can be obtained directly from the decoded data.
The adjacent forward pixel point of the current pixel point to be decoded is in the same row as the current pixel point to be decoded and is located before the position of the pixel point to be decoded; the adjacent forward pixel point corresponds to a pixel point to be decoded in the current decoding group.
Step 608, fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding based on the predicted value.
The predicted value of the current pixel point to be decoded in the current decoding block can be obtained by fusing the difference value corresponding to the current pixel point to be decoded with the decoded data of the target reference point, or by comparing the fused data with a dynamic threshold. If the fusion of the difference value and the predicted decoded data of the adjacent forward pixel point exceeds the critical value of the dynamic range of the current pixel point to be decoded, no further calculation is needed. The dynamic threshold determination process is similar to that of the image encoding process and is not repeated here.
In the image decoding method, the reconstructed value of each pixel point is obtained simply by calculating the predicted value of each pixel point with the same prediction model as the compression end and adding the inverse quantized residual value, thereby restoring the pixel point. The inverse quantization residual value of the reference pixel point is used directly, without referring to the inverse quantization residual value of the first pixel point of the current group or to its real decoded value. The predicted value of the current pixel point to be decoded in the current decoding block is thus calculated from the fusion of the difference value and the predicted decoded data of the adjacent forward pixel points, and decoding is performed based on the predicted value. All the pixel points to be decoded in the current decoding block can therefore be processed truly in parallel, improving the data processing capability of the prediction calculation module.
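A minimal C sketch of this decoder-side reconstruction follows, assuming the quantized residual q and the quantization level qlevel have been received from the compressed stream; decode_pixel is an illustrative name, and the predicted value is assumed to be produced by the same prediction model as at the compression end.

/* Reconstructed value = predicted value + inverse quantized residual.
 * The left shift mirrors the encoder's right-shift quantization. */
int decode_pixel(int predicted, int q, int qlevel) {
    int invq = q << qlevel;    /* inverse quantization of the residual */
    return predicted + invq;   /* pixel reconstruction value */
}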
In an alternative embodiment, determining the corresponding decoded pixel based on the location of the current decoded block includes: determining a plurality of decoded pixel points corresponding to the current decoding block position from a first line of a previous frame when the current decoding block is located in the first line of the image frame; after the first row of pixels of the image frame are decoded, a plurality of decoded pixels adjacent to the current decoding block position are determined from the previous row.
In this embodiment, the image information at the position corresponding to the current decoding block in the previous frame is used as reference points, complementing the reference points for first-row pixel prediction in the image prediction calculation so that decoding is performed on the whole image; by taking the image information of the previous row corresponding to the current decoding block position as reference points, the required predicted values can be calculated more conveniently and the prediction results are more accurate.
In one embodiment, as shown in fig. 7, the current decoding block includes a second pixel point to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from a plurality of reference points includes: determining the reference points of the second pixel point to be decoded, which are the same in position in the current row and are positioned in different rows, as target reference points of the second pixel point to be decoded;
fusing the difference value with the decoded data of the target reference point includes:
Step 702, obtaining a quantized residual value of the target reference point of the second pixel point to be decoded, where the quantized residual value is generated by quantizing the difference value between the predicted value and the pixel value of the target reference point of the second pixel point to be decoded;
Step 704, performing inverse quantization on the quantized residual value of the target reference point of the second pixel point to be decoded to obtain the inverse quantized residual value of the target reference point of the second pixel point to be decoded;
Step 706, fusing the difference value of the second pixel to be decoded and the inverse quantized residual value of the target reference point of the second pixel to be decoded.
In this embodiment, the target reference point of the second pixel point to be decoded is determined from the reference point that is at the same position in the current row but located in a different row, so as to estimate the required predicted decoded data of the adjacent forward pixel point.
In one embodiment, as shown in fig. 8, the current decoding block includes a third pixel point to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from a plurality of reference points includes: determining the reference point of the third pixel point to be decoded that is at the same position in the current row and located in a different row as the target reference point of the third pixel point to be decoded;
fusing the difference value with the decoded data of the target reference point includes:
Step 802, obtaining a quantized residual value of the target reference point of the third pixel point to be decoded, where the quantized residual value is generated by quantizing the difference value between the predicted value and the pixel value of the target reference point of the third pixel point to be decoded;
Step 804, dequantizing the quantized residual value of the target reference point of the third pixel to be decoded to obtain the dequantized residual value of the target reference point of the third pixel to be decoded;
Step 806, fusing the inverse quantization residual value of the target reference point of the second pixel point to be decoded, the difference value of the third pixel point to be decoded, and the inverse quantization residual value of the target reference point of the third pixel point to be decoded.
In this embodiment, the target reference point of the third pixel point to be decoded is determined from the reference point that is at the same position in the current row but located in a different row. The target reference point of the second pixel point to be decoded replaces the inverse quantization residual value of the first pixel point, and the target reference point of the third pixel point to be decoded replaces the inverse quantization residual value of the second pixel point, so the difference information between the third pixel point to be decoded and the first pixel point to be decoded can be determined directly. The predicted values of the first, second and third pixel points to be decoded can therefore be calculated in parallel, improving the decoding speed.
In one embodiment, the innovation points of the present application are described in comparison with the prior art. In the DSC compression algorithm, one frame of image is divided into slices, as shown in fig. 9. Each slice is in turn divided into a plurality of coding blocks, each coding block being a group consisting of 3 consecutive pixels, as shown in fig. 10. In the MMAP model, prediction processing is performed in units of groups. Median prediction of the pixels of the current-row group with MMAP requires the neighbor pixels of the previous row, as shown in fig. 11. However, in the processing of actual images, the first row of a frame has no previous row whose neighbor pixels can be referenced for prediction. MMAP is therefore simplified in the DSC algorithm into the PT_LEFT prediction model for predicting the first row of pixels of each frame. In the PT_LEFT prediction model, only the point to the left of the current prediction pixel point is used as the reference point, which solves the problems of prediction continuity and accuracy for the first row of pixels of each frame. However, this reduction of prediction reference points necessarily lowers the accuracy of first-row pixel prediction.
In the MMAP prediction model, the predicted values of pixel1 and pixel2 in the same group both refer to the inverse quantization residual value of the preceding point, which increases the processing time of the prediction module. Aiming at these two points, the invention provides an FPGA implementation method of an improved median prediction model, which can effectively improve the prediction accuracy and data processing capacity of the first-row pixel prediction module and increase the data throughput of the module.
The invention improves on the MMAP adaptive median prediction algorithm in the DSC compression algorithm. The core of the algorithm is that, when predicting the values of the 3 pixel points of the current group, the prediction calculation refers to the pixel reconstruction values of the 5 adjacent points c, b, d, e, f in the row above the current group and of the point a to the left of the 1st pixel point of the current group; the positions of the pixel points are shown in fig. 11. The prediction calculation process is as follows:
Step one: as shown in fig. 11, the inverse quantized residual values corresponding to the different-row pixel reference points c, b, d, e are low-pass filtered. When predicting the first group of pixels of each row, the formulas are as follows:
filtB=(c+2×b+d+2)÷4;
filtC = 2^Datawidth ÷ 2;
filtD=(b+2×d+e+2)÷4;
filtE=(d+2×e+f+2)÷4;
Since the c point has no adjacent reference point for the filtering calculation, the algorithm specifies that the filter value filtC of the c point is half of the full-scale value given by the bit width Datawidth of the input image data. For example, for an 8-bit input, the value of filtC is 128.
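For illustration, a minimal C sketch of this low-pass filtering step follows; lowpass_filter is an illustrative name, and the inputs are assumed to be the reconstruction values of the neighboring points together with the bit width of the image data.

/* Low-pass filter of the neighboring reference points, as in step one.
 * filtC has no usable neighbors, so it is fixed to half of full scale,
 * e.g. (1 << 8) / 2 = 128 for 8-bit data. */
void lowpass_filter(int c, int b, int d, int e, int f, int datawidth,
                    int *filtB, int *filtC, int *filtD, int *filtE) {
    *filtB = (c + 2 * b + d + 2) / 4;
    *filtC = (1 << datawidth) / 2;
    *filtD = (b + 2 * d + e + 2) / 4;
    *filtE = (d + 2 * e + f + 2) / 4;
}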
As shown in fig. 12, in predicting the first row of pixels of each frame, the formula is:
filtB(1st) = (c(1st) + 2×b(1st) + d(1st) + 2) ÷ 4;
filtC(1st) = 2^Datawidth ÷ 2;
filtD(1st) = (b(1st) + 2×d(1st) + e(1st) + 2) ÷ 4;
filtE(1st) = (d(1st) + 2×e(1st) + f(1st) + 2) ÷ 4;
The prediction calculation refers to the pixel reconstruction values of the 5 adjacent points c(1st), b(1st), d(1st), e(1st), f(1st) at the corresponding positions in the previous frame above the current group, and to the pixel reconstruction value of the point a to the left of the 1st pixel point of the current group. It will be appreciated that the formulas used are the same whether the first group of pixels of each row or the first row of pixels of each frame is predicted; only the positions of the reference points differ. Therefore, from step two onward, only the formulas corresponding to the prediction of the first group of pixels of each row are described.
Step two: the filter values of c, b, d, e are constrained by the quantization level M to determine their ranges, as shown in fig. 13. The upper constraint limit is obtained by adding M to each point's filter value, and the lower constraint limit is obtained by subtracting M from each point's filter value. The constrained filter values are blendC, blendB, blendD and blendE.
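The text leaves open exactly what is clamped into this range. One plausible reading, consistent with the constraint step of the DSC MMAP model, is that each filter value is limited to within M of the corresponding point's reconstruction value; the sketch below implements that assumed reading, and constrain_filter is an illustrative name.

/* Assumed constraint: the filtered value may deviate from the point's
 * reconstruction value by at most the quantization level M. */
int constrain_filter(int filt, int recon, int m) {
    int diff = filt - recon;
    if (diff > m)  diff = m;    /* upper limit */
    if (diff < -m) diff = -m;   /* lower limit */
    return recon + diff;        /* blend value, e.g. blendB */
}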
Step three: the predicted values of the 3 pixel points are calculated. The formulas are as follows:
P0 = CLAMP(a + blendB - blendC, MIN(a, blendB), MAX(a, blendB));
P1 = CLAMP(a + blendD - blendC + R0, MIN(a, blendB, blendD), MAX(a, blendB, blendD));
P2 = CLAMP(a + blendE - blendC + R0 + R1, MIN(a, blendB, blendD, blendE), MAX(a, blendB, blendD, blendE));
wherein R0 and R1 are the inverse quantized residual values of the 1st and 2nd pixel points of the current group.
The improved predicted values of the 3 adjacent pixel points become:
P0 = CLAMP(a + blendB - blendC, MIN(a, blendB), MAX(a, blendB));
P1 = CLAMP(a + blendD - blendC + R0last, MIN(a, blendB, blendD), MAX(a, blendB, blendD));
P2 = CLAMP(a + blendE - blendC + R0last + R1last, MIN(a, blendB, blendD, blendE), MAX(a, blendB, blendD, blendE));
As can be seen from the above prediction calculation process, when performing the 3-point pixel prediction of the current group, the predictions of P1 and P2 require the R0 and R1 components to participate in the calculation, so the prediction of the 3 pixel values cannot be truly parallel when the algorithm is implemented. In digital image processing, the pixels of an image are not independent: adjacent pixels in neighboring rows are strongly correlated. Therefore, in the FPGA implementation of the algorithm, the current-row inverse quantized residual components R0 and R1 in the formulas can be replaced by the inverse quantized residual components R0last and R1last of the same columns in the previous row. Synchronous calculation of the 3 pixel points within a group can then be realized, greatly improving the processing capacity of the module.
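A minimal C sketch of the improved group prediction follows; the helper names clampi, imin, imax and predict_group are illustrative. Because r0last and r1last come from the previous row's stored residuals rather than from within the group, the three expressions have no mutual dependency and map naturally onto parallel hardware.

static int imin(int x, int y) { return x < y ? x : y; }
static int imax(int x, int y) { return x > y ? x : y; }
static int clampi(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

/* Improved 3-point prediction: R0/R1 of the current group are replaced by
 * the previous row's same-column dequantized residuals r0last/r1last, so
 * p[0..2] can be computed in the same clock cycle. */
void predict_group(int a, int blendB, int blendC, int blendD, int blendE,
                   int r0last, int r1last, int p[3]) {
    p[0] = clampi(a + blendB - blendC, imin(a, blendB), imax(a, blendB));
    p[1] = clampi(a + blendD - blendC + r0last,
                  imin(imin(a, blendB), blendD),
                  imax(imax(a, blendB), blendD));
    p[2] = clampi(a + blendE - blendC + r0last + r1last,
                  imin(imin(imin(a, blendB), blendD), blendE),
                  imax(imax(imax(a, blendB), blendD), blendE));
}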
In one embodiment, the technical solution of the present application is described using the modules shown in fig. 14 and 15, and the flow of the functional modules shown in fig. 16, where:
The reconstruction value read control module (reconstruction_rd module) selects and reads pixel reconstruction values from the previous-frame first-line reconstruction value storage module and the previous-line pixel reconstruction value storage module (Last Line RAM); a pixel reconstruction value is the predicted value plus the inverse quantized residual value. When the prediction of the groups of the 1st row of each frame is processed, the read control module selects the first-row pixel reconstruction values of the previous frame, read from the previous-frame first-line reconstruction value storage module, as the neighbor pixels of the currently processed group for prediction processing. When the groups of other rows are processed, the previous-line pixel storage module is selected and the reconstruction values of the previous row's pixels are read out as the neighbor pixels of the currently processed group for prediction processing.
The Low-pass Filter module (Low_pass Filter module) is used for realizing the first step and performing Low-pass Filter processing on the neighbor pixel.
The quantization level Limit module (Qlevel Limit module) is used for implementing the second step and performing constraint of the quantization level M on the filtered value.
The prediction calculation module (prediction_cal module) is used to implement step three and calculate the 3-point pixel predicted values. R0 and R1 here are the inverse quantized residual values at the same positions in the previous row of the currently predicted points, taken from the inverse quantized residual value storage module (Inverse Quantized RAM) to participate in the calculation. The processing capacity of the prediction module is thereby improved from the original 1 pixel/clock to 3 pixels/clock.
The residual value quantization module (Quantized_Residual_cal module) performs residual quantization on the input actual pixel values Pixel0, Pixel1, Pixel2 and the predicted values P0, P1, P2. The formulas are as follows:
Q0 = (Pixel0 - P0) >> qLevel;
Q1 = (Pixel1 - P1) >> qLevel;
Q2 = (Pixel2 - P2) >> qLevel;
wherein Q0, Q1 and Q2 are the quantized residual values of the first, second and third pixel points of each group, and qLevel is a configurable quantization level.
The residual value dequantization module (Inverse_Quantized_Residual_cal module) dequantizes the quantized residual values Q0, Q1, Q2 to obtain the dequantized residual values Invq0, Invq1, Invq2, with the formulas as follows:
Invq0=Q0<<qLevel;
Invq1=Q1<<qLevel;
Invq2=Q2<<qLevel。
The dequantized residual values are written into the dequantized residual value storage module for the prediction calculation of the next row. The dequantized residual values also need to participate in the pixel reconstruction calculation in the reconstruction value calculation module (reconstruction_cal module). The dequantized residual value storage module is a dual-port RAM used to store the dequantized residual values of the current row.
The reconstruction value calculation module (reconstruction_cal module) realizes the calculation of the pixel reconstruction value, and the formula is as follows:
Reconstruct0=P0+Invq0;
Reconstruct1=P1+Invq1;
Reconstruct2=P2+Invq2;
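The quantization, dequantization and reconstruction modules above form a simple per-group pipeline. A hedged C sketch following the module formulas is given below; process_group is an illustrative name, and negative residuals are assumed to use arithmetic right shift, as in typical hardware.

/* One group's residual quantization, dequantization and reconstruction. */
void process_group(const int pixel[3], const int p[3], int qlevel,
                   int q[3], int invq[3], int recon[3]) {
    for (int i = 0; i < 3; i++) {
        q[i]     = (pixel[i] - p[i]) >> qlevel; /* quantized residual: compressed output */
        invq[i]  = q[i] << qlevel;              /* stored for the next row's prediction */
        recon[i] = p[i] + invq[i];              /* pixel reconstruction value */
    }
}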
the reconstruction value writing control module (reconstruction_wr module) selectively writes the pixel reconstruction value into the previous frame pixel storage module and the previous line pixel storage module, and when the 1 st line pixel of each frame is predicted to be processed, the pixel reconstruction value is required to be written into the previous frame first line reconstruction value storage module and the previous frame pixel storage module, and when the non-first line pixel is processed, only the pixel reconstruction value is required to be written into the previous line pixel storage module.
The pixel storage module of the reconstruction value of the previous frame is a dual-port ram and is used for storing the reconstruction value of the pixel of the first line of each frame and for predicting and calculating the first line of the next frame; the pixel storage module of the reconstruction value of the upper row is a dual-port ram for storing the reconstruction value of the pixels of each row and for the prediction calculation of the next row.
After the image data enters the predictive compression module in units of groups, the reconstruction value read control module (reconstruction_rd module) judges whether the current input data is first-row image data. If so, the module reads the reconstruction values of the reference pixel points b, c, d, e corresponding to the first row of the previous frame from the previous-frame first-line reconstruction value storage module, namely the adjacent point reconstruction values Reconstruct_c, Reconstruct_b, Reconstruct_d, Reconstruct_e; if not, it reads the reconstruction values of the previous row's reference points b, c, d, e from the previous-line pixel storage module, namely the adjacent point reconstruction values Reconstruct_c, Reconstruct_b, Reconstruct_d, Reconstruct_e.
The adjacent point reconstruction values then enter the low-pass filter module (Low_pass Filter module), which sequentially calculates the low-pass filter values of the reconstruction values corresponding to the input reference pixel points b, c, d, e, namely the adjacent point filter values filtB, filtC, filtD and filtE.
The low-pass filtered values filtB, filtC, filtD, filtE of the reconstructed values enter a quantization level Limit module (Qlevel Limit module) to calculate quantization level Limit values blendB, blendC, blendD, blendE corresponding to the reference pixel b, c, d, e in sequence.
Then the quantization level limit values blendB, blendC, blendD and blendE enter the prediction calculation module together with the inverse quantized residual values Last_invq0, Last_invq1 and Last_invq2 of the adjacent points of the previous row, and the predicted values P0, P1 and P2 of the currently processed pixel points are calculated, where Last_invq0 is the dequantized residual value of the first point pixel in the previous row of the current group, Last_invq1 is the dequantized residual value of the second point pixel in the previous row of the current group, and Last_invq2 is the dequantized residual value of the third point pixel in the previous row of the current group.
Further, the predicted values flow into two modules. In the residual value quantization module, the quantized residual values Q0, Q1 and Q2 of each point are calculated from the predicted values and the input actual pixel values Pixel0, Pixel1 and Pixel2; these quantized residual values can be transmitted as compressed values to the decompression module through the transmission channels.
After the predictive compression of a group of pixels, the quantized residual values Q0, Q1 and Q2 of the current group are input into the residual value dequantization module to calculate the dequantized residual values of the first point pixel (Invq0), the second point pixel (Invq1) and the third point pixel (Invq2) of the current group. The dequantized residual values are stored in the dequantized residual value storage module and serve as reference adjacent points for the same-column pixels of the next row in the predictive compression calculation, that is, as the encoded data or decoded data. Meanwhile, the dequantized residual values also enter the reconstruction value calculation module to complete the reconstruction value calculation together with the predicted values P0, P1 and P2.
The calculated reconstruction values enter the reconstruction value write control module, which judges, according to whether the current row is the first row, whether to write them into the previous-line reconstruction value storage module or into the previous-frame first-line reconstruction value storage module, where they participate in the predictive compression calculation of the next row or of the first row of the next frame. This constitutes one complete predictive compression pass, and the processing of the next group of pixels repeats the same flow.
Therefore, this implementation of the improved predictive compression model improves the data processing capacity of the prediction calculation module from the original 1 pixel/clock to 3 pixels/clock. The invention also improves the reliability of the prediction processing of the first row of pixels of each frame. The invention further provides a brand-new FPGA implementation method of the predictive compression module, which can be used in the technical field of light compression processing of various display data streams.
The present invention can be used in ultra-high-resolution and high-frame-rate video transmission and processing systems, as shown in fig. 17, taking the LED display control field as an example. In an 8K LED display control system, an 8K video source is fed into the transmitting card through a DP or HDMI interface. The transmitting card slices and groups the image and then passes it to the predictive compression processing module for data compression. The compressed 8K display data stream has a reduced bandwidth and is then transmitted to the receiving card in data packet format. The receiving card sends the received compressed data to a decompression module using the same prediction model of the invention, and after reconstruction the data is sent to the LED display screen for display.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide an image encoding apparatus for implementing the above-mentioned image encoding method, and an image decoding apparatus for implementing the above-mentioned image decoding method. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiments of the image encoding and decoding apparatus provided below may refer to the limitation of the image encoding and decoding method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 18, there is provided an image encoding apparatus including: a pixel point to be encoded determination module 1802, a reference point location determination module 1804, a target reference point determination module 1806, and a predictive encoding module 1808, wherein:
a pixel point to be encoded determining module 1802, configured to determine a current encoding block in an image frame, where the current encoding block includes pixel points to be encoded that are encoded in parallel;
a reference point location determining module 1804, configured to determine a corresponding encoded pixel point based on a location of a current encoding block, take a plurality of the encoded pixel points as a plurality of reference points of a pixel point to be encoded, and obtain difference values of the plurality of reference points and the pixel point to be encoded, where the difference values are used to represent difference information of the pixel point to be encoded and the plurality of reference points;
A target reference point determining module 1806, configured to determine a current pixel point to be encoded from the pixel points to be encoded, and determine a target reference point corresponding to the current pixel point to be encoded from the plurality of reference points if an adjacent forward pixel point of the current pixel point to be encoded is located in the current encoding block;
and a predictive coding module 1808, configured to fuse the difference value with the coded data of the target reference point, obtain a predicted value of the current pixel to be coded in the current coding block, and perform coding based on the predicted value.
In an alternative embodiment, the reference point location determination module 1804 includes:
an inter-frame reference point determining unit, configured to determine, when the current coding block is located in a first line of the image frame, a plurality of coded pixel points corresponding to the current coding block position from a first line of a previous frame;
and the intra-frame reference point determining unit is used for determining a plurality of coded pixel points adjacent to the current coding block position from the previous row after the pixel points of the first row of the image frame are coded.
In an optional embodiment, the current encoding block includes a second pixel point to be encoded, and the target reference point determining module 1806 is configured to determine, as the target reference point of the second pixel point to be encoded, a reference point of the second pixel point to be encoded, where the second pixel point to be encoded is located in the same position in the current line and is located in a different line;
Correspondingly, the predictive coding module 1808 includes:
the quantization residual value acquisition unit is used for acquiring a quantization residual value of a target reference point of a second pixel point to be encoded, wherein the quantization residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel point to be encoded and a pixel value;
the inverse quantization residual value generation unit is used for inversely quantizing the quantization residual value of the target reference point of the second pixel point to be encoded to obtain an inverse quantization residual value of the target reference point of the second pixel point to be encoded;
and the predicted value generating unit is used for fusing the difference value of the second pixel point to be coded and the inverse quantization residual value of the target reference point of the second pixel point to be coded.
In an optional embodiment, the current encoding block includes a third pixel point to be encoded, and the target reference point determining module 1806 is configured to determine, as the target reference point of the third pixel point to be encoded, a reference point of the third pixel point to be encoded, where the positions of the third pixel point to be encoded in the current line are the same and are located in different lines;
correspondingly, the predictive coding module 1808 includes:
a quantized residual value obtaining unit, configured to obtain a quantized residual value of a target reference point of the third pixel to be encoded, where the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the third pixel to be encoded and a pixel value;
The inverse quantization residual value generation unit is used for inversely quantizing the quantization residual value of the target reference point of the third pixel point to be encoded to obtain an inverse quantization residual value of the target reference point of the third pixel point to be encoded;
and the predicted value generating unit is used for fusing the inverse quantization residual value of the target reference point of the second pixel point to be coded, the difference value of the third pixel point to be coded and the inverse quantization residual value of the target reference point of the third pixel point to be coded.
The application also provides an image decoding device, which comprises a pixel point to be decoded determining module, a reference point position determining module, a target reference point determining module and a predictive decoding module, wherein:
the pixel point to be decoded determining module is used for determining a current decoding block in the image frame, wherein the current decoding block comprises pixel points to be decoded which are decoded in parallel;
the reference point position determining module is used for determining corresponding decoded pixel points based on the position of the current decoding block, taking the decoded pixel points as a plurality of reference points of the pixel points to be decoded, and obtaining difference values of the reference points and the pixel points to be decoded, wherein the difference values are used for representing difference information of the pixel points to be decoded and the reference points;
The target reference point determining module is used for determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are positioned in the current decoding block;
and the predictive decoding module is used for fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding is carried out based on the predicted value.
In an alternative embodiment, the reference point location determination module includes:
an inter-frame reference point determining unit configured to determine, when the current decoding block is located in a first line of the image frame, a plurality of decoded pixel points corresponding to the current decoding block position from a first line of a previous frame;
and the intra-frame reference point determining unit is used for determining a plurality of decoded pixel points adjacent to the current decoding block position from the previous row after the pixel points of the first row of the image frame are decoded.
In an optional embodiment, the current decoding block includes a second pixel point to be decoded, and the target reference point determining module is configured to determine, as a target reference point of the second pixel point to be decoded, a reference point of the second pixel point to be decoded, where the second pixel point to be decoded is located in the same position in the current row and is located in a different row;
Correspondingly, the predictive decoding module comprises:
a quantized residual value obtaining unit, configured to obtain a quantized residual value of a target reference point of a second pixel to be decoded, where the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel to be decoded and a pixel value;
the inverse quantization residual value generation unit is used for inversely quantizing the quantization residual value of the target reference point of the second pixel point to be decoded to obtain an inverse quantization residual value of the target reference point of the second pixel point to be decoded;
and the predicted value generating unit is used for fusing the difference value of the second pixel point to be decoded and the inverse quantization residual value of the target reference point of the second pixel point to be decoded.
In an optional embodiment, the current decoding block includes a third pixel point to be decoded, and the target reference point determining module is configured to determine, as a target reference point of the third pixel point to be decoded, a reference point of the third pixel point to be decoded, where the third pixel point to be decoded is located in the same position in the current row and is located in a different row;
correspondingly, the predictive decoding module comprises:
a quantized residual value obtaining unit, configured to obtain a quantized residual value of a target reference point of the third pixel to be decoded, where the quantized residual value is generated by quantizing a difference value between a predicted value and a pixel value of the target reference point of the third pixel to be decoded;
The inverse quantization residual value generation unit is used for inversely quantizing the quantization residual value of the target reference point of the third pixel point to be decoded to obtain an inverse quantization residual value of the target reference point of the third pixel point to be decoded;
and the predicted value generating unit is used for fusing the inverse quantization residual value of the target reference point of the second pixel point to be decoded, the difference value of the third pixel point to be decoded and the inverse quantization residual value of the target reference point of the third pixel point to be decoded.
The respective modules in the above-described image encoding and image decoding apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 19. The computer device includes a processor, a memory, an input/output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The database of the computer device is for storing encoded quantized residual values. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the image encoding and image decoding methods.
It will be appreciated by those skilled in the art that the structure shown in fig. 19 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
There is provided a computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. An image encoding method, the method comprising:
determining a current coding block in an image frame, wherein the current coding block comprises pixel points to be coded which are coded in parallel;
determining corresponding encoded pixel points based on the position of a current encoding block, taking a plurality of encoded pixel points as a plurality of reference points of the pixel points to be encoded, and acquiring difference values of the plurality of reference points and the pixel points to be encoded, wherein the difference values are used for representing difference information of the pixel points to be encoded and the plurality of reference points;
Determining a current pixel point to be coded from the pixel points to be coded, and determining a target reference point corresponding to the current pixel point to be coded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be coded are positioned in the current coding block;
and fusing the difference value with the coding data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding based on the predicted value.
2. The method of claim 1, wherein the determining the corresponding encoded pixel point based on the location of the current encoded block comprises:
determining a plurality of encoded pixel points corresponding to the current encoding block position from a first line of a previous frame when the current encoding block is located in the first line of the image frame;
after encoding the first row of pixels of the image frame, a plurality of encoded pixels adjacent to the current encoding block position are determined from the previous row.
3. The method of claim 1, wherein the current encoding block includes a second pixel point to be encoded, and wherein determining a target reference point from the plurality of reference points that corresponds to the current pixel point to be encoded includes:
Determining the reference points of the second pixel points to be coded, which have the same positions in the current row and are positioned in different rows, as target reference points of the second pixel points to be coded;
fusing the difference value with the encoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of a second pixel point to be encoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel point to be encoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the second pixel point to be coded to obtain an inverse quantized residual value of the target reference point of the second pixel point to be coded;
and fusing the difference value of the second pixel point to be coded and the inverse quantization residual value of the target reference point of the second pixel point to be coded.
4. A method according to claim 3, wherein the current coding block includes a third pixel point to be coded, and the determining a target reference point corresponding to the current pixel point to be coded from the plurality of reference points includes:
determining the reference points of the third pixel points to be coded, which have the same positions in the current row and are positioned in different rows, as target reference points of the third pixel points to be coded;
The fusing the difference value with the encoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of the third pixel point to be encoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the third pixel point to be encoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the third pixel point to be coded to obtain an inverse quantized residual value of the target reference point of the third pixel point to be coded;
and fusing the inverse quantization residual value of the target reference point of the second pixel point to be encoded, the difference value of the third pixel point to be encoded and the inverse quantization residual value of the target reference point of the third pixel point to be encoded.
5. An image decoding method, the method comprising:
determining a current decoding block in an image frame, wherein the current decoding block comprises pixel points to be decoded which are decoded in parallel;
determining corresponding decoded pixel points based on the position of a current decoding block, taking a plurality of decoded pixel points as a plurality of reference points of the pixel points to be decoded, and acquiring difference values of the plurality of reference points and the pixel points to be decoded, wherein the difference values are used for representing difference information of the pixel points to be decoded and the plurality of reference points;
Determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are positioned in the current decoding block;
and fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding based on the predicted value.
6. The method of claim 5, wherein determining the corresponding decoded pixel based on the location of the current decoded block comprises:
determining a plurality of decoded pixel points corresponding to the current decoding block position from a first line of a previous frame when the current decoding block is positioned in the first line of the image frame;
after the first row of pixels of the image frame are decoded, a plurality of decoded pixels adjacent to the current decoding block position are determined from the previous row.
7. The method of claim 5, wherein the current decoding block includes a second pixel point to be decoded, and wherein determining a target reference point from the plurality of reference points that corresponds to the current pixel point to be decoded includes:
Determining the reference points of the second pixel point to be decoded, which have the same position in the current row and are positioned in different rows, as target reference points of the second pixel point to be decoded;
fusing the difference value with the decoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of a second pixel to be decoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the second pixel to be decoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the second pixel point to be decoded to obtain an inverse quantized residual value of the target reference point of the second pixel point to be decoded;
and fusing the difference value of the second pixel point to be decoded and the inverse quantization residual value of the target reference point of the second pixel point to be decoded.
8. The method of claim 7, wherein the current decoding block includes a third pixel point to be decoded, and wherein determining a target reference point from the plurality of reference points that corresponds to the current pixel point to be decoded includes:
determining the reference points of the third pixel point to be decoded, which have the same position in the current row and are positioned in different rows, as target reference points of the third pixel point to be decoded;
The fusing the difference value with the decoded data of the target reference point includes:
obtaining a quantized residual value of a target reference point of the third pixel point to be decoded, wherein the quantized residual value is generated by quantizing a difference value between a predicted value of the target reference point of the third pixel point to be decoded and a pixel value;
performing inverse quantization on the quantized residual value of the target reference point of the third pixel point to be decoded to obtain an inverse quantized residual value of the target reference point of the third pixel point to be decoded;
and fusing the inverse quantization residual value of the target reference point of the second pixel point to be decoded, the difference value of the third pixel point to be decoded and the inverse quantization residual value of the target reference point of the third pixel point to be decoded.
9. An image encoding apparatus, the apparatus comprising:
the pixel point to be encoded determining module is used for determining a current encoding block in an image frame, wherein the current encoding block comprises pixel points to be encoded which are encoded in parallel;
the reference point position determining module is used for determining corresponding encoded pixel points based on the position of the current encoding block, taking a plurality of encoded pixel points as a plurality of reference points of the pixel points to be encoded, and obtaining difference values of the plurality of reference points and the pixel points to be encoded, wherein the difference values are used for representing difference information of the pixel points to be encoded and the plurality of reference points;
The target reference point determining module is used for determining a current pixel point to be encoded from the pixel points to be encoded, and determining a target reference point corresponding to the current pixel point to be encoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be encoded are positioned in the current encoding block;
and the prediction coding module is used for fusing the difference value with the coding data of the target reference point to obtain a predicted value of the current pixel point to be coded in the current coding block, and coding is carried out based on the predicted value.
10. An image decoding apparatus, characterized in that the apparatus comprises:
the pixel point to be decoded determining module is used for determining a current decoding block in the image frame, wherein the current decoding block comprises pixel points to be decoded which are decoded in parallel;
the reference point position determining module is used for determining corresponding decoded pixel points based on the position of the current decoding block, taking a plurality of decoded pixel points as a plurality of reference points of the pixel points to be decoded, and obtaining difference values of the plurality of reference points and the pixel points to be decoded, wherein the difference values are used for representing difference information of the pixel points to be decoded and the plurality of reference points;
The target reference point determining module is used for determining a current pixel point to be decoded from the pixel points to be decoded, and determining a target reference point corresponding to the current pixel point to be decoded from the plurality of reference points if the adjacent forward pixel points of the current pixel point to be decoded are positioned in the current decoding block;
and the prediction decoding module is used for fusing the difference value with the decoded data of the target reference point to obtain a predicted value of the current pixel point to be decoded in the current decoding block, and decoding is carried out based on the predicted value.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202210031316.1A 2022-01-12 2022-01-12 Image encoding method, image decoding method, image encoding device, image decoding device, computer device, and storage medium Active CN114501029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210031316.1A CN114501029B (en) 2022-01-12 2022-01-12 Image encoding method, image decoding method, image encoding device, image decoding device, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN114501029A (en) 2022-05-13
CN114501029B (en) 2023-06-06

Family

ID=81512819

Country Status (1)

Country Link
CN (1) CN114501029B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116366864B (en) * 2023-03-23 2024-04-12 格兰菲智能科技有限公司 Parallel encoding and decoding method, device, computer equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007013298A (en) * 2005-06-28 2007-01-18 Renesas Technology Corp Image coding apparatus
KR20090058954A (en) * 2007-12-05 2009-06-10 삼성전자주식회사 Video coding method and apparatus using side matching, and video decoding method and apparatus thereof
EP2280550A1 (en) * 2009-06-25 2011-02-02 Thomson Licensing Mask generation for motion compensation
CN102131093A (en) * 2011-01-13 2011-07-20 北京中星微电子有限公司 Image processing method and device
CN104137549A (en) * 2012-01-18 2014-11-05 韩国电子通信研究院 Method and device for encoding and decoding image
CN108965877A (en) * 2018-07-04 2018-12-07 武汉精测电子集团股份有限公司 Device and method for realizing real-time video display based on the DSC compression algorithm
CN109089121A (en) * 2018-10-19 2018-12-25 北京金山云网络技术有限公司 Motion estimation method and apparatus based on video coding, and electronic device
CN109640089A (en) * 2018-11-02 2019-04-16 西安万像电子科技有限公司 Image coding/decoding method and device
WO2021042300A1 (en) * 2019-09-04 2021-03-11 深圳市大疆创新科技有限公司 Encoding method, decoding method, and encoding apparatus and decoding apparatus
CN113196762A (en) * 2019-06-25 2021-07-30 Oppo广东移动通信有限公司 Image component prediction method, device and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010166520A (en) * 2009-01-19 2010-07-29 Panasonic Corp Image encoding and decoding apparatus
US11197009B2 (en) * 2019-05-30 2021-12-07 Hulu, LLC Processing sub-partitions in parallel using reference pixels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant