CN111988615A - Decoding method, decoding device, electronic equipment and storage medium - Google Patents

Decoding method, decoding device, electronic equipment and storage medium

Info

Publication number
CN111988615A
Authority
CN
China
Prior art keywords
decoded
data
data block
data reference
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010814920.2A
Other languages
Chinese (zh)
Other versions
CN111988615B (en)
Inventor
陈宇聪
闻兴
陈敏
黄跃
王晓楠
高敏
于冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010814920.2A
Publication of CN111988615A
Application granted
Publication of CN111988615B
Active legal status (Current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a pixel
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to a decoding method, a decoding apparatus, an electronic device and a storage medium, and belongs to the technical field of video encoding and compression. The method includes: acquiring at least two data reference regions related to a data block to be decoded in an image frame according to the position information of the data block to be decoded in the image frame; determining the pixel change information of the data block to be decoded and each data reference region; determining a target data reference region of the data block to be decoded from the data reference regions according to that pixel change information; and decoding the data block to be decoded according to the target data reference region. In this way, the decoding end determines the target data reference region from the candidate data reference regions directly according to the pixel change information of each data reference region and the data block to be decoded, and the encoding end does not need to transmit identification information of the target data reference region, so the coding efficiency can be improved.

Description

Decoding method, decoding device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video encoding and compression technologies, and in particular, to a decoding method, apparatus, electronic device, and storage medium.
Background
Currently, a Multiple Reference Line (MRL) intra prediction technique is proposed in the next-generation video coding standard, Versatile Video Coding (VVC). Referring to fig. 1, the technique allows an optimal data reference region to be selected from a plurality of data reference regions adjacent to a block to be encoded and used to encode that block, where each data reference region comprises a row of pixels and a column of pixels, and the row interval between that row of pixels and the first row of pixels in the block to be encoded is equal to the column interval between that column of pixels and the first column of pixels in the block to be encoded.
In the related art, the encoding end evaluates the coding performance, such as the rate-distortion cost, of each data reference region for the block to be encoded under different prediction modes, selects the data reference region with the minimum rate-distortion cost as the optimal data reference region, and transmits the identification information of the selected data reference region in the code stream through the syntax element intra_luma_ref_idx. Subsequently, the decoding end acquires intra_luma_ref_idx from the code stream, parses it to obtain the identifier of the data reference region for the corresponding data block to be decoded, and then selects the corresponding data reference region according to that identifier to decode the data block to be decoded.
Thus, for each data block to be decoded, the identification information of the data reference region used to decode it has to be sent in the code stream, and this identification information consumes 1 to 2 codewords, which inevitably reduces the coding efficiency.
Disclosure of Invention
The present disclosure provides a decoding method, apparatus, electronic device and storage medium, so as to at least solve the problem of relatively low coding efficiency in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a decoding method, including:
acquiring at least two data reference areas related to a data block to be decoded in an image frame according to the position information of the data block to be decoded in the image frame;
for each data reference area, determining the pixel change information of the data block to be decoded and the data reference area;
determining a target data reference area of the data block to be decoded from the at least two data reference areas according to the pixel change information of each data reference area and the data block to be decoded;
and decoding the data block to be decoded according to the target data reference region.
In one possible implementation, acquiring at least two data reference regions in the image frame related to the data block to be decoded according to the position information of the data block to be decoded in the image frame includes:
according to the position information of a data block to be decoded in an image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, and at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
In one possible implementation, determining a plurality of row pixels and a plurality of column pixels adjacent to a data block to be decoded in decoded data of an image frame according to position information of the data block to be decoded in the image frame includes:
according to the position information of a data block to be decoded in an image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, wherein the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
In a possible implementation manner, for each data reference region, determining the pixel variation information of the data block to be decoded and the data reference region includes:
for each pixel point in each data reference region, selecting a comparison pixel point for the pixel point according to the position of the pixel point in the data reference region and a preset comparison-pixel selection rule for pixels at different positions, and determining the pixel change value between the pixel point and the selected comparison pixel point as the gradient of the pixel point;
and determining the average of the gradients of the pixel points in the data reference region as the pixel change information of the data block to be decoded and the data reference region.
In one possible implementation, determining a target data reference region of the data block to be decoded from the at least two data reference regions according to the pixel variation information of each data reference region and the data block to be decoded includes:
and determining, from the at least two data reference regions, the data reference region with the smallest pixel change relative to the data block to be decoded as the target data reference region of the data block to be decoded.
According to a second aspect of the embodiments of the present disclosure, there is provided a decoding apparatus including:
the acquisition module is configured to acquire at least two data reference areas related to the data block to be decoded in the image frame according to the position information of the data block to be decoded in the image frame;
the pixel change determining module is configured to determine, for each data reference region, the pixel change information of the data block to be decoded and the data reference region;
a reference region determination module configured to perform determining a target data reference region of the data block to be decoded from the at least two data reference regions according to pixel variation information of each data reference region and the data block to be decoded;
a decoding module configured to perform decoding of the data block to be decoded according to the target data reference region.
In a possible implementation, the obtaining module is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, and at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
In a possible implementation, the obtaining module is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, wherein the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
In a possible implementation, the pixel variation determination module is specifically configured to perform:
for each pixel point in each data reference region, selecting a comparison pixel point for the pixel point according to the position of the pixel point in the data reference region and a preset comparison-pixel selection rule for pixels at different positions, and determining the pixel change value between the pixel point and the selected comparison pixel point as the gradient of the pixel point;
and determining the average of the gradients of the pixel points in the data reference region as the pixel change information of the data block to be decoded and the data reference region.
In a possible implementation, the reference region determination module is specifically configured to perform:
and determining, from the at least two data reference regions, the data reference region with the smallest pixel change relative to the data block to be decoded as the target data reference region of the data block to be decoded.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the decoding methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is capable of executing any one of the decoding methods described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product which, when invoked by a computer, may cause the computer to perform any of the decoding methods described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of obtaining at least two data reference areas related to a data block to be decoded in an image frame according to position information of the data block to be decoded in the image frame, determining pixel change information of the data block to be decoded and each data reference area, further determining a target data reference area of the data block to be decoded from the at least two data reference areas according to the pixel change information of the data block to be decoded and each data reference area, and decoding the data block to be decoded according to the target data reference area. Therefore, the decoding end can directly determine the target data reference area for decoding the data block to be decoded from each data reference area according to the pixel change information of each data reference area and the data block to be decoded, and the encoding end does not need to transmit the identification information of the target data reference area, namely, the encoding end does not need to encode the identification information of the target data reference area any more, so that the encoding efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram illustrating a data reference region of a data block to be encoded according to an example embodiment.
Fig. 2 is a flow chart illustrating a decoding method according to an example embodiment.
Fig. 3 is a flow chart illustrating a method of acquiring at least two data reference regions in an image frame associated with a block of data to be decoded according to an exemplary embodiment.
Fig. 4 is a diagram illustrating a data reference region of a data block to be decoded according to an example embodiment.
Fig. 5 is a block diagram illustrating a decoding apparatus according to an example embodiment.
Fig. 6 is a schematic structural diagram illustrating an electronic device for implementing a decoding method or a video encoding method according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 2 is a flowchart illustrating a decoding method according to an exemplary embodiment, the flowchart including the following steps.
S201: and acquiring at least two data reference areas related to the data block to be decoded in the image frame according to the position information of the data block to be decoded in the image frame.
The data block to be decoded may be a Coding Unit (CU).
In a specific implementation, at least two data reference regions in the image frame related to the data block to be decoded may be obtained according to the process shown in fig. 3, where the process includes the following steps:
s301 a: according to the position information of the data block to be decoded in the image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in the decoded data of the image frame are determined.
The plurality of row pixels refers to a plurality of rows of pixels, and the plurality of column pixels refers to a plurality of columns of pixels.
For example, according to the position information of the data block to be decoded in the image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded are selected from the decoded data of the image frame, where the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
Fig. 4 is a schematic diagram of the 4 rows of pixels and 4 columns of pixels adjacent to a data block to be decoded, where W denotes the width of the data block to be decoded, H denotes its height, and K denotes the number of rows/columns of reference pixels; K is 4 in fig. 4.
S302 a: at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
In a specific implementation, each data reference region may include a row pixel and a column pixel, and a row interval between the row pixel and a first row pixel in the data block to be decoded is equal to a column interval between the column pixel and a first column pixel in the data block to be decoded.
Based on the shape characteristics of the data reference regions, a maximum of 4 data reference regions can be selected from the plurality of row pixels and the plurality of column pixels.
Suppose that the data reference region closest to the data block to be decoded is data reference region 1, the data reference region second closest to the data block to be decoded is data reference region 2, the data reference region third closest to the data block to be decoded is data reference region 3, and the data reference region farthest from the data block to be decoded is data reference region 4.
In practical applications, it is found that the decoding effect of data reference region 3 is generally worse than that of data reference regions 1, 2 and 4, so data reference region 3 can be excluded, and only data reference regions 1, 2 and 4 are used as the data reference regions selected from the plurality of rows and columns of pixels.
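Purely as an illustration of how these candidate regions could be gathered, the following Python sketch builds the row-and-column pixel sets for data reference regions 1, 2 and 4 from the decoded samples. The function name, the NumPy array layout of the decoded frame, and the lateral extent of each row and column are assumptions made for this sketch rather than details taken from the disclosure; the offsets 0, 1 and 3 correspond to data reference regions 1, 2 and 4.

```python
import numpy as np

def candidate_reference_regions(decoded, x0, y0, w, h, offsets=(0, 1, 3)):
    """Collect candidate data reference regions for the block at (x0, y0).

    decoded: 2-D array of already-decoded luma samples of the image frame.
    The region with offset k is the row at y = y0 - 1 - k plus the column at
    x = x0 - 1 - k (the corner sample is kept once, in the row part).
    Boundary handling for blocks at the frame edge is omitted here.
    """
    regions = {}
    for k in offsets:
        row = decoded[y0 - 1 - k, x0 - 1 - k : x0 + w]   # row of pixels above the block
        col = decoded[y0 - k : y0 + h, x0 - 1 - k]       # column of pixels left of the block
        regions[k] = (row, col)
    return regions
```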
S202: for each data reference area, pixel variation information of the data block to be decoded and the data reference area is determined.
In a specific implementation, for each pixel point in each data reference region, a comparison pixel point can be selected for the pixel point according to the position of the pixel point in the data reference region and a preset comparison-pixel selection rule for pixels at different positions, and the pixel change value between the pixel point and the selected comparison pixel point is determined as the gradient of the pixel point.
For example, if the pixel point is located in a row of pixels in the data reference region, the pixel point that is directly above it (i.e., in the same column) and separated from it by N pixel points may be selected as its comparison pixel point; if the pixel point is located in a column of pixels in the data reference region, the pixel point that is directly to its left (i.e., in the same row) and separated from it by N pixel points may be selected as its comparison pixel point. The luminance difference between the pixel point and its comparison pixel point can then be determined as the gradient of the pixel point, where N is a non-negative integer.
Further, the average of the gradients of the pixel points in the data reference region may be determined as the pixel change information of the data block to be decoded and the data reference region.
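As an illustration of S202 only, the sketch below computes the average gradient of one data reference region in the way just described. The function name, the use of the absolute luma difference as the gradient, the lateral extent of the region, and the omission of frame-boundary handling are assumptions made for this sketch rather than details taken from the disclosure.

```python
def average_gradient(decoded, x0, y0, w, h, k, n=0):
    """Pixel change information between the block at (x0, y0) and reference region k.

    Region k is the row at y = y0 - 1 - k plus the column at x = x0 - 1 - k.
    For a pixel in the row part, the comparison pixel lies n + 1 samples
    directly above it (same column); for a pixel in the column part, it lies
    n + 1 samples directly to its left (same row).  The average of the
    absolute luma differences is returned as the region's pixel change.
    """
    yr = y0 - 1 - k                  # y coordinate of the row part of region k
    xc = x0 - 1 - k                  # x coordinate of the column part of region k
    grads = []
    for x in range(xc, x0 + w):      # pixels in the row part (corner included here)
        grads.append(abs(int(decoded[yr, x]) - int(decoded[yr - (n + 1), x])))
    for y in range(yr + 1, y0 + h):  # pixels in the column part
        grads.append(abs(int(decoded[y, xc]) - int(decoded[y, xc - (n + 1)])))
    return sum(grads) / len(grads)
```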
S203: and determining a target data reference area of the data block to be decoded from the at least two data reference areas according to the pixel change information of each data reference area and the data block to be decoded.
In practical application, the smaller the pixel change between a data reference region and the data block to be decoded, the closer their pixel characteristics are, and the smaller the error of decoding the data block to be decoded using that data reference region. Therefore, among the at least two data reference regions, the data reference region with the smallest pixel change relative to the data block to be decoded can be determined as the target data reference region of the data block to be decoded.
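Continuing the sketch above, under the same assumptions, selecting the target data reference region for S203 then reduces to picking the candidate with the smallest average gradient:

```python
def select_target_region(decoded, x0, y0, w, h, candidates=(0, 1, 3), n=0):
    """Pick the candidate region whose pixel change relative to the block is smallest."""
    return min(candidates, key=lambda k: average_gradient(decoded, x0, y0, w, h, k, n))
```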
S204: and decoding the data block to be decoded according to the target data reference region.
In addition, the above procedure may be executed after it is determined that the data block to be decoded is decoded using the multiple reference line intra prediction technique; that is, the decoding method provided by the embodiment of the present disclosure is applicable to data blocks to be decoded that are decoded using the multiple reference line intra prediction technique.
In the embodiment of the present disclosure, the average of the pixel change values between each pixel point in a data reference region and its surrounding pixel points is used as the pixel change information of the data block to be decoded and that data reference region; the data reference region with the smallest pixel change relative to the data block to be decoded is then selected as the target data reference region and used to decode the data block to be decoded. Because the way of calculating the pixel change information between the data block to be decoded and each data reference region is simple, the requirement on the computing capability of the decoding end is low, the decoding end can quickly determine the target data reference region of the data block to be decoded, and the decoding speed can be improved.
In a possible implementation, the target data reference region may also be determined from at least two data reference regions in the image frame associated with the data block to be decoded, using a machine learning method.
In a specific implementation, a plurality of training samples may be obtained, where each training sample includes at least two data reference regions of a data block sample to be decoded, and a target data reference region of the at least two data reference regions is known.
In practical application, when the encoding end encodes a block to be encoded using the multiple reference line intra prediction technique, it evaluates the coding performance of the at least two data reference regions related to the block to be encoded and selects the target data reference region with the best coding performance to encode the block. In fact, the target data reference region determined by the encoding end is exactly the target data reference region that the decoding end needs to determine for the corresponding data block to be decoded, so for each training sample, the information of the target data reference region in that training sample can be obtained from the encoding end. This reduces the difficulty of obtaining training samples and thereby speeds up training.
Then, the pixel information of the data reference regions in each training sample is used as input and the identification information of the target data reference region in that training sample is used as output to train a prediction model, so that the trained prediction model learns the pixel characteristics of target data reference regions whose pixel characteristics are closest to those of the data block samples to be decoded.
The pixel information of the data reference regions input into the prediction model may be the luminance values of the data reference regions, or any combination of the luminance, chrominance and saturation values of the data reference regions; the prediction model may be a Support Vector Machine (SVM) model or a deep learning network model.
Subsequently, after the decoding end obtains at least two data reference areas related to the data block to be decoded in the image frame according to the position information of the data block to be decoded in the image frame, the pixel information of each data reference area can be input into the established prediction model, and the prediction model can output the identification information of the target data reference area in each data reference area.
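To make the prediction-model variant concrete, the following sketch trains a Support Vector Machine on such samples and uses it to predict the target data reference region. The feature layout (a fixed-length flattening of the candidate regions' luma values), the use of scikit-learn's SVC, and the function names are assumptions made for illustration and are not part of the disclosure.

```python
import numpy as np
from sklearn.svm import SVC

def train_reference_region_predictor(train_x, train_y):
    """train_x: one flattened feature vector per training sample, built from the
    pixel information of the sample's candidate data reference regions;
    train_y: the index of the target data reference region chosen by the encoder."""
    model = SVC(kernel="rbf")
    model.fit(np.asarray(train_x, dtype=float), np.asarray(train_y))
    return model

def predict_target_region(model, region_pixels):
    """region_pixels: the candidate regions' pixel information for one data block
    to be decoded, flattened in the same layout used during training."""
    return int(model.predict(np.asarray(region_pixels, dtype=float).reshape(1, -1))[0])
```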
In this scheme, the prediction model can better learn the pixel characteristics of the various data reference regions whose pixel characteristics are closest to those of the data block samples to be decoded, so the accuracy of the determined target data reference region is higher, the prediction residual of the data block to be decoded that needs to be transmitted can be reduced, and the coding efficiency can be further improved.
When the method provided in the embodiments of the present disclosure is implemented in software or hardware or a combination of software and hardware, a plurality of functional modules may be included in the electronic device, and each functional module may include software, hardware or a combination of software and hardware.
Fig. 5 is a block diagram illustrating a decoding apparatus according to an exemplary embodiment, which includes an acquisition module 501, a pixel variation determination module 502, a reference region determination module 503, and a decoding module 504.
An obtaining module 501 configured to perform obtaining at least two data reference areas related to a data block to be decoded in an image frame according to position information of the data block to be decoded in the image frame;
a pixel change determining module 502 configured to determine, for each data reference region, pixel change information of the data block to be decoded and the data reference region;
a reference region determining module 503 configured to perform determining a target data reference region of the data block to be decoded from the at least two data reference regions according to the pixel variation information of each data reference region and the data block to be decoded;
a decoding module 504 configured to perform decoding of the data block to be decoded according to the target data reference region.
In a possible implementation, the obtaining module 501 is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, and at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
In a possible implementation, the obtaining module 501 is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, wherein the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
In a possible implementation, the pixel variation determining module 502 is specifically configured to perform:
for each pixel point in each data reference region, selecting a comparison pixel point for the pixel point according to the position of the pixel point in the data reference region and a preset comparison-pixel selection rule for pixels at different positions, and determining the pixel change value between the pixel point and the selected comparison pixel point as the gradient of the pixel point;
and determining the average of the gradients of the pixel points in the data reference region as the pixel change information of the data block to be decoded and the data reference region.
In a possible implementation, the reference region determination module 503 is specifically configured to perform:
and determining, from the at least two data reference regions, the data reference region with the smallest pixel change relative to the data block to be decoded as the target data reference region of the data block to be decoded.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The division of the modules in the embodiments of the present disclosure is illustrative and is only a division by logical function; there may be other division manners in actual implementation. In addition, the functional modules in the embodiments of the present disclosure may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module. The modules may be coupled to each other through interfaces, which are typically electrical communication interfaces, although mechanical or other forms of interface are not excluded. Thus, modules described as separate components may or may not be physically separate, and may be located in one place or distributed in different locations on the same or different devices. An integrated module can be realized in the form of hardware or in the form of a software functional module.
Fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment, where the electronic device includes a transceiver 601 and a processor 602, where the processor 602 may be a Central Processing Unit (CPU), a microprocessor, an application specific integrated circuit, a programmable logic circuit, a large scale integrated circuit, or a digital processing unit. The transceiver 601 is used for data transmission and reception between the electronic device and other devices.
The electronic device may further comprise a memory 603 for storing software instructions executed by the processor 602, but may also store some other data required by the electronic device, such as identification information of the electronic device, encryption information of the electronic device, user data, etc. The memory 603 may be a volatile memory (volatile memory), such as a random-access memory (RAM); the memory 603 may also be a non-volatile memory (non-volatile memory) such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD), or the memory 603 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 603 may be a combination of the above memories.
The embodiment of the present disclosure does not limit the specific connection medium among the processor 602, the memory 603, and the transceiver 601. In fig. 6, the memory 603, the processor 602, and the transceiver 601 are connected by the bus 604 only as an example; the bus is shown by a thick line in fig. 6, and the connection manner between other components is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
The processor 602 may be dedicated hardware or a processor running software. When the processor 602 runs software, it reads the software instructions stored in the memory 603 and, driven by those instructions, executes the decoding method involved in the foregoing embodiments.
The embodiment of the present disclosure also provides a storage medium, and when instructions in the storage medium are executed by a processor of an electronic device, the electronic device can execute the decoding method in the foregoing embodiment.
In some possible embodiments, the aspects of the decoding method provided by the present disclosure may also be implemented in the form of a program product, which includes program code for causing an electronic device to perform the decoding method referred to in the foregoing embodiments when the program product is run on the electronic device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for the decoding method in the embodiments of the present disclosure may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A method of decoding, comprising:
acquiring at least two data reference areas related to a data block to be decoded in an image frame according to the position information of the data block to be decoded in the image frame;
for each data reference area, determining the pixel change information of the data block to be decoded and the data reference area;
determining a target data reference area of the data block to be decoded from the at least two data reference areas according to the pixel change information of each data reference area and the data block to be decoded;
and decoding the data block to be decoded according to the target data reference region.
2. The method of claim 1, wherein obtaining at least two data reference areas related to the data block to be decoded in the image frame according to the position information of the data block to be decoded in the image frame comprises:
according to the position information of a data block to be decoded in an image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, and at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
3. The method of claim 2, wherein determining a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in the decoded data of the image frame according to the position information of the data block to be decoded in the image frame comprises:
according to the position information of a data block to be decoded in an image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, wherein the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
4. The method according to any one of claims 1 to 3, wherein for each data reference region, determining the pixel variation information of the data block to be decoded and the data reference region comprises:
for each pixel point in each data reference region, selecting a comparison pixel point for the pixel point according to the position of the pixel point in the data reference region and a preset comparison-pixel selection rule for pixels at different positions, and determining the pixel change value between the pixel point and the selected comparison pixel point as the gradient of the pixel point;
and determining the average of the gradients of the pixel points in the data reference region as the pixel change information of the data block to be decoded and the data reference region.
5. The method of claim 4, wherein determining a target data reference region of the data block to be decoded from the at least two data reference regions according to the pixel variation information of each data reference region and the data block to be decoded comprises:
and determining, from the at least two data reference regions, the data reference region with the smallest pixel change relative to the data block to be decoded as the target data reference region of the data block to be decoded.
6. A decoding apparatus, comprising:
the acquisition module is configured to acquire at least two data reference areas related to the data block to be decoded in the image frame according to the position information of the data block to be decoded in the image frame;
the pixel change determining module is configured to determine, for each data reference region, the pixel change information of the data block to be decoded and the data reference region;
a reference region determination module configured to perform determining a target data reference region of the data block to be decoded from the at least two data reference regions according to pixel variation information of each data reference region and the data block to be decoded;
a decoding module configured to perform decoding of the data block to be decoded according to the target data reference region.
7. The apparatus of claim 6, wherein the obtaining module is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, a plurality of row pixels and a plurality of column pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, and at least two data reference areas are selected from the plurality of row pixels and the plurality of column pixels.
8. The apparatus of claim 7, wherein the obtaining module is specifically configured to perform:
according to the position information of a data block to be decoded in an image frame, 4 rows of pixels and 4 columns of pixels adjacent to the data block to be decoded in decoded data of the image frame are determined, wherein the 4 rows of pixels are the 4 rows nearest to the data block to be decoded, in order from near to far, and the 4 columns of pixels are the 4 columns nearest to the data block to be decoded, in order from near to far.
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor, the at least one processor being capable of performing the method of any one of claims 1-5 when the instructions are executed by the at least one processor.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-5.
CN202010814920.2A 2020-08-13 2020-08-13 Decoding method, decoding device, electronic equipment and storage medium Active CN111988615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010814920.2A CN111988615B (en) 2020-08-13 2020-08-13 Decoding method, decoding device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010814920.2A CN111988615B (en) 2020-08-13 2020-08-13 Decoding method, decoding device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111988615A true CN111988615A (en) 2020-11-24
CN111988615B CN111988615B (en) 2023-08-22

Family

ID=73434269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010814920.2A Active CN111988615B (en) 2020-08-13 2020-08-13 Decoding method, decoding device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111988615B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190082194A1 (en) * 2017-09-08 2019-03-14 Fg Innovation Ip Company Limited Device and method for coding video data based on multiple reference lines
CN109640089A (en) * 2018-11-02 2019-04-16 西安万像电子科技有限公司 Image coding/decoding method and device
WO2020092535A1 (en) * 2018-10-31 2020-05-07 Interdigital Vc Holdings, Inc. Multi-reference line intra prediction and most probable mode

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190082194A1 (en) * 2017-09-08 2019-03-14 Fg Innovation Ip Company Limited Device and method for coding video data based on multiple reference lines
WO2020092535A1 (en) * 2018-10-31 2020-05-07 Interdigital Vc Holdings, Inc. Multi-reference line intra prediction and most probable mode
CN109640089A (en) * 2018-11-02 2019-04-16 西安万像电子科技有限公司 Image coding/decoding method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS et al.: "CE3: Multiple reference line intra prediction (Test 5.4.1, 5.4.2, 5.4.3 and 5.4.4)" *

Also Published As

Publication number Publication date
CN111988615B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
EP3185557A1 (en) Predictive coding/decoding method, corresponding coder/decoder, and electronic device
CN110945871B (en) Efficient intra-coding for screen data
CN110248189B (en) Video quality prediction method, device, medium and electronic equipment
US10742989B2 (en) Variable frame rate encoding method and device based on a still area or a motion area
US12075046B2 (en) Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
US12041241B2 (en) Method and device for image encoding and decoding with a shifted position
US20240283930A1 (en) Systems and methods for video encoding using image segmentation
CN115134629A (en) Video transmission method, system, device and storage medium
US10863189B2 (en) Motion vector reconstruction order swap
CN111918067A (en) Data processing method and device and computer readable storage medium
WO2020219948A1 (en) Selective motion vector prediction candidates in frames with global motion
JP2022509982A (en) Picture encoding and decoding methods and equipment for video sequences
CN111988615B (en) Decoding method, decoding device, electronic equipment and storage medium
CN115442617A (en) Video processing method and device based on video coding
US11375202B2 (en) Translational and affine candidates in a unified list
KR20240027618A (en) Context-based image coding
US20120183234A1 (en) Methods for parallelizing fixed-length bitstream codecs
CN111225212B (en) Method and device for determining chrominance information of video block
CN112738522A (en) Video coding method and device
CN113542737A (en) Encoding mode determining method and device, electronic equipment and storage medium
RU2809558C2 (en) Method and device for image encoding and decoding
CN112218086A (en) Encoding method, decoding method, transmission method, encoding device, decoding device, and system
CN110662060B (en) Video encoding method and apparatus, video decoding method and apparatus, and storage medium
US20240137502A1 (en) Systems and methods for encoding and decoding video with memory-efficient prediction mode selection
CN110572676B (en) Video encoding method and apparatus, video decoding method and apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant