CN112437304A - Video decoding method, encoding method, device, equipment and readable storage medium

Info

Publication number: CN112437304A
Application number: CN201910790870.6A
Authority: CN (China)
Legal status: Granted; Active
Other versions: CN112437304B (granted publication)
Other languages: Chinese (zh)
Inventors: 王英彬, 许晓中, 刘杉
Applicant and assignee: Tencent Technology Shenzhen Co Ltd

Classifications

    • H04N19/176 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
    • H04N19/44 — Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/513 — Methods or arrangements using predictive coding involving temporal prediction; motion estimation or motion compensation; processing of motion vectors
    • H04N19/88 — Methods or arrangements using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks

Abstract

The application discloses a video decoding method, an encoding method, an apparatus, a device and a readable storage medium, and relates to the field of video processing. The method comprises the following steps: acquiring n candidate vectors of a current decoding block; processing the n candidate vectors according to priority, wherein the index value corresponding to each candidate vector is negatively correlated with its position in the arrangement order; decoding the encoded content of the current decoding block to obtain an index value; and determining the prediction vector corresponding to the index value and obtaining a decoding result. In the decoding process, the priority is determined according to the degree of association between each candidate vector and the prediction vector of the current decoding block, and the candidate vectors are processed according to this priority, so that the prediction vector is determined from the sequentially arranged candidate vectors. Because the prediction vector is relatively likely to be located in the preamble of the sequentially arranged candidate vectors, its corresponding index value is small, which improves the decoding efficiency of the video.

Description

Video decoding method, encoding method, device, equipment and readable storage medium
Technical Field
Embodiments of the present disclosure relate to the field of video processing, and in particular, to a video decoding method, an encoding method, an apparatus, a device, and a readable storage medium.
Background
In the process of encoding a video, a reference block corresponding to a current coding block is determined in the already-encoded area, and the current coding block is encoded according to the Motion Vector (MV) between the reference block and the current coding block.
In the related art, a History-based Motion Vector Prediction (HMVP) method is provided, in which n reference blocks are determined in the coded region according to the coding order and the n MVs corresponding to the n reference blocks are obtained. Each MV corresponds to an index value; the later an MV is arranged according to the coding order, the closer it is to the current coding block and the smaller its corresponding index value. A target MV corresponding to the current coding block is determined from the n MVs, and the current coding block is then coded according to the index value of the target MV.
Since the target MV is determined from the n MVs, the index value of the target MV is one of the index values corresponding to the n MVs, and the n MVs are arranged according to the coding order of the reference blocks. When the target MV is arranged at a subsequent position in the coding order, its index value is usually large, which results in low coding efficiency of the video.
Disclosure of Invention
Embodiments of the present application provide a video decoding method, a video encoding method, an apparatus, a device, and a readable storage medium, which can solve the problem that, when the n MVs are arranged according to the coding order of the reference blocks and the target MV falls at a later position in that order, the index value of the target MV is usually large, resulting in low video encoding efficiency. The technical scheme is as follows:
in one aspect, a video decoding method is provided, and the method includes:
acquiring n candidate vectors of a current decoding block, wherein the candidate vectors are inter-coded motion vectors or intra block copy block vectors, and n is a positive integer;
processing the n candidate vectors according to priorities to obtain the candidate vectors in sequential arrangement, wherein the priorities are used for indicating the degree of association between the candidate vectors and the prediction vector of the current decoding block, and index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors;
decoding the coded content of the current decoding block to obtain an index value corresponding to the current decoding block;
determining the prediction vector corresponding to the index value of the current decoding block from the candidate vectors which are arranged in sequence;
and combining the prediction vector to obtain a decoding result of the current decoding block.
In another aspect, a video encoding method is provided, the method including:
acquiring n candidate vectors of a current coding block, wherein the candidate vectors are inter-coded motion vectors or intra block copy block vectors, and n is a positive integer;
processing the n candidate vectors according to priorities to obtain the candidate vectors in sequential arrangement, wherein the priorities are used for indicating the degree of association between the candidate vectors and the prediction vector of the current coding block, and index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors;
determining the prediction vector of the current coding block from the candidate vectors arranged in sequence;
and coding the current coding block by combining the index value corresponding to the prediction vector.
In another aspect, a video decoding apparatus is provided, the apparatus including:
an obtaining module, configured to obtain n candidate vectors of a current decoding block, where the candidate vectors are inter-coded motion vectors or intra block copy block vectors, and n is a positive integer;
the processing module is used for processing the n candidate vectors according to priorities to obtain the candidate vectors which are arranged in sequence, the priorities are used for indicating the degree of association between the candidate vectors and the prediction vector of the current decoding block, and index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors;
the decoding module is used for decoding the coding content of the current decoding block to obtain an index value corresponding to the current decoding block;
a determining module, configured to determine the prediction vector corresponding to the index value of the current decoded block from the candidate vectors arranged in sequence;
the determining module is further configured to obtain a decoding result of the current decoded block by combining the prediction vector.
In another aspect, a video encoding apparatus is provided, the apparatus including:
an obtaining module, configured to obtain n candidate vectors of a current coding block, where the candidate vectors are inter-coded motion vectors or intra block copy block vectors, and n is a positive integer;
the processing module is used for processing the n candidate vectors according to priorities to obtain the candidate vectors which are arranged in sequence, the priorities are used for indicating the association degree between the candidate vectors and the prediction vector of the current coding block, and index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors;
a determining module, configured to determine the prediction vector of the current coding block from the candidate vectors arranged in sequence;
and the coding module is used for coding the current coding block by combining the index value corresponding to the prediction vector.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a video encoding method or a decoding method as provided in embodiments of the present application.
In another aspect, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, code set, or set of instructions, which is loaded and executed by a processor to implement a video encoding method or decoding method as provided in the embodiments of the present application.
In another aspect, a computer program product is provided, which when run on a computer, causes the computer to perform a video encoding method or a decoding method as provided in the embodiments of the present application described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in the decoding process, the priority is determined according to the degree of association between the candidate vectors and the prediction vector of the current decoding block, and the candidate vectors are processed according to the priority, so that the prediction vector is determined from the candidate vectors that are sequentially arranged after the priority processing; because the prediction vector is then relatively likely to be located in the preamble of the sequentially arranged candidate vectors, its corresponding index value is small, which improves the decoding efficiency of the video.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a diagram illustrating encoding of a current block provided by an exemplary embodiment of the present application;
fig. 2 is a block diagram of a communication system 200 provided in an exemplary embodiment of the present application;
FIG. 3 illustrates placement of a video encoder and a video decoder in a streaming environment, as provided by an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a video encoding method provided by an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a video decoding method provided by an exemplary embodiment of the present application;
fig. 6 is a flowchart of a video encoding and decoding method according to another exemplary embodiment of the present application;
fig. 7 is a flowchart of a video encoding and decoding method according to another exemplary embodiment of the present application;
fig. 8 is a flowchart of a video encoding and decoding method according to another exemplary embodiment of the present application;
fig. 9 is a block diagram of a video encoding apparatus according to an exemplary embodiment of the present application;
fig. 10 is a block diagram of a video decoding apparatus according to an exemplary embodiment of the present application;
fig. 11 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Various Motion Vector (MV) prediction mechanisms are described in H.265/HEVC (ITU-T Recommendation H.265, "High Efficiency Video Coding", December 2016). Among the various MV prediction mechanisms provided by H.265, described herein is a technique referred to hereinafter as "spatial merging".
Referring to fig. 1, a current block (101) comprises samples that the encoder has found, during a motion search, to be predictable from a previous block of the same size that has been spatially shifted. In addition, instead of encoding the MV directly, the MV may be derived from metadata associated with one or more reference pictures, for example from the most recent (in decoding order) reference picture, using the MV associated with any of the five surrounding samples A0, A1, and B0, B1, B2 (102 to 106, respectively). In H.265, MV prediction can use prediction values from the same reference picture that the neighboring block is also using.
Fig. 2 is a simplified block diagram of a communication system (200) according to an embodiment disclosed herein. The communication system (200) includes a plurality of terminal devices that can communicate with each other through, for example, a network (250). For example, a communication system (200) includes a first terminal device (210) and a second terminal device (220) interconnected by a network (250). In the embodiment of fig. 2, the first terminal device (210) and the second terminal device (220) perform unidirectional data transmission. For example, a first end device (210) may encode video data, such as a stream of video pictures captured by the end device (210), for transmission over a network (250) to a second end device (220). The encoded video data is transmitted in the form of one or more encoded video streams. The second terminal device (220) may receive the encoded video data from the network (250), decode the encoded video data to recover the video data, and display a video picture according to the recovered video data. Unidirectional data transmission is common in applications such as media services.
In another embodiment, a communication system (200) includes a third terminal device (230) and a fourth terminal device (240) that perform bidirectional transmission of encoded video data, which may occur, for example, during a video conference. For bi-directional data transmission, each of the third terminal device (230) and the fourth terminal device (240) may encode video data (e.g., a stream of video pictures captured by the terminal device) for transmission over the network (250) to the other of the third terminal device (230) and the fourth terminal device (240). Each of the third terminal device (230) and the fourth terminal device (240) may also receive encoded video data transmitted by the other of the third terminal device (230) and the fourth terminal device (240), and may decode the encoded video data to recover the video data, and may display video pictures on an accessible display device according to the recovered video data.
In the embodiment of fig. 2, the first terminal device (210), the second terminal device (220), the third terminal device (230), and the fourth terminal device (240) may be a server, a personal computer, and a smart phone, but the principles disclosed herein may not be limited thereto. Embodiments disclosed herein are applicable to laptop computers, tablet computers, media players, and/or dedicated video conferencing equipment. Network (250) represents any number of networks that communicate encoded video data between first terminal device (210), second terminal device (220), third terminal device (230), and fourth terminal device (240), including, for example, wired (wired) and/or wireless communication networks. The communication network (250) may exchange data in circuit-switched and/or packet-switched channels. The network may include a telecommunications network, a local area network, a wide area network, and/or the internet. For purposes of this application, the architecture and topology of the network (250) may be immaterial to the operation disclosed herein, unless explained below.
By way of example, fig. 3 illustrates the placement of a video encoder and a video decoder in a streaming environment. The subject matter disclosed herein is equally applicable to other video-enabled applications including, for example, video conferencing, digital TV, storing compressed video on digital media including CDs, DVDs, memory sticks, and the like.
The streaming system may include an acquisition subsystem (313), which may include a video source (301), such as a digital camera, that creates an uncompressed video picture stream (302). In an embodiment, the video picture stream (302) includes samples taken by a digital camera. The video picture stream (302) is depicted as a thick line to emphasize a high data amount video picture stream compared to the encoded video data (304) (or encoded video code stream), the video picture stream (302) being processable by an electronic device (320), the electronic device (320) comprising a video encoder (303) coupled to a video source (301). The video encoder (303) may comprise hardware, software, or a combination of hardware and software to implement or embody aspects of the disclosed subject matter as described in more detail below. The encoded video data (304) (or encoded video codestream (304)) is depicted as a thin line to emphasize the lower data amount of the encoded video data (304) (or encoded video codestream (304)) as compared to the video picture stream (302), which may be stored on a streaming server (305) for future use. One or more streaming client subsystems, such as client subsystem (306) and client subsystem (308) in fig. 3, may access streaming server (305) to retrieve copies (307) and copies (309) of encoded video data (304). The client subsystem (306) may include, for example, a video decoder (310) in an electronic device (330). The video decoder (310) decodes incoming copies (307) of the encoded video data and generates an output video picture stream (311) that may be presented on a display (312), such as a display screen, or another presentation device (not depicted). In some streaming systems, encoded video data (304), video data (307), and video data (309) (e.g., video streams) may be encoded according to certain video encoding/compression standards. Examples of such standards include ITU-T H.265. In an embodiment, the Video Coding standard under development is informally referred to as next generation Video Coding (VVC), and the present application may be used in the context of the VVC standard.
It should be noted that electronic device (320) and electronic device (330) may include other components (not shown). For example, the electronic device (320) may include a video decoder (not shown), and the electronic device (330) may also include a video encoder (not shown).
In the process of encoding a video, a History-based Motion Vector Prediction (HMVP) method is provided. In the process of encoding a current coding block, the MVs of the n reference blocks encoded before the current coding block are obtained in sequence as n prediction MVs, and a list containing the n prediction MVs is generated according to the coding order in which the n reference blocks were obtained. Each prediction MV in the list corresponds to one index value, and the arrangement position is positively correlated with the size of the index value; that is, a prediction MV arranged later in the list corresponds to a larger index value. Illustratively, the list contains MV1, MV2 and MV3 arranged in sequence, where the index value of MV1 is 1, the index value of MV2 is 01, and the index value of MV3 is 001.
In the process of encoding the current coding block, a target MV or a Motion Vector Prediction (MVP) of the current coding block is first determined from the HMVP list. When the target MV is determined from the HMVP list, the index value of the target MV is determined and encoded (when the first MV in the HMVP list is used as the target MV by default, the index value need not be encoded); when the MVP is determined from the HMVP list, a Motion Vector Difference (MVD) between the MV and the MVP is determined, and the index value of the MVP (when the first entry of the HMVP list is used as the MVP by default, the index value of the MVP need not be encoded) and the MVD are encoded.
However, since the prediction MVs in the list are arranged according to the coding order of the reference blocks, the MV corresponding to the current coding block may be located near the front of the list or near the rear of the list; when it is located near the rear, its index value is larger, which results in lower coding efficiency of the video.
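To make the index cost concrete, the following Python sketch (purely illustrative; it is not the codec's actual entropy coding) mirrors the example above, in which MV1, MV2 and MV3 receive the codewords 1, 01 and 001: the later a prediction MV sits in the HMVP list, the more bits its index value costs.

```python
def index_codeword(position: int) -> str:
    """Unary-like codeword for the entry at a given 0-based list position."""
    return "0" * position + "1"

hmvp_list = [(3, -1), (0, 2), (7, 4)]  # hypothetical prediction MVs as (x, y) pairs
for pos, mv in enumerate(hmvp_list):
    print(f"MV{pos + 1} = {mv}, index codeword = {index_codeword(pos)}")
# MV1 costs 1 bit to signal while MV3 costs 3 bits: a target MV near the rear
# of the list is more expensive, which is the problem addressed by this application.
```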
In the embodiment of the present application, a video coding method is provided, which may be applied to an inter-frame coding process, an intra-frame block copying process, and other block-based motion compensation techniques. For the "candidate vector" and "predicted vector" mentioned hereinafter, for the inter-coding process, the candidate vector refers to a candidate motion vector, the predicted vector refers to a predicted motion vector, the predicted motion vector is one of a plurality of candidate motion vectors; for the intra block copy process, the candidate vector refers to a candidate block vector, and the prediction vector refers to a prediction block vector. That is, the vector hereinafter refers to a motion vector or a block vector, which is understood as a motion vector in the inter-frame encoding process; in an intra block copy process, it is understood as a block vector.
Fig. 4 is a flowchart of a video encoding method according to an exemplary embodiment of the present application. The method is described here as being applied to an encoding end. As shown in fig. 4, the method includes:
step 401, acquiring n candidate vectors of a current coding block, where n is a positive integer.
Optionally, the n candidate vectors include a vector of a coded block before the current coding block, or a vector of a sample point adjacent to the current coding block in space, or include both a vector of a coded block before the current coding block and a vector of a sample point adjacent to the current coding block in space, which is described separately for the above three cases:
firstly, acquiring vectors of n coded coding blocks before a current coding block as candidate vectors according to a coding sequence;
optionally, taking an inter-frame encoding process as an example, the n encoded encoding blocks are encoding blocks in an encoded image frame before an image frame where the current encoding block is located, optionally, the encoded encoding blocks are encoding blocks corresponding to the current encoding block in a time domain, schematically, the current image frame is a 5 th frame in a group of image frames, the current encoding block is an encoding block located in a target region in the current image frame, and taking n as an example to be 3, the 1 st frame of image frame is directly encoded, and a vector of the encoded encoding block located in the target region in the 2 nd frame, a vector of the encoded encoding block located in the target region in the 3 rd frame, and a vector of the encoded encoding block located in the target region in the 4 th frame are obtained as candidate vectors.
Optionally, when the vectors of the n encoded coding blocks are acquired, each time a vector of a new encoded coding block is acquired, the redundancy check may be performed with the vector of the existing encoded coding block, or may not be performed.
Secondly, acquiring vectors of n adjacent sample points of the current coding block as candidate vectors, wherein the adjacent sample points comprise at least one of adjacent pixel points and adjacent coding blocks;
optionally, the n adjacent samples are samples that the current coding block is adjacent to in space, and optionally, the adjacent samples are pixel points or coding blocks that are in the same frame of image frame as the current coding block and located on the periphery of the current coding block. Optionally, the coding prediction mode adopted by the n neighboring samples is consistent with that of the current coding block, for example: the n adjacent sampling points and the current coding block adopt an interframe prediction mode to carry out interframe coding.
Thirdly, acquiring, according to the coding order, the first vectors of the p coded coding blocks before the current coding block and the second vectors of the q samples adjacent to the current coding block, and taking the p first vectors and the q second vectors as candidate vectors, where n is the sum of p and q, and both p and q are positive integers.
Optionally, the number p of first vectors and the number q of second vectors may be preset parameters or default parameters in the encoder; when p and q are preset, they are set in high-level syntax at, for example, the picture, slice, or sequence level. A sketch of this third acquisition mode is given below.
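The following sketch illustrates the third acquisition mode under stated assumptions: coded_block_vectors and neighbour_vectors are hypothetical containers for vectors that the encoder (or decoder) already maintains, and no redundancy check is performed.

```python
def gather_candidates(coded_block_vectors, neighbour_vectors, p, q):
    # p first vectors: vectors of the p most recently coded blocks, in coding order.
    first = list(coded_block_vectors)[-p:]
    # q second vectors: vectors of q spatially neighbouring samples.
    second = list(neighbour_vectors)[:q]
    # n = p + q candidate vectors of the current block.
    return first + second
```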
Step 402, processing the n candidate vectors according to a priority to obtain candidate vectors arranged in sequence, wherein the priority is used for indicating the degree of association between the candidate vectors and the prediction vector of the current coding block, and the index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors.
Optionally, the higher the priority of a candidate vector, the higher the prior likelihood that the candidate vector is the prediction vector of the current coding block.
Optionally, the processing manner for processing according to priority includes at least one of prioritization and priority screening, which are described with respect to the prioritization and priority screening processes respectively:
firstly, sorting according to priority;
alternatively, the higher the priority of the candidate vector, the earlier the arrangement position in the sequentially arranged candidate vectors is, the smaller the index value corresponding to the candidate vector is.
Optionally, in the process of performing priority ranking on the n candidate vectors, the priority ranking may be determined according to a degree of similarity between each candidate vector of the n candidate vectors and other candidate vectors, or may be determined according to sizes of coding blocks corresponding to the n candidate vectors, or may be determined according to count values corresponding to the n candidate vectors.
Secondly, screening according to the priority;
optionally, the n candidate vectors are screened according to the priority condition, m candidate vectors which are sequentially arranged are screened from the n candidate vectors according to the priority condition, and m is greater than 0 and less than or equal to n. The m candidate vectors include candidate vectors meeting the priority condition among the n candidate vectors, and optionally, the priority condition may be that the number of other candidate vectors similar to the ith candidate vector reaches a preset number, that the size of the coding block corresponding to the ith candidate vector reaches a preset size, or that a count value corresponding to the ith candidate vector is counted.
Alternatively, the sequentially arranged candidate vectors are stored in the form of a list.
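A minimal sketch of the two processing modes just described, assuming each candidate already carries a priority score (a higher score standing for a higher degree of association with the prediction vector); the score itself may come from the similarity, size, or count criteria discussed later.

```python
def process_by_priority(candidates, scores, mode="sort", threshold=None):
    if mode == "sort":
        # Priority ordering: a higher score is placed earlier and therefore
        # receives a smaller index value; Python's stable sort keeps the
        # acquisition order for candidates with equal scores.
        order = sorted(range(len(candidates)), key=lambda i: -scores[i])
        return [candidates[i] for i in order]
    # Priority screening: keep only the m candidates meeting the priority condition.
    return [c for c, s in zip(candidates, scores) if s >= threshold]
```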
In step 403, a prediction vector of the current coding block is determined from the sequentially arranged candidate vectors.
Optionally, the prediction vector is a target MV of the current coding block, or the prediction vector is an optimal MVP determined from candidate vectors arranged in sequence.
And step 404, coding the current coding block by combining the index value corresponding to the prediction vector.
Optionally, when the prediction vector is the target MV of the current coding block, the index value corresponding to the target MV is encoded; when the prediction vector is the best MVP among the sequentially arranged candidate vectors, a motion vector difference MVD between the MV and the MVP is determined, where MVD = MV − MVP.
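A minimal sketch of the two signalling cases in step 404 (the data structures are illustrative, not the codec's actual bitstream syntax). Here index is the position of the chosen prediction vector in the ordered candidate list; how that choice is made, for example by rate-distortion search, is outside this sketch.

```python
def encode_vector(mv, ordered_candidates, index, use_as_mvp):
    pred = ordered_candidates[index]
    if not use_as_mvp:
        # The prediction vector is used directly as the target MV:
        # only the index value needs to be signalled.
        return {"index": index}
    # The prediction vector is used as an MVP: signal the index and the
    # motion vector difference MVD = MV - MVP.
    mvd = (mv[0] - pred[0], mv[1] - pred[1])
    return {"index": index, "mvd": mvd}
```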
It should be noted that the above embodiments are described by taking the application to an inter-frame coding process as an example; when the above method is applied to an intra block copy process, the candidate vector may be implemented as a candidate block vector (block displacement), and the prediction vector may be implemented as a prediction block vector.
In summary, in the video encoding method provided in the embodiments of the present application, in the encoding process the priority is determined according to the degree of association between the candidate vectors and the prediction vector of the current coding block, and the candidate vectors are processed according to the priority, so that the prediction vector is determined from the candidate vectors that are sequentially arranged after the priority processing; because the prediction vector is then relatively likely to be located in the preamble of the sequentially arranged candidate vectors, its corresponding index value is small, which improves the encoding efficiency of the video.
In the process of encoding and decoding a video, the encoded region of the video and the region obtained by decoding are kept consistent, and for the same target region the encoding and decoding processes of the video correspond to each other. The target region refers to the region of the image that is currently being encoded or decoded, and may be a rectangular image block or a connected or non-connected image region of any shape.
Optionally, a decoding process corresponding to the encoding process is shown in fig. 5, fig. 5 is a flowchart of a video decoding method provided by an exemplary embodiment of the present application, and the method is applied to a decoding end, as shown in fig. 5, and the method includes:
step 501, acquiring n candidate vectors of a current decoding block, wherein n is a positive integer.
Optionally, the manner of obtaining the n candidate vectors of the current decoding block corresponds to the manner of obtaining the n candidate vectors of the current coding block; please refer to step 401 above. It specifically includes at least the following three manners:
firstly, acquiring vectors of n decoded decoding blocks before a current decoding block as candidate vectors according to a decoding sequence;
secondly, acquiring vectors of n adjacent sampling points of the current decoding block as candidate vectors, wherein the adjacent sampling points comprise at least one of adjacent pixel points and adjacent decoding blocks;
thirdly, acquiring first vectors of p decoded blocks before the current decoding block and second vectors of q adjacent samples of the current decoding block according to the decoding sequence, and taking the p first vectors and the q second vectors as candidate vectors, wherein n is the sum of p and q, and both p and q are positive integers.
Optionally, the values of p and q are values preset by the encoding end and sent to the decoding end, or values preset at both the encoding end and the decoding end, or values randomly generated at the encoding end and sent to the decoding end.
Optionally, the manner of obtaining the n candidate vectors is consistent with the manner of obtaining the n candidate vectors in the encoding process.
Step 502, processing the n candidate vectors according to a priority to obtain candidate vectors in sequential arrangement, where the priority is used to indicate the degree of association between the candidate vectors and the prediction vector of the current decoding block, and the index values corresponding to the candidate vectors are in negative correlation with the arrangement order of the candidate vectors.
Optionally, the higher the priority of a candidate vector, the higher the prior likelihood that the candidate vector is the prediction vector of the current decoding block.
Optionally, the earlier the candidate vector is arranged in the candidate vector arranged in the order, the smaller the index value corresponding to the candidate vector is.
Optionally, the processing manner for processing according to priority includes at least one of prioritization and priority screening, which are described with respect to the prioritization and priority screening processes respectively:
first, sorting according to priority
Alternatively, the higher the priority of the candidate vector, the earlier the arrangement position in the sequentially arranged candidate vectors is, the smaller the index value corresponding to the candidate vector is.
Optionally, in the process of performing priority ranking on the n candidate vectors, the priority ranking may be determined according to a degree of similarity between each candidate vector of the n candidate vectors and other candidate vectors, or may be determined according to sizes of coding blocks corresponding to the n candidate vectors, or may be determined according to count values corresponding to the n candidate vectors.
Secondly, screening according to the priority
Optionally, the n candidate vectors are screened according to a priority condition, and m sequentially arranged candidate vectors are obtained according to the priority condition, where m is greater than 0 and less than or equal to n. The m candidate vectors include the candidate vectors, among the n candidate vectors, that meet the priority condition. Optionally, the priority condition may be that the number of other candidate vectors similar to the ith candidate vector reaches a preset number, that the size of the coding block corresponding to the ith candidate vector reaches a preset size, or that the count value corresponding to the ith candidate vector reaches a preset count value.
Optionally, the order arrangement manner of the candidate vectors after the priority screening may be arranged according to the acquisition order, or may be arranged according to other arrangement manners, which is not limited in this embodiment of the application.
Optionally, the ordering result of the candidate vector is consistent with the ordering result of the candidate vector in the encoding process.
Step 503, decoding the encoded content of the current decoding block to obtain an index value corresponding to the current decoding block.
Step 504, determining the prediction vector corresponding to the index value of the current decoding block from the candidate vectors arranged in sequence.
Optionally, the prediction vector is the target MV of the current decoded block, or the prediction vector is the best MVP determined from the candidate vectors in sequence.
And 505, obtaining a decoding result of the current decoding block according to the prediction vector.
Optionally, when the prediction vector is the target MV of the current decoded block, the decoding result of the current decoded block is obtained directly according to the target MV; when the prediction vector is the best MVP among the sequentially arranged candidate vectors, the encoded content is further decoded to obtain a motion vector difference MVD, the vector MV of the current decoded block is obtained from the MVP and the MVD, where MV = MVP + MVD, and the decoding result is obtained.
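A minimal sketch of the decoder side of steps 503 to 505: the decoded index value selects the prediction vector from the candidate list ordered by priority, and, when an MVD was also signalled, the block's vector is reconstructed as MV = MVP + MVD. The function mirrors the hypothetical encoder sketch given earlier.

```python
def decode_vector(ordered_candidates, index, mvd=None):
    mvp = ordered_candidates[index]
    if mvd is None:
        # The prediction vector is the target MV of the current decoded block.
        return mvp
    # MVP case: reconstruct MV = MVP + MVD.
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```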
In summary, in the video decoding method provided in the embodiments of the present application, in the decoding process the priority is determined according to the degree of association between the candidate vectors and the prediction vector of the current decoding block, and the candidate vectors are processed according to the priority, so that the prediction vector is determined from the candidate vectors that are sequentially arranged after the priority processing; because the prediction vector is then relatively likely to be located in the preamble of the sequentially arranged candidate vectors, its corresponding index value is small, which improves the decoding efficiency of the video.
In an optional embodiment, when the priority processing is performed on the n candidate vectors, the processing may be performed according to the similarity relationship between each of the n candidate vectors and the other candidate vectors. Steps 401 to 402 and steps 501 to 502 may alternatively be implemented as the following steps 601 to 605, which are described by taking the encoding process as an example, as shown in fig. 6:
step 601, obtaining vectors of n encoded encoding blocks before the current encoding block as candidate vectors according to the encoding sequence, wherein n is a positive integer.
Optionally, the vectors of the n encoded blocks are n vectors obtained by history-based vector prediction.
Step 602, obtaining vectors of n adjacent samples of the current coding block as candidate vectors, where the adjacent samples include at least one of adjacent pixel points and adjacent coding blocks.
Optionally, the n adjacent samples of the current coding block are samples located in a preset range around the current coding block, such as: the n adjacent sampling points comprise a coding block positioned above the current coding block, a first pixel point positioned in the coding block on the left side of the current coding block and the like, and optionally, the position relationship between the n adjacent sampling points and the current coding block is preset.
Step 603, obtaining first vectors of p coded coding blocks before the current coding block and second vectors of q adjacent samples of the current coding block according to the coding sequence, and taking the p first vectors and the q second vectors as candidate vectors, wherein n is the sum of p and q, and both p and q are positive integers.
Step 604, for the ith vector among the n candidate vectors, performing priority ranking on the n candidate vectors according to the similarity relationship between the ith vector and the other candidate vectors, where a first number is positively correlated with the priority of the ith vector, the first number being the number of other candidate vectors similar to the ith vector, the priority is negatively correlated with the index value corresponding to the candidate vector, and 0 < i ≤ n.
Optionally, the higher the priority of a candidate vector, the higher the prior likelihood that the candidate vector is the prediction vector of the current coding block; optionally, the greater the number of other candidate vectors similar to the ith vector, the higher the priority of the ith vector.
Optionally, when at least two of the n candidate vectors are similar to the same number of other candidate vectors, the priorities of the at least two candidate vectors are determined according to their acquisition order; optionally, the acquisition order is positively correlated with the priority, that is, a candidate vector acquired earlier has a higher priority.
Alternatively, the higher the priority of the candidate vector, the earlier the arrangement position in the sequentially arranged candidate vectors is, the smaller the index value corresponding to the candidate vector is.
Optionally, if the n candidate vectors include p first vectors and q second vectors, when determining the similarity relationship between the ith vector and the other candidate vectors, at least two cases are included:
firstly, when the ith vector is a vector in p first vectors, determining a first similarity relation between the ith vector and q second vectors; when the ith vector is a vector in q second vectors, determining a second similarity relation between the ith vector and p first vectors, and performing priority ordering on n candidate vectors according to the first similarity relation and/or the second similarity relation;
optionally, when the ith vector is a vector of the p first vectors, determining the number of candidate vectors similar to the ith vector in the q second vectors as the first similarity relation, and when the ith vector is a vector of the q second vectors, determining the number of candidate vectors similar to the ith vector in the p first vectors as the second similarity relation, and performing the above priority ranking on each candidate vector according to the number.
Secondly, when the ith vector is a vector in the p first vectors, determining a third similarity relation between the ith vector and the p first vectors; when the ith vector is a vector in the q second vectors, determining a fourth similarity relation between the ith vector and the q second vectors, and performing priority ordering on the n candidate vectors according to the third similarity relation and/or the fourth similarity relation.
Optionally, when the ith vector is a vector of the p first vectors, determining the number of other candidate vectors similar to the ith vector in the p first vectors as the third similarity relation, and when the ith vector is a vector of the q second vectors, determining the number of other candidate vectors similar to the ith vector in the q second vectors as the fourth similarity relation, and performing the above priority ranking on each candidate vector according to the number.
Alternatively, when the number of other candidate vectors similar to the at least two candidate vectors coincides, the priority between the at least two candidate vectors is determined according to the coding order or other orders.
Optionally, each vector corresponds to a horizontal-axis relative coordinate and a vertical-axis relative coordinate, where the horizontal-axis relative coordinate represents the relative displacement in the horizontal direction and the vertical-axis relative coordinate represents the relative displacement in the vertical direction. Illustratively, the ith vector is the vector of the ith coding block (xi, yi) relative to its reference block (x1, y1), so that the first horizontal-axis relative coordinate of the ith vector is xi − x1 and the first vertical-axis relative coordinate of the ith vector is yi − y1.
Optionally, when determining whether the ith vector is similar to the other candidate vectors, at least one of the following ways is included:
firstly, when a first absolute value of a difference between a first horizontal axis relative coordinate of an ith vector and a second horizontal axis relative coordinate of other candidate vectors is smaller than a similarity threshold, determining that the ith vector is similar to the other candidate vectors;
secondly, when a second absolute value of a difference between a first vertical axis relative coordinate of the ith vector and second vertical axis relative coordinates of other candidate vectors is smaller than a similarity threshold, determining that the ith vector is similar to the other candidate vectors;
thirdly, when the first absolute value and the second absolute value are both smaller than the similarity threshold, determining that the ith vector is similar to other candidate vectors;
fourthly, when the sum of the first absolute value and the second absolute value is smaller than the similarity threshold, determining that the ith vector is similar to other candidate vectors;
fifthly, when the sum of the square of the first absolute value and the square of the second absolute value is less than the similarity threshold, the ith vector is determined to be similar to the other candidate vectors.
Illustratively, taking the ith vector as MV1 and another candidate vector as MV2, let the first horizontal-axis relative coordinate of the ith vector be MV1x, the first vertical-axis relative coordinate of the ith vector be MV1y, the second horizontal-axis relative coordinate of MV2 be MV2x, the second vertical-axis relative coordinate of MV2 be MV2y, and let th be the similarity threshold. Then MV1 and MV2 are similar when F(MV1x, MV1y, MV2x, MV2y) ≤ th, where the condition F(MV1x, MV1y, MV2x, MV2y) ≤ th includes at least one of the following cases (see the sketch after this list):
1、Max(|MV1x-MV2x|,|MV1y-MV2y|)≤th
2、Min(|MV1x-MV2x|,|MV1y-MV2y|)≤th
3、|MV1x-MV2x|+|MV1y-MV2y|≤th
4、|MV1x-MV2x|*|MV1x-MV2x|+|MV1y-MV2y|*|MV1y-MV2y|≤th
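The sketch referenced above implements the four concrete forms of F listed in cases 1 to 4 (a minimal illustration, not tied to any particular codec); which form is used is a configuration choice.

```python
def is_similar(mv1, mv2, th, mode=3):
    dx = abs(mv1[0] - mv2[0])   # |MV1x - MV2x|
    dy = abs(mv1[1] - mv2[1])   # |MV1y - MV2y|
    if mode == 1:
        return max(dx, dy) <= th            # case 1: Max(...) <= th
    if mode == 2:
        return min(dx, dy) <= th            # case 2: Min(...) <= th
    if mode == 3:
        return dx + dy <= th                # case 3: |...| + |...| <= th
    return dx * dx + dy * dy <= th          # case 4: sum of squares <= th
```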
and 605, screening m candidate vectors which are sequentially arranged from the n candidate vectors according to the similarity condition, wherein m is more than 0 and less than or equal to n.
Optionally, the similarity condition is used to indicate that the number of other candidate vectors similar to the ith vector reaches a preset number, that is, when the number of other candidate vectors similar to the ith vector reaches the preset number, the ith vector is determined to be one of the m candidate vectors meeting the priority condition.
Optionally, the n candidate vectors are divided into a preceding candidate vector and a succeeding candidate vector according to a preset number, that is, when the number of other candidate vectors similar to the ith vector reaches the preset number, the ith vector is determined to be a candidate vector in the preceding candidate vector, and when the number of other candidate vectors similar to the ith vector does not reach the preset number, the ith vector is determined to be a candidate vector in the succeeding candidate vector. Optionally, when at least two candidate vectors both belong to a preceding candidate vector or a succeeding candidate vector, the priority between the at least two candidate vectors is determined according to a coding order or other orders.
Optionally, the preset number may further include at least two preset numbers, and the n candidate vectors are divided into at least two levels of candidate vectors according to the at least two preset numbers, each interval between two preset numbers constituting one candidate vector level. Illustratively, the preset numbers include a preset number 1, a preset number 2, a preset number 3 and a preset number 4, arranged from large to small, that is, the preset number 1 is the largest and the preset number 4 is the smallest. A count value greater than the preset number 1 corresponds to candidate vector level 1; a count value within the interval between the preset number 1 and the preset number 2 corresponds to candidate vector level 2; a count value within the interval between the preset number 2 and the preset number 3 corresponds to candidate vector level 3; a count value within the interval between the preset number 3 and the preset number 4 corresponds to candidate vector level 4; and a count value smaller than the preset number 4 corresponds to candidate vector level 5. Accordingly, when the number of other candidate vectors similar to the ith vector reaches the preset number 1, the ith vector is determined to be a candidate vector in candidate vector level 1; when that number does not reach the preset number 1 but reaches the preset number 2, the ith vector is determined to be a candidate vector in candidate vector level 2; when it does not reach the preset number 2 but reaches the preset number 3, the ith vector is determined to be a candidate vector in candidate vector level 3; when it does not reach the preset number 3 but reaches the preset number 4, the ith vector is determined to be a candidate vector in candidate vector level 4; and when it does not reach the preset number 4, the ith vector is determined to be a candidate vector in candidate vector level 5, or the ith vector is not retained. Optionally, when at least two candidate vectors belong to the same candidate vector level, the priority between the at least two candidate vectors is determined according to the coding order or another order.
Optionally, the preset number may further include a first preset number and a second preset number, where the n candidate vectors are divided into a preceding candidate vector and a subsequent candidate vector according to the first preset number, and are retained and deleted according to the second preset number, that is, when the number of other candidate vectors similar to the ith vector reaches the first preset number, the ith vector is determined to be a candidate vector in the preceding candidate vectors; when the quantity of other candidate vectors similar to the ith vector does not reach a first preset quantity and reaches a second preset quantity, determining the ith vector as a candidate vector in subsequent candidate vectors; and when the number of other candidate vectors similar to the ith vector does not reach a second preset number, not reserving the ith vector.
Optionally, the preset number may be preset, or may be implemented as a threshold number obtained by adaptive calculation; for example, the number of other candidate vectors similar to each vector is calculated, and a median or average of the calculated numbers is determined as the adaptively calculated threshold number.
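A minimal sketch combining steps 604 and 605 under the similarity criterion (it reuses the is_similar sketch above and is not tied to any particular codec): each candidate's count of similar candidates determines its rank, and the threshold number is either preset or taken adaptively as the median of the counts.

```python
from statistics import median

def rank_and_screen(candidates, th, preset_number=None):
    # Count, for each candidate, how many other candidates are similar to it.
    counts = [sum(is_similar(v, w, th) for j, w in enumerate(candidates) if j != i)
              for i, v in enumerate(candidates)]
    threshold = preset_number if preset_number is not None else median(counts)
    # Higher count -> higher priority; the stable sort keeps acquisition order on ties.
    order = sorted(range(len(candidates)), key=lambda i: -counts[i])
    preceding = [candidates[i] for i in order if counts[i] >= threshold]
    subsequent = [candidates[i] for i in order if counts[i] < threshold]
    return preceding + subsequent
```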
Optionally, in the encoding process, the preset number or the threshold number may be encoded in a Largest Coding Unit (LCU), a slice, or a sequence header.
It should be noted that the above embodiments are described by taking the application to an inter-frame coding process as an example; when the above method is applied to an intra block copy process, the candidate vector may be implemented as a candidate block vector (block displacement), and the prediction vector may be implemented as a prediction block vector.
In summary, in the video encoding method provided in the embodiments of the present application, in the encoding process the priority is determined according to the degree of association between the candidate vectors and the prediction vector of the current coding block, and the candidate vectors are processed according to the priority, so that the prediction vector is determined from the candidate vectors that are sequentially arranged after the priority processing; because the prediction vector is then relatively likely to be located in the preamble of the sequentially arranged candidate vectors, its corresponding index value is small, which improves the encoding efficiency of the video.
In the method provided by this embodiment, in the encoding process, the priority is determined according to the degree of association between the candidate vector and the prediction vector of the current encoding block, and the candidate vectors are sorted according to the priority, so as to determine the prediction vector from the candidate vectors arranged in sequence.
In the method provided by this embodiment, in the encoding process, the priority is determined according to the degree of association between the candidate vector and the prediction vector of the current encoding block, and the candidate vectors are screened according to the priority, so that the candidate vector with higher probability of being the prediction vector is reserved, and the candidate vector with lower probability is deleted, so that the possibility that the prediction vector is located in the preamble of the candidate vectors arranged in the sequence is relatively large, and the index value corresponding to the prediction vector is also small, thereby improving the encoding efficiency of the video.
In the method provided by this embodiment, the candidate vectors are prioritized according to the similarity relationship between the ith vector and other candidate vectors, so that vectors similar to more candidate vectors are arranged in the preamble position, and the degree of association with the predicted vector is generally higher if the vectors are similar to more candidate vectors, thereby increasing the probability that the predicted vector corresponds to a small index value, and improving the encoding efficiency.
In the process of video encoding and decoding, the encoded region of the video and the region obtained by decoding are kept consistent, and for the same target region the encoding and decoding processes of the video correspond to each other. That is, in the decoding process of the current decoding block, a candidate vector acquisition process and a priority processing process consistent with those of the encoding process are performed, so that the prediction vector corresponding to the current decoding block is determined and the current decoding block is decoded.
In an alternative embodiment, when the n candidate vectors are vectors of n encoded blocks (or decoded blocks), the priority processing of the n candidate vectors may also be performed according to size information of the encoded blocks (or decoded blocks). Steps 401 to 402 and steps 501 to 502 may alternatively be implemented as the following steps 701 to 704, which are described by taking the encoding process as an example, as shown in fig. 7:
Step 701, acquiring, according to the coding order, the vectors of the n encoded coding blocks before the current coding block as candidate vectors.
Step 702, obtaining size information of n encoded code blocks.
Optionally, the size information includes at least one of width information and height information, where the width information is determined according to the number of pixels of the coding block in the width, and the height information is determined according to the number of pixels of the coding block in the height.
Step 703, performing priority ranking on the n candidate vectors according to the size information, wherein the size of the coding block corresponding to a candidate vector is positively correlated with its priority.
Optionally, the priority may be determined according to width information of the n candidate vectors, may also be determined according to height information of the n candidate vectors, and may also be determined according to the width and height of the n candidate vectors, which specifically includes any one of the following manners:
firstly, the size information comprises width information, and the width of the candidate vector is positively correlated with the priority;
optionally, the larger the width of the encoded block is, the higher the priority of the candidate vector corresponding to the encoded block is.
Secondly, the size information comprises height information, and the height of the candidate vector is positively correlated with the priority;
optionally, the larger the height of the encoded coding block is, the higher the priority of the candidate vector corresponding to the encoded coding block is;
thirdly, the size information comprises width information and height information, and the product of the width and the height is positively correlated with the priority of the candidate vector.
Optionally, when the sizes of the at least two candidate vectors are consistent, the priority between the at least two candidate vectors is determined according to a coding order or other orders.
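For illustration only, and not as part of the claimed method, the following Python sketch shows one possible realization of the size-based priority ranking of step 703, taking the width-height product as the size measure and breaking ties by coding order; the CandidateVector structure and its field names are assumptions introduced for this sketch.

```python
from dataclasses import dataclass

@dataclass
class CandidateVector:
    bv: tuple          # (horizontal, vertical) components of the candidate vector
    width: int         # width, in pixels, of the coded block the vector comes from
    height: int        # height, in pixels, of that coded block
    coding_index: int  # position in coding order (larger = coded more recently)

def prioritize_by_size(candidates):
    """Step 703 sketch: a larger block size gives a higher priority.

    Size is taken here as width * height; width-only or height-only
    orderings (the first and second manners above) are obtained by
    changing the first sort key. Ties are broken by coding order,
    with the more recently coded block placed first.
    """
    return sorted(
        candidates,
        key=lambda c: (-(c.width * c.height), -c.coding_index),
    )
```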
Step 704, screening the n candidate vectors according to a size condition to obtain m sequentially arranged candidate vectors, wherein m is greater than 0 and less than or equal to n.
Optionally, the size condition is used to indicate that the size of the coding block corresponding to the candidate vector reaches the required size, that is, when the size of the coding block corresponding to the ith vector reaches the required size, the ith vector is determined to be one of the m candidate vectors meeting the priority condition.
Optionally, the size includes at least one of a width and a height of the coding block.
Optionally, the n candidate vectors are divided into a preceding candidate vector and a succeeding candidate vector according to the required size, that is, when the size corresponding to the ith vector reaches the required size, the ith vector is determined to be a candidate vector in the preceding candidate vector, and when the size corresponding to the ith vector does not reach the required size, the ith vector is determined to be a candidate vector in the succeeding candidate vector. Optionally, when at least two candidate vectors both belong to a preceding candidate vector or a succeeding candidate vector, the priority between the at least two candidate vectors is determined according to a coding order or other orders.
Optionally, there may be at least two required sizes, and the n candidate vectors are divided into candidate vectors of at least two levels according to the at least two required sizes, each interval between two required sizes constituting a candidate vector level. Illustratively, the required sizes include a required size 1, a required size 2, a required size 3, and a required size 4, arranged in descending order, that is, the required size 1 is the largest and the required size 4 is the smallest. A size larger than the required size 1 corresponds to candidate vector level 1; a size in the interval between the required size 1 and the required size 2 corresponds to candidate vector level 2; a size in the interval between the required size 2 and the required size 3 corresponds to candidate vector level 3; a size in the interval between the required size 3 and the required size 4 corresponds to candidate vector level 4; and a size smaller than the required size 4 corresponds to candidate vector level 5. Accordingly, when the size corresponding to the ith vector reaches the required size 1, the ith vector is determined to be a candidate vector in candidate vector level 1; when the size corresponding to the ith vector does not reach the required size 1 but reaches the required size 2, the ith vector is determined to be a candidate vector in candidate vector level 2; when the size corresponding to the ith vector does not reach the required size 2 but reaches the required size 3, the ith vector is determined to be a candidate vector in candidate vector level 3; when the size corresponding to the ith vector does not reach the required size 3 but reaches the required size 4, the ith vector is determined to be a candidate vector in candidate vector level 4; and when the size corresponding to the ith vector does not reach the required size 4, the ith vector is determined to be a candidate vector in candidate vector level 5, or the ith vector is not reserved. Optionally, when at least two candidate vectors belong to the same candidate vector level, the priority between them is determined according to the coding order or another order.
Optionally, the required sizes may further include a first required size and a second required size, the n candidate vectors are divided into a preceding candidate vector and a subsequent candidate vector according to the first required size, and are retained and deleted according to the second required size, that is, when the size corresponding to the ith vector reaches the first required size, the ith vector is determined to be a candidate vector in the preceding candidate vector; when the size corresponding to the ith vector does not reach the first required size and reaches the second required size, determining the ith vector as a candidate vector in subsequent candidate vectors; and when the size corresponding to the ith vector does not reach the second required size, not reserving the ith vector.
Alternatively, the required size may be preset, or may be obtained by adaptive calculation, for example: the size corresponding to each candidate vector is determined, and the average size or the median size of the candidate vectors is taken as the adaptively calculated required size.
Alternatively, the required size may be encoded in the LCU, slice, or sequence header during the encoding process.
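As a purely illustrative sketch of the screening of step 704 with multiple required sizes (the multi-level variant described above), the following Python function assigns each candidate to a level and keeps the levels in order; it reuses the CandidateVector fields assumed earlier, and the handling of candidates below the smallest required size is a labeled assumption.

```python
def screen_by_required_sizes(candidates, required_sizes, keep_below_smallest=True):
    """Step 704 sketch: divide candidates into levels by required sizes.

    required_sizes is assumed to be sorted from largest to smallest,
    e.g. [size1, size2, size3, size4]. A candidate whose block size
    reaches required_sizes[j] (but no larger threshold) falls into
    level j + 1; candidates below the smallest threshold go into the
    last level, or are dropped when keep_below_smallest is False.
    """
    levels = [[] for _ in range(len(required_sizes) + 1)]
    for c in candidates:                      # candidates are already in priority order
        size = c.width * c.height
        for j, threshold in enumerate(required_sizes):
            if size >= threshold:
                levels[j].append(c)
                break
        else:
            if keep_below_smallest:
                levels[-1].append(c)          # e.g. candidate vector level 5 above
    # concatenate the levels: level 1 first, order preserved within each level
    return [c for level in levels for c in level]
```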
In summary, in the video encoding method provided in the embodiment of the present application, in the encoding process, the priority is determined according to the degree of association between the candidate vector and the prediction vector of the current encoding block, and the candidate vector is processed according to the priority, so that the prediction vector is determined from the candidate vectors sequentially arranged after the priority processing.
In the method provided by this embodiment, the candidate vectors are prioritized according to the sizes of the coding blocks corresponding to the candidate vectors, so that candidate vectors with larger sizes are arranged toward the front. The degree of association between a candidate vector with a larger size and the prediction vector is generally higher, which increases the probability that the prediction vector corresponds to a small index value and improves the coding efficiency.
In an alternative embodiment, when the n candidate vectors are vectors of n encoded coding blocks (or decoded blocks), the priority processing of the n candidate vectors may also be determined by count values corresponding to the coding blocks (or decoded blocks). Steps 201 to 202 and steps 301 to 302 may alternatively be implemented as the following steps 801 to 803, described by taking the decoding process as an example, as shown in fig. 8:
Step 801, acquiring, according to the decoding order, the vectors of the n decoded blocks before the current decoding block as candidate vectors.
Optionally, each candidate vector of the n candidate vectors corresponds to a count value.
Optionally, the count value is a positive integer. Optionally, the counting process of the count values includes: the vectors of the decoded blocks before the current decoding block are acquired sequentially according to the decoding order; when the w-th vector is acquired, the similarity between the w-th vector and the other candidate vectors is compared, where w is a positive integer; when the w-th vector and another candidate vector meet the similarity requirement, the count value of that candidate vector is increased by one; and when the w-th vector does not meet the similarity requirement with any of the other candidate vectors, the w-th vector is determined to be one of the n candidate vectors. Optionally, the number of vectors of the acquired decoded blocks may be n, or may be greater than n. Optionally, the similarity requirement may be that the similarity reaches a similarity threshold (for example, the similarity reaches 90%). Optionally, when the w-th vector does not meet the similarity requirement with the other candidate vectors and is determined to be one of the n candidate vectors, the count value of the w-th vector defaults initially to 1.
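For illustration only, the following Python sketch shows one possible form of the counting process just described; the is_similar helper stands for whichever similarity test is in use (for example, a threshold on the coordinate differences), and the list-of-pairs representation is an assumption for this sketch.

```python
def build_counted_candidates(decoded_vectors, n, is_similar):
    """Counting-process sketch: each retained candidate carries a count
    value that starts at 1 and is incremented whenever a later vector
    meets the similarity requirement with it.

    decoded_vectors holds the vectors of already decoded blocks in
    decoding order and may contain more than n vectors.
    """
    candidates = []                            # list of [vector, count] pairs
    for w_vector in decoded_vectors:           # the w-th acquired vector
        similar_found = False
        for entry in candidates:
            if is_similar(w_vector, entry[0]):
                entry[1] += 1                  # similarity requirement met: count + 1
                similar_found = True
        if not similar_found and len(candidates) < n:
            candidates.append([w_vector, 1])   # new distinct candidate, default count 1
    return candidates
```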
Step 802, performing priority ordering on the n candidate vectors according to the count values to obtain the sequentially arranged candidate vectors, wherein the count value of a candidate vector is positively correlated with its priority.
Optionally, the higher the count value corresponding to the candidate vector is, the higher the priority of the candidate vector is.
Optionally, when the count values of at least two candidate vectors are consistent, the priority order of the at least two candidate vectors is determined according to the decoding order of the decoded blocks corresponding to the at least two candidate vectors.
Step 803, screening the n candidate vectors according to a count value condition to obtain m sequentially arranged candidate vectors, wherein m is greater than 0 and less than or equal to n.
Optionally, the count value condition is used to indicate that the count value corresponding to the candidate vector reaches the required value, that is, when the count value corresponding to the ith vector reaches the required value, the ith vector is determined to be one of the m candidate vectors meeting the priority condition.
Optionally, the n candidate vectors are divided into a preceding candidate vector and a succeeding candidate vector according to the requirement value, that is, when the count value corresponding to the i-th vector reaches the requirement value, the i-th vector is determined as the candidate vector in the preceding candidate vector, and when the count value corresponding to the i-th vector does not reach the requirement value, the i-th vector is determined as the candidate vector in the succeeding candidate vector. Alternatively, when at least two candidate vectors both belong to a preceding candidate vector or a succeeding candidate vector, the priority between the at least two candidate vectors is determined according to a decoding order or other orders.
Optionally, there may be at least two requirement values, and the n candidate vectors are divided into candidate vectors of at least two levels according to the at least two requirement values, each interval between two requirement values constituting a candidate vector level. Illustratively, the requirement values include a requirement value 1, a requirement value 2, a requirement value 3, and a requirement value 4, arranged in descending order, that is, the requirement value 1 is the largest and the requirement value 4 is the smallest. A count value greater than the requirement value 1 corresponds to candidate vector level 1; a count value in the interval between the requirement value 1 and the requirement value 2 corresponds to candidate vector level 2; a count value in the interval between the requirement value 2 and the requirement value 3 corresponds to candidate vector level 3; a count value in the interval between the requirement value 3 and the requirement value 4 corresponds to candidate vector level 4; and a count value smaller than the requirement value 4 corresponds to candidate vector level 5. Accordingly, when the count value corresponding to the ith vector reaches the requirement value 1, the ith vector is determined to be a candidate vector in candidate vector level 1; when the count value corresponding to the ith vector does not reach the requirement value 1 but reaches the requirement value 2, the ith vector is determined to be a candidate vector in candidate vector level 2; when the count value corresponding to the ith vector does not reach the requirement value 2 but reaches the requirement value 3, the ith vector is determined to be a candidate vector in candidate vector level 3; when the count value corresponding to the ith vector does not reach the requirement value 3 but reaches the requirement value 4, the ith vector is determined to be a candidate vector in candidate vector level 4; and when the count value corresponding to the ith vector does not reach the requirement value 4, the ith vector is determined to be a candidate vector in candidate vector level 5, or the ith vector is not reserved. Optionally, when at least two candidate vectors belong to the same candidate vector level, the priority between them is determined according to the decoding order or another order.
Optionally, the requirement values may further include a first requirement value and a second requirement value, the n candidate vectors are divided into a preceding candidate vector and a subsequent candidate vector according to the first requirement value, and are retained and deleted according to the second requirement value, that is, when a count value corresponding to an ith vector reaches the first requirement value, the ith vector is determined to be a candidate vector in the preceding candidate vector; when the count value corresponding to the ith vector does not reach a first requirement value and reaches a second requirement value, determining the ith vector as a candidate vector in a subsequent candidate vector; and when the counting numerical value corresponding to the ith vector does not reach the second requirement numerical value, not reserving the ith vector.
Alternatively, the requirement value may be preset, or may be obtained by adaptive calculation, for example: the count value corresponding to each candidate vector is determined, and the average value or the median of the count values is taken as the adaptively calculated requirement value.
Alternatively, the requirement value may be encoded in the LCU, slice, or sequence header during the encoding process.
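The following Python sketch, given only as an illustration, combines steps 802 and 803 for the single-requirement-value case: candidates are ordered by count value with ties broken by decoding order, then divided into preceding and subsequent candidates; the tuple layout and the division rule shown are assumptions for this sketch.

```python
def prioritize_and_screen_by_count(counted_candidates, requirement_value):
    """Steps 802-803 sketch for a single requirement value.

    counted_candidates is a list of (vector, count, decoding_index)
    tuples. A larger count value means a higher priority; ties are
    broken by decoding order, the more recently decoded block first.
    Candidates whose count reaches the requirement value form the
    preceding candidates, the rest form the subsequent candidates.
    """
    ordered = sorted(
        counted_candidates,
        key=lambda c: (-c[1], -c[2]),     # count descending, then decoding order
    )
    preceding = [c for c in ordered if c[1] >= requirement_value]
    subsequent = [c for c in ordered if c[1] < requirement_value]
    # preceding candidates come first; dropping `subsequent` gives the m-candidate variant
    return preceding + subsequent
```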
In summary, according to the video encoding and decoding method provided by the embodiment of the present application, in the encoding and decoding process, the priority is determined according to the degree of association between the candidate vector and the prediction vector of the current encoding and decoding block, and the candidate vector is processed according to the priority, so that the prediction vector is determined from the candidate vectors sequentially arranged after the priority processing.
In the method provided by this embodiment, the candidate vectors are prioritized according to their corresponding count values, so that candidate vectors with larger count values are arranged toward the front. The degree of association between a candidate vector with a larger count value and the prediction vector is generally higher, which increases the probability that the prediction vector corresponds to a small index value and improves the coding efficiency.
In an alternative embodiment based on the above embodiments, for the list of candidate vectors, when a redundant vector having the same relative coordinates of the horizontal axis and the vertical axis is detected, the list of candidate vectors is updated according to one of:
retaining in the list, as the new vector, the group of the redundant vectors with the larger product of width and height;
retaining in the list, as the new vector, the group of the redundant vectors with the larger sum of width and height;
retaining in the list, as the new vector, the group of the redundant vectors with the larger width; and
retaining in the list, as the new vector, the group of the redundant vectors with the larger height.
In an illustrative example, the HBVP list stores the block vector bv, width w, and height h of recently encoded blocks, and a redundancy check is performed whenever a new entry is to be added to the HBVP list. Assume that three coding blocks CB1, CB2, and CB3 are encoded in sequence in IBC mode, and that their corresponding bv, width, and height are denoted bvX, wX, and hX, where X is the block number.
The current HBVP list is (bv3, w3, h3), (bv2, w2, h2), (bv1, w1, h1);
when a new instance (bv4, w4, h4) exists, if the abscissa and ordinate of bv4 and bv1 are the same, the new instance is regarded as a redundant vector.
The HBVP list is updated to (bv4, wY, hY), (bv3, w3, h3), (bv2, w2, h2);
the wY, hY possibilities are:
1) the group with the larger product w × h, i.e., the larger of w4 × h4 and w1 × h1;
2) the group with the larger sum w + h;
3) the group with the larger w, or the group with the larger h;
4) or w and h may be updated separately, with w taking the maximum of w4 and w1 and h taking the maximum of h4 and h1;
5) the minimum value in each of the above cases is also possible;
6) in coding order, wY, hY are w4, h4.
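For illustration only, the following Python sketch applies the redundancy check of the example above to an HBVP list stored as (bv, w, h) tuples, newest first, and selects (wY, hY) according to a few of the listed options; the rule names and the placement of the new entry at the front are assumptions for this sketch.

```python
def update_hbvp_on_redundancy(hbvp, new_entry, rule="max_area"):
    """Redundancy-handling sketch for an HBVP list of (bv, w, h) tuples.

    If an existing entry has the same bv coordinates as new_entry, that
    entry is removed and the new bv is inserted at the front with
    (wY, hY) chosen by the selected rule; otherwise new_entry is simply
    inserted.
    """
    bv4, w4, h4 = new_entry
    redundant = next(((bv, w, h) for bv, w, h in hbvp if bv == bv4), None)
    if redundant is None:
        return [new_entry] + hbvp                # no redundancy: plain insertion
    _, w1, h1 = redundant
    if rule == "max_area":                       # option 1: larger w * h
        wY, hY = max((w4, h4), (w1, h1), key=lambda s: s[0] * s[1])
    elif rule == "max_sum":                      # option 2: larger w + h
        wY, hY = max((w4, h4), (w1, h1), key=lambda s: s[0] + s[1])
    elif rule == "componentwise_max":            # option 4: per-component maximum
        wY, hY = max(w4, w1), max(h4, h1)
    else:                                        # option 6: keep the newest block's size
        wY, hY = w4, h4
    remaining = [entry for entry in hbvp if entry[0] != bv4]
    return [(bv4, wY, hY)] + remaining
```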
In an alternative embodiment based on the above embodiments, for the list of candidate vectors, when a redundant vector is detected and the new vector is kept in the list, the amount of redundancy is recorded by accumulating the sizes of the multiple coding blocks whose vectors are the same or similar.
In an illustrative example, the current HBVP list is (bv3, w3, h3), (bv2, w2, h2), (bv1, w1, h1); when there is a new instance (bv4, w4, h4), bv4 and bv1 are redundant vectors and bv4 is reserved, so the current HBVP list becomes (bv3, w3, h3), (bv2, w2, h2), (bv4, w1 + w4, h1 + h4), or the current HBVP list may also become (bv4, w1 + w4, h1 + h4), (bv3, w3, h3), (bv2, w2, h2).
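A minimal Python sketch of this accumulation variant is given below, again purely for illustration, with the list stored as (bv, w, h) tuples, newest first; placing the updated entry at the front is one of the two placements shown in the example.

```python
def update_hbvp_accumulate(hbvp, new_entry):
    """Accumulation sketch: when the new bv is redundant with an
    existing entry, the widths and heights are added so that the
    accumulated size reflects the repeated occurrences of the vector.
    """
    bv4, w4, h4 = new_entry
    for bv, w, h in hbvp:
        if bv == bv4:                                    # redundant vector found
            rest = [e for e in hbvp if e[0] != bv4]
            return [(bv4, w + w4, h + h4)] + rest        # keep new bv, accumulate size
    return [new_entry] + hbvp                            # no redundancy: plain insertion
```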
In an illustrative example, when bv is stored in the HBVP list, it is stored in the form of (bv, wX × hX). Illustratively, the current HBVP list is (bv3, w3 × h3), (bv2, w2 × h2), (bv1, w1 × h1);
when a new instance (bv4, w4 × h4) exists, if the abscissa and ordinate of bv4 and bv1 are the same, the new instance is regarded as a redundant vector.
The HBVP list is updated to (bv4, wY × hY), (bv3, w3 × h3), (bv2, w2 × h2);
the wY, hY possibilities are:
1) the group with the larger product w × h, i.e., the larger of w4 × h4 and w1 × h1;
2) the group with the larger sum w + h;
3) the group with the larger w, or the group with the larger h;
4) or w and h may be updated separately, with w taking the maximum of w4 and w1 and h taking the maximum of h4 and h1;
5) the minimum value in each of the above cases is also possible;
6) in coding order, wY, hY are w4, h4.
In an alternative embodiment based on the above embodiments, for the list of candidate vectors, when a redundant vector is detected and the new vector is kept in the list, the amount of redundancy is recorded by accumulating the sizes of the multiple coding blocks whose vectors are the same or similar.
In an illustrative example, when the sizes are stored as products, the current HBVP list is (bv3, w3 × h3), (bv2, w2 × h2), (bv1, w1 × h1); when there is a new instance (bv4, w4 × h4), bv4 and bv1 are redundant vectors and bv4 is reserved, so the current HBVP list becomes (bv3, w3 × h3), (bv2, w2 × h2), (bv4, (w1 + w4) × (h1 + h4)), or the current HBVP list may also become (bv4, (w1 + w4) × (h1 + h4)), (bv3, w3 × h3), (bv2, w2 × h2).
In an alternative embodiment based on the above embodiments, for the list of candidate vectors, when a redundant vector having the same horizontal-axis and vertical-axis relative coordinates is detected, the new vector is retained in the list, and the position previously occupied by the redundant vector in the list may be kept unchanged.
In an illustrative example, the current HMVP list is MV3, MV2, MV1; when there is a new instance MV4, and MV4 and MV1 are redundant vectors and MV4 is reserved, the current HMVP list becomes MV3, MV2, MV4, or the current HMVP list may also become MV4, MV3, MV2.
In an alternative embodiment based on the above embodiments, for the list of candidate vectors, when a redundant vector having the same or similar horizontal-axis and vertical-axis relative coordinates is detected, the new vector is retained in the list, and a redundancy count value is attached to the new vector.
In an illustrative example, the current HMVP list is MV3_cnt0, MV2_cnt0, MV1_cnt0; when there is a new instance MV4, and MV4 and MV1 are redundant vectors and MV4 is reserved, the current HMVP list becomes MV3_cnt0, MV2_cnt0, MV4_cnt1, or the current HMVP list may also become MV4_cnt1, MV3_cnt0, MV2_cnt0. Here cnt1 indicates that the number of redundancies is 1.
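For illustration only, the following Python sketch mirrors the count-based variant above for an HMVP list of (mv, cnt) pairs; inserting the new vector at the front is one of the two placements shown in the example, and the list representation is an assumption for this sketch.

```python
def update_hmvp_with_count(hmvp, new_mv):
    """Redundancy-count sketch: each entry is (mv, cnt). When new_mv is
    redundant with an existing entry, the new vector is kept and its
    count is increased to record the redundancy; otherwise it is
    inserted with cnt = 0.
    """
    for mv, cnt in hmvp:
        if mv == new_mv:                              # redundant vector detected
            rest = [e for e in hmvp if e[0] != new_mv]
            return [(new_mv, cnt + 1)] + rest         # e.g. MV4_cnt1 in the example
    return [(new_mv, 0)] + hmvp
```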
Fig. 9 is a block diagram of a video decoding apparatus 900 according to an exemplary embodiment of the present application, which is illustrated as being applied to a decoding side, and includes: an acquisition module 910, a processing module 920, a decoding module 930, and a determination module 940;
an obtaining module 910, configured to obtain n candidate vectors of a current decoding block, where n is a positive integer;
a processing module 920, configured to process the n candidate vectors according to a priority to obtain the candidate vectors in a sequential arrangement, where the priority is used to indicate a degree of association between the candidate vectors and a prediction vector of the current decoding block, and an index value corresponding to the candidate vector is negatively correlated to an arrangement order of the candidate vectors;
a decoding module 930, configured to decode the encoded content of the current decoding block to obtain an index value corresponding to the current decoding block;
a determining module 940, configured to determine the prediction vector corresponding to the index value of the current decoded block from the candidate vectors arranged in sequence;
the determining module 940 is further configured to obtain a decoding result of the current decoded block by combining the prediction vector.
In an optional embodiment, the obtaining module 910 is further configured to obtain, according to a decoding order, a vector of n decoded blocks before the current decoded block as the candidate vector;
or, alternatively,
the obtaining module 910 is further configured to obtain vectors of n adjacent samples of the current decoding block as the candidate vectors, where the adjacent samples include at least one of adjacent pixel points and adjacent decoding blocks;
or, alternatively,
the obtaining module 910 is further configured to obtain, according to the decoding order, first vectors of p decoded blocks before the current decoded block, and second vectors of q neighboring samples of the current decoded block, and use the p first vectors and the q second vectors as the candidate vectors, where n is a sum of p and q, and both p and q are positive integers.
In an optional embodiment, the processing module 920 is further configured to perform priority ordering on the n candidate vectors, so as to obtain the candidate vectors in a sequential arrangement.
In an optional embodiment, the processing module 920 is further configured to, for an ith vector of the n candidate vectors, perform priority ranking on the n candidate vectors according to similarity relationships between the ith vector and other candidate vectors, where a first number is positively correlated with the priority of the ith vector, the first number is the number of other candidate vectors similar to the ith vector, and 0 < i ≦ n.
In an optional embodiment, when p first vectors and q second vectors are included in the n candidate vectors, the processing module 920 is further configured to determine a first similarity relationship between the ith vector and the q second vectors when the ith vector is a vector of the p first vectors; when the ith vector is a vector in the q second vectors, determining a second similarity relation between the ith vector and the p first vectors; and performing priority ranking on the n candidate vectors according to the first similarity relation and/or the second similarity relation.
In an optional embodiment, when p first vectors and q second vectors are included in the n candidate vectors, the processing module 920 is further configured to determine a third similarity relationship between the ith vector and the p first vectors when the ith vector is a vector of the p first vectors; when the ith vector is a vector in the q second vectors, determining a fourth similarity relation between the ith vector and the q second vectors; and performing priority ranking on the n candidate vectors according to the third similarity relation and/or the fourth similarity relation.
In an alternative embodiment, each vector corresponds to a horizontal axis relative coordinate and a vertical axis relative coordinate;
the determining module 940 is further configured to determine that the ith vector is similar to the other candidate vectors when a first absolute value of a difference between a first horizontal-axis relative coordinate of the ith vector and a second horizontal-axis relative coordinate of the other candidate vectors is smaller than a similarity threshold;
or, alternatively,
the determining module 940 is further configured to determine that the ith vector is similar to the other candidate vectors when a second absolute value of a difference between the first vertical axis relative coordinate of the ith vector and the second vertical axis relative coordinate of the other candidate vectors is smaller than the similarity threshold;
or, alternatively,
the determining module 940 is further configured to determine that the ith vector is similar to the other candidate vectors when the first absolute value and the second absolute value are both smaller than the similarity threshold;
or, alternatively,
the determining module 940 is further configured to determine that the ith vector is similar to the other candidate vectors when the sum of the first absolute value and the second absolute value is smaller than the similarity threshold;
or, alternatively,
the determining module 940 is further configured to determine that the ith vector is similar to the other candidate vectors when a sum of a square of the first absolute value and a square of the second absolute value is smaller than the similarity threshold.
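The five alternative similarity tests listed above can be summarized in a single helper; the Python sketch below is illustrative only, and the mode names are invented for this sketch.

```python
def vectors_similar(v1, v2, threshold, mode="sum"):
    """Similarity-test sketch: v1 and v2 are (x, y) relative-coordinate
    pairs; mode selects which of the five criteria is applied.
    """
    dx = abs(v1[0] - v2[0])          # first absolute value (horizontal axis)
    dy = abs(v1[1] - v2[1])          # second absolute value (vertical axis)
    if mode == "horizontal":
        return dx < threshold
    if mode == "vertical":
        return dy < threshold
    if mode == "both":
        return dx < threshold and dy < threshold
    if mode == "sum":
        return dx + dy < threshold
    if mode == "squared_sum":
        return dx * dx + dy * dy < threshold
    raise ValueError("unknown similarity mode")
```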
In an alternative embodiment, when the n candidate vectors are vectors of n decoded blocks, the obtaining module 910 is further configured to obtain size information of the n decoded blocks;
the processing module 920 is further configured to prioritize the n candidate vectors according to the size information, wherein the size of the candidate vector is positively correlated to the priority.
In an optional embodiment, the size information includes width information, and the width of the candidate vector is positively correlated with the priority;
or, alternatively,
the size information comprises height information, and the height of the candidate vector is positively correlated with the priority;
or, alternatively,
the size information includes the width information and the height information, and a product of the width and the height is positively correlated with the priority of the candidate vector.
In an alternative embodiment, when n of said candidate vectors are vectors of n of said decoded blocks, each of said n candidate vectors corresponds to a count value; the processing module 920 is further configured to perform the priority ranking on the n candidate vectors according to the count value, so as to obtain the candidate vectors in a sequential arrangement, where the count value of the candidate vector is positively correlated with the priority of the candidate vector.
In an optional embodiment, the obtaining module 910 is further configured to sequentially obtain, according to the decoding order, vectors of decoded blocks before the current decoded block; when a vector of a w-th decoded decoding block is obtained, comparing the similarity between the w-th vector and other candidate vectors, wherein w is a positive integer; when the w-th vector and the other candidate vectors meet the similarity requirement, adding one to the counting value of the other candidate vectors; and when the w-th vector and the other candidate vectors do not meet the similarity requirement, determining the w-th vector as one of the n candidate vectors.
In an optional embodiment, the processing module 920 is further configured to generate a candidate vector list according to the top k candidate vectors with the highest priority in the ranking result, where k is greater than 0 and less than or equal to n.
In an optional embodiment, the processing module 920 is further configured to filter the n candidate vectors according to a priority condition to obtain m candidate vectors that are sequentially arranged, where m is greater than 0 and less than or equal to n.
In summary, in the video decoding apparatus provided in the embodiment of the present application, in the decoding process, the priority is determined according to the degree of association between the candidate vectors and the prediction vector of the current decoding block, and the candidate vectors are processed according to the priority, so that the prediction vector is determined from the candidate vectors sequentially arranged after the priority processing. Because the candidate vectors are processed according to the priority, the prediction vector is relatively likely to be located near the front of the sequentially arranged candidate vectors, so the index value corresponding to the prediction vector is small, thereby improving the decoding efficiency of the video.
Fig. 10 is a block diagram of a video encoding apparatus 1000 according to an exemplary embodiment of the present application, which is illustrated as being applied to an encoding side, and includes: an acquisition module 1010, a processing module 1020, a determination module 1030, and an encoding module 1040;
an obtaining module 1010, configured to obtain n candidate vectors of a current coding block, where n is a positive integer;
a processing module 1020, configured to process the n candidate vectors according to a priority to obtain the candidate vectors that are sequentially arranged, where the priority is used to indicate a degree of association between the candidate vectors and a prediction vector of the current coding block, and an index value corresponding to the candidate vector is negatively correlated with an arrangement order of the candidate vectors;
a determining module 1030, configured to determine the prediction vector of the current coding block from the candidate vectors arranged in sequence;
and an encoding module 1040, configured to encode the current coding block in combination with the index value corresponding to the prediction vector.
In an optional embodiment, the obtaining module 1010 is further configured to obtain vectors of n encoded encoding blocks before the current encoding block according to a coding order as the candidate vectors;
or, alternatively,
the obtaining module 1010 is further configured to obtain vectors of n neighboring samples of the current coding block as the candidate vectors, where the neighboring samples include at least one of neighboring pixel points and neighboring coding blocks;
or, alternatively,
the obtaining module 1010 is further configured to obtain, according to the coding order, first vectors of p coded coding blocks before the current coding block, and obtain second vectors of q neighboring samples of the current coding block, and use the p first vectors and the q second vectors as the candidate vectors, where n is a sum of p and q, and both p and q are positive integers.
In an optional embodiment, the processing module 1020 is further configured to perform priority ordering on the n candidate vectors, so as to obtain the candidate vectors in a sequential arrangement.
In an optional embodiment, the processing module 1020 is further configured to, for an ith vector of the n candidate vectors, perform priority ranking on the n candidate vectors according to similarity relations between the ith vector and other candidate vectors, where a first number is positively correlated with the priority of the ith vector, the first number is the number of other candidate vectors similar to the ith vector, and 0 < i ≦ n.
In an alternative embodiment, when p first vectors and q second vectors are included in the n candidate vectors, the processing module 1020 is further configured to determine a first similarity relationship between the ith vector and the q second vectors when the ith vector is a vector of the p first vectors; when the ith vector is a vector in the q second vectors, determining a second similarity relation between the ith vector and the p first vectors; and performing priority ranking on the n candidate vectors according to the first similarity relation and/or the second similarity relation.
In an optional embodiment, when p first vectors and q second vectors are included in the n candidate vectors, the processing module 1020 is further configured to determine a third similarity relationship between the ith vector and the p first vectors when the ith vector is a vector of the p first vectors; when the ith vector is a vector in the q second vectors, determining a fourth similarity relation between the ith vector and the q second vectors; and performing priority ranking on the n candidate vectors according to the third similarity relation and/or the fourth similarity relation.
In an alternative embodiment, each vector corresponds to a horizontal axis relative coordinate and a vertical axis relative coordinate;
the determining module 1030 is further configured to determine that the ith vector is similar to the other candidate vectors when a first absolute value of a difference between a first horizontal-axis relative coordinate of the ith vector and a second horizontal-axis relative coordinate of the other candidate vectors is smaller than a similarity threshold;
or, alternatively,
the determining module 1030 is further configured to determine that the ith vector is similar to the other candidate vectors when a second absolute value of a difference between a first vertical axis relative coordinate of the ith vector and a second vertical axis relative coordinate of the other candidate vectors is smaller than the similarity threshold;
or, alternatively,
the determining module 1030, further configured to determine that the ith vector is similar to the other candidate vectors when both the first absolute value and the second absolute value are smaller than the similarity threshold;
or, alternatively,
the determining module 1030, further configured to determine that the ith vector is similar to the other candidate vectors when the sum of the first absolute value and the second absolute value is smaller than the similarity threshold;
or, alternatively,
the determining module 1030 is further configured to determine that the ith vector is similar to the other candidate vectors when a sum of a square of the first absolute value and a square of the second absolute value is smaller than the similarity threshold.
In an optional embodiment, when the n candidate vectors are vectors of n encoded coding blocks, the obtaining module 1010 is further configured to obtain size information of the n encoded coding blocks;
the processing module 1020 is further configured to prioritize the n candidate vectors according to the size information, wherein the size of the candidate vector is positively correlated to the priority.
In an optional embodiment, the size information includes width information, and the width of the candidate vector is positively correlated with the priority;
or, alternatively,
the size information comprises height information, and the height of the candidate vector is positively correlated with the priority;
or, alternatively,
the size information includes the width information and the height information, and a product of the width and the height is positively correlated with the priority of the candidate vector.
In an alternative embodiment, when n of said candidate vectors are vectors of n of said encoded blocks, each of said n candidate vectors corresponds to a count value; the processing module 1020 is further configured to perform the priority ranking on the n candidate vectors according to the count value, so as to obtain the candidate vectors in a sequential arrangement, where the count value of the candidate vector is positively correlated with the priority of the candidate vector.
In an optional embodiment, the obtaining module 1010 is further configured to sequentially obtain vectors of coded coding blocks before the current coding block according to the coding order; when a vector of a w-th coded encoding block is obtained, comparing the similarity between the w-th vector and other candidate vectors, wherein w is a positive integer; when the w-th vector and the other candidate vectors meet the similarity requirement, adding one to the counting value of the other candidate vectors; and when the w-th vector and the other candidate vectors do not meet the similarity requirement, determining the w-th vector as one of the n candidate vectors.
In an optional embodiment, the processing module 1020 is further configured to generate a candidate vector list according to the top k candidate vectors with the highest priority in the ranking result, where k is greater than 0 and less than or equal to n.
In an optional embodiment, the processing module 1020 is further configured to filter the n candidate vectors according to a priority condition to obtain m candidate vectors that are sequentially arranged, where m is greater than 0 and less than or equal to n.
In summary, in the video encoding apparatus provided in the embodiment of the present application, in the encoding process, the priority is determined according to the degree of association between the candidate vector and the prediction vector of the current encoding block, and the candidate vector is processed according to the priority, so that the prediction vector is determined from the candidate vectors sequentially arranged after the priority processing.
Fig. 11 shows a block diagram of a terminal 1100 according to an exemplary embodiment of the present invention. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
In general, terminal 1100 includes: a processor 1101 and a memory 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one instruction for execution by processor 1101 to implement a video decoding method or a video encoding method provided by method embodiments herein.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, touch display screen 1105, camera 1106, audio circuitry 1107, positioning component 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1104 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or over the surface of the display screen 1105. The touch signal may be input to the processor 1101 as a control signal for processing. At this point, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1105 may be one, providing the front panel of terminal 1100; in other embodiments, the display screens 1105 can be at least two, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, display 1105 can be a flexible display disposed on a curved surface or on a folded surface of terminal 1100. Even further, the display screen 1105 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display screen 1105 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 1106 is used to capture images or video. Optionally, camera assembly 1106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1101 for processing or inputting the electric signals to the radio frequency circuit 1104 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1100. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
Positioning component 1108 is used to locate the current geographic position of terminal 1100 for purposes of navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1109 is configured to provide power to various components within terminal 1100. The power supply 1109 may be alternating current, direct current, disposable or rechargeable. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect acceleration levels in three coordinate axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 1101 may control the touch display screen 1105 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may cooperate with the acceleration sensor 1111 to acquire a 3D motion of the user with respect to the terminal 1100. From the data collected by gyroscope sensor 1112, processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or on an underlying layer of touch display screen 1105. When the pressure sensor 1113 is disposed on the side frame of the terminal 1100, the holding signal of the terminal 1100 from the user can be detected, and the processor 1101 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the touch display screen 1105, the processor 1101 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1105. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1114 is configured to collect a fingerprint of the user, and the processor 1101 identifies the user according to the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the user is authorized by the processor 1101 to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. Fingerprint sensor 1114 may be disposed on the front, back, or side of terminal 1100. When a physical button or vendor Logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or vendor Logo.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the touch display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1105 is turned down. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
Proximity sensor 1116, also referred to as a distance sensor, is typically disposed on a front panel of terminal 1100. Proximity sensor 1116 is used to capture the distance between the user and the front face of terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually decreases, the touch display screen 1105 is controlled by the processor 1101 to switch from the screen-on state to the screen-off state; when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually increases, the touch display screen 1105 is controlled by the processor 1101 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (21)

1. A method of video decoding, the method comprising:
acquiring n candidate vectors of a current decoding block, wherein the candidate vectors are motion vectors for inter-frame coding or block vectors for intra block copy, and n is a positive integer;
processing the n candidate vectors according to priorities to obtain the candidate vectors in sequential arrangement, wherein the priorities are used for indicating the degree of association between the candidate vectors and the prediction vector of the current decoding block, and index values corresponding to the candidate vectors are in negative correlation with the arrangement sequence of the candidate vectors;
decoding the coded content of the current decoding block to obtain an index value corresponding to the current decoding block;
determining the prediction vector corresponding to the index value of the current decoding block from the candidate vectors which are arranged in sequence;
and combining the prediction vector to obtain a decoding result of the current decoding block.
2. The method of claim 1, wherein said obtaining n candidate vectors for a current decoded block comprises:
acquiring vectors of n decoded decoding blocks before the current decoding block according to a decoding sequence to serve as the candidate vectors;
or, alternatively,
acquiring vectors of n adjacent sampling points of the current decoding block as the candidate vectors, wherein the adjacent sampling points comprise at least one of adjacent pixel points and adjacent decoding blocks;
or, alternatively,
and acquiring first vectors of p decoded blocks before the current decoded block and second vectors of q adjacent samples of the current decoded block according to the decoding sequence, and taking the p first vectors and the q second vectors as the candidate vectors, wherein n is the sum of p and q, and both p and q are positive integers.
3. The method of claim 2, wherein said processing n candidate vectors according to priority to obtain the ordered candidate vectors comprises:
and sequencing the n candidate vectors according to the priority to obtain the candidate vectors in sequential arrangement.
4. The method of claim 3, wherein said sorting the n candidate vectors according to the priority comprises:
for the ith vector in the n candidate vectors, performing priority ranking on the n candidate vectors according to the similarity relationship between the ith vector and other candidate vectors, wherein i is greater than 0 and less than or equal to n.
5. The method of claim 4, wherein when the n candidate vectors include p first vectors and q second vectors, the prioritizing the n candidate vectors according to the similarity relationship between the i-th vector and the other candidate vectors comprises:
when the i-th vector is one of the p first vectors, determining a first similarity relationship between the i-th vector and the q second vectors;
when the i-th vector is one of the q second vectors, determining a second similarity relationship between the i-th vector and the p first vectors; and
prioritizing the n candidate vectors according to the first similarity relationship and/or the second similarity relationship.
6. The method of claim 4, wherein when the n candidate vectors include p first vectors and q second vectors, the prioritizing the n candidate vectors according to the similarity relationship between the i-th vector and the other candidate vectors comprises:
when the i-th vector is one of the p first vectors, determining a third similarity relationship between the i-th vector and the p first vectors;
when the i-th vector is one of the q second vectors, determining a fourth similarity relationship between the i-th vector and the q second vectors; and
prioritizing the n candidate vectors according to the third similarity relationship and/or the fourth similarity relationship.
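Claims 5 and 6 leave open exactly how the first to fourth similarity relationships are turned into a priority order. One plausible, purely illustrative reading is to score each candidate by how many vectors in a reference group are similar to it, comparing across groups for claim 5 and within the same group for claim 6; the rank_by_similarity helper below encodes that assumption.

```python
# Assumed count-based scoring for the similarity relationships of claims 5 and 6.
from typing import Callable, List, Tuple

Vector = Tuple[int, int]

def rank_by_similarity(first: List[Vector], second: List[Vector],
                       similar: Callable[[Vector, Vector], bool],
                       cross_group: bool = True) -> List[Vector]:
    scored = []
    for own, other in ((first, second), (second, first)):
        for idx, v in enumerate(own):
            if cross_group:
                reference = other                      # claim 5 reading: compare against the other group
            else:
                reference = own[:idx] + own[idx + 1:]  # claim 6 reading: compare within the same group
            scored.append((sum(1 for u in reference if similar(v, u)), v))
    # A higher similarity score is treated as higher priority, i.e. an earlier position.
    return [v for _, v in sorted(scored, key=lambda item: item[0], reverse=True)]
```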
7. The method of claim 4, wherein each vector corresponds to a horizontal-axis relative coordinate and a vertical-axis relative coordinate;
the method further comprises:
determining that the i-th vector is similar to the other candidate vectors when a first absolute value of the difference between the first horizontal-axis relative coordinate of the i-th vector and the second horizontal-axis relative coordinate of the other candidate vectors is less than a similarity threshold;
or, alternatively,
determining that the i-th vector is similar to the other candidate vectors when a second absolute value of the difference between the first vertical-axis relative coordinate of the i-th vector and the second vertical-axis relative coordinate of the other candidate vectors is less than the similarity threshold;
or, alternatively,
determining that the i-th vector is similar to the other candidate vectors when the first absolute value and the second absolute value are both less than the similarity threshold;
or, alternatively,
determining that the i-th vector is similar to the other candidate vectors when the sum of the first absolute value and the second absolute value is less than the similarity threshold;
or, alternatively,
determining that the i-th vector is similar to the other candidate vectors when the sum of the square of the first absolute value and the square of the second absolute value is less than the similarity threshold.
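The five alternative similarity tests of claim 7 translate almost directly into code. The sketch below assumes each vector is an (x, y) pair of relative coordinates and that the active test is selected by a configuration parameter; the mode names are invented for this illustration.

```python
# The five alternative similarity tests of claim 7 (mode names are assumptions).
from typing import Tuple

Vector = Tuple[int, int]

def is_similar(a: Vector, b: Vector, threshold: float, mode: str = "both") -> bool:
    dx = abs(a[0] - b[0])          # first absolute value (horizontal-axis coordinates)
    dy = abs(a[1] - b[1])          # second absolute value (vertical-axis coordinates)
    if mode == "horizontal":
        return dx < threshold
    if mode == "vertical":
        return dy < threshold
    if mode == "both":
        return dx < threshold and dy < threshold
    if mode == "sum":
        return dx + dy < threshold
    if mode == "sum_of_squares":
        return dx * dx + dy * dy < threshold
    raise ValueError(f"unknown mode: {mode}")
```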
8. The method of claim 7, further comprising: when redundant vectors having the same horizontal-axis relative coordinate and the same vertical-axis relative coordinate are detected, updating the candidate vector list in one of the following ways:
retaining, among the redundant vectors, the one with the larger product of width and height as the new vector in the list;
retaining, among the redundant vectors, the one with the larger sum of width and height as the new vector in the list;
retaining, among the redundant vectors, the one with the larger width as the new vector in the list; and
retaining, among the redundant vectors, the one with the larger height as the new vector in the list.
9. The method of claim 8, further comprising:
keeping the position of the new vector in the list unchanged when the new vector is retained in the list.
10. The method of claim 9, further comprising:
adding a redundancy count value to the new vector when the new vector is retained in the list.
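Claims 8 to 10 together describe how duplicate entries in the candidate list are handled. The sketch below illustrates one of the four retention rules of claim 8 (keeping the vector from the block with the larger width x height product), keeps the retained entry at its original position as in claim 9, and attaches a redundancy count as in claim 10; the Entry structure and the choice of the area rule are assumptions.

```python
# Illustrative redundancy handling per claims 8-10 (area rule assumed).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Entry:
    vector: Tuple[int, int]   # (horizontal, vertical) relative coordinates
    width: int                # width of the block the vector came from
    height: int               # height of the block the vector came from
    redundancy: int = 0       # how many duplicates were merged into this entry

def merge_redundant(entries: List[Entry]) -> List[Entry]:
    merged: List[Entry] = []
    for e in entries:
        match = next((m for m in merged if m.vector == e.vector), None)
        if match is None:
            merged.append(e)
            continue
        # Claim 8 (first alternative): retain the vector of the block with the larger area.
        if e.width * e.height > match.width * match.height:
            idx = merged.index(match)
            e.redundancy = match.redundancy
            merged[idx] = e          # claim 9: the position in the list stays unchanged
            match = merged[idx]
        match.redundancy += 1        # claim 10: add a redundancy count value
    return merged
```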
11. The method of claim 3, wherein when the n candidate vectors are the vectors of n decoded blocks, the prioritizing the n candidate vectors further comprises:
acquiring size information of the n decoded blocks;
prioritizing the n candidate vectors according to the size information, wherein the size of a candidate vector is positively correlated with its priority.
12. The method of claim 8, wherein
the size information comprises width information, and the width of a candidate vector is positively correlated with its priority;
or, alternatively,
the size information comprises height information, and the height of a candidate vector is positively correlated with its priority;
or, alternatively,
the size information comprises the width information and the height information, and the product of the width and the height of a candidate vector is positively correlated with its priority.
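Claims 11 and 12 rank candidates by the size of the decoded blocks they come from. The sketch below treats the three size measures as a selectable sort key; the SizedCandidate structure is an assumption.

```python
# Size-based prioritization per claims 11 and 12 (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SizedCandidate:
    vector: Tuple[int, int]
    width: int
    height: int

def sort_by_block_size(candidates: List[SizedCandidate], measure: str = "area") -> List[SizedCandidate]:
    keys = {
        "width": lambda c: c.width,
        "height": lambda c: c.height,
        "area": lambda c: c.width * c.height,
    }
    # Larger measure -> higher priority -> earlier position (smaller index value).
    return sorted(candidates, key=keys[measure], reverse=True)
```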
13. The method of claim 3, wherein when the n candidate vectors are the vectors of n decoded blocks, each of the n candidate vectors corresponds to a count value;
the prioritizing the n candidate vectors to obtain the sequentially arranged candidate vectors comprises:
prioritizing the n candidate vectors according to the count values to obtain the sequentially arranged candidate vectors, wherein the count value of a candidate vector is positively correlated with its priority.
14. The method of claim 10, wherein the acquiring, in decoding order, the vectors of n decoded blocks preceding the current decoding block as the candidate vectors comprises:
sequentially acquiring, in decoding order, the vectors of the decoded blocks preceding the current decoding block;
when the vector of the w-th decoded block is acquired, comparing the similarity between the w-th vector and the other candidate vectors, wherein w is a positive integer;
when the w-th vector and the other candidate vectors meet the similarity requirement, adding one to the count value of the other candidate vectors; and
when the w-th vector and the other candidate vectors do not meet the similarity requirement, determining the w-th vector as one of the n candidate vectors.
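Claims 13 and 14 describe a count-based variant: while scanning decoded blocks in decoding order, a vector similar to an existing candidate increments that candidate's count instead of being appended, and the final arrangement orders candidates by count. The sketch below assumes the similarity test is one of the claim-7 alternatives and that a higher count maps to an earlier position.

```python
# Count-based candidate construction and ordering per claims 13 and 14 (illustrative only).
from typing import Callable, List, Tuple

Vector = Tuple[int, int]

def build_counted_candidates(decoded_vectors: List[Vector],
                             similar: Callable[[Vector, Vector], bool]) -> List[Vector]:
    candidates: List[Vector] = []
    counts: List[int] = []
    for w_vector in decoded_vectors:                 # scanned in decoding order
        hit = next((i for i, c in enumerate(candidates) if similar(w_vector, c)), None)
        if hit is not None:
            counts[hit] += 1                         # claim 14: increment the existing candidate's count
        else:
            candidates.append(w_vector)              # claim 14: otherwise it becomes a new candidate
            counts.append(0)
    order = sorted(range(len(candidates)), key=lambda i: counts[i], reverse=True)
    return [candidates[i] for i in order]            # claim 13: higher count -> higher priority
```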
15. The method according to any one of claims 1 to 11, wherein the obtaining the sequentially arranged candidate vectors comprises:
generating a candidate vector list from the k highest-priority candidate vectors in the arrangement result, wherein k is greater than 0 and less than or equal to n.
16. The method of claim 2, wherein the processing the n candidate vectors according to priorities to obtain the sequentially arranged candidate vectors comprises:
screening the n candidate vectors according to a priority condition to obtain m sequentially arranged candidate vectors, wherein m is greater than 0 and less than or equal to n.
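Claims 15 and 16 both reduce the arranged candidates to a shorter list, either by taking the k highest-priority entries or by screening against a priority condition. The threshold test below is only an assumed example of such a condition.

```python
# List truncation (claim 15) and priority-condition screening (claim 16), illustrative only.
from typing import List, Tuple

Vector = Tuple[int, int]

def top_k(ordered: List[Vector], k: int) -> List[Vector]:
    return ordered[:k]                                # 0 < k <= n

def screen_by_priority(ordered: List[Vector], priorities: List[float],
                       min_priority: float) -> List[Vector]:
    # Assumed condition: keep candidates whose priority reaches a threshold, order preserved.
    return [v for v, p in zip(ordered, priorities) if p >= min_priority]
```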
17. A method of video encoding, the method comprising:
acquiring n candidate vectors of a current coding block, wherein each candidate vector is an inter-frame coded motion vector or an intra block copy block vector, and n is a positive integer;
processing the n candidate vectors according to priorities to obtain sequentially arranged candidate vectors, wherein the priorities indicate the degree of association between the candidate vectors and the prediction vector of the current coding block, and the index values corresponding to the candidate vectors are negatively correlated with the arrangement order of the candidate vectors;
determining the prediction vector of the current coding block from the sequentially arranged candidate vectors; and
encoding the current coding block in combination with the index value corresponding to the prediction vector.
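On the encoder side (claim 17), the same arranged candidate list is built, the encoder picks the candidate that best predicts the current block's vector, and the corresponding index is written to the bitstream. The cost function below (absolute vector difference) is an assumption; a real encoder would typically use a rate-distortion criterion.

```python
# Encoder-side index selection per claim 17 (assumed absolute-difference cost).
from typing import List, Tuple

Vector = Tuple[int, int]

def choose_prediction_index(ordered: List[Vector], actual: Vector) -> int:
    # Cost of predicting the block's actual vector from candidate c.
    def cost(c: Vector) -> int:
        return abs(actual[0] - c[0]) + abs(actual[1] - c[1])
    # min() keeps the earliest index on ties, which is also the cheapest index to signal.
    return min(range(len(ordered)), key=lambda i: cost(ordered[i]))

# Usage: the returned index is what gets entropy-coded into the bitstream.
# idx = choose_prediction_index(ordered_candidates, current_block_vector)
```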
18. A video decoding apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain n candidate vectors of a current decoding block, wherein each candidate vector is an inter-frame coded motion vector or an intra block copy block vector, and n is a positive integer;
a processing module, configured to process the n candidate vectors according to priorities to obtain sequentially arranged candidate vectors, wherein the priorities indicate the degree of association between the candidate vectors and the prediction vector of the current decoding block, and the index values corresponding to the candidate vectors are negatively correlated with the arrangement order of the candidate vectors;
a decoding module, configured to decode the encoded content of the current decoding block to obtain an index value corresponding to the current decoding block; and
a determining module, configured to determine, from the sequentially arranged candidate vectors, the prediction vector corresponding to the index value of the current decoding block;
the determining module being further configured to obtain a decoding result of the current decoding block in combination with the prediction vector.
19. A video encoding apparatus, characterized in that the apparatus comprises:
an obtaining module, configured to obtain n candidate vectors of a current coding block, wherein each candidate vector is an inter-frame coded motion vector or an intra block copy block vector, and n is a positive integer;
a processing module, configured to process the n candidate vectors according to priorities to obtain sequentially arranged candidate vectors, wherein the priorities indicate the degree of association between the candidate vectors and the prediction vector of the current coding block, and the index values corresponding to the candidate vectors are negatively correlated with the arrangement order of the candidate vectors;
a determining module, configured to determine the prediction vector of the current coding block from the sequentially arranged candidate vectors; and
an encoding module, configured to encode the current coding block in combination with the index value corresponding to the prediction vector.
20. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the video decoding method of any one of claims 1 to 16 or the video encoding method of claim 17.
21. A computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video decoding method of any one of claims 1 to 16 or the video encoding method of claim 17.
CN201910790870.6A 2019-08-26 2019-08-26 Video decoding method, encoding method, device, equipment and readable storage medium Active CN112437304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910790870.6A CN112437304B (en) 2019-08-26 2019-08-26 Video decoding method, encoding method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112437304A (en) 2021-03-02
CN112437304B CN112437304B (en) 2022-06-03

Family

ID=74689804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910790870.6A Active CN112437304B (en) 2019-08-26 2019-08-26 Video decoding method, encoding method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112437304B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005301621A (en) * 2004-04-09 2005-10-27 Sony Corp Image processing device and method, recording medium, and program
US20090304293A1 (en) * 2008-06-08 2009-12-10 Te-Hao Chang Motion estimation method and related apparatus for efficiently selecting motion vector
CN102934434A (en) * 2010-07-12 2013-02-13 联发科技股份有限公司 Method and apparatus of temporal motion vector prediction
CN104025601A (en) * 2011-12-30 2014-09-03 数码士有限公司 Method And Device For Encoding Three-Dimensional Image, And Decoding Method And Device
WO2015052273A1 (en) * 2013-10-11 2015-04-16 Canon Kabushiki Kaisha Method and apparatus for displacement vector component prediction in video coding and decoding
JP2016034160A (en) * 2015-12-08 2016-03-10 株式会社Jvcケンウッド Image encoding device, image encoding method, image encoding program, transmission device, transmission method, and transmission program
CN106878752A (en) * 2015-12-11 2017-06-20 北京三星通信技术研究有限公司 The decoding method and device of a kind of Video Encoding Mode
CN108293131A (en) * 2015-11-20 2018-07-17 联发科技股份有限公司 The method and apparatus of motion-vector prediction or motion compensation for coding and decoding video

Also Published As

Publication number Publication date
CN112437304B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
US20220264136A1 (en) Methods for decoding or coding prediction mode, decoding or coding apparatus, and storage medium
CN108391127B (en) Video encoding method, device, storage medium and equipment
CN112532975B (en) Video encoding method, video encoding device, computer equipment and storage medium
CN109168032B (en) Video data processing method, terminal, server and storage medium
CN110049326B (en) Video coding method and device and storage medium
CN110572679B (en) Method, device and equipment for coding intra-frame prediction and readable storage medium
CN110177275B (en) Video encoding method and apparatus, and storage medium
CN112437304B (en) Video decoding method, encoding method, device, equipment and readable storage medium
CN114302137B (en) Time domain filtering method and device for video, storage medium and electronic equipment
CN111770339B (en) Video encoding method, device, equipment and storage medium
CN113038124B (en) Video encoding method, video encoding device, storage medium and electronic equipment
CN114079787A (en) Video decoding method, video encoding method, video decoding apparatus, video encoding apparatus, and storage medium
CN113079372A (en) Method, device and equipment for coding inter-frame prediction and readable storage medium
CN115811615A (en) Screen video coding method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40041015; Country of ref document: HK)
GR01 Patent grant