CN117692648A - Video encoding method, apparatus, device, storage medium, and computer program product

Info

Publication number: CN117692648A
Application number: CN202410147421.0A
Authority: CN (China)
Prior art keywords: intra, splitting, mode, candidate, pixel
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN117692648B (en)
Inventors: 赵志远, 钟亮, 尹基航, 王强, 吴景然, 张贤国, 李雅卿, 齐洪钢
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Events: Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202410147421.0A; publication of CN117692648A; application granted; publication of CN117692648B

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application relates to a video encoding method, apparatus, device, storage medium and computer program product. The method can be applied to technical fields such as cloud technology, artificial intelligence, intelligent traffic and assisted driving. The method comprises the following steps: for an intra-frame coding unit, calculating an estimated distortion cost corresponding to the candidate splitting modes according to the cost corresponding to the non-splitting mode; determining an estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to each candidate splitting mode; when the cost does not exceed the estimated total cost corresponding to a candidate splitting mode, skipping the rate-distortion cost calculation of that candidate splitting mode; when the cost exceeds the estimated total cost corresponding to a candidate splitting mode, performing rate-distortion cost calculation on that candidate splitting mode, and selecting an optimal splitting mode from the non-skipped candidate splitting modes according to the calculation results; and splitting the intra-frame coding unit into coding blocks according to the optimal splitting mode, so that video encoding efficiency is improved.

Description

Video encoding method, apparatus, device, storage medium, and computer program product
Technical Field
The present application relates to the field of computing technology, and in particular, to a video encoding method, apparatus, device, storage medium, and computer program product.
Background
With the rapid development of computer technology and multimedia technology, the resolution and quality of video are also increasing, thereby generating more video data. To ensure video transmission efficiency, it is often necessary to encode the video.
In the related art, after a video image is divided into a plurality of intra-frame coding units, the encoder selects, for each intra-frame coding unit, one mode from a plurality of preset splitting modes to split it. Specifically, all splitting modes need to be traversed, the corresponding cost of each is calculated, and the mode with the smallest cost is selected as the final splitting mode.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a video encoding method, apparatus, device, storage medium, and computer program product that can improve encoding efficiency.
In a first aspect, the present application provides a video encoding method. The method comprises the following steps:
Determining candidate splitting modes corresponding to an intra-frame coding unit;
for the intra-frame coding unit, calculating a cost corresponding to the non-splitting mode, and calculating an estimated distortion cost corresponding to the candidate splitting modes according to the cost;
for the intra-frame coding unit, respectively calculating an estimated code stream cost corresponding to each candidate splitting mode, and determining an estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to each candidate splitting mode;
when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, skipping the rate-distortion cost calculation corresponding to that candidate splitting mode;
when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, performing rate-distortion cost calculation on that candidate splitting mode, and selecting an optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation results;
and splitting the intra-frame coding unit into coding blocks according to the optimal splitting mode.
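The steps recited above can be illustrated with a minimal C++ sketch. This is not the patented implementation; the data structures, the cost hooks (nonSplitCost, estStreamCost, rdCost), the Lagrange multiplier and the difference estimation coefficient are assumed placeholders introduced only for illustration.

```cpp
#include <functional>
#include <limits>
#include <optional>
#include <vector>

struct CodingUnit { int width = 0, height = 0; };
struct SplitMode  { int id = 0; };   // identifies one candidate splitting mode

// Hooks into the encoder; their real implementations are not specified here.
using CostFn     = std::function<double(const CodingUnit&)>;
using ModeCostFn = std::function<double(const CodingUnit&, const SplitMode&)>;

std::optional<SplitMode> selectSplitMode(const CodingUnit& cu,
                                         const std::vector<SplitMode>& candidates,
                                         const CostFn& nonSplitCost,       // RDO cost without splitting
                                         const ModeCostFn& estStreamCost,  // estimated code stream cost Ri
                                         const ModeCostFn& rdCost,         // full rate-distortion cost
                                         double lambda,                    // Lagrange multiplier
                                         double distortionCoeff) {         // preset difference estimation coefficient
    const double costNoSplit   = nonSplitCost(cu);
    const double estDistortion = costNoSplit * distortionCoeff;  // estimated distortion cost for the candidates

    std::optional<SplitMode> best;
    double bestCost = std::numeric_limits<double>::max();
    for (const SplitMode& m : candidates) {
        const double estTotal = estDistortion + lambda * estStreamCost(cu, m);
        if (costNoSplit <= estTotal) continue;   // skip: splitting cannot beat not splitting
        const double cost = rdCost(cu, m);       // only surviving candidates pay for full RDO
        if (cost < bestCost) { bestCost = cost; best = m; }
    }
    return best;  // std::nullopt means the coding unit stays unsplit
}
```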
In a second aspect, the present application also provides a video encoding apparatus. The device comprises:
The splitting mode determining module is used for determining candidate splitting modes corresponding to an intra-frame coding unit;
the estimated distortion cost determining module is used for calculating, for the intra-frame coding unit, a cost corresponding to the non-splitting mode, and calculating an estimated distortion cost corresponding to the candidate splitting modes according to the cost;
the estimated total cost determining module is used for respectively calculating, for the intra-frame coding unit, an estimated code stream cost corresponding to each candidate splitting mode, and determining an estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to each candidate splitting mode;
the skipping module is used for skipping the rate-distortion cost calculation corresponding to a candidate splitting mode when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to that candidate splitting mode;
the splitting mode selection module is used for performing rate-distortion cost calculation on a candidate splitting mode when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to that candidate splitting mode, and selecting the optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation results;
and the splitting module is used for splitting the intra-frame coding unit into coding blocks according to the optimal splitting mode.
In some embodiments, the splitting mode determining module is configured to calculate, when the intra-frame encoding unit meets a preset size constraint condition, a horizontal texture gradient and a vertical texture gradient of the intra-frame encoding unit according to pixel values of each pixel point in the intra-frame encoding unit; determining a pixel texture direction of the intra-frame encoding unit according to the horizontal texture gradient and the vertical texture gradient, wherein the pixel texture direction is at least one of a horizontal direction and a vertical direction; and screening candidate splitting modes corresponding to the intra-frame coding units from a plurality of preset splitting modes according to the pixel texture direction.
In some embodiments, the splitting mode determining module is further configured to obtain a pixel width and a pixel height of the intra-frame coding unit; calculate a width-to-height ratio and a height-to-width ratio of the intra-frame coding unit according to the pixel width and the pixel height; and if both the width-to-height ratio and the height-to-width ratio are smaller than a preset ratio, determine that the intra-frame coding unit meets the preset size constraint condition.
In some embodiments, the splitting mode determining module is further configured to: if the width-to-height ratio is greater than or equal to the preset ratio and the height-to-width ratio is smaller than the preset ratio, determine that the intra-frame coding unit does not meet the preset size constraint condition, and screen the preset splitting modes in the vertical direction from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit; if the height-to-width ratio is greater than or equal to the preset ratio and the width-to-height ratio is smaller than the preset ratio, determine that the intra-frame coding unit does not meet the preset size constraint condition, and screen the preset splitting modes in the horizontal direction from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit.
In some embodiments, the splitting mode determining module is configured to determine adjacent pixels of each pixel in the intra-frame encoding unit in a horizontal direction; calculating the pixel difference in the horizontal direction between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the horizontal direction between the pixel points and the corresponding adjacent pixel points to obtain the horizontal texture gradient of the intra-frame coding unit.
In some embodiments, the splitting mode determining module is configured to determine neighboring pixels of each pixel in the intra-frame encoding unit in a vertical direction; calculating the vertical pixel difference between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the vertical direction between the pixel points and the corresponding adjacent pixel points to obtain the vertical texture gradient of the intra-frame coding unit.
In some embodiments, the split mode determination module is configured to calculate a first ratio between the horizontal texture gradient and the vertical texture gradient, and calculate a second ratio between the vertical texture gradient and the horizontal texture gradient; if the first ratio and the second ratio are smaller than the preset ratio, determining that the pixel texture direction of the intra-frame coding unit comprises a horizontal direction and a vertical direction; if the first ratio is smaller than the preset ratio and the second ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a horizontal direction; and if the second ratio is smaller than the preset ratio and the first ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a vertical direction.
In some embodiments, the splitting mode determining module is configured to, if the pixel texture direction includes a horizontal direction and a vertical direction, use the plurality of preset splitting modes as candidate splitting modes corresponding to the intra-frame encoding unit; if the pixel texture direction is the horizontal direction, screening a preset splitting mode in the horizontal direction from a plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit; and if the pixel texture direction is the vertical direction, screening a preset splitting mode in the vertical direction from a plurality of preset splitting modes, and taking the preset splitting mode as a candidate splitting mode corresponding to the intra-frame coding unit.
In some embodiments, the estimated distortion cost determining module is configured to calculate, for the non-splitting mode, the pixel difference of each pixel point in the intra-frame coding unit before and after intra-frame prediction, and obtain a distortion cost corresponding to the non-splitting mode based on the pixel differences of the pixel points; invoke a code stream cost function to obtain a code stream cost corresponding to the non-splitting mode according to a mode identifier corresponding to the non-splitting mode and the pixel height and pixel width of the intra-frame coding unit; and obtain the cost corresponding to the non-splitting mode based on the distortion cost and the code stream cost corresponding to the non-splitting mode.
In some embodiments, the estimated distortion cost determining module is configured to obtain a preset difference estimation coefficient, where the difference estimation coefficient characterizes the difference between the cost corresponding to the non-splitting mode and the distortion cost corresponding to the candidate splitting modes; and take the product of the cost corresponding to the non-splitting mode and the difference estimation coefficient as the estimated distortion cost corresponding to the candidate splitting modes.
In some embodiments, the estimated total cost determining module is configured to obtain, for each candidate splitting mode, the split code stream cost required to split the intra-frame coding unit into coding blocks according to the candidate splitting mode, and obtain the other code stream costs, apart from the split code stream cost, incurred when intra-frame predicting the intra-frame coding unit; and superpose the split code stream cost and the other code stream costs to obtain a superposition value, and determine the estimated code stream cost corresponding to the candidate splitting mode according to the superposition value.
In some embodiments, the estimated total cost determining module is configured to invoke a code stream cost function to obtain, according to a mode identifier corresponding to the candidate splitting mode and the pixel height and pixel width of the intra-frame coding unit, the split code stream cost required to split the intra-frame coding unit into coding blocks according to the candidate splitting mode.
In some embodiments, the splitting mode selection module is configured to, if there are a plurality of non-skipped candidate splitting modes, select the candidate splitting mode with the smallest rate-distortion cost from the plurality of non-skipped candidate splitting modes as the optimal splitting mode corresponding to the intra-frame coding unit.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the video encoding method described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the video encoding method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the video encoding method described above.
With the above video encoding method, apparatus, device, storage medium and computer program product, the candidate splitting modes corresponding to an intra-frame coding unit are determined; for the intra-frame coding unit, the cost corresponding to the non-splitting mode is calculated, and the estimated distortion cost corresponding to the candidate splitting modes is estimated from that cost; and the estimated code stream cost corresponding to each candidate splitting mode is calculated. In this way, the estimated total cost corresponding to each candidate splitting mode can be obtained quickly from the estimated distortion cost and the estimated code stream cost. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode would incur a cost at least as large as not splitting, so its rate-distortion cost calculation would be wasted computation; the rate-distortion cost calculation for that candidate splitting mode is therefore skipped, reducing the amount of computation. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode remains a possible choice for the optimal splitting mode and is not skipped, so its rate-distortion cost is calculated. The optimal splitting mode corresponding to the intra-frame coding unit is then selected from the non-skipped candidate splitting modes according to their calculation results, and the intra-frame coding unit is split into coding blocks according to the optimal splitting mode. Therefore, in the whole encoding process, the rate-distortion cost does not need to be calculated for all candidate splitting modes; inefficient rate-distortion cost calculations are avoided, the amount of computation is reduced, and encoding efficiency is improved.
Drawings
FIG. 1 is a diagram of an application environment for a video encoding method in one embodiment;
FIG. 2 is a flow chart of a video encoding method in one embodiment;
FIG. 3 is a schematic diagram of candidate split patterns in one embodiment;
FIG. 4 is a flow chart illustrating pixel texture direction determination in one embodiment;
FIG. 5 is a flow chart of a pixel texture direction determination in another embodiment;
FIG. 6 is a flow diagram of a candidate split mode skip determination in one embodiment;
FIG. 7 is a schematic diagram of split mode selection of the related art;
FIG. 8 is a schematic diagram of candidate split mode selection in an embodiment of the present application;
FIG. 9 is a schematic diagram of coding efficiency evaluation in one embodiment;
FIG. 10 is a block diagram showing the structure of a video encoding apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Before describing the embodiments of the present application, a brief description of intra prediction is given. Intra prediction refers to analyzing and predicting the current frame during video encoding: regions of the current frame itself are used as references to find similar regions, and the predicted values are used in place of the original pixel values. Before intra prediction, the video image (the current frame) is partitioned into blocks to obtain a plurality of coding units (i.e., intra-frame coding units), and intra prediction is then performed on each coding unit. Taking one coding unit as an example, a mode is first selected from a plurality of splitting modes, and the coding unit is split into a plurality of coding blocks according to the selected mode. Then each coding block is encoded with a corresponding prediction mode.
In order to reduce the coding cost of intra-frame coding as much as possible, the related art traverses all splitting modes and calculates the coding cost corresponding to each splitting mode; the splitting mode with the smallest coding cost is then selected for splitting, and encoding is performed on the resulting coding blocks. That is, in the related art, the coding cost corresponding to every splitting mode needs to be calculated, the computation is heavy, and encoding efficiency cannot be guaranteed.
In the embodiments of the present application, the candidate splitting modes corresponding to an intra-frame coding unit are determined; for the intra-frame coding unit, the cost corresponding to the non-splitting mode is calculated, and the estimated distortion cost corresponding to the candidate splitting modes is estimated from that cost; and the estimated code stream cost corresponding to each candidate splitting mode is calculated. In this way, the estimated total cost corresponding to each candidate splitting mode can be obtained quickly from the estimated distortion cost and the estimated code stream cost. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode would incur a cost at least as large as not splitting, so its rate-distortion cost calculation would be wasted computation and is skipped, reducing the amount of computation. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode remains a possible choice for the optimal splitting mode and is not skipped, so its rate-distortion cost is calculated. The optimal splitting mode corresponding to the intra-frame coding unit is then selected from the non-skipped candidate splitting modes according to their calculation results, and the intra-frame coding unit is split into coding blocks according to the optimal splitting mode. Therefore, in the whole encoding process, the rate-distortion cost does not need to be calculated for all candidate splitting modes; inefficient rate-distortion cost calculations are avoided, the amount of computation is reduced, and encoding efficiency is improved.
The video coding method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the sender 102 communicates with the server 104 via a network. The receiving end 106 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on the cloud or other servers.
In some embodiments, the transmitting end 102 acquires the video image to be encoded and transmits it to the server 104. The server 104 splits the video image to be encoded to obtain at least one intra-frame coding unit. For each intra-frame coding unit, the server 104 determines the candidate splitting modes of the intra-frame coding unit. For the intra-frame coding unit, the server 104 calculates the cost corresponding to the non-splitting mode, and calculates the estimated distortion cost corresponding to the candidate splitting modes according to the cost. For the intra-frame coding unit, the server 104 calculates the estimated code stream cost corresponding to each candidate splitting mode, and determines the estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to each candidate splitting mode. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, the server 104 skips the rate-distortion cost calculation corresponding to that candidate splitting mode. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, the server 104 performs rate-distortion cost calculation on that candidate splitting mode, and selects the optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation results. The server 104 then splits the intra-frame coding unit into coding blocks according to the optimal splitting mode, obtaining all the coding blocks.
Further, the server 104 encodes each coding block, generates encoded data corresponding to the video image to be encoded, and transmits the encoded data to the receiving end 106, which decodes the data and displays the decoded video image.
Of course, in other embodiments, the video encoding method provided in the embodiments of the present application may also be performed by the transmitting end 102, and the encoded data may be stored.
The transmitting end 102 and the receiving end 106 are terminals, which may be, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart watches, and the like. The server 104 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data and artificial intelligence platforms. The transmitting end 102 and the server 104, and the receiving end 106 and the server 104, may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
In one embodiment, as shown in FIG. 2, a video encoding method is provided. The method is described as being performed by a computer device (which may be the transmitting end 102 or the server 104 in FIG. 1), and includes the following steps:
step S202, determining a candidate split mode corresponding to the intra-frame coding unit.
Wherein a Coding Unit (CU) is the basic unit of video encoding. An intra-frame coding unit is a coding unit that is encoded using intra-frame prediction. Optionally, the computer device performs image blocking on the video image (one frame of image) to obtain a plurality of image blocks, and each intra-frame coding unit is determined from these image blocks. Illustratively, the computer device divides one video image into a plurality of LCUs (Largest Coding Units) that do not overlap each other. For each LCU, the computer device performs image blocking on the LCU again to obtain a plurality of CUs, and each CU is an intra-frame coding unit. For example, an LCU of size 128×128 can be split into quarters to obtain 4 CUs, each of size 64×64.
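As a concrete illustration of this blocking step, the following sketch quarters a 128×128 LCU into four 64×64 CUs; the Block structure and the single quartering step are illustrative only and do not represent the encoder's actual partitioning code.

```cpp
#include <cstdio>
#include <vector>

struct Block { int x, y, width, height; };

// Split one block into four non-overlapping quarters.
std::vector<Block> quarter(const Block& lcu) {
    const int w = lcu.width / 2, h = lcu.height / 2;
    return { {lcu.x,     lcu.y,     w, h},
             {lcu.x + w, lcu.y,     w, h},
             {lcu.x,     lcu.y + h, w, h},
             {lcu.x + w, lcu.y + h, w, h} };
}

int main() {
    for (const Block& cu : quarter({0, 0, 128, 128}))
        std::printf("CU at (%d,%d), size %dx%d\n", cu.x, cu.y, cu.width, cu.height);
    // prints four 64x64 CUs, matching the 128x128 LCU example above
}
```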
A candidate splitting mode is a splitting mode used to split an intra-frame coding unit during intra prediction. Each candidate splitting mode splits along a particular splitting direction: it may be a horizontal splitting mode, which splits in the horizontal direction, or a vertical splitting mode, which splits in the vertical direction. Illustratively, FIG. 3 shows a schematic diagram of candidate splitting modes in one embodiment. In FIG. 3, 3 horizontal splitting modes are provided, namely horizontal splitting mode 1, horizontal splitting mode 2 and horizontal splitting mode 3, and 3 vertical splitting modes are provided, namely vertical splitting mode 1, vertical splitting mode 2 and vertical splitting mode 3.
Optionally, the computer device performs image blocking processing on the video image to be processed to obtain a plurality of image blocks, and then divides each image block according to a corresponding division mode to obtain a plurality of intra-frame coding units of that image block.
For each intra-frame coding unit, the computer device determines texture information for the intra-frame coding unit and screens out candidate splitting modes adapted to the texture direction from a plurality of preset splitting modes.
Texture information characterizes the degree of association between pixels in different directions in the intra-frame coding unit. For example, if the texture information reflects that the pixel values in the horizontal direction of the intra-frame coding unit are more strongly correlated, i.e., the degree of association in the horizontal direction is higher, the horizontal splitting modes are screened out. If the texture information reflects that the pixel values in the vertical direction are more strongly correlated, i.e., the degree of association in the vertical direction is higher, the vertical splitting modes are screened out. If the texture information reflects that the horizontal degree of association (i.e., the degree of association between pixels in the horizontal direction) and the vertical degree of association (i.e., the degree of association between pixels in the vertical direction) differ little, that is, the difference between the two is smaller than a preset difference, the candidate splitting modes may include both horizontal and vertical splitting modes.
Illustratively, for each intra-coding unit, the computer device obtains size information for the intra-coding unit and screens out a split mode that is adapted to the size information from a plurality of preset split modes. The computer equipment determines the texture information of the intra-frame coding unit, and screens out candidate splitting modes matched with the texture direction from the splitting modes matched with the size information.
The size information includes, but is not limited to, the height and width of the intra-coded unit. For example, if the height of the intra-coding unit is less than the preset height, horizontal splitting is not allowed. If the width of the intra-frame coding unit is smaller than the preset width, vertical splitting is not allowed.
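A minimal sketch of this size screening is given below; the minimum height and width of 8 are assumed placeholder thresholds, not values taken from the text.

```cpp
struct CuSize { int width = 0, height = 0; };

bool horizontalSplitAllowed(const CuSize& cu, int presetHeight = 8) {
    return cu.height >= presetHeight;   // too short: horizontal splitting is not allowed
}

bool verticalSplitAllowed(const CuSize& cu, int presetWidth = 8) {
    return cu.width >= presetWidth;     // too narrow: vertical splitting is not allowed
}
```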
Step S204, for the intra-frame coding unit, calculating the cost corresponding to the non-splitting mode, and calculating the estimated distortion cost corresponding to the candidate splitting modes according to the cost.
The non-split mode refers to a split mode in which the intra-frame coding unit is not split. The cost corresponding to the non-split mode is the rate distortion cost corresponding to the non-split mode, the cost corresponding to the non-split mode is obtained by comprehensively considering the distortion and the code rate after the intra-frame coding unit is coded in the non-split mode, and the cost corresponding to the non-split mode can be also understood as the total cost in the non-split mode.
Distortion refers to loss of video quality, and code rate refers to the code stream (bit number) required to store or transmit encoded video data to a network. Therefore, the cost corresponding to the non-split mode is determined based on the distortion cost caused by distortion in the non-split mode and the code rate cost caused by the code rate.
The estimated distortion cost corresponding to the candidate splitting mode refers to the cost caused by the distortion after the intra-frame coding unit is split according to the candidate splitting mode.
Optionally, for an intra-coding unit, the computer device calculates a cost when the intra-coding unit is not split. And the computer equipment predicts the distortion cost corresponding to the candidate splitting mode according to the cost, and obtains the predicted distortion cost corresponding to the candidate splitting mode.
Illustratively, for the intra-frame coding unit, the computer device calculates, according to the size information of the intra-frame coding unit, through rate distortion optimization (Rate Distortion Optimization, RDO), a rate distortion cost when the intra-frame coding unit is not split, and takes the calculated rate distortion cost as a cost corresponding to the non-split mode.
It should be noted that, in the calculation process of the estimated distortion cost corresponding to the candidate split mode, there is no need to actually calculate the distortion cost corresponding to the candidate split mode. In the step, based on the calculated cost corresponding to the non-splitting mode, the estimated distortion cost corresponding to the candidate splitting mode can be estimated more simply and rapidly.
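As a tiny numerical illustration of this estimate, the sketch below takes the product of the non-splitting cost and a preset coefficient; the value 0.8 is an assumed stand-in for the preset difference estimation coefficient, for which no specific value is given in the text.

```cpp
// Estimated distortion cost of the candidate splitting modes, obtained as the
// product of the non-splitting cost and a preset coefficient (0.8 is assumed).
double estimateDistortionCost(double nonSplitCost, double diffCoeff = 0.8) {
    return nonSplitCost * diffCoeff;
}
// e.g. estimateDistortionCost(9000.0) yields 7200.0
```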
Step S206, for the intra-frame coding unit, respectively calculating the estimated code stream cost corresponding to each candidate splitting mode, and determining the estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to each candidate splitting mode.
Wherein intra prediction comprises a splitting stage and a prediction stage. In the splitting stage, any splitting-stage mode (whether the non-splitting mode or a candidate splitting mode) has a corresponding first code stream cost (the code rate required to signal the mode). In the prediction stage, there is a second code stream cost corresponding to the prediction mode (the code rate required to signal the prediction mode) and a third code stream cost caused by the encoded data after encoding is completed (the code rate of the encoded data). The sum of the first, second and third code stream costs represents the code stream cost required for intra prediction.
Thus, the estimated code stream cost corresponding to a candidate splitting mode is an estimate of the code stream cost required for intra prediction under that candidate splitting mode, i.e., an estimate of the code stream cost of the whole intra prediction.
The estimated total cost corresponding to the candidate split mode refers to an estimated value obtained by estimating the total cost corresponding to the candidate split mode. For any one split-phase mode, the total cost corresponding to the mode is determined based on the distortion cost and the code stream writing cost corresponding to the mode. Therefore, it can be understood that the estimated total cost corresponding to the mode is determined based on the estimated distortion cost and the estimated code stream cost corresponding to the mode.
Optionally, for the intra-frame coding unit, the computer device calculates estimated code stream costs corresponding to each candidate split mode. For each candidate splitting mode, the computer equipment fuses the estimated code stream cost and the estimated distortion cost corresponding to the candidate splitting mode to obtain the estimated total cost corresponding to the candidate splitting mode.
Illustratively, taking candidate splitting mode i as an example, let the estimated code stream cost corresponding to candidate splitting mode i be Ri and the corresponding estimated distortion cost be Di, and take the total cost calculation function J = D + λR, where R is the code stream cost, D is the distortion cost, and λ is the Lagrange multiplier, a known coefficient used in minimizing the total cost calculation function. Substituting Ri and Di for R and D and evaluating the total cost calculation function gives the estimated total cost Ji = Di + λRi corresponding to candidate splitting mode i.
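To make the comparison used in the skip decision concrete, the following sketch plugs hypothetical numbers into Ji = Di + λRi; the Lagrange multiplier, bit counts, distortion value and non-splitting cost are all assumed for illustration and do not come from the text.

```cpp
#include <cstdio>

int main() {
    const double lambda      = 30.0;    // Lagrange multiplier (assumed known)
    const double Di          = 7200.0;  // estimated distortion cost for mode i
    const double splitBits   = 6.0;     // first code stream cost: signalling the split
    const double otherBits   = 140.0;   // second + third code stream costs
    const double Ri          = splitBits + otherBits;  // estimated code stream cost
    const double Ji          = Di + lambda * Ri;       // estimated total cost
    const double costNoSplit = 9000.0;  // RDO cost of the non-splitting mode

    std::printf("Ji = %.1f\n", Ji);     // 7200 + 30 * 146 = 11580
    if (costNoSplit <= Ji)
        std::printf("skip full rate-distortion calculation for mode i\n");
    else
        std::printf("run full rate-distortion calculation for mode i\n");
}
```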
Step S208, skipping the rate-distortion cost calculation corresponding to a candidate splitting mode when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to that candidate splitting mode.
Where the rate distortion cost calculation refers to a calculation using RDO. Therefore, the rate distortion cost corresponding to the candidate split mode is the total cost corresponding to the candidate split mode obtained by RDO calculation.
It should be noted that, the estimated total cost and the rate distortion cost corresponding to the candidate split mode can reflect the cost of the intra-frame prediction whole in the candidate split mode. However, the calculation method and the calculation complexity are different. The estimated total cost corresponding to the candidate splitting mode does not relate to RDO calculation, and is simpler and more convenient. The rate distortion cost corresponding to the candidate split mode is closer to the cost of the whole intra-frame prediction, and the calculation accuracy is higher.
Considering that some candidate splitting modes may have a cost higher than the cost corresponding to the non-splitting mode, such a splitting mode will never end up as the optimal splitting mode of the intra-frame coding unit, and calculating its rate-distortion cost is therefore inefficient computation. The optimal splitting mode is explained later.
Thus, to ensure both the effectiveness and the efficiency of the computation, the candidate splitting modes that cannot be selected as the optimal splitting mode can be filtered out in advance by calculating the much simpler estimated total cost.
Illustratively, the computer device compares the cost corresponding to the non-splitting mode with the estimated total cost corresponding to each candidate splitting mode. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, the rate-distortion calculation of that candidate splitting mode would be inefficient computation and the intra-frame coding unit will not be split according to that candidate splitting mode, so the computer device skips the rate-distortion cost calculation corresponding to that candidate splitting mode.
Step S210, when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, performing rate-distortion cost calculation on that candidate splitting mode, and selecting the optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation results.
The calculation result is the rate-distortion cost corresponding to a non-skipped candidate splitting mode. The optimal splitting mode is the splitting mode that best balances code stream cost and distortion: in general, smaller distortion means better coding quality but a larger code stream, which reduces coding efficiency. The optimal splitting mode is therefore the splitting mode that trades off distortion against code stream cost, taking both coding quality and coding efficiency into account.
Optionally, for each candidate splitting mode, when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to that candidate splitting mode, the candidate splitting mode is determined to be a non-skipped candidate splitting mode, and its rate-distortion cost is calculated to obtain the calculation result (i.e., the rate-distortion cost) of that non-skipped candidate splitting mode.
If a plurality of non-skipped candidate splitting modes exist, the computer equipment selects an optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to respective calculation results of the non-skipped candidate splitting modes.
Of course, in other embodiments, if only one non-skipped candidate splitting mode exists, the computer device determines that candidate splitting mode as the optimal splitting mode, and the rate-distortion cost calculation may be omitted.
If no non-skipped candidate splitting mode exists, the computer device determines that the intra-frame coding unit is not split, and performs subsequent predictive coding based on the intra-frame coding unit to obtain the encoded data.
In some embodiments, selecting an optimal split mode corresponding to the intra-coding unit from the non-skipped candidate split modes according to the calculation result includes: if a plurality of non-skipped candidate splitting modes exist, selecting a candidate splitting mode with the minimum corresponding rate distortion cost from the plurality of non-skipped candidate splitting modes as an optimal splitting mode corresponding to the intra-frame coding unit.
For example, if there are multiple candidate split modes that are not skipped, the computer device obtains respective calculation results for each of the candidate split modes that are not skipped, and determines a minimum rate-distortion cost from each calculation result. The computer equipment takes the non-skipped candidate splitting mode corresponding to the minimum rate distortion cost as the optimal splitting mode corresponding to the intra-frame coding unit.
In this embodiment, after it is determined that a plurality of non-skipped candidate splitting modes exist, the candidate splitting mode with the smallest rate-distortion cost is selected as the optimal splitting mode based on the rate-distortion cost of each non-skipped candidate splitting mode. The intra-frame coding unit is then split according to the optimal splitting mode and encoded, so that video encoding is achieved with the smallest rate-distortion cost while coding quality is guaranteed, which improves the effectiveness and efficiency of video encoding.
Step S212, splitting the intra-frame coding unit into coding blocks according to the optimal splitting mode.
Optionally, the computer device splits the intra-frame coding unit according to an optimal splitting mode to obtain a plurality of coding blocks, and respectively codes each coding block to obtain the coded data of the intra-frame coding unit.
Illustratively, after a plurality of coding blocks are obtained, for each coding block, the computer device performs predictive coding on the coding block through a corresponding prediction mode to obtain coding data of the coding block, and determines coding data of an intra-frame coding unit according to the coding data of each coding block.
In the above video encoding method, the candidate splitting modes corresponding to the intra-frame coding unit are determined; for the intra-frame coding unit, the cost corresponding to the non-splitting mode is calculated, and the estimated distortion cost corresponding to the candidate splitting modes is estimated from that cost; and the estimated code stream cost corresponding to each candidate splitting mode is calculated. In this way, the estimated total cost corresponding to each candidate splitting mode can be obtained quickly from the estimated distortion cost and the estimated code stream cost. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode would incur a cost at least as large as not splitting, so its rate-distortion cost calculation would be wasted computation and is skipped, reducing the amount of computation. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode remains a possible choice for the optimal splitting mode and is not skipped, so its rate-distortion cost is calculated. The optimal splitting mode corresponding to the intra-frame coding unit is then selected from the non-skipped candidate splitting modes according to their calculation results, and the intra-frame coding unit is split into coding blocks according to the optimal splitting mode. Therefore, in the whole encoding process, the rate-distortion cost does not need to be calculated for all candidate splitting modes; inefficient rate-distortion cost calculations are avoided, the amount of computation is reduced, and encoding efficiency is improved.
In some embodiments, determining a candidate split mode for an intra-coded unit includes: when the intra-frame coding unit meets the preset size constraint condition, calculating a horizontal texture gradient and a vertical texture gradient of the intra-frame coding unit according to pixel values of all pixel points in the intra-frame coding unit; determining a pixel texture direction of the intra-frame encoding unit according to the horizontal texture gradient and the vertical texture gradient, wherein the pixel texture direction is at least one of the horizontal direction and the vertical direction; and screening candidate splitting modes corresponding to the intra-frame coding units from a plurality of preset splitting modes according to the pixel texture direction.
Wherein the preset size constraint condition is a constraint rule for defining the size of the intra-coding unit. Illustratively, the preset size constraint includes constraining a ratio between a width and a height of the intra-coding unit.
The horizontal texture gradient refers to the degree of texture variation in the horizontal direction, reflecting the degree of correlation between pixels in the horizontal direction. The vertical texture gradient refers to the degree of texture variation in the vertical direction, reflecting the degree of correlation between pixels in the vertical direction.
The pixel texture direction refers to the direction in which each pixel point with high association degree in the intra-frame coding unit is located. The pixel texture direction can also be understood as the splitting direction. For example, if the correlation degree of the pixel values of each pixel point in the horizontal direction in the intra-frame encoding unit is higher than the correlation degree of the pixel values of each pixel point in the vertical direction, the pixel texture direction is determined to be the horizontal direction, and at this time, the splitting direction of the intra-frame encoding unit is horizontal splitting.
Optionally, for an intra-coding unit, the computer device verifies whether the intra-coding unit meets a preset size constraint based on size information of the intra-coding unit. And when the intra-frame coding unit is verified to meet the preset size constraint condition, the computer equipment acquires a pixel matrix of the intra-frame coding unit, wherein the pixel matrix comprises pixel values of all pixel points in the intra-frame coding unit. A horizontal texture gradient is calculated based on the pixel values of the pixels located in the same row, and a vertical texture gradient is calculated based on the pixel values of the pixels located in the same column.
The computer equipment determines texture information according to the horizontal texture gradient and the vertical texture gradient, selects the pixel texture direction of the intra-frame coding unit from the horizontal direction and the vertical direction according to the texture information, and screens out candidate splitting modes consistent with the pixel texture direction from a plurality of preset splitting modes.
Illustratively, starting from the first row, obtaining a sub-horizontal texture gradient of the current row based on the pixel value of each pixel point positioned in the current row, and fusing the sub-horizontal texture gradients corresponding to each row to obtain a horizontal texture gradient. Starting from the first column, obtaining a sub-vertical texture gradient of the current column based on pixel values of all pixel points positioned in the current column, and fusing sub-vertical texture gradients corresponding to all columns to obtain a vertical texture gradient. The computer device compares the horizontal texture gradient to the vertical texture gradient to determine texture information.
For example, if the horizontal texture gradient and the vertical texture gradient are close to each other, the texture information indicates that the horizontal degree of association (i.e., the degree of association between pixels in the horizontal direction) does not differ significantly from the vertical degree of association (i.e., the degree of association between pixels in the vertical direction), and the pixel texture direction is determined to include both the horizontal direction and the vertical direction.
If the horizontal texture gradient is larger than the vertical texture gradient, determining that the correlation degree of the texture information in the horizontal direction is lower, filtering the horizontal direction, and determining that the pixel texture direction is the vertical direction. If the horizontal texture gradient is smaller than the vertical texture gradient, determining that the correlation degree of the texture information representation in the vertical direction is lower, filtering the vertical direction, and determining that the pixel texture direction is the horizontal direction.
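The gradient computation and direction decision can be sketched as follows. The 8-bit, row-major pixel layout and the ratio threshold of 2.0 are assumptions made only for illustration; the decision follows the ratio-based comparison described for the splitting mode determining module.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

enum class TextureDir { Horizontal, Vertical, Both };

TextureDir pixelTextureDirection(const std::vector<uint8_t>& pixels,
                                 int width, int height,
                                 double ratioThreshold = 2.0) {
    double gradH = 0.0, gradV = 0.0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int p = pixels[y * width + x];
            if (x + 1 < width)                    // horizontal neighbour difference
                gradH += std::abs(p - pixels[y * width + x + 1]);
            if (y + 1 < height)                   // vertical neighbour difference
                gradV += std::abs(p - pixels[(y + 1) * width + x]);
        }
    }
    // A much larger gradient in one direction means the pixels are more strongly
    // correlated along the other direction.
    const double eps = 1e-9;
    const double firstRatio  = gradH / (gradV + eps);  // horizontal / vertical
    const double secondRatio = gradV / (gradH + eps);  // vertical / horizontal
    if (firstRatio < ratioThreshold && secondRatio < ratioThreshold)
        return TextureDir::Both;                        // no dominant direction
    return (firstRatio >= ratioThreshold) ? TextureDir::Vertical
                                          : TextureDir::Horizontal;
}
```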
In some embodiments, filtering candidate split modes corresponding to intra-coding units from a plurality of preset split modes according to pixel texture directions includes: if the pixel texture direction comprises a horizontal direction and a vertical direction, a plurality of preset splitting modes are used as candidate splitting modes corresponding to the intra-frame coding units; if the pixel texture direction is the horizontal direction, screening a preset splitting mode in the horizontal direction from a plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit; if the pixel texture direction is the vertical direction, a preset splitting mode in the vertical direction is selected from a plurality of preset splitting modes and is used as a candidate splitting mode corresponding to the intra-frame coding unit.
Wherein the plurality of preset splitting modes include preset splitting modes in two splitting directions: preset splitting modes in the horizontal direction (horizontal splitting modes) and preset splitting modes in the vertical direction (vertical splitting modes).
Thus, in the case where the pixel texture direction includes a horizontal direction and a vertical direction, the computer device determines each of the preset split modes as a candidate split mode. For example, as in fig. 3, in this case, there are 6 candidate split modes, namely horizontal split mode 1, horizontal split mode 2, horizontal split mode 3, vertical split mode 1, vertical split mode 2, and vertical split mode 3.
In the case that the pixel texture direction is the horizontal direction, the computer device takes each horizontal splitting mode as a candidate splitting mode, as in fig. 3, there are 3 candidate splitting modes, namely a horizontal splitting mode 1, a horizontal splitting mode 2 and a horizontal splitting mode 3. Similarly, in the case that the pixel texture direction is the vertical direction, the computer device uses each vertical splitting mode as a candidate splitting mode, as in fig. 3, there are 3 candidate splitting modes, namely, a vertical splitting mode 1, a vertical splitting mode 2 and a vertical splitting mode 3.
Therefore, if the pixel texture direction comprises the horizontal direction and the vertical direction, the preset splitting modes are not required to be screened, and each preset splitting mode is directly used as a candidate splitting mode. If the pixel texture direction is the horizontal direction, screening out a preset splitting mode in the horizontal direction as a candidate splitting mode. If the pixel texture direction is the vertical direction, screening out a preset splitting mode in the vertical direction as a candidate splitting mode. In this way, the candidate splitting mode matched with the pixel texture direction is screened out by combining the pixel texture direction, so that the screening effectiveness of the candidate splitting mode is ensured, and the effectiveness of video coding is improved.
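A short sketch of this screening follows; the six mode identifiers mirror the modes of FIG. 3 and are illustrative names only, and the TextureDir enumeration repeats the one from the previous sketch so the snippet stands on its own.

```cpp
#include <vector>

enum class TextureDir { Horizontal, Vertical, Both };
enum class SplitModeId { Horz1, Horz2, Horz3, Vert1, Vert2, Vert3 };

std::vector<SplitModeId> screenCandidates(TextureDir dir) {
    const std::vector<SplitModeId> horizontal = { SplitModeId::Horz1, SplitModeId::Horz2, SplitModeId::Horz3 };
    const std::vector<SplitModeId> vertical   = { SplitModeId::Vert1, SplitModeId::Vert2, SplitModeId::Vert3 };
    switch (dir) {
        case TextureDir::Horizontal: return horizontal;  // keep only horizontal splitting modes
        case TextureDir::Vertical:   return vertical;    // keep only vertical splitting modes
        default: {                                       // both directions: keep all presets
            std::vector<SplitModeId> all = horizontal;
            all.insert(all.end(), vertical.begin(), vertical.end());
            return all;
        }
    }
}
```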
In this embodiment, when the intra-frame encoding unit meets a preset size constraint condition, the horizontal texture gradient and the vertical texture gradient of the intra-frame encoding unit are calculated according to the pixel values of each pixel point in the intra-frame encoding unit. According to the horizontal texture gradient and the vertical texture gradient, the pixel texture direction of the matched intra-frame coding unit is accurately selected from the horizontal direction and the vertical direction. And further screening the candidate range of the splitting mode of the intra-frame coding unit from a plurality of preset splitting modes according to the pixel texture direction, namely screening the candidate splitting mode corresponding to the intra-frame coding unit. Therefore, the reasonable and effective video coding is performed based on the candidate splitting mode, and the video coding efficiency is ensured and the video coding effectiveness is improved.
In some embodiments, the step of determining whether the intra-frame coding unit meets the preset size constraint condition comprises: acquiring the pixel width and the pixel height of the intra-frame coding unit; calculating the width-to-height ratio and the height-to-width ratio of the intra-frame coding unit according to the pixel width and the pixel height; and if both the width-to-height ratio and the height-to-width ratio are smaller than a preset ratio, determining that the intra-frame coding unit meets the preset size constraint condition.
Wherein, the pixel width refers to the width of the intra-frame coding unit, and the pixel height refers to the height of the intra-frame coding unit. The preset ratio is used to constrain the intra coding unit size for performing texture gradient calculations (calculation of horizontal texture gradients and calculation of vertical texture gradients).
Optionally, the computer device divides the pixel width by the pixel height to obtain the width-to-height ratio, and divides the pixel height by the pixel width to obtain the height-to-width ratio. If both ratios are smaller than the preset ratio, it is determined that the intra-frame coding unit meets the preset size constraint condition and that texture gradient calculation is allowed.
Illustratively, if the intra-frame coding unit is of size 64×64 and the preset ratio is 4, both ratios are 1, which is less than 4, and it is determined that the intra-frame coding unit satisfies the preset size constraint condition.
Illustratively, if the intra-frame coding unit is of size 64×32 and the preset ratio is 4, the width-to-height ratio is 2 (64 divided by 32) and the height-to-width ratio is 0.5 (32 divided by 64); both are less than 4, and it is determined that the intra-frame coding unit satisfies the preset size constraint condition.
It should be noted that, with a preset ratio greater than 1, a width-to-height ratio smaller than the preset ratio indicates that horizontal splitting is allowed, and a height-to-width ratio smaller than the preset ratio indicates that vertical splitting is allowed. When both ratios are smaller than the preset ratio, both horizontal splitting and vertical splitting are allowed, and the candidate splitting modes can then be further screened through texture gradient calculation.
In this embodiment, the width-to-height ratio and the height-to-width ratio of the intra-frame coding unit are calculated based on its pixel width and pixel height. If both ratios are smaller than the preset ratio, it is determined that the intra-frame coding unit meets the preset size constraint condition, both horizontal and vertical splitting are allowed, and the candidate splitting modes can be further determined using texture gradients, which ensures the effectiveness of candidate splitting mode screening and thus improves the effectiveness of video encoding.
In some embodiments, the method further comprises: if the width-to-height ratio is greater than or equal to the preset ratio and the height-to-width ratio is less than the preset ratio, determining that the intra-frame coding unit does not meet the preset size constraint condition, and screening the preset splitting modes in the vertical direction from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit; and if the height-to-width ratio is greater than or equal to the preset ratio and the width-to-height ratio is less than the preset ratio, determining that the intra-frame coding unit does not meet the preset size constraint condition, and screening the preset splitting modes in the horizontal direction from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit.
For example, if the width-to-height ratio is greater than or equal to the preset ratio and the height-to-width ratio is less than the preset ratio, the pixel width is large relative to the pixel height. The unit is then suitable for splitting in the vertical direction and unsuitable for splitting in the horizontal direction, so no texture gradient calculation is required, and the preset splitting modes in the vertical direction are directly determined as the candidate splitting modes.
Similarly, if the height-to-width ratio is greater than or equal to the preset ratio and the width-to-height ratio is less than the preset ratio, the pixel height is large relative to the pixel width. The unit is then suitable for splitting in the horizontal direction and unsuitable for splitting in the vertical direction, so no texture gradient calculation is required, and the preset splitting modes in the horizontal direction are directly determined as the candidate splitting modes.
Thus, after the candidate split mode is determined, step S204 is continued.
In this embodiment, when the preset size constraint condition is not satisfied, no texture gradient calculation is needed: the preset splitting modes of the unsuitable splitting direction are filtered out directly based on the comparison of the width-to-height ratio and the height-to-width ratio with the preset ratio. This greatly improves the efficiency of determining the candidate splitting modes and thereby improves the efficiency of video coding.
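Illustratively, the ratio-based screening described above can be written as the following C sketch. The function name, the enumeration names and the example preset ratio of 4 are assumptions made for this sketch rather than details taken from the embodiments.

#include <stdio.h>

/* Possible outcomes of the size-constraint check (names are illustrative). */
enum direction_screen {
    NEED_TEXTURE_GRADIENT = 0, /* both ratios below the preset ratio             */
    VERTICAL_ONLY         = 1, /* very wide block: only vertical splits remain   */
    HORIZONTAL_ONLY       = 2  /* very tall block: only horizontal splits remain */
};

/* preset_ratio plays the role of the "preset ratio" in the text. */
static enum direction_screen screen_by_size(int width, int height, double preset_ratio)
{
    double w_over_h = (double)width  / (double)height; /* width-to-height ratio */
    double h_over_w = (double)height / (double)width;  /* height-to-width ratio */

    if (w_over_h < preset_ratio && h_over_w < preset_ratio)
        return NEED_TEXTURE_GRADIENT;   /* size constraint met: use texture gradients   */
    if (w_over_h >= preset_ratio)
        return VERTICAL_ONLY;           /* too wide: horizontal splits are filtered out */
    return HORIZONTAL_ONLY;             /* too tall: vertical splits are filtered out   */
}

int main(void)
{
    printf("64x64 -> %d\n", screen_by_size(64, 64, 4.0)); /* 0: texture gradient needed */
    printf("64x8  -> %d\n", screen_by_size(64,  8, 4.0)); /* 1: vertical splits only    */
    printf("8x64  -> %d\n", screen_by_size( 8, 64, 4.0)); /* 2: horizontal splits only  */
    return 0;
}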
In some embodiments, the method further comprises: acquiring a preset size range, and if the size of the intra-frame coding unit is within the size range, proceeding to the step of calculating the width-to-height ratio and the height-to-width ratio of the intra-frame coding unit according to the pixel width and the pixel height.
Wherein the size range is used to check whether the intra-coding unit is allowed to perform intra-splitting. The size range includes a maximum pixel height and a maximum pixel width.
If the pixel height of the intra-frame coding unit is less than or equal to the maximum pixel height and the pixel width of the intra-frame coding unit is less than or equal to the maximum pixel width, determining that the size of the intra-frame coding unit is within the size range. If the pixel height of the intra-frame coding unit is larger than the maximum pixel height, or the pixel width of the intra-frame coding unit is larger than the maximum pixel width, determining that the size of the intra-frame coding unit is not in the size range, and not allowing the intra-frame coding unit to be split, and directly carrying out the subsequent prediction stage.
In the present embodiment, before checking whether the preset size constraint condition is satisfied, it is first determined whether the size of the intra-coding unit is within the size range. If the size meets the splitting requirement, the preset size constraint condition is then evaluated; if not, the size does not meet the splitting requirement and no splitting is performed. This further improves the effectiveness and accuracy of video coding.
In some embodiments, calculating a horizontal texture gradient of an intra-coding unit from pixel values of respective pixels within the intra-coding unit includes: determining adjacent pixel points of each pixel point in an intra-frame coding unit in the horizontal direction; calculating the pixel difference in the horizontal direction between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the horizontal direction between the pixel points and the corresponding adjacent pixel points to obtain the horizontal texture gradient of the intra-frame coding unit.
Optionally, the computer device determines, in a top-to-bottom direction, the target rows involved in computing the horizontal texture gradient, the target rows being all rows other than the lowest row. For each target row, the computer device determines the last pixel of the row in a left-to-right direction. For each pixel of a target row other than the last pixel, the pixel adjacent to it on its right side is taken as its adjacent pixel in the horizontal direction. The absolute value of the difference between the pixel value of the pixel and the pixel value of its adjacent pixel is calculated to obtain the horizontal-direction difference corresponding to the pixel.
And the computer equipment calculates the sum value of the differences corresponding to the pixel points in the target row to obtain the sub-horizontal texture gradient of the target row.
The computer device sums the respective sub-horizontal texture gradients for each target line to obtain a horizontal texture gradient for the intra-coding unit.
Of course, the target rows may instead be determined from bottom to top, in which case the target rows are all rows other than the uppermost row. Likewise, the last pixel of each target row and the adjacent pixel of each pixel may be determined from right to left.
Illustratively, let the pixel width be width and the pixel height be height. The target rows are rows 1 through height-1, traversed from left to right and from top to bottom, and for each target row the last pixel is the pixel in column width. For each target row and each non-last pixel x, the corresponding pixel value is src[x], the corresponding adjacent pixel is x+1, and the pixel value of the adjacent pixel is src[x+1]. The horizontal texture gradient hor_grad is then calculated with the following formula: hor_grad = Σ |src[x] − src[x+1]|, where the sum runs over every non-last pixel x of every target row.
In this embodiment, after determining the adjacent pixel of each pixel point in the intra-frame encoding unit in the horizontal direction, the closeness between a single pixel point and its adjacent pixel in the horizontal direction is estimated by calculating the horizontal-direction pixel difference between them: the smaller the horizontal-direction pixel difference, the closer and more similar the two pixels are. The horizontal-direction pixel differences of all pixel points are then summed to obtain the horizontal texture gradient of the intra-frame coding unit, so as to accurately evaluate the overall closeness in the horizontal direction.
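Illustratively, the horizontal texture gradient calculation can be sketched in C as follows, following the convention above that the lowest row and the last pixel of each row are excluded. The row-major buffer layout with a line stride, the function name and the 4×4 test block are assumptions of this sketch.

#include <stdio.h>
#include <stdlib.h>

/* src: row-major pixel buffer; stride: pixels per picture line (stride >= width). */
static long horizontal_texture_gradient(const unsigned char *src,
                                        int width, int height, int stride)
{
    long hor_grad = 0;
    for (int y = 0; y < height - 1; y++) {          /* target rows: all but the lowest */
        const unsigned char *row = src + (long)y * stride;
        for (int x = 0; x < width - 1; x++)         /* non-last pixels of the row      */
            hor_grad += labs((long)row[x] - (long)row[x + 1]);
    }
    return hor_grad;
}

int main(void)
{
    /* 4x4 block with vertical stripes: large horizontal differences. */
    const unsigned char block[16] = {
        0, 255, 0, 255,
        0, 255, 0, 255,
        0, 255, 0, 255,
        0, 255, 0, 255
    };
    printf("hor_grad = %ld\n", horizontal_texture_gradient(block, 4, 4, 4));
    return 0;
}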
In some embodiments, computing a vertical texture gradient for an intra-coding unit from pixel values for respective pixels within the intra-coding unit includes: determining adjacent pixel points of each pixel point in the intra-frame coding unit in the vertical direction; calculating the vertical pixel difference between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the vertical direction between the pixel points and the corresponding adjacent pixel points to obtain the vertical texture gradient of the intra-frame coding unit.
Optionally, the computer device determines, from left to right, the target columns involved in computing the vertical texture gradient, the target columns being all columns other than the rightmost column. For each target column, the last pixel of the column is determined in a top-to-bottom direction. For each pixel of a target column other than the last pixel, the pixel adjacent to it and located below it is taken as its adjacent pixel in the vertical direction. The computer device calculates the absolute value of the difference between the pixel value of the pixel and the pixel value of its adjacent pixel to obtain the vertical-direction difference corresponding to the pixel.
And the computer equipment calculates the sum value of the differences corresponding to the pixel points of the target column to obtain the sub-vertical texture gradient of the target column.
The computer device sums the sub-vertical texture gradients for each target column to obtain a vertical texture gradient for the intra-coding unit.
Similarly, the target column may be determined from right to left, and the target column is other columns than the leftmost column. The last pixel point in each target column and the adjacent pixel points of each pixel point can be determined from bottom to top.
Illustratively, let the pixel width be width and the pixel height be height. The target columns are columns 1 through width-1, traversed from left to right and from top to bottom, and for each target column the last pixel is the pixel in row height. For each target column and each non-last pixel x, the corresponding pixel value is src[x], the corresponding adjacent pixel is x+stride, and the pixel value of the adjacent pixel is src[x+stride], where stride is a step length of one row, so that x+stride denotes the pixel directly below x. The vertical texture gradient ver_grad is then calculated with the following formula: ver_grad = Σ |src[x] − src[x+stride]|, where the sum runs over every non-last pixel x of every target column.
After determining the horizontal texture gradient and the vertical texture gradient of the intra-frame encoding unit, they are stored into a gradient array texture_dir of size 2, which is used for storing the texture gradients corresponding to the intra-frame encoding unit. The gradient array includes a horizontal texture gradient entry texture_dir[0] and a vertical texture gradient entry texture_dir[1]. The computer device stores the horizontal texture gradient and the vertical texture gradient of the intra-coding unit into texture_dir[0] and texture_dir[1], respectively.
In this embodiment, after determining the adjacent pixel of each pixel in the intra-frame encoding unit in the vertical direction, the closeness between a single pixel and its adjacent pixel in the vertical direction is estimated by calculating the vertical-direction pixel difference between them: the smaller the vertical-direction pixel difference, the closer and more similar the two pixels are. The vertical-direction pixel differences of all pixel points are then summed to obtain the vertical texture gradient of the intra-frame coding unit, so as to accurately evaluate the overall closeness in the vertical direction.
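Illustratively, a matching C sketch for the vertical texture gradient is given below, with the rightmost column and the last pixel of each column excluded as described above. The two-element texture_dir array mirrors the gradient array mentioned earlier, and texture_dir[0] would hold the horizontal gradient from the previous sketch; these details, like the function name and the test block, are assumptions of this sketch.

#include <stdio.h>
#include <stdlib.h>

/* src: row-major pixel buffer; stride: pixels per picture line (stride >= width). */
static long vertical_texture_gradient(const unsigned char *src,
                                      int width, int height, int stride)
{
    long ver_grad = 0;
    for (int x = 0; x < width - 1; x++)      /* target columns: all but the rightmost          */
        for (int y = 0; y < height - 1; y++) /* non-last pixels: compare with the pixel below  */
            ver_grad += labs((long)src[(long)y * stride + x] -
                             (long)src[(long)(y + 1) * stride + x]);
    return ver_grad;
}

int main(void)
{
    /* 4x4 block with horizontal stripes: large vertical differences. */
    const unsigned char block[16] = {
          0,   0,   0,   0,
        255, 255, 255, 255,
          0,   0,   0,   0,
        255, 255, 255, 255
    };
    long texture_dir[2] = { 0, 0 };          /* [0]: horizontal, [1]: vertical */
    texture_dir[1] = vertical_texture_gradient(block, 4, 4, 4);
    printf("ver_grad = %ld\n", texture_dir[1]);
    return 0;
}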
In some embodiments, as shown in fig. 4, a flow chart of the pixel texture direction determination in one embodiment is shown. Determining a pixel texture direction of the intra-coding unit from the horizontal texture gradient and the vertical texture gradient, comprising:
in step S402, a first ratio between the horizontal texture gradient and the vertical texture gradient is calculated, and a second ratio between the vertical texture gradient and the horizontal texture gradient is calculated.
Illustratively, the first ratio is the ratio of the horizontal texture gradient divided by the vertical texture gradient, and the second ratio is the ratio of the vertical texture gradient divided by the horizontal texture gradient.
In step S404, if the first ratio and the second ratio are both smaller than the preset ratio, it is determined that the pixel texture direction of the intra-frame encoding unit includes a horizontal direction and a vertical direction.
The preset ratio is a value greater than 1.
The step of determining the preset ratio includes: and acquiring each coded historical intra-frame coding unit, and determining a splitting direction related to an optimal splitting mode of each historical intra-frame coding unit, wherein the splitting direction is one of a horizontal direction and a vertical direction. The optimal split pattern of the historical intra coding unit is determined based on RDO calculations in the related art. The computer device calculates a historical vertical texture gradient and a historical horizontal texture gradient corresponding to each of the historical intra-frame encoding units. For each historical intra-coding unit, a first ratio of the historical horizontal texture gradient to the historical vertical texture gradient and a second ratio of the historical vertical texture gradient to the historical horizontal texture gradient are calculated. And determining a preset ratio according to the respective first ratio and second ratio of each historical intra-frame coding unit and the splitting direction of the optimal splitting mode.
In step S406, if the first ratio is smaller than the preset ratio and the second ratio is greater than or equal to the preset ratio, the pixel texture direction of the intra-frame encoding unit is determined to be the horizontal direction.
In step S408, if the second ratio is smaller than the preset ratio and the first ratio is greater than or equal to the preset ratio, the direction of the texture of the pixels of the intra-frame encoding unit is determined to be the vertical direction.
Further, for example, as shown in fig. 5, a flow chart of determining a pixel texture direction in another embodiment is shown.
First, the computer device acquires a horizontal texture gradient hor_grad and a vertical texture gradient ver_grad.
Then, the computer device first determines whether inequality 1 is satisfied: hor_grad / ver_grad ≥ y, that is, whether hor_grad ≥ y × ver_grad.
If it is satisfied, that is, the first ratio is greater than or equal to the preset ratio, it is determined that the pixel texture direction does not include the horizontal direction. If it is not satisfied, the first ratio is smaller than the preset ratio, and it is determined that the pixel texture direction includes the horizontal direction. Here y is the preset ratio.
Finally, the computer device determines whether inequality 2 is satisfied: ver_grad / hor_grad ≥ y, that is, whether ver_grad ≥ y × hor_grad.
If it is satisfied, that is, the second ratio is greater than or equal to the preset ratio, it is determined that the pixel texture direction does not include the vertical direction. If it is not satisfied, the second ratio is smaller than the preset ratio, and it is determined that the pixel texture direction includes the vertical direction.
Therefore, if the first ratio is smaller than the preset ratio and the second ratio is greater than or equal to the preset ratio, the pixel texture direction is the horizontal direction.
If the second ratio is smaller than the preset ratio and the first ratio is greater than or equal to the preset ratio, the pixel texture direction is determined to be the vertical direction.
If the first ratio and the second ratio are both smaller than the preset ratio, the pixel texture direction of the intra-frame coding unit is determined to include both the horizontal direction and the vertical direction.
In this embodiment, whether the splitting in the horizontal direction can be performed is checked by comparing whether the first ratio is smaller than the preset ratio. Likewise, whether the splitting in the vertical direction can be performed is checked by comparing whether the second ratio is smaller than the preset ratio. If the first ratio and the second ratio are smaller than the preset ratio, determining that the pixel texture direction of the intra-frame coding unit comprises a horizontal direction and a vertical direction. If the first ratio is smaller than the preset ratio and the second ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a horizontal direction. If the second ratio is smaller than the preset ratio and the first ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a vertical direction. In the process, the pixel texture direction is accurately estimated from the dimension of the texture gradient through twice comparison, so that the effectiveness of the determination of the subsequent candidate splitting mode is ensured, and the accuracy of video coding is improved.
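Illustratively, the two inequality checks can be combined into the following C sketch. The enumeration names are illustrative, and the comparisons are written in the multiplied-through form of inequalities 1 and 2 so that no division is needed.

#include <stdio.h>

enum texture_direction { TEX_BOTH = 0, TEX_HORIZONTAL = 1, TEX_VERTICAL = 2 };

/* y is the preset ratio (a value greater than 1). */
static enum texture_direction pixel_texture_direction(long hor_grad, long ver_grad, long y)
{
    int no_horizontal = (hor_grad >= y * ver_grad); /* inequality 1: first ratio  >= y */
    int no_vertical   = (ver_grad >= y * hor_grad); /* inequality 2: second ratio >= y */

    if (!no_horizontal && !no_vertical)
        return TEX_BOTH;        /* both ratios below y: keep both directions */
    if (!no_horizontal)
        return TEX_HORIZONTAL;  /* first ratio < y, second ratio >= y        */
    /* Second ratio < y, first ratio >= y.  A completely flat block, where both
       gradients are zero, also lands here; the text does not treat that case. */
    return TEX_VERTICAL;
}

int main(void)
{
    printf("%d\n", pixel_texture_direction(1000, 900, 2)); /* 0: both directions */
    printf("%d\n", pixel_texture_direction( 100, 900, 2)); /* 1: horizontal      */
    printf("%d\n", pixel_texture_direction( 900, 100, 2)); /* 2: vertical        */
    return 0;
}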
In some embodiments, for an intra coding unit, calculating a cost corresponding to a non-split mode includes: calculating pixel difference values of all pixel points in an intra-frame coding unit before intra-frame prediction and after intra-frame prediction in a non-splitting mode, and obtaining distortion cost corresponding to the non-splitting mode based on the pixel difference values of all pixel points; calling a code stream cost function, and obtaining a code stream writing cost corresponding to the non-splitting mode according to a mode identification code corresponding to the non-splitting mode and the pixel height and pixel width of the intra-frame coding unit; and obtaining the cost corresponding to the non-split mode based on the distortion cost and the code stream writing cost corresponding to the non-split mode.
The code stream cost function is used for calculating the write code stream cost; in this embodiment it is used for calculating the write code stream cost required by the modes of the splitting stage. For example, the write code stream cost of the splitting stage is calculated using the lbac_enc_part_size function. The mode identification code characterizes a mode of the splitting stage; for example, the mode identification code of the non-split mode is 0000 and the mode identification code of horizontal split mode 1 is 0001.
Optionally, the computer device obtains a prediction value of each pixel point when intra prediction is performed in the non-split mode. For each pixel, the computer device calculates the difference between the original pixel value and the predicted value of the pixel before intra-prediction to obtain a pixel value difference. And the computer equipment fuses the pixel value difference values to obtain the distortion cost corresponding to the non-splitting mode.
The computer equipment calls a code stream cost function, and substitutes a mode identification code corresponding to the non-splitting mode, pixel height and pixel width of the intra-frame coding unit into the code stream cost function to output code stream writing cost corresponding to the non-splitting mode. The computer equipment obtains the cost corresponding to the non-split mode based on the distortion cost and the code stream writing cost corresponding to the non-split mode.
In the intra-prediction process, there is also an additional code stream cost involved, for example, including the predicted code stream cost corresponding to the intra-prediction mode and the encoded code stream cost caused by the encoded data.
At this time, the computer device invokes a predictive code stream cost function, which is used for calculating the corresponding predictive write code stream cost; for example, the predictive write code stream cost required for angle prediction is calculated using the lbac_enc_intra_dir function. The computer device also calls a coding code stream cost function, the lbac_encode_bin function, which is used for calculating the corresponding coding write code stream cost.
The computer device calls the predictive code stream cost function based on the mode identification code corresponding to the prediction mode and calculates the predictive write code stream cost. It calls the coding code stream cost function based on the non-zero values in the coded data and the number of non-zero values, and calculates the coding write code stream cost. The computer device then superimposes the write code stream cost corresponding to the non-split mode, the predictive write code stream cost and the coding write code stream cost to obtain the target write code stream cost corresponding to the non-split mode.
And the computer equipment calculates the product of the Lagrangian coefficient and the target code stream cost, and superimposes the product and the distortion cost to obtain the cost corresponding to the non-splitting mode.
It should be noted that, in the embodiment of the present application, the calculation process of the cost corresponding to the non-splitting mode is a process of calculating the rate distortion cost for RDO (Rate Distortion Optimization).
In this embodiment, first, the pixel difference values of each pixel point in the intra-frame encoding unit before intra-frame prediction and after intra-frame prediction in the non-split mode are calculated. The distortion cost corresponding to the non-split mode is calculated based on the pixel difference values of the pixel points, so as to evaluate the coding distortion in the non-split mode. Then, the code stream cost function is called, and the write code stream cost corresponding to the non-split mode is calculated according to the mode identification code corresponding to the non-split mode and the pixel height and pixel width of the intra-frame coding unit. Finally, the cost corresponding to the non-split mode is obtained based on the distortion cost and the write code stream cost corresponding to the non-split mode. This cost faithfully reflects the coding effect of the non-split mode.
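Illustratively, the cost of the non-split mode can be sketched in C as follows. The use of the sum of squared differences as the distortion term and the passing of the split-stage, prediction-stage and coefficient bit counts as plain parameters are assumptions of this sketch; in the embodiments these bit counts come from the lbac_* cost functions named above.

#include <stdio.h>

/* org/pred: original and intra-predicted pixels of the 2Nx2N (non-split) block,
   both row-major with the given stride; lambda is the Lagrangian coefficient. */
static double non_split_mode_cost(const unsigned char *org, const unsigned char *pred,
                                  int width, int height, int stride, double lambda,
                                  double split_bits, double pred_mode_bits,
                                  double coeff_bits)
{
    double distortion = 0.0;                  /* distortion cost of the non-split mode */
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            double d = (double)org[y * stride + x] - (double)pred[y * stride + x];
            distortion += d * d;
        }
    /* Target write code stream cost: split-stage bits + prediction bits + coefficient bits. */
    double target_bits = split_bits + pred_mode_bits + coeff_bits;
    return distortion + lambda * target_bits; /* cost_2Nx2N */
}

int main(void)
{
    const unsigned char org[4]  = { 10, 20, 30, 40 };
    const unsigned char pred[4] = { 12, 18, 33, 37 };
    printf("cost_2Nx2N = %f\n",
           non_split_mode_cost(org, pred, 2, 2, 2, 32.0, 4.0, 6.0, 50.0));
    return 0;
}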
In some embodiments, calculating the estimated distortion cost corresponding to the candidate split mode according to the cost includes: obtaining a preset difference pre-estimated coefficient, wherein the difference pre-estimated coefficient represents the difference between the cost corresponding to the non-splitting mode and the distortion cost corresponding to the candidate splitting mode. And taking the product of the cost corresponding to the non-splitting mode and the difference pre-estimated coefficient as the pre-estimated distortion cost corresponding to the candidate splitting mode.
The step of determining the difference pre-estimation coefficient comprises the following steps: the computer equipment calculates the cost of the non-split mode and the distortion cost of the optimal split mode corresponding to each historical intra-frame coding unit. For each historical intra-coding unit, the computer device calculates a ratio between the cost of the non-split mode and the distortion cost of the optimal split mode. The computer equipment determines the difference pre-estimation coefficient according to the corresponding ratio of each historical intra-frame coding unit.
For example, the computer device takes the average of the ratios as the difference prediction coefficient. For another example, the computer device takes out the highest-frequency ratio as the difference prediction coefficient.
Note that, the difference prediction coefficients corresponding to the candidate split modes may be the same or different, and the method is not particularly limited. The difference prediction coefficient is smaller than 1.
In this embodiment, in order to improve the efficiency and accuracy of the estimated total cost of the candidate split mode, when calculating the corresponding estimated distortion cost, the product of the cost corresponding to the non-split mode and the difference estimated coefficient is directly calculated, so as to obtain the estimated distortion cost corresponding to the candidate split mode, and the estimation process is simple, convenient and accurate, so that the accuracy and efficiency of the estimated distortion cost are improved.
In some embodiments, calculating the estimated write stream cost corresponding to each candidate split mode includes: for each candidate splitting mode, acquiring splitting code stream costs required by splitting coding blocks of an intra-frame coding unit according to the candidate splitting mode, and acquiring other code stream writing costs except the splitting code stream costs when intra-frame prediction is performed on the intra-frame coding unit; and superposing the code stream writing cost required by splitting the coding block and other code stream writing cost to obtain a superposition value, and determining the estimated code stream writing cost corresponding to the candidate splitting mode according to the superposition value.
In the process of predicting the code stream writing cost, for each candidate splitting mode, the corresponding other code stream writing cost is obtained in advance and is a fixed value. Therefore, in the process of actually processing each intra-frame coding unit, when the estimated code stream writing cost corresponding to the candidate splitting mode is determined, the other prestored code stream writing cost can be directly called without calculation.
Illustratively, the determining step of the other write stream costs of the candidate split mode includes: and screening out the historical intra-frame coding units belonging to the candidate split mode from the historical intra-frame coding units. And for each screened historical intra-frame coding unit, acquiring the respective code stream cost of other stages (stages except the splitting stage) from the coding information of the screened historical intra-frame coding unit, and summing to obtain a sum value. And determining other code stream writing costs of the candidate splitting mode based on the respective corresponding sum value of the screened historical intra-frame coding units.
For example, an average value of the sums corresponding to the selected historical intra-frame coding units is taken as the cost of other code writing streams of the candidate splitting mode. For another example, the sum value with the highest occurrence frequency is taken as the other code stream writing cost of the candidate split mode.
The computer device calculates split code stream costs bit_cnt corresponding to the candidate split modes, obtains other code stream costs b of the candidate split modes from other code stream costs corresponding to each of the candidate split modes stored in advance, and superimposes the other code stream costs and the calculated split code stream costs to obtain bit_cnt+b. The computer equipment takes bit_cnt+b as the estimated code stream cost corresponding to the candidate splitting mode.
In some embodiments, obtaining the split write stream cost required for splitting the coded blocks of the intra-coded unit in the candidate split mode includes: and calling a code stream cost function, and obtaining splitting and writing code stream cost required by splitting the coding block of the intra-frame coding unit according to the candidate splitting mode according to the mode identification code corresponding to the candidate splitting mode, the pixel height and the pixel width of the intra-frame coding unit.
Here, the code stream cost function is used for calculating the write code stream cost required by the modes of the splitting stage.
Illustratively, after determining the mode identification code corresponding to the candidate split mode, the computer device substitutes the mode identification code, the pixel height and the pixel width of the intra-frame coding unit into a code stream cost function, and outputs the code stream cost corresponding to the candidate split mode.
Therefore, in order to calculate the split code stream cost corresponding to the candidate split mode, a code stream cost function is called first, and then the split code stream cost corresponding to the candidate split mode is calculated according to the mode identification code corresponding to the candidate split mode, the pixel height and the pixel width of the intra-frame coding unit. Therefore, based on the calculated split code stream cost and other preset code stream cost, the estimated code stream cost corresponding to the split mode can be accurately and efficiently determined, so that the efficiency of determining the optimal split mode is improved, and further, the video coding efficiency is improved.
In this embodiment, for each candidate splitting mode, after determining the splitting code stream cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode, other code stream costs are directly obtained, so that the code stream costs of other stages do not need to be calculated one by one, and the calculation amount is reduced. And finally, superposing the code stream writing cost required by splitting the coding block and other code stream writing cost to obtain a superposition value. According to the added value, the estimated code stream cost corresponding to the candidate splitting mode is estimated accurately and rapidly, so that the efficiency of determining the optimal splitting mode is improved, and the video coding efficiency is further improved.
In some embodiments, as shown in fig. 6, a flow chart of the candidate split mode skip determination is shown in one embodiment. For each candidate splitting mode, the computer device acquires the cost cost_2Nx2N corresponding to the non-split mode, the difference prediction coefficient a and the other write code stream cost b, and determines the product a × cost_2Nx2N of the difference prediction coefficient and the cost corresponding to the non-split mode as the estimated distortion cost. After calculating the split write code stream cost bit_cnt corresponding to the candidate split mode, the computer device determines the superposition of the split write code stream cost bit_cnt and the other write code stream cost b as the estimated write code stream cost b + bit_cnt, and determines the estimated total cost corresponding to the candidate split mode, namely: a × cost_2Nx2N + λ × (b + bit_cnt). It is then judged whether the cost corresponding to the non-split mode exceeds this estimated total cost. If it does not exceed it, the computer device skips the candidate split mode; otherwise, the candidate split mode is not skipped.
Therefore, after the candidate splitting modes are determined and before the rate distortion cost is calculated, only the cost corresponding to the non-split mode and the split write code stream cost of each candidate split mode need to be calculated. Combined with the pre-obtained difference prediction coefficient and other write code stream cost, the estimated total cost of the candidate split mode can be estimated rapidly, so as to check whether the candidate split mode can be one of the choices for the optimal split mode. This realizes effective screening of the candidate split modes, avoids ineffective rate distortion cost calculation, and improves video coding efficiency.
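Illustratively, the skip test of fig. 6 for a single candidate split mode can be written as the following C sketch. The quantities cost_2Nx2N, a, b, bit_cnt and λ follow the notation above, while the function name and the example values are assumptions of this sketch.

#include <stdbool.h>
#include <stdio.h>

/* cost_2Nx2N: cost of the non-split mode; a: difference prediction coefficient (< 1);
   b: pre-stored write code stream cost of the other stages; bit_cnt: split-stage bits
   of this candidate split mode; lambda: Lagrangian coefficient. */
static bool skip_candidate_split_mode(double cost_2Nx2N, double a, double b,
                                      double bit_cnt, double lambda)
{
    double est_distortion = a * cost_2Nx2N;  /* estimated distortion cost        */
    double est_bits       = b + bit_cnt;     /* estimated write code stream cost */
    double est_total      = est_distortion + lambda * est_bits;

    /* Skip the RDO pass when the non-split cost does not exceed the estimated total
       cost: the candidate split mode cannot do better than the non-split mode. */
    return cost_2Nx2N <= est_total;
}

int main(void)
{
    printf("%d\n", skip_candidate_split_mode(1000.0, 0.8, 30.0, 10.0, 8.0)); /* 1: skip */
    printf("%d\n", skip_candidate_split_mode(5000.0, 0.8, 30.0, 10.0, 8.0)); /* 0: keep */
    return 0;
}

In this sketch the first call returns 1 because the estimated total cost (1120) is not lower than the non-split cost (1000), so the candidate's rate distortion pass would be wasted; the second call returns 0 and the candidate is kept.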
The differences between the embodiments of the present application and the split mode selection in the related art are specifically illustrated below. Fig. 7 is a schematic diagram of split mode selection according to the related art. Fig. 8 is a schematic diagram of candidate split mode selection in an embodiment of the present application. As can be seen from comparing fig. 7 and 8, the steps from the start to the determination of the pixel texture direction (splitting direction) are identical, and this portion is as follows:
First, it is confirmed whether IPF (Interleaved Prediction Frame, interval interleaved intra prediction) is on. If it is on, splitting is not allowed and the flow ends directly. If it is not on, it is judged whether the size of the intra-frame coding unit is within the size range; if not, splitting is not allowed and the flow ends directly. If it is within the size range, the ratio of the pixel width to the pixel height of the intra-coding unit is calculated to obtain the width-to-height ratio, and the ratio of the pixel height to the pixel width is calculated to obtain the height-to-width ratio. It is then judged whether the width-to-height ratio is smaller than the preset ratio; if so, horizontal splitting is allowed, and if not (the width-to-height ratio is greater than or equal to the preset ratio), horizontal splitting is not allowed. It is then judged whether the height-to-width ratio is smaller than the preset ratio; if so, vertical splitting is allowed, and if not (the height-to-width ratio is greater than or equal to the preset ratio), vertical splitting is not allowed.
At this time, in the related art, after the splitting directions are determined, the splitting modes corresponding to those splitting directions are determined, the rate distortion cost of each splitting mode is calculated, and the final splitting mode is determined based on those rate distortion costs, that is, the rate distortion costs are compared with the cost of the non-splitting mode to determine the final splitting mode, and the process then ends.
In the embodiment of the application, after the pixel texture direction is determined, the candidate split mode is determined according to the pixel texture direction. At this time, the cost of the split writing code stream of the candidate split mode and the cost of the non-split mode are calculated, so that the estimated total cost of the candidate split mode is estimated rapidly. That is, based on this approach, the estimated total cost of each candidate split pattern is estimated. And screening out non-skipped candidate split modes based on the estimated total cost, namely, according to the estimated total cost and the cost of the non-split modes, the candidate split modes with higher cost can be filtered out in advance, and the candidate split modes with lower cost (the non-skipped candidate split modes) can be screened out. And finally, determining an optimal splitting mode based on the rate distortion cost of the non-skipped candidate splitting mode, and ending at the moment.
Compared with the related art, the embodiments of the present application can effectively reduce rate distortion calculations with a poor cost-performance ratio. As shown in table 1, table 1 is a comparison of the effects before and after optimization using the embodiments of the present application.
Table 1: effect comparison table
The data in table 1 is obtained by video encoding a video sequence of 50 frames. As shown in table 1, the number of splits is reduced from 6516628 to 5043689 before and after optimization, the probability of falling into the worst case (trying all split modes) drops from 3.93‰ to 0.16‰, and the number of skips (the number of skipped inefficient calculations) rises from 3369640 to 4918696.
Further, as shown in fig. 9, a schematic diagram of coding efficiency evaluation in one embodiment is shown. The data in fig. 9 is the evaluation data obtained by implementing the embodiments of the present application with a low-latency encoder. Fig. 9 illustrates evaluation data for different evaluation standards at different resolutions, and it shows that the video encoding method provided by the embodiments of the present application achieves an overall speed-up; the higher the acceleration ratio, the better the acceleration effect and the higher the coding efficiency.
Therefore, with the video coding method provided by the embodiments of the present application, the rate distortion cost of all candidate splitting modes does not need to be calculated. The estimated total cost of each candidate splitting mode is estimated first, and the high-cost candidate splitting modes are then filtered out based on the estimated total cost of each candidate splitting mode and the cost of the non-splitting mode. Finally, the rate distortion cost is calculated only for the remaining candidate splitting modes, whose estimated total cost is lower than the cost of the non-splitting mode, in order to determine the optimal splitting mode. Inefficient calculation is thereby reduced and coding efficiency is improved.
The present application also provides an application scenario in which the above video coding method is applied. Specifically, the application of the video coding method in this scenario is as follows: in a video playing scenario, in order to balance video quality and coding efficiency, the video coding method of the embodiments of the present application may be used for video coding. Specifically, the computer device acquires an image to be played, and blocks the image to be played to obtain each intra-frame coding unit. For each intra-frame coding unit, the computer device determines the candidate splitting modes of the intra-frame coding unit. For the intra-frame coding unit, the cost corresponding to the non-splitting mode is calculated, and the estimated distortion cost corresponding to the candidate splitting modes is calculated according to this cost. For the intra-frame coding unit, the computer device calculates the estimated write code stream cost corresponding to each candidate splitting mode, and determines the estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated write code stream cost corresponding to each candidate splitting mode. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, the rate distortion cost calculation corresponding to that candidate splitting mode is skipped. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, rate distortion cost calculation is performed on that candidate splitting mode, and the optimal splitting mode corresponding to the intra-frame coding unit is selected from the non-skipped candidate splitting modes according to the calculation results. The computer device performs coding block splitting on the intra-frame coding unit according to the optimal splitting mode.
Of course, the video coding method provided by the embodiment of the application is not limited to the above, and the video coding method provided by the application can be applied to other application scenes, for example, video storage scenes.
The above application scenario is only illustrative, and it is to be understood that the application of the video encoding method provided in the embodiments of the present application is not limited to the above scenario.
In a specific embodiment, the video encoding method specifically includes the following steps:
step 1: the computer device obtains an intra-frame encoding unit.
Step 2: the computer device checks whether the intra-coded unit meets a preset size constraint.
Optionally, the computer device obtains the pixel width and the pixel height of the intra-coded unit. The width-to-height ratio and the height-to-width ratio of the intra coding unit are calculated from the pixel width and the pixel height. If the width-to-height ratio and the height-to-width ratio are both smaller than the preset ratio, it is determined that the intra-frame coding unit meets the preset size constraint condition. The computer device then performs step 3.
If the width-to-height ratio is greater than or equal to the preset ratio and the height-to-width ratio is less than the preset ratio, it is determined that the intra-frame coding unit does not meet the preset size constraint condition, and the preset splitting modes in the vertical direction are screened from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit. At this point, the computer device directly performs step 6.
If the height-to-width ratio is greater than or equal to the preset ratio and the width-to-height ratio is less than the preset ratio, it is determined that the intra-frame coding unit does not meet the preset size constraint condition, and the preset splitting modes in the horizontal direction are screened from the plurality of preset splitting modes as the candidate splitting modes corresponding to the intra-frame coding unit. At this point, the computer device directly performs step 6.
Step 3: when the intra-frame coding unit meets the preset size constraint condition, the computer equipment calculates the horizontal texture gradient and the vertical texture gradient of the intra-frame coding unit according to the pixel values of all pixel points in the intra-frame coding unit.
Optionally, the computer device determines neighboring pixels of each pixel in the intra-coding unit in the horizontal direction. And calculating the pixel difference in the horizontal direction between the pixel point and the corresponding adjacent pixel point. And summing the calculated pixel differences in the horizontal direction between the pixel points and the corresponding adjacent pixel points to obtain the horizontal texture gradient of the intra-frame coding unit.
The computer device determines adjacent pixels in the vertical direction for each pixel in the intra-coding unit. And calculating the vertical pixel difference between the pixel point and the corresponding adjacent pixel point. And summing the calculated pixel differences in the vertical direction between the pixel points and the corresponding adjacent pixel points to obtain the vertical texture gradient of the intra-frame coding unit.
Step 4: the computer device determines a pixel texture direction of the intra-coding unit based on the horizontal texture gradient and the vertical texture gradient, the pixel texture direction being at least one of a horizontal direction and a vertical direction.
Optionally, the computer device calculates a first ratio between the horizontal texture gradient and the vertical texture gradient and calculates a second ratio between the vertical texture gradient and the horizontal texture gradient. If the first ratio and the second ratio are smaller than the preset ratio, determining that the pixel texture direction of the intra-frame coding unit comprises a horizontal direction and a vertical direction. If the first ratio is smaller than the preset ratio and the second ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a horizontal direction. If the second ratio is smaller than the preset ratio and the first ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a vertical direction.
Step 5: and the computer equipment screens candidate splitting modes corresponding to the intra-frame coding units from a plurality of preset splitting modes according to the pixel texture direction.
Optionally, if the pixel texture direction includes a horizontal direction and a vertical direction, the computer device uses the plurality of preset splitting modes as candidate splitting modes corresponding to the intra-frame encoding unit. If the pixel texture direction is the horizontal direction, a preset splitting mode in the horizontal direction is selected from a plurality of preset splitting modes and is used as a candidate splitting mode corresponding to the intra-frame coding unit. If the pixel texture direction is the vertical direction, a preset splitting mode in the vertical direction is selected from a plurality of preset splitting modes and is used as a candidate splitting mode corresponding to the intra-frame coding unit.
Step 6: for intra-coding units, the computer device calculates a cost corresponding to the non-split mode.
Optionally, the computer device calculates pixel differences of each pixel point in the intra-frame coding unit before intra-frame prediction and after intra-frame prediction in the non-split mode, and obtains distortion costs corresponding to the non-split mode based on the pixel differences of each pixel point. And calling a code stream cost function, and obtaining the code stream cost corresponding to the non-splitting mode according to the mode identification code corresponding to the non-splitting mode, the pixel height and the pixel width of the intra-frame coding unit. And obtaining the cost corresponding to the non-split mode based on the distortion cost and the code stream writing cost corresponding to the non-split mode.
Step 7: and the computer equipment calculates the estimated distortion cost corresponding to the candidate splitting mode according to the cost.
Optionally, the computer device obtains a preset difference prediction coefficient, where the difference prediction coefficient characterizes a difference between a cost corresponding to the non-splitting mode and a distortion cost corresponding to the candidate splitting mode. And taking the product of the cost corresponding to the non-splitting mode and the difference pre-estimated coefficient as the pre-estimated distortion cost corresponding to the candidate splitting mode.
Step 8: for each candidate splitting mode, the computer device obtains the splitting code stream cost required for splitting the coding blocks of the intra-frame coding unit according to the candidate splitting mode, and obtains other code stream cost except the splitting code stream cost when the intra-frame coding unit is subjected to intra-frame prediction. And superposing the code stream writing cost required by splitting the coding block and other code stream writing cost to obtain a superposition value, and determining the estimated code stream writing cost corresponding to the candidate splitting mode according to the superposition value.
Optionally, the computer device invokes a code stream cost function, and obtains splitting code stream cost required for splitting the coding block of the intra-frame coding unit according to the mode identification code corresponding to the candidate splitting mode, the pixel height and the pixel width of the intra-frame coding unit.
Step 9: and the computer equipment determines the estimated total cost corresponding to the candidate splitting mode according to the estimated distortion cost and the estimated code stream cost corresponding to the candidate splitting mode.
Step 10: and when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to the candidate splitting mode, skipping the rate distortion cost calculation corresponding to the corresponding candidate splitting mode. When the cost corresponding to the non-split mode exceeds the estimated total cost corresponding to the candidate split mode, the computer equipment determines the candidate split mode as the non-skipped candidate split mode.
Step 11: if there is one candidate split pattern that is not skipped, the computer device determines the candidate split pattern that is not skipped as the optimal split pattern.
If a plurality of non-skipped candidate splitting modes exist, the computer equipment calculates the rate distortion cost of each non-skipped candidate splitting mode to obtain the calculation result corresponding to each non-skipped candidate splitting mode, and according to each calculation result, the candidate splitting mode with the minimum corresponding rate distortion cost is selected from the non-skipped candidate splitting modes to be used as the optimal splitting mode corresponding to the intra-frame coding unit.
Step 12: the computer device performs coding block splitting on the intra-frame coding units according to an optimal splitting mode.
In addition, before step 1, the video image may be segmented using a similar gradient calculation to obtain the corresponding intra-frame coding units. Thus, the method further comprises: the computer device divides the video image into a plurality of LCUs (largest coding units). For each LCU, a candidate block dividing mode of the LCU is determined according to the pixel values of the pixel points in the LCU.
Illustratively, the computer device calculates the horizontal-direction pixel difference between each pixel point and its corresponding adjacent pixel point according to the adjacent pixel points of the pixel points in the LCU in the horizontal direction, and sums the calculated horizontal-direction pixel differences between the pixel points and their corresponding adjacent pixel points to obtain the horizontal gradient of the LCU. According to the adjacent pixel points of the pixel points in the LCU in the vertical direction, the vertical-direction pixel difference between each pixel point and its corresponding adjacent pixel point is calculated, and the calculated vertical-direction pixel differences between the pixel points and their corresponding adjacent pixel points are summed to obtain the vertical gradient of the LCU.
The computer device calculates a horizontal-to-vertical ratio between the horizontal gradient and the vertical gradient, and calculates a vertical-to-horizontal ratio between the vertical gradient and the horizontal gradient. If the horizontal-to-vertical ratio and the vertical-to-horizontal ratio are both less than the proportional threshold, determining that the blocking direction of the LCU includes a horizontal blocking direction and a vertical blocking direction. If the horizontal-to-vertical ratio is less than the proportional threshold and the vertical-to-horizontal ratio is greater than or equal to the proportional threshold, determining that the blocking direction of the LCU is the horizontal blocking direction. If the vertical-to-horizontal ratio is less than the proportional threshold and the horizontal-to-vertical ratio is greater than or equal to the proportional threshold, determining that the blocking direction of the LCU is the vertical blocking direction.
The computer device determines candidate blocking means based on the blocking direction. And respectively blocking the LCU according to each candidate blocking mode to obtain the intra-frame coding units corresponding to each candidate blocking mode.
At this time, for each intra-frame coding unit, steps 1 to 11 are executed, and after obtaining the rate distortion cost of each intra-frame coding unit, the method further includes: for each candidate block mode, the computer equipment sums the rate distortion cost of each intra-frame coding unit corresponding to the candidate block mode to obtain the total cost. Selecting a candidate block mode with the minimum total cost as a target candidate block mode, acquiring coded data obtained according to the target candidate block mode and a corresponding optimal splitting mode, and transmitting or storing the coded data.
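Illustratively, under the assumption that the rate distortion cost of every intra-frame coding unit has already been obtained through steps 1 to 11, the LCU-level selection of the target candidate blocking mode can be sketched in C as follows; the fixed array bounds, the function name and the example costs are assumptions of this sketch.

#include <math.h>
#include <stdio.h>

#define MAX_BLOCK_MODES 4   /* illustrative upper bound on candidate blocking modes        */
#define MAX_CUS         64  /* illustrative upper bound on coding units per blocking mode  */

/* rd_cost[m][i]: rate distortion cost of the i-th intra coding unit obtained when the
   LCU is partitioned with candidate blocking mode m; cu_count[m]: number of such units.
   Returns the index of the target candidate blocking mode (minimum total cost). */
static int select_target_block_mode(const double rd_cost[][MAX_CUS],
                                    const int cu_count[], int mode_count)
{
    int best_mode = 0;
    double best_total = HUGE_VAL;
    for (int m = 0; m < mode_count; m++) {
        double total = 0.0;                  /* total cost of blocking mode m */
        for (int i = 0; i < cu_count[m]; i++)
            total += rd_cost[m][i];
        if (total < best_total) {
            best_total = total;
            best_mode  = m;
        }
    }
    return best_mode;
}

int main(void)
{
    const double rd_cost[MAX_BLOCK_MODES][MAX_CUS] = {
        { 900.0, 950.0 },                    /* mode 0: two coding units  */
        { 400.0, 300.0, 350.0, 380.0 }       /* mode 1: four coding units */
    };
    const int cu_count[MAX_BLOCK_MODES] = { 2, 4 };
    printf("target block mode = %d\n", select_target_block_mode(rd_cost, cu_count, 2));
    return 0;
}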
In this embodiment, the candidate splitting modes corresponding to the intra-frame coding unit are determined. For the intra-frame coding unit, the cost corresponding to the non-splitting mode is calculated, and the estimated distortion cost corresponding to the candidate splitting modes is estimated according to this cost; the estimated write code stream cost corresponding to each candidate splitting mode is also calculated. In this way, the estimated total cost corresponding to each candidate splitting mode can be estimated rapidly. When the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to a candidate splitting mode, that candidate splitting mode would incur an excessive cost and its rate distortion cost calculation would be inefficient, so the rate distortion cost calculation corresponding to that candidate splitting mode is skipped, which reduces the amount of calculation. When the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to a candidate splitting mode, the candidate splitting mode can serve as one of the choices for the optimal splitting mode, so it is not skipped and its rate distortion cost is calculated. The optimal splitting mode corresponding to the intra-frame coding unit is selected from the non-skipped candidate splitting modes according to their calculation results, and the coding blocks of the intra-frame coding unit are split according to the optimal splitting mode. Therefore, in the whole coding process, the rate distortion cost of all candidate splitting modes does not need to be calculated, inefficient rate distortion cost calculation is avoided, the amount of calculation is reduced, and the coding efficiency is improved.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide a video encoding apparatus for implementing the video encoding method referred to above. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation of one or more embodiments of the video encoding apparatus provided below may be referred to the limitation of the video encoding method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 10, there is provided a video encoding apparatus 1000 including: split mode determination module 1002, estimated distortion cost determination module 1004, estimated total cost determination module 1006, skip module 1008, split mode selection module 1010, and split module 1012, wherein:
a splitting mode determining module 1002, configured to determine a candidate splitting mode corresponding to an intra-frame coding unit;
the estimated distortion cost determining module 1004 is configured to calculate, for the intra-frame encoding unit, a cost corresponding to the non-splitting mode, and calculate, according to the cost, an estimated distortion cost corresponding to the candidate splitting mode;
the estimated total cost determining module 1006 is configured to calculate estimated write code stream costs corresponding to each candidate splitting mode for the intra-frame coding unit, and determine estimated total costs corresponding to each candidate splitting mode according to the estimated distortion costs and the estimated write code stream costs corresponding to each candidate splitting mode;
a skipping module 1008, configured to skip rate distortion cost calculation corresponding to the corresponding candidate split mode when the cost corresponding to the non-split mode does not exceed the estimated total cost corresponding to the candidate split mode;
the splitting mode selection module 1010 is configured to perform rate distortion cost calculation on the corresponding candidate splitting mode when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to the candidate splitting mode, and select an optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation result;
And the splitting module 1012 is used for splitting the coding blocks of the intra-frame coding units according to the optimal splitting mode.
In some embodiments, the splitting mode determining module 1002 is configured to calculate, when the intra-frame encoding unit meets a preset size constraint condition, a horizontal texture gradient and a vertical texture gradient of the intra-frame encoding unit according to pixel values of each pixel point in the intra-frame encoding unit; determining a pixel texture direction of the intra-frame encoding unit according to the horizontal texture gradient and the vertical texture gradient, wherein the pixel texture direction is at least one of the horizontal direction and the vertical direction; and screening candidate splitting modes corresponding to the intra-frame coding units from a plurality of preset splitting modes according to the pixel texture direction.
In some embodiments, the splitting mode determining module 1002 is further configured to obtain the pixel width and pixel height of the intra-frame coding unit; calculate the width-to-height ratio and the height-to-width ratio of the intra-frame coding unit according to the pixel width and the pixel height; and if the width-to-height ratio and the height-to-width ratio are both smaller than the preset ratio, determine that the intra-frame coding unit meets the preset size constraint condition.
In some embodiments, the splitting mode determining module 1002 is further configured to: if the width-to-height ratio is greater than or equal to the preset ratio and the height-to-width ratio is smaller than the preset ratio, determine that the intra-frame coding unit does not meet the preset size constraint condition, and screen a preset splitting mode in the vertical direction from the plurality of preset splitting modes as a candidate splitting mode corresponding to the intra-frame coding unit; if the height-to-width ratio is greater than or equal to the preset ratio and the width-to-height ratio is smaller than the preset ratio, determine that the intra-frame coding unit does not meet the preset size constraint condition, and screen a preset splitting mode in the horizontal direction from the plurality of preset splitting modes as a candidate splitting mode corresponding to the intra-frame coding unit.
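As a small illustration of this size screening (under the reconstructed reading above), the sketch below keeps only the split direction that shortens the long side of an elongated block; the binary/ternary mode names and the exact directional mapping are assumptions made for this sketch.

```cpp
#include <vector>

enum class SplitMode { HorBinary, HorTernary, VerBinary, VerTernary };

// Returns true when both the width-to-height and height-to-width ratios are
// below the preset ratio, i.e. the block is close enough to square for the
// texture-based screening to be applied.
bool meetsSizeConstraint(int width, int height, double presetRatio) {
    const double whRatio = static_cast<double>(width) / height;
    const double hwRatio = static_cast<double>(height) / width;
    return whRatio < presetRatio && hwRatio < presetRatio;
}

// When the constraint is not met, only the split direction that shortens the
// long side is kept (vertical splits for wide blocks, horizontal for tall ones).
std::vector<SplitMode> candidatesForElongatedBlock(int width, int height) {
    if (width >= height)
        return { SplitMode::VerBinary, SplitMode::VerTernary };   // wide block
    return { SplitMode::HorBinary, SplitMode::HorTernary };       // tall block
}
```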
In some embodiments, the splitting mode determining module 1002 is configured to determine adjacent pixel points of each pixel point in the intra-frame encoding unit in a horizontal direction; calculating the pixel difference in the horizontal direction between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the horizontal direction between the pixel points and the corresponding adjacent pixel points to obtain the horizontal texture gradient of the intra-frame coding unit.
In some embodiments, the splitting mode determining module 1002 is configured to determine neighboring pixels of each pixel in the intra-coding unit in a vertical direction; calculating the vertical pixel difference between the pixel point and the corresponding adjacent pixel point; and summing the calculated pixel differences in the vertical direction between the pixel points and the corresponding adjacent pixel points to obtain the vertical texture gradient of the intra-frame coding unit.
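As a concrete illustration of the two gradient computations just described, the following sketch sums neighbour pixel differences over a block of luma samples. The row-major pixels buffer and the use of absolute differences are assumptions made for this sketch; the description only fixes that pixel differences in the horizontal and vertical directions are summed.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sums of absolute neighbour differences over a width x height block stored
// row-major in `pixels`; used here as the horizontal / vertical texture gradients.
struct TextureGradients { long long horizontal = 0; long long vertical = 0; };

TextureGradients computeGradients(const std::vector<uint8_t>& pixels, int width, int height) {
    TextureGradients g;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int p = pixels[y * width + x];
            if (x + 1 < width)   // horizontally adjacent pixel
                g.horizontal += std::abs(p - pixels[y * width + (x + 1)]);
            if (y + 1 < height)  // vertically adjacent pixel
                g.vertical   += std::abs(p - pixels[(y + 1) * width + x]);
        }
    }
    return g;
}
```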
In some embodiments, the split mode determination module 1002 is configured to calculate a first ratio between the horizontal texture gradient and the vertical texture gradient, and calculate a second ratio between the vertical texture gradient and the horizontal texture gradient; if the first ratio and the second ratio are smaller than the preset ratio, determining that the pixel texture direction of the intra-frame coding unit comprises a horizontal direction and a vertical direction; if the first ratio is smaller than the preset ratio and the second ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a horizontal direction; if the second ratio is smaller than the preset ratio and the first ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a vertical direction.
In some embodiments, the splitting mode determining module 1002 is configured to, if the pixel texture direction includes a horizontal direction and a vertical direction, use a plurality of preset splitting modes as candidate splitting modes corresponding to the intra-frame encoding unit; if the pixel texture direction is the horizontal direction, screening a preset splitting mode in the horizontal direction from a plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit; if the pixel texture direction is the vertical direction, a preset splitting mode in the vertical direction is selected from a plurality of preset splitting modes and is used as a candidate splitting mode corresponding to the intra-frame coding unit.
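The ratio test and the screening of preset splitting modes could look roughly as follows; the enum values, the binary/ternary mode names, and the treatment of the preset ratio are illustrative assumptions, and the sketch assumes both gradients are non-zero.

```cpp
#include <vector>

enum class TextureDirection { Horizontal, Vertical, Both };
enum class SplitMode { HorBinary, HorTernary, VerBinary, VerTernary };

// Classify the texture direction from the two gradients; assumes both
// gradients are non-zero so the two ratios are well defined.
TextureDirection textureDirection(double horGrad, double verGrad, double presetRatio) {
    const double r1 = horGrad / verGrad;   // first ratio
    const double r2 = verGrad / horGrad;   // second ratio
    if (r1 < presetRatio && r2 < presetRatio) return TextureDirection::Both;
    return (r1 < presetRatio) ? TextureDirection::Horizontal : TextureDirection::Vertical;
}

// Keep only the preset splitting modes whose direction matches the texture direction.
std::vector<SplitMode> screenCandidates(TextureDirection dir) {
    const std::vector<SplitMode> all = { SplitMode::HorBinary, SplitMode::HorTernary,
                                         SplitMode::VerBinary, SplitMode::VerTernary };
    if (dir == TextureDirection::Both) return all;
    std::vector<SplitMode> out;
    for (SplitMode m : all) {
        const bool horizontal = (m == SplitMode::HorBinary || m == SplitMode::HorTernary);
        if ((dir == TextureDirection::Horizontal) == horizontal) out.push_back(m);
    }
    return out;
}
```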
In some embodiments, the estimated distortion cost determining module 1004 is configured to calculate pixel differences of each pixel point in the intra-frame coding unit before and after intra-frame prediction in the non-splitting mode, and obtain a distortion cost corresponding to the non-splitting mode based on the pixel differences of each pixel point; call a code stream cost function, and obtain a code stream writing cost corresponding to the non-splitting mode according to the mode identification code corresponding to the non-splitting mode and the pixel height and pixel width of the intra-frame coding unit; and obtain the cost corresponding to the non-splitting mode based on the distortion cost and the code stream writing cost corresponding to the non-splitting mode.
In some embodiments, the estimated distortion cost determining module 1004 is configured to obtain a preset difference pre-estimation coefficient, where the difference pre-estimation coefficient characterizes the difference between the cost corresponding to the non-splitting mode and the distortion cost corresponding to a candidate splitting mode; and take the product of the cost corresponding to the non-splitting mode and the difference pre-estimation coefficient as the estimated distortion cost corresponding to the candidate splitting mode.
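A compact sketch of how the non-splitting cost and the estimated distortion cost fit together is given below. The SSE distortion measure and the lambda weighting of the code stream writing cost are assumptions for illustration; the description only states that the cost combines a pixel-difference-based distortion with a code stream writing cost, and that the estimated distortion cost is the product of that cost and the preset coefficient.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Distortion of the non-splitting mode: sum of squared differences between the
// original pixels and the intra-predicted pixels (SSE is used here as one
// plausible pixel-difference measure).
double nonSplitDistortion(const std::vector<uint8_t>& original,
                          const std::vector<uint8_t>& predicted) {
    double sse = 0.0;
    for (std::size_t i = 0; i < original.size(); ++i) {
        const double d = static_cast<double>(original[i]) - static_cast<double>(predicted[i]);
        sse += d * d;
    }
    return sse;
}

// Cost of the non-splitting mode: distortion plus rate-weighted code stream
// writing cost. The lambda weighting is an assumption of this sketch.
double nonSplitCost(double distortion, double writeStreamCost, double lambda) {
    return distortion + lambda * writeStreamCost;
}

// Estimated distortion cost of a candidate splitting mode: the non-splitting
// cost scaled by the preset difference pre-estimation coefficient (typically
// below 1, since splitting is expected to lower the distortion).
double estimatedSplitDistortion(double costNoSplit, double diffPreEstimationCoeff) {
    return costNoSplit * diffPreEstimationCoeff;
}
```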
In some embodiments, the estimated total cost determining module 1006 is configured to: for each candidate splitting mode, obtain the splitting code stream writing cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode, and obtain the other code stream writing costs, other than the splitting code stream writing cost, incurred when intra-frame prediction is performed on the intra-frame coding unit; and superpose the splitting code stream writing cost and the other code stream writing costs to obtain a superposition value, and determine the estimated code stream writing cost corresponding to the candidate splitting mode according to the superposition value.
In some embodiments, the estimated total cost determining module 1006 is configured to call a code stream cost function and obtain, according to the mode identification code corresponding to the candidate splitting mode and the pixel height and pixel width of the intra-frame coding unit, the splitting code stream writing cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode.
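The estimated code stream writing cost handled by this module can be sketched as the superposition of the splitting signalling cost and the other signalling cost; the toy bit-cost formula and the optional scale factor below are illustrative assumptions standing in for the encoder's actual code stream cost function.

```cpp
// Toy stand-in for the code stream cost function: estimates the bits needed to
// signal a mode from its identification code and the block dimensions.
double writeStreamCost(int modeIdCode, int width, int height) {
    return 2.0 + 0.01 * modeIdCode + 0.001 * (width + height);  // illustrative numbers only
}

// Estimated code stream writing cost of one candidate splitting mode: the
// splitting signalling cost plus the other signalling cost incurred when the
// intra-frame coding unit is predicted, superposed into one value.
double estimatedWriteStreamCost(int splitModeIdCode, int width, int height,
                                double otherWriteCost, double scale = 1.0) {
    const double splitWriteCost = writeStreamCost(splitModeIdCode, width, height);
    return scale * (splitWriteCost + otherWriteCost);  // scale models "determined according to the superposition value"
}
```

In practice the placeholder bit-cost formula would be replaced by the encoder's real code stream cost function.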
In some embodiments, the splitting mode selection module 1010 is configured to, if there are a plurality of non-skipped candidate splitting modes, select the candidate splitting mode with the minimum corresponding rate distortion cost from the plurality of non-skipped candidate splitting modes as the optimal splitting mode corresponding to the intra-frame coding unit.
The various modules in the video encoding apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server or a terminal, and whose internal structure may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a video encoding method.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with the solution of the present application and does not constitute a limitation on the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
In an embodiment, a computer device is further provided, comprising a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the procedures in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the computer program may be stored on a non-volatile computer-readable storage medium and, when executed, may include the procedures of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided in the present application may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this description.
The above embodiments represent only a few implementations of the present application, and although they are described in considerable detail, they should not be construed as limiting the scope of the present application. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (17)

1. A method of video encoding, the method comprising:
determining a candidate splitting mode corresponding to an intra-frame coding unit;
for the intra-frame coding unit, calculating a cost corresponding to a non-splitting mode, and calculating an estimated distortion cost corresponding to the candidate splitting mode according to the cost;
for the intra-frame coding unit, respectively calculating estimated code stream writing costs corresponding to each candidate splitting mode, and determining an estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream writing cost corresponding to each candidate splitting mode;
skipping rate distortion cost calculation for the corresponding candidate splitting mode when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to the candidate splitting mode;
when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to the candidate splitting mode, performing rate distortion cost calculation on the corresponding candidate splitting mode, and selecting an optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to a calculation result;
and splitting the coding block of the intra-frame coding unit according to the optimal splitting mode.
2. The method of claim 1, wherein the determining a candidate splitting mode corresponding to the intra-frame coding unit comprises:
when the intra-frame coding unit meets the preset size constraint condition, calculating a horizontal texture gradient and a vertical texture gradient of the intra-frame coding unit according to pixel values of all pixel points in the intra-frame coding unit;
determining a pixel texture direction of the intra-frame encoding unit according to the horizontal texture gradient and the vertical texture gradient, wherein the pixel texture direction is at least one of a horizontal direction and a vertical direction;
and screening candidate splitting modes corresponding to the intra-frame coding unit from a plurality of preset splitting modes according to the pixel texture direction.
3. The method of claim 2, wherein determining whether the intra-frame coding unit meets the preset size constraint condition comprises:
acquiring the pixel width and the pixel height of the intra-frame coding unit;
calculating a width-to-height ratio and a height-to-width ratio of the intra-frame coding unit according to the pixel width and the pixel height;
and if the width-to-height ratio and the height-to-width ratio are both smaller than a preset proportion, determining that the intra-frame coding unit meets the preset size constraint condition.
4. A method according to claim 3, characterized in that the method further comprises:
if the width-to-height ratio is greater than or equal to the preset proportion and the height-to-width ratio is smaller than the preset proportion, determining that the intra-frame coding unit does not meet the preset size constraint condition, and screening a preset splitting mode in a vertical direction from a plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit;
if the height-to-width ratio is greater than or equal to the preset proportion and the width-to-height ratio is smaller than the preset proportion, determining that the intra-frame coding unit does not meet the preset size constraint condition, and screening a preset splitting mode in a horizontal direction from the plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit.
5. The method of claim 2, wherein the calculating a horizontal texture gradient of the intra-frame coding unit according to pixel values of each pixel point in the intra-frame coding unit comprises:
determining adjacent pixel points of each pixel point in the intra-frame coding unit in the horizontal direction;
calculating the pixel difference in the horizontal direction between the pixel point and the corresponding adjacent pixel point;
and summing the calculated pixel differences in the horizontal direction between the pixel points and the corresponding adjacent pixel points to obtain the horizontal texture gradient of the intra-frame coding unit.
6. The method of claim 2, wherein the calculating a vertical texture gradient of the intra-frame coding unit according to pixel values of each pixel point in the intra-frame coding unit comprises:
determining adjacent pixel points of each pixel point in the intra-frame coding unit in the vertical direction;
calculating the vertical pixel difference between the pixel point and the corresponding adjacent pixel point;
and summing the calculated pixel differences in the vertical direction between the pixel points and the corresponding adjacent pixel points to obtain the vertical texture gradient of the intra-frame coding unit.
7. The method of claim 2, wherein the determining a pixel texture direction of the intra-frame coding unit according to the horizontal texture gradient and the vertical texture gradient comprises:
calculating a first ratio between the horizontal texture gradient and the vertical texture gradient, and calculating a second ratio between the vertical texture gradient and the horizontal texture gradient;
if the first ratio and the second ratio are smaller than the preset ratio, determining that the pixel texture direction of the intra-frame coding unit comprises a horizontal direction and a vertical direction;
if the first ratio is smaller than the preset ratio and the second ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a horizontal direction;
and if the second ratio is smaller than the preset ratio and the first ratio is larger than or equal to the preset ratio, determining that the pixel texture direction of the intra-frame coding unit is a vertical direction.
8. The method according to claim 2, wherein the screening candidate splitting modes corresponding to the intra-frame coding unit from a plurality of preset splitting modes according to the pixel texture direction includes:
if the pixel texture direction comprises a horizontal direction and a vertical direction, the plurality of preset splitting modes are all used as candidate splitting modes corresponding to the intra-frame coding unit;
if the pixel texture direction is the horizontal direction, screening a preset splitting mode in the horizontal direction from a plurality of preset splitting modes to serve as a candidate splitting mode corresponding to the intra-frame coding unit;
and if the pixel texture direction is the vertical direction, screening a preset splitting mode in the vertical direction from a plurality of preset splitting modes, and taking the preset splitting mode as a candidate splitting mode corresponding to the intra-frame coding unit.
9. The method according to claim 1, wherein the calculating a cost corresponding to a non-splitting mode for the intra-frame coding unit comprises:
calculating pixel difference values of all pixel points in the intra-frame coding unit before intra-frame prediction and after intra-frame prediction in a non-splitting mode, and obtaining distortion cost corresponding to the non-splitting mode based on the pixel difference values of all the pixel points;
invoking a code stream cost function, and obtaining a code stream writing cost corresponding to the non-splitting mode according to a mode identification code corresponding to the non-splitting mode and the pixel height and the pixel width of the intra-frame coding unit;
and obtaining the cost corresponding to the non-split mode based on the distortion cost and the code stream writing cost corresponding to the non-split mode.
10. The method according to claim 1, wherein the calculating an estimated distortion cost corresponding to the candidate splitting mode according to the cost comprises:
obtaining a preset difference pre-estimation coefficient, wherein the difference pre-estimation coefficient represents the difference between the cost corresponding to the non-splitting mode and the distortion cost corresponding to the candidate splitting mode;
and taking the product of the cost corresponding to the non-splitting mode and the difference pre-estimation coefficient as the estimated distortion cost corresponding to the candidate splitting mode.
11. The method of claim 1, wherein the respectively calculating estimated code stream writing costs corresponding to each candidate splitting mode comprises:
for each candidate splitting mode, acquiring a splitting code stream writing cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode, and acquiring other code stream writing costs, other than the splitting code stream writing cost, incurred when intra-frame prediction is performed on the intra-frame coding unit;
and superposing the splitting code stream writing cost and the other code stream writing costs to obtain a superposition value, and determining the estimated code stream writing cost corresponding to the candidate splitting mode according to the superposition value.
12. The method of claim 11, wherein the acquiring the splitting code stream writing cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode comprises:
calling a code stream cost function, and obtaining, according to the mode identification code corresponding to the candidate splitting mode and the pixel height and pixel width of the intra-frame coding unit, the splitting code stream writing cost required for splitting the coding block of the intra-frame coding unit according to the candidate splitting mode.
13. The method according to claim 1, wherein the selecting an optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation result comprises:
if a plurality of non-skipped candidate splitting modes exist, selecting a candidate splitting mode with the minimum corresponding rate distortion cost from the plurality of non-skipped candidate splitting modes as an optimal splitting mode corresponding to the intra-frame coding unit.
14. A video encoding device, the device comprising:
the splitting mode determining module is used for determining a candidate splitting mode corresponding to the intra-frame coding unit;
the estimated distortion cost determining module is used for calculating, for the intra-frame coding unit, the cost corresponding to the non-splitting mode, and calculating the estimated distortion cost corresponding to the candidate splitting mode according to the cost;
the estimated total cost determining module is used for respectively calculating, for the intra-frame coding unit, estimated code stream writing costs corresponding to each candidate splitting mode, and determining the estimated total cost corresponding to each candidate splitting mode according to the estimated distortion cost and the estimated code stream writing cost corresponding to each candidate splitting mode;
the skipping module is used for skipping the rate distortion cost calculation for the corresponding candidate splitting mode when the cost corresponding to the non-splitting mode does not exceed the estimated total cost corresponding to the candidate splitting mode;
the splitting mode selection module is used for carrying out rate distortion cost calculation on the corresponding candidate splitting modes when the cost corresponding to the non-splitting mode exceeds the estimated total cost corresponding to the candidate splitting mode, and selecting the optimal splitting mode corresponding to the intra-frame coding unit from the non-skipped candidate splitting modes according to the calculation result;
and the splitting module is used for splitting the coding block of the intra-frame coding unit according to the optimal splitting mode.
15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when the computer program is executed.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 13.
17. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 13.
CN202410147421.0A 2024-02-02 2024-02-02 Video encoding method, apparatus, device, storage medium, and computer program product Active CN117692648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410147421.0A CN117692648B (en) 2024-02-02 2024-02-02 Video encoding method, apparatus, device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410147421.0A CN117692648B (en) 2024-02-02 2024-02-02 Video encoding method, apparatus, device, storage medium, and computer program product

Publications (2)

Publication Number Publication Date
CN117692648A true CN117692648A (en) 2024-03-12
CN117692648B CN117692648B (en) 2024-05-17

Family

ID=90126899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410147421.0A Active CN117692648B (en) 2024-02-02 2024-02-02 Video encoding method, apparatus, device, storage medium, and computer program product

Country Status (1)

Country Link
CN (1) CN117692648B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724509A (en) * 2012-06-19 2012-10-10 清华大学 Method and device for selecting optimal intra-frame coding mode for video sequence
KR20160106348A (en) * 2015-03-02 2016-09-12 한국전자통신연구원 Video Coding Method and Apparatus thereof
CN109672895A (en) * 2018-12-27 2019-04-23 北京佳讯飞鸿电气股份有限公司 A kind of HEVC intra-frame prediction method and system
CN111741299A (en) * 2020-07-09 2020-10-02 腾讯科技(深圳)有限公司 Method, device and equipment for selecting intra-frame prediction mode and storage medium
CN111901602A (en) * 2020-08-07 2020-11-06 北京奇艺世纪科技有限公司 Video data encoding method, video data encoding device, computer equipment and storage medium
WO2023087637A1 (en) * 2021-11-18 2023-05-25 北京达佳互联信息技术有限公司 Video coding method and apparatus, and electronic device and computer-readable storage medium
WO2023131059A1 (en) * 2022-01-04 2023-07-13 维沃移动通信有限公司 Image encoding method, image encoding apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN117692648B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
KR101152576B1 (en) Selecting encoding types and predictive modes for encoding video data
JP5063677B2 (en) VIDEO ENCODING METHOD AND DECODING METHOD, DEVICE THEREOF, THEIR PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM
CN110839155B (en) Method and device for motion estimation, electronic equipment and computer-readable storage medium
KR101128533B1 (en) Image encoding method and decoding method, their device, their program, and recording medium with the program recorded thereon
EP2475175A2 (en) Video encoding device, video encoding method, video encoding program, video decoding device, video decoding method, and video decoding program
US20200007862A1 (en) Method of adaptive filtering for multiple reference line of intra prediction in video coding, video encoding apparatus and video decoding apparatus therewith
US20040218675A1 (en) Method and apparatus for determining reference picture and block mode for fast motion estimation
US20230362401A1 (en) Image encoding/decoding method and apparatus
US11558608B2 (en) On split prediction
CN112073719B (en) String matching prediction method, device and system and computer readable storage medium
CN115118977B (en) Intra-frame prediction encoding method, system, and medium for 360-degree video
US20210037251A1 (en) Video encoding method and apparatus, video decoding method and apparatus, computer device, and storage medium
KR100845209B1 (en) Method and apparatus for selecting intra prediction mode
CN112075080A (en) Intra-frame prediction device, image encoding device, image decoding device, and program
CN116614622A (en) Decoding prediction method, device and computer storage medium
CN117692648B (en) Video encoding method, apparatus, device, storage medium, and computer program product
JP7437426B2 (en) Inter prediction method and device, equipment, storage medium
CN112995661B (en) Image encoding method and apparatus, electronic device, and storage medium
CN110166774B (en) Intra-frame prediction method, video encoding method, video processing apparatus, and storage medium
CN113038132B (en) Prediction mode determination method and device for Coding Unit (CU)
KR102552556B1 (en) Efficient low-complexity video compression
CN112788339B (en) Video coding optimization method, device, system, medium and terminal
CN112004099B (en) Intra-frame block copy prediction method and device and computer readable storage medium
JP2012120108A (en) Interpolation image generating apparatus and program, and moving image decoding device and program
JP2009296282A (en) Scalable moving image encoding method, scalable moving image encoding device, scalable moving image encoding program, and computer-readable recording medium with the program stored

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant