CN117201792A - Video encoding method, video encoding device, electronic equipment and computer readable storage medium - Google Patents

Info

Publication number
CN117201792A
Authority
CN
China
Prior art keywords
image block
image
code rate
quantization parameter
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311095360.XA
Other languages
Chinese (zh)
Inventor
张德钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202311095360.XA
Publication of CN117201792A
Legal status: Pending

Abstract

The present application provides a video encoding method, a video encoding apparatus, an electronic device, and a computer readable storage medium, relating to the technical field of video processing. The method includes: acquiring an image frame to be encoded, where the image frame includes an image block corresponding to a target object; allocating an upper limit code rate to the image block based on the maximum code rate of the image frame and the complexity of the image block; determining, among the quantization parameters of each level, an acceptable quantization parameter for the image block based on the mapping relationship between complexity and the quantization parameters of each level; determining an estimated code rate for the image block according to the acceptable quantization parameter; and, if the estimated code rate is smaller than the upper limit code rate, encoding the image block in the image frame based on the acceptable quantization parameter. Because the acceptable quantization parameter of an image block is determined from multiple levels of quantization parameters corresponding to the complexity of different image blocks, subject to the upper limit code rate, the post-encoding image quality of the image block containing the target object can be improved.

Description

Video encoding method, video encoding device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video encoding method, apparatus, electronic device, and computer readable storage medium.
Background
Video coding is a technique for compressing the information content of video images, reducing transmission bandwidth and saving storage space. Rate control is an important part of video image coding, trading off image quality against code rate. Intra-frame rate control is a technique for allocating code rate to local areas of a single video frame; based on it, more code rate can be allocated to a region of interest, improving the image quality of that region.
Currently, in the related art, the region of interest of a video image is detected and corresponding quantization parameters are configured for encoding, so as to guarantee the image quality of the region of interest. However, different application scenarios and usage requirements call for different regions of interest, whose detection requires a large model or complex computation; in addition, multiple different objects may exist within the same region of interest, and encoding them with the same quantization parameter can still result in poor image quality, failing to meet the requirements of various application scenarios.
Disclosure of Invention
According to various embodiments of the present application, a video encoding method, apparatus, electronic device, and computer-readable storage medium are provided.
In a first aspect, the present application provides a video encoding method, the method comprising: acquiring an image frame to be encoded, where the image frame includes an image block corresponding to a target object; allocating an upper limit code rate to the image block based on the maximum code rate of the image frame and the complexity of the image block; determining, among the quantization parameters of each level, an acceptable quantization parameter for the image block based on the mapping relationship between complexity and the quantization parameters of each level; determining an estimated code rate for the image block according to the acceptable quantization parameter; and, if the estimated code rate is smaller than the upper limit code rate, encoding the image block in the image frame based on the acceptable quantization parameter.
In this way, the upper limit code rate of each image block is allocated based on the maximum code rate of the image frame and the complexity of the block, so that image blocks of different complexity within the frame can achieve their best image quality. Without exceeding the maximum code rate of the image frame, the acceptable quantization parameter of an image block can be determined from the multiple levels of quantization parameters corresponding to its complexity, so that the post-encoding image quality of the block containing the target object can be adjusted based on those levels and the maximum code rate. The code rate of a single image frame is kept as unchanged as possible and the overall image quality of the frame is preserved, meeting the image quality requirements of different target objects in various application scenarios; the method therefore has strong usability and practicality.
In a possible implementation manner of the first aspect, before the acceptable quantization parameter corresponding to the image block is determined among the quantization parameters of each level based on the mapping relationship between complexity and the quantization parameters of each level, the method further includes:
extracting different target objects in a sample image frame, and acquiring the image block corresponding to each target object;
encoding each image block based on preset quantization parameters to obtain the encoded image blocks corresponding to each image block;
obtaining the image quality scores of the encoded image blocks, and determining, among the preset quantization parameters, the quantization parameters of each level corresponding to each image block based on the image quality scores;
and calculating the complexity of each image block, and establishing the mapping relationship between the complexity corresponding to each image block and the quantization parameters of each level.
In a possible implementation manner of the first aspect, after the estimated code rate corresponding to the image block is determined according to the acceptable quantization parameter, the method further includes:
judging, based on the current image frame and the adjacent previous image frame, whether the image block is a moving image block;
if the image block is a moving image block, acquiring the motion sensitivity of the moving image block;
if the estimated code rate is smaller than the upper limit code rate, calculating a target code rate corresponding to the image block based on the motion sensitivity, the estimated code rate and the upper limit code rate;
and calculating a target quantization parameter of the image block based on the target code rate, and encoding the moving image block in the current image frame based on the target quantization parameter.
In a possible implementation manner of the first aspect, after the target quantization parameter of the image block is calculated based on the target code rate, the method further includes:
smoothing, in the current image frame, the target quantization parameter of the image block based on the quantization parameters of its adjacent image blocks and a preset parameter difference threshold, to obtain a smoothed target quantization parameter;
and encoding the image block of the current image frame based on the smoothed target quantization parameter.
In a possible implementation manner of the first aspect, after the target quantization parameter of the image block is calculated based on the target code rate, the method further includes:
smoothing the target quantization parameter of the image block in the current image frame based on the quantization parameter of the co-located image block in the previous image frame and a preset parameter difference threshold, to obtain a smoothed target quantization parameter;
and encoding the image block of the current image frame based on the smoothed target quantization parameter.
In a possible implementation manner of the first aspect, the acquiring an image frame to be encoded includes:
dividing the image frame into initial image blocks;
calculating the complexity of the initial image blocks to obtain the complexity level of each initial image block;
and clustering the initial image blocks based on their complexity levels and spatial position relationships to obtain the image block corresponding to the target object.
In a possible implementation manner of the first aspect, after the acquiring the image frame to be encoded, the method further includes:
taking a first image block among the image blocks as a target image block, and, when adjusting the first-level quantization parameter of the target image block to a second-level quantization parameter, adjusting the third-level quantization parameters of the other, second image blocks to fourth-level quantization parameters;
where the first-level quantization parameter and the second-level quantization parameter are quantization parameters among the levels of quantization parameters corresponding to the first image block, and the first-level quantization parameter is larger than the second-level quantization parameter; the third-level quantization parameter and the fourth-level quantization parameter are quantization parameters among the levels of quantization parameters corresponding to the second image blocks, and the third-level quantization parameter is smaller than the fourth-level quantization parameter; and the total code rate corresponding to the adjusted quantization parameters of the image blocks is smaller than the maximum code rate of the image frame.
In a possible implementation manner of the first aspect, after the estimated code rate corresponding to the image block is determined according to the acceptable quantization parameter, the method further includes:
and if the estimated code rate is greater than or equal to the upper limit code rate, encoding the image block in the image frame based on the upper limit code rate.
In a second aspect, the present application provides a video encoding apparatus comprising:
the acquisition unit is used for acquiring an image frame to be encoded, wherein the image frame comprises an image block corresponding to a target object;
the allocation unit is used for allocating the upper limit code rate corresponding to the image block based on the maximum code rate of the image frame and the complexity of the image block;
a processing unit, configured to determine an acceptable quantization parameter corresponding to the image block in each level of quantization parameters based on a mapping relationship between the complexity and each level of quantization parameters;
the calculating unit is used for determining an estimated code rate corresponding to the image block according to the acceptable quantization parameter;
and the encoding unit is used for encoding the image blocks in the image frame based on the acceptable quantization parameter if the estimated code rate is smaller than the upper limit code rate.
In a third aspect, the application provides an electronic device comprising a memory storing a computer program and a processor implementing the method of any of the first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of the first aspects.
In a fifth aspect, the application provides a computer program product for, when run on an electronic device, causing the electronic device to perform the method of any one of the first aspects.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the application or in the prior art, the drawings required for the embodiments or for the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of an application scenario corresponding to a video encoding method according to an embodiment of the present application;
Fig. 2 is an interface schematic diagram of image blocks of an image frame according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a data mapping model according to an embodiment of the present application;
Fig. 4 is a schematic flowchart of an implementation of a video encoding method according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of an implementation of a video encoding method according to another embodiment of the present application;
Fig. 6 is a schematic flowchart of an implementation of a video encoding method according to another embodiment of the present application;
Fig. 7 is a schematic structural diagram of a video encoding device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the technical solutions of the present application will be described in detail below with reference to the accompanying drawings. The following embodiments are only intended to illustrate the technical solutions of the present application more clearly; they are merely examples and are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion.
In the description of embodiments of the present application, the technical terms "first," "second," and the like are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless explicitly defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of the present application, the term "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
At present, in the field of video coding, it is common to divide a frame into a region of interest and a region of non-interest, and to guarantee the image quality of the region of interest through strategies such as adjusting its quantization parameters. However, different usage scenarios and the varying requirements of different time periods lead to many definitions of the region of interest, and detecting the various regions of interest often requires a large model or complex computation, making such approaches ill-suited to real-time encoding.
In addition, the common approach of adjusting the quantization parameter (Quantization Parameter, QP) to improve the image quality of the region of interest is affected by the characteristics of the human subjective visual system: different objects within the same region of interest, encoded with the same quantization parameter, can yield different perceived image quality, possibly resulting in poor image quality and a degraded user experience.
To address these shortcomings, an embodiment of the present application provides a video encoding method that adjusts the quantization parameter level according to the complexity of the image block where a target object in the image frame is located. Within the code rate limit, a more suitable quantization parameter can be used for encoding, guaranteeing the image quality of the target object and meeting viewing requirements, while keeping the code rate and overall sharpness of a single image frame essentially unchanged as far as possible.
Fig. 1 shows a schematic application scenario corresponding to a video encoding method according to an embodiment of the present application. As shown in Fig. 1, the video encoding method can be applied to the image capture device shown there; the device collects image frames from a monitored area in real time and transmits the encoded image frames to a terminal device for real-time display. As shown in Fig. 1, the image frames collected by the monitoring device may include static or dynamic objects.
Fig. 1 illustrates only one application scenario of the video encoding method and does not limit its application scenarios; the method can be widely applied to various video transmission scenarios, such as live video, video on demand, and mobile video.
Before describing the video coding process provided by the embodiment of the application, the data basis and the established mapping model adopted in the video coding process are first described.
Taking the above application scenario as an example for illustration, the process of establishing the mapping model may include the following steps:
s101, extracting different object objects in the sample image frame, and acquiring image blocks corresponding to the object objects.
For example, an acquired image frame of a plurality of videos (for example, original pictures of various videos) is taken as a sample image frame, an object in the image frame is extracted, and the sample image frame is divided into a plurality of image blocks based on a position area of the object in each sample image frame.
As shown in fig. 2, for the object objects extracted from the sample image frame, an image block corresponding to each object is determined. Each sample image frame may include one or more object objects, so that the sample image frame is divided into image blocks based on the one or more object objects, so as to obtain image blocks of areas where different object objects are located, such as the image block 1, the image block 2 and the image block 3 shown in fig. 2.
For example, the object in the sample image frame may be identified by computer vision, for example, based on comparing the differences of the pixels adjacent to each other in the left-right direction and the up-down direction, the edge of the object is extracted, and further, the location area where the object is located is determined, so as to obtain the corresponding image block.
It should be noted that, fig. 2 only illustrates the division and distribution form of the image blocks, and specifically, the division of the image blocks may also be performed based on the distribution of the extracted object objects in the acquired image frames in the actual application scenario, and the specific division form of the image blocks is not limited herein.
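As a concrete illustration, the following is a minimal sketch of such edge-based extraction, assuming 2-D grayscale numpy frames; the function name, the threshold value, and the use of a single bounding box per object are illustrative assumptions rather than the patent's prescribed procedure.

```python
import numpy as np

def extract_object_block(frame: np.ndarray, edge_threshold: int = 20):
    # Differences between horizontally and vertically adjacent pixels.
    dx = np.abs(np.diff(frame.astype(np.int32), axis=1))
    dy = np.abs(np.diff(frame.astype(np.int32), axis=0))
    # A pixel is an edge pixel if either difference is large.
    edges = (dx[:-1, :] > edge_threshold) | (dy[:, :-1] > edge_threshold)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None  # no object edges found in this frame
    # Location area of the object -> its image block (bounding box crop).
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```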
S102, encoding each image block based on preset quantization parameters to obtain the encoded image blocks corresponding to each image block.
The preset quantization parameters may be a preset series of consecutive quantization parameters, for example ordered from large to small; each image block is encoded based on each of them, producing the encoded image blocks.
It can be understood that encoded image blocks produced with different quantization parameters differ in image quality. For example, the smaller the value of the quantization parameter, the more information of the original image block is retained, the larger the corresponding code rate, and the sharper the encoded image block; the larger the quantization parameter, the less information is retained, the lower the code rate, and the lower the image quality of the encoded block.
As shown in Fig. 3, the preset quantization parameters may include QP1, QP2, QP3 and QP4, ordered from large to small. Each image block of the image frame (image block 1, image block 2 and image block 3) is encoded based on these preset quantization parameters: the encoded image blocks corresponding to image block 1 may include image blocks 1a, 1b, 1c and 1d; those corresponding to image block 2 may include image blocks 2a, 2b, 2c and 2d; and those corresponding to image block 3 may include image blocks 3a, 3b, 3c and 3d.
S103, obtaining the image quality scores of the encoded image blocks, and determining, among the preset quantization parameters, the quantization parameters of each level corresponding to each image block based on the image quality scores.
For example, users may subjectively score the image quality of the encoded image blocks, yielding image quality scores for the different encoded blocks, and the quantization parameters of each level are determined from the score of each encoded block. The quantization parameters of each level may be the default set of acceptable quantization parameters, among the preset quantization parameters, corresponding to each image block. For example, an image quality score threshold is set; after the image quality scores of the encoded blocks are obtained, the preset quantization parameters whose scores reach the threshold are taken as the default acceptable quantization parameters, yielding the default acceptable QP set corresponding to each image block.
As shown in Fig. 3, after image block 1 is encoded and scored, image quality scores 1a, 1b, 1c and 1d are obtained for the versions encoded with the different preset quantization parameters; correspondingly, image block 2 yields image quality scores 2a, 2b, 2c and 2d, and image block 3 yields image quality scores 3a, 3b, 3c and 3d.
For example, if image quality scores 1a and 1b of image block 1 do not reach the threshold while scores 1c and 1d do, the preset quantization parameters QP3 and QP4 corresponding to scores 1c and 1d are the default acceptable QPs, i.e. the quantization parameters of each level, for image block 1. Likewise, for image block 2, if score 2a does not reach the threshold while scores 2b, 2c and 2d do, the corresponding QP2, QP3 and QP4 are taken as the default acceptable QPs, i.e. the quantization parameters of each level, for image block 2; for image block 3, if score 3a does not reach the threshold while scores 3b, 3c and 3d do, the corresponding QP2, QP3 and QP4 are taken as the default acceptable QPs for image block 3.
It should be noted that different QP levels correspond to different sharpness levels of the encoded image block: the higher the image quality score, the sharper the corresponding encoded block and the smaller the corresponding QP. Different sharpness levels of the image quality, from low to high, can therefore be determined based on the quantization parameters of each level; as shown in Fig. 3, QP3 of image block 1 is greater than QP4, and the sharpness of encoded image block 1c is lower than that of image block 1d. The image quality score is a score of the sharpness of the encoded image block.
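To make the selection rule concrete, here is a minimal sketch of the score-threshold filtering just described; the preset QPs, the subjective scores and the threshold are illustrative values, not values taken from the patent.

```python
PRESET_QPS = [40, 36, 32, 28]  # QP1..QP4 ordered from large to small (assumed values)

def acceptable_qps(scores: dict, threshold: float) -> list:
    # Keep the preset QPs whose subjective image quality score reaches the threshold.
    return [qp for qp in PRESET_QPS if scores[qp] >= threshold]

# Image block 1 from Fig. 3: only the two smallest QPs score well enough.
scores_block1 = {40: 2.1, 36: 2.8, 32: 4.0, 28: 4.6}
print(acceptable_qps(scores_block1, threshold=3.5))  # -> [32, 28], i.e. QP3 and QP4
```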
S104, calculating the complexity of each image block, and establishing the mapping relationship between the complexity corresponding to each image block and its quantization parameters of each level.
For example, image blocks of different objects may have different complexity, and the corresponding acceptable QP levels may also differ. An image block of lower complexity carries less detail information and can be encoded at a lower code rate while retaining good sharpness, so the value of the required quantization parameter can be relatively large; an image block of higher complexity carries more detail information and therefore needs a higher code rate to guarantee good image quality, i.e. higher sharpness.
The complexity may be, for example, the image texture complexity or the encoding complexity of the image block. For an image block of the original, pre-encoding image frame, it can be measured by computing pixel gradients and/or other texture-feature statistics within the block; alternatively, it can be measured as the encoding complexity derived from the coding modes and parameters used during encoding.
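A minimal sketch of one such pre-encoding texture measure follows, assuming a grayscale numpy block; the mean absolute gradient used here is one plausible statistic, not the patent's prescribed formula.

```python
import numpy as np

def block_complexity(block: np.ndarray) -> float:
    b = block.astype(np.float32)
    gx = np.abs(np.diff(b, axis=1)).mean()  # mean horizontal pixel gradient
    gy = np.abs(np.diff(b, axis=0)).mean()  # mean vertical pixel gradient
    return float(gx + gy)  # higher value = more texture detail in the block
```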
As shown in Fig. 3, the complexity of each image block in the image frame is calculated separately and may be classified by complexity value; for example, image block 1 corresponds to complexity level 1, while image block 2 and image block 3 correspond to complexity level 2. Complexity level 1 may be greater than complexity level 2, that is, the complexity of image block 1 is greater than that of image blocks 2 and 3, while the complexities of image blocks 2 and 3 may belong to the same level.
It can be understood that, based on the quantization parameters of each level obtained for the image blocks of the different target objects and the complexity calculated for each image block, a mapping relationship between complexity and the quantization parameters of each level can be established. As shown in Fig. 3, complexity level 1 of image block 1 maps to default acceptable QP set 1, complexity level 2 of image block 2 maps to default acceptable QP set 2, and complexity level 2 of image block 3 maps to default acceptable QP set 3; this yields a mapping model from the different complexities found in image frames to quantization parameters of each level.
Because image blocks differ in complexity, the QPs corresponding to the adjustable target image quality levels also differ. Therefore, based on the existing sample image frame data, a mapping model between complexity and the quantization parameters of each level is established by computing the complexity level of each image block. Quantization parameters can then be determined from a block's complexity level for encoding, and adjusted via the mapping model to adjust the image quality level of the encoded block, meeting different usage requirements while guaranteeing the image quality (sharpness).
The embodiment of the application establishes the mapping relationship between scene complexity and acceptable quantization parameters through a scoring mechanism, reflecting as far as possible the human eye's tolerance of picture-quality distortion, and provides quantization parameters at multiple quality levels for flexible adjustment. The mapping model is a continuously trainable prediction model, so the data set and training method can be flexibly extended, and accuracy can be continuously improved through real-time online training.
In some embodiments, acquiring the image frame to be encoded further includes:
dividing the image frame into initial image blocks; calculating the complexity of the initial image blocks to obtain the complexity level of each initial image block; and clustering the initial image blocks based on their complexity levels and spatial position relationships to obtain the image blocks corresponding to the target objects.
Illustratively, as shown in Fig. 2, the collected image frame is divided into initial image blocks; the division may be regular, or irregular based on the image characteristics of the frame. The complexity of each initial image block is then calculated, and the initial blocks are clustered according to their complexity levels and their spatial positions in the image frame to obtain the image blocks corresponding to the target objects; for example, initial image blocks 1 to 9 in Fig. 2 are clustered into image block 1 (see the clustering sketch below).
Fig. 2 is only an exemplary illustration; the manner of dividing and clustering the initial image blocks to obtain the image blocks corresponding to the target objects may depend on the features of the actual image frame and is not limited here.
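A minimal sketch of this complexity-and-adjacency clustering, assuming initial blocks on a regular grid with precomputed complexity levels; the flood fill over 4-neighbours is an illustrative choice, not the patent's prescribed algorithm.

```python
from collections import deque

def cluster_blocks(levels: dict) -> list:
    # levels maps the (row, col) of an initial block to its complexity level.
    seen, clusters = set(), []
    for start in levels:
        if start in seen:
            continue
        group, queue = {start}, deque([start])
        seen.add(start)
        while queue:  # flood fill over 4-neighbours sharing the same level
            r, c = queue.popleft()
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in levels and nb not in seen and levels[nb] == levels[start]:
                    seen.add(nb)
                    group.add(nb)
                    queue.append(nb)
        clusters.append(group)  # one cluster = one object's image block
    return clusters
```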
The specific implementation process of video encoding is further described below based on the mapping model established above; the execution subject of the method may be the image capture device in Fig. 1. As shown in Fig. 4, the method may include the following steps:
s401, acquiring an image frame to be encoded, wherein the image frame comprises an image block corresponding to an object.
In some embodiments, the image capturing apparatus captures an image frame to be encoded by the camera, and the image frame may include therein image blocks divided based on different object objects, such as image block 1, image block 2, and image block 3 shown in fig. 2.
For example, after the image frame is acquired by the image capturing device, an object in the image frame may be extracted, and the image blocks are divided based on the object, so as to obtain an image frame including one or more image blocks.
S402, allocating the upper limit code rate corresponding to the image block based on the maximum code rate of the image frame and the complexity of the image block.
In some embodiments, the maximum code rate may be the maximum code rate per second set by bandwidth or storage requirements; frame-level rate control may allocate the maximum code rate of each frame (i.e., each image frame) based on this maximum code rate, the frame rate, and the encoded frame type.
Illustratively, based on the divided image blocks, the complexity of each image block is determined in the same way as described earlier; it may be the texture complexity of the current image frame calculated before encoding, or a complexity predicted from the encoded content of the previous image frame. Image blocks of different complexity correspond to different upper limit code rates, and the upper limit code rate can be allocated based on each block's complexity level, or based on its actual complexity.
For example, if the complexities of image block 1, image block 2 and image block 3 are in the ratio 2:1:1 and the maximum code rate of the image frame is 400 kb/s, the upper limit code rates allocated to image blocks 1, 2 and 3 may be 200 kb/s, 100 kb/s and 100 kb/s respectively. Alternatively, if the image frame contains other areas besides image blocks 1, 2 and 3, the code rate remaining after allocation to those areas can be distributed among the image blocks according to their complexity. Or, if the complexity of image block 3 is very low and all of its detail information can be retained at a code rate of 70 kb/s, no higher code rate is needed: its upper limit code rate may be 70 kb/s, and image blocks 1 and 2 are allocated code rate based on their actual complexities or complexity ratio.
It should be noted that the sum of the upper limit code rates allocated to all image blocks in the image frame is less than or equal to the maximum code rate of the frame. The allocation modes above may be set based on the actual application scenario and the data of the collected image frames, and may be configured flexibly based on the complexity levels of the image blocks; they are not limited here.
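A minimal sketch of the complexity-proportional variant, assuming the whole frame budget is split among the listed blocks; the function name is illustrative, and the printed values mirror the 2:1:1 example above.

```python
def allocate_upper_rates(max_rate_kbps: float, complexities: list) -> list:
    # Split the frame's maximum code rate in proportion to block complexity,
    # so the allocations sum exactly to the maximum code rate.
    total = sum(complexities)
    return [max_rate_kbps * c / total for c in complexities]

print(allocate_upper_rates(400, [2, 1, 1]))  # -> [200.0, 100.0, 100.0]
```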
S403, determining, among the quantization parameters of each level, the acceptable quantization parameter corresponding to the image block based on the mapping relationship between complexity and the quantization parameters of each level.
Illustratively, the acceptable quantization parameter may be any one of the quantization parameters of each level, or the quantization parameter at the highest quality level.
As shown in Fig. 3, in the mapping relationship between the complexity of image block 1 and its quantization parameters of each level, either QP3 or QP4 may be used as the acceptable quantization parameter, or QP4, at the highest quality level, may be used.
S404, determining the estimated code rate corresponding to the image block according to the acceptable quantization parameter.
In some embodiments, dividing the code rate by the resolution of the image frame yields the bits per pixel, bpp, and a functional relationship between bpp and QP can be modeled, e.g., QP = a × log2(bpp) + b, where the parameters a and b are updated gradually as the scene changes. The estimated code rate corresponding to the image block can thus be calculated from the acceptable quantization parameter.
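A minimal sketch of inverting the bpp-QP model above to estimate a block's code rate from its acceptable QP; the parameter values a and b and the per-second normalisation are illustrative assumptions (a is negative, since a higher QP yields fewer bits per pixel).

```python
def estimated_rate_kbps(qp: float, num_pixels: int, fps: float,
                        a: float = -6.0, b: float = 10.0) -> float:
    bpp = 2.0 ** ((qp - b) / a)           # invert QP = a * log2(bpp) + b
    return bpp * num_pixels * fps / 1000  # bits per pixel -> kbit/s
```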
S405, if the estimated code rate is smaller than the upper limit code rate, encoding the image block in the image frame based on the acceptable quantization parameter.
In some embodiments, the image quality corresponding to an acceptable QP from the mapping model above is thereby guaranteed, while the code rate of the image block is also kept within a reasonable range.
When a frame is encoded, the code rate corresponding to each image block can be determined in the manner above, and the frame encoded based on those per-block code rates, guaranteeing the image quality of each encoded image block while keeping the code rate of the whole image frame within the maximum code rate.
In some embodiments, if the estimated code rate is greater than or equal to the upper limit code rate, the image block in the image frame is encoded based on the upper limit code rate.
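A minimal sketch of the S405 decision; encode_with_qp and encode_at_rate are hypothetical encoder hooks, not calls from any particular codec API.

```python
def encode_block(block, acceptable_qp, est_rate_kbps, upper_rate_kbps):
    if est_rate_kbps < upper_rate_kbps:
        # Quality-first path: the acceptable QP fits under the rate cap.
        return encode_with_qp(block, acceptable_qp)  # hypothetical hook
    # Rate-capped path: fall back to encoding at the upper limit code rate.
    return encode_at_rate(block, upper_rate_kbps)    # hypothetical hook
```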
Based on this intra-frame rate-control process over the blocks of different objects, the embodiment of the application encodes the image blocks of different target objects with suitable acceptable QPs within the upper limit code rate range. The image quality level of any object can be adjusted autonomously to meet viewing requirements, while the single-frame code rate and overall sharpness are kept essentially unchanged as far as possible.
In some embodiments, the target code rate of an image block in the image frame to be encoded can be further determined by judging whether the image block is moving. As shown in Fig. 5, after the estimated code rate corresponding to the image block is determined according to the acceptable quantization parameter, the method further includes:
S410, judging, based on the current image frame and the adjacent previous image frame, whether the image block is a moving image block.
For example, moving image blocks may exist in the image frames of a video, such as image block 1 shown in Fig. 2; when a moving image block exists, conventional encoding methods easily suffer from discontinuity, unstable rate control, or image quality jumps. Therefore, whether an image block in the current image frame is moving can be predicted before encoding by comparing the pixel-value differences between co-located image blocks of the current frame and the previous frame, or it can be predicted from the motion information of the previous frame obtained during encoding.
In addition, whether the image block is a moving image block can be judged from the coding motion vectors of the image block in already-encoded frames, or by other motion-statistics prediction methods.
S411, if the image block is a moving image block, acquiring the motion sensitivity of the moving image block.
For example, the motion sensitivity reflects the motion amplitude of the object in the image block. When it is determined from the sum of pixel-value differences between the image block in the current frame and the co-located block in the previous frame: assuming the sum of differences is 30 and the maximum pixel-value difference is set somewhere between 100 and 1000, the corresponding motion amplitude (motion sensitivity) is 30% when the maximum is set to 100, and 3% when it is set to 1000.
It should be noted that the maximum pixel-value difference may be set based on the actual application scenario; the motion sensitivity calculation above is only an illustration and is not specifically limited.
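A minimal sketch of this sensitivity measure, assuming grayscale numpy blocks; the normaliser diff_max is the scenario-dependent maximum pixel-value difference discussed above.

```python
import numpy as np

def motion_sensitivity(cur_block: np.ndarray, prev_block: np.ndarray,
                       diff_max: float = 100.0) -> float:
    diff_sum = np.abs(cur_block.astype(np.int32)
                      - prev_block.astype(np.int32)).sum()
    return min(diff_sum / diff_max, 1.0)  # clamp to a 100% motion amplitude
```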
S412, if the estimated code rate is smaller than the upper limit code rate, calculating the target code rate corresponding to the image block based on the motion sensitivity, the estimated code rate and the upper limit code rate.
Illustratively, in view of the coding problems caused by motion, the code rate of the complexity-derived QP is linked to the degree of motion, and the code rate of the image block is adjusted dynamically and continuously in real time. For example, with a motion sensitivity of 30%, an acceptable-QP code rate of 100 kb/s and an upper limit code rate of 200 kb/s, the motion-adjusted target code rate is 100 + 30% × (200 − 100) = 130 kb/s.
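A minimal sketch of the S412 interpolation: the target rate moves from the estimated rate toward the upper limit rate in proportion to the motion sensitivity.

```python
def target_rate_kbps(sensitivity: float, est_kbps: float, upper_kbps: float) -> float:
    return est_kbps + sensitivity * (upper_kbps - est_kbps)

print(target_rate_kbps(0.30, 100, 200))  # -> 130.0, as in the example above
```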
S413, calculating the target quantization parameter of the image block based on the target code rate, and encoding the moving image block in the current image frame based on the target quantization parameter.
Illustratively, in the same way as described above, the target quantization parameter is calculated from the target code rate, and the moving image block is encoded based on it.
The embodiment of the application links the complexity-derived QP to a code rate reflecting the degree of motion and performs continuous, real-time dynamic rate adjustment; this reduces the probability of the discontinuity, unstable rate control and image quality jumps caused by the stepwise, class-based adjustment of conventional methods, and improves the continuity and stability of inter-frame rate control while guaranteeing image quality.
In some embodiments, after the target code rate of the image block is determined, the target quantization parameter of the image block may be further smoothed. As shown in Fig. 6, after the target quantization parameter of the image block is calculated based on the target code rate, the method further includes:
S414, in the current image frame, smoothing the target quantization parameter of the image block based on the quantization parameters of its adjacent image blocks and a preset parameter difference threshold, to obtain the smoothed target quantization parameter.
For example, within the same frame, the quantization parameters of adjacent image blocks can be smoothed, so that the image quality of adjacent blocks transitions smoothly and the overall image quality improves.
For example, processed through a smoothing function: suppose the target QP calculated for the first image block is 30 and that calculated for the adjacent second image block is 35, while the parameter difference threshold limiting visible boundaries between adjacent blocks is 4; the target QP of the second block can then be at most 34, and so on, with the target quantization parameters of subsequent blocks calculated and adjusted accordingly.
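A minimal sketch of the threshold clamp in S414; the same helper applies to the inter-frame smoothing of S416 below, with the co-located previous-frame QP as the reference. The threshold of 4 mirrors the example above.

```python
def smooth_qp(target_qp: int, reference_qp: int, diff_threshold: int = 4) -> int:
    low, high = reference_qp - diff_threshold, reference_qp + diff_threshold
    return max(low, min(target_qp, high))  # clamp into the allowed band

print(smooth_qp(35, 30))  # -> 34, as in the example above
```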
S415, encoding the image block of the current image frame based on the smoothed target quantization parameter.
For example, encoding may be performed based on the target quantization parameter after intra-frame smoothing, or inter-frame smoothing may be applied after the intra-frame smoothing before encoding.
S416, smoothing the target quantization parameter of the image block in the current image frame based on the quantization parameter of the co-located image block in the previous image frame and a preset parameter difference threshold, to obtain the smoothed target quantization parameter.
Illustratively, this follows the same principle as the intra-frame smoothing: for example, if the QP of the 1st image block of the previous image frame is 30 and the target QP of the 1st image block of the adjacent next frame is 35, with a parameter difference threshold of 4, the target quantization parameter of that block can be at most 34.
S417, encoding the image block of the current image frame based on the smoothed target quantization parameter.
Based on the video encoding process above, after the image frame to be encoded is acquired, the embodiment of the application can also regulate the code rate among the different image blocks in the frame. The regulation process may include the following step:
taking a first image block among the image blocks as a target image block, and, when adjusting the first-level quantization parameter of the target image block to a second-level quantization parameter, adjusting the third-level quantization parameters of the other, second image blocks to fourth-level quantization parameters.
For example, the process of establishing the mapping model yields, for each complexity, a multi-level mapping from QP to different image quality levels; the image blocks of the various target objects can thus have their image quality levels (i.e., the quantization parameters of each level corresponding to the blocks) adjusted independently, and the user can independently select and adjust the image quality sharpness of each specific object in the picture. When the mapped image quality level for the complexity of the target object (the object in the target image block) is raised, the image quality levels of the other, higher-complexity blocks apart from the target complexity are correspondingly lowered, so that the code rate of the single image frame remains essentially unchanged.
As shown in Fig. 2, when the billboard in image block 2 needs a higher image quality level (a lower quantization parameter), the image quality level of image block 3 at the same complexity level can be lowered correspondingly (its quantization parameter raised), or the image quality level of image block 1, whose complexity level is higher than that of image block 2, can be lowered (its quantization parameter raised).
In practical use, a user can independently and arbitrarily adjust the local sharpness within a picture. For example, if the user wants to see clearly the pedestrians captured by the monitoring camera, the image quality level of the pedestrians can be adjusted, and the image capture device maps the complexity of the image blocks containing the pedestrians to the quantization parameter QP that varies with the image quality level; likewise, if the user wants to read the text on a billboard in the picture, the billboard can be adjusted in the same way. In this manner, the image quality sharpness of any object in the picture becomes continuously adjustable.
Because the human visual system exhibits contrast sensitivity, masking effects and similar characteristics, image blocks of different complexity require different QPs to reach a target image quality level. When viewing the encoded video, regions of interest that cannot be seen clearly have higher sharpness-adjustment demands, while in regions of non-interest the higher-complexity areas can give up some code rate, their distortion being hard to perceive due to masking. Therefore, this intra-frame rate-control method over the blocks of different objects can encode the image blocks of different objects with suitable QPs within the code rate limit, autonomously adjust the image quality level of any object to meet viewing requirements, and keep the single-frame code rate and overall sharpness essentially unchanged as far as possible.
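A minimal sketch of that level trade, under the stated constraint that the adjusted total rate stays below the frame's maximum; qp_levels, current and rate_for_qp are illustrative structures (rate_for_qp could reuse the bpp-QP model sketched earlier), and neighbouring QP levels are assumed to exist.

```python
def trade_levels(qp_levels: dict, current: dict, target: str, other: str,
                 max_rate_kbps: float, rate_for_qp) -> bool:
    trial = dict(current)
    # Raise the target block's quality: move to the next smaller QP level.
    trial[target] = max(q for q in qp_levels[target] if q < current[target])
    # Compensate on the other block: move to the next larger QP level.
    trial[other] = min(q for q in qp_levels[other] if q > current[other])
    if sum(rate_for_qp(name, qp) for name, qp in trial.items()) < max_rate_kbps:
        current.update(trial)  # commit: the frame stays within its maximum rate
        return True
    return False  # reject: the adjusted rates would exceed the maximum
```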
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 7 is a schematic structural diagram of a video encoding apparatus according to an embodiment of the present application, corresponding to the video encoding method provided in the above embodiment, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
Referring to fig. 7, the video encoding apparatus includes:
an acquisition unit 71, configured to acquire an image frame to be encoded, where the image frame includes an image block corresponding to a target object;
an allocation unit 72, configured to allocate an upper limit code rate corresponding to the image block based on a maximum code rate of the image frame and a complexity of the image block;
a processing unit 73, configured to determine an acceptable quantization parameter corresponding to the image block in each level of quantization parameters based on a mapping relationship between the complexity and each level of quantization parameters;
a calculating unit 74, configured to determine an estimated code rate corresponding to the image block according to the acceptable quantization parameter;
and an encoding unit 75, configured to encode the image block in the image frame based on the acceptable quantization parameter if the estimated code rate is less than the upper limit code rate.
In a possible implementation manner, the apparatus further includes a model creation unit, configured to extract the different target objects in a sample image frame and acquire the image block corresponding to each target object; encode each image block based on preset quantization parameters to obtain the encoded image blocks corresponding to each image block; obtain the image quality scores of the encoded image blocks, and determine, among the preset quantization parameters, the quantization parameters of each level corresponding to each image block based on the image quality scores; and calculate the complexity of each image block and establish the mapping relationship between the complexity corresponding to each image block and the quantization parameters of each level.
In a possible implementation manner, the encoding unit 75 is further configured to determine, based on the current image frame and the adjacent previous image frame, whether the image block is a moving image block; if the image block is the moving image block, acquiring the motion sensitivity of the moving image block; if the estimated code rate is smaller than the upper limit code rate, calculating a target code rate corresponding to the image block based on the motion sensitivity, the estimated code rate and the upper limit code rate; and calculating a target quantization parameter of the image block based on the target code rate, and encoding the moving image block in the current image frame based on the target quantization parameter.
In a possible implementation manner, the encoding unit 75 is further configured to, in the current image frame, perform smoothing on the target quantization parameter of the image block based on the quantization parameter of the image block adjacent to the image block and a preset parameter difference threshold value, to obtain a smoothed target quantization parameter; the image block of the current image frame is encoded based on the smoothed target quantization parameter.
In a possible implementation manner, the encoding unit 75 is further configured to perform smoothing on the target quantization parameter of the image block in the current image frame based on the quantization parameter of the image block in the previous image frame that is located at the same position as the image block in the current image frame and a preset parameter difference threshold, to obtain a smoothed target quantization parameter; the image block of the current image frame is encoded based on the smoothed target quantization parameter.
In a possible implementation, the obtaining unit 71 is further configured to divide the image frame into initial image blocks; calculating the complexity of the initial image block to obtain the complexity level of the initial image block; and clustering the initial image block based on the complexity level and the spatial position relation to obtain the image block corresponding to the object.
In a possible implementation manner, the encoding unit 75 is further configured to take a first image block among the image blocks as a target image block and, when adjusting the first-level quantization parameter of the target image block to a second-level quantization parameter, adjust the third-level quantization parameters of the other, second image blocks to fourth-level quantization parameters; where the first-level quantization parameter and the second-level quantization parameter are quantization parameters among the levels of quantization parameters corresponding to the first image block, and the first-level quantization parameter is larger than the second-level quantization parameter; the third-level quantization parameter and the fourth-level quantization parameter are quantization parameters among the levels of quantization parameters corresponding to the second image blocks, and the third-level quantization parameter is smaller than the fourth-level quantization parameter; and the total code rate corresponding to the adjusted quantization parameters of the image blocks is smaller than the maximum code rate of the image frame.
In a possible implementation, the encoding unit 75 is further configured to encode the image block in the image frame based on the upper limit code rate if the estimated code rate is greater than or equal to the upper limit code rate.
Fig. 8 shows a schematic diagram of the hardware configuration of the electronic device 8.
As shown in Fig. 8, the electronic device 8 of this embodiment includes: at least one processor 80 (only one is shown in Fig. 8) and a memory 81, the memory 81 storing a computer program 82 executable on the processor 80. The steps in the above method embodiments, for example S401 to S405 shown in Fig. 4, are implemented when the processor 80 executes the computer program 82. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules/units of the apparatus embodiments described above.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 8. In other embodiments of the application, the electronic device 8 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The electronic device 8 may be an image capture apparatus. The electronic device 8 may include, but is not limited to, the processor 80 and the memory 81. It will be appreciated by those skilled in the art that Fig. 8 is merely an example of the electronic device 8 and does not limit it; the device may include more or fewer components than shown, or combine certain components, or use different components, e.g., it may also include input and output devices, a network access device, a bus, and the like.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
A memory may also be provided in the processor 80 for storing instructions and data. In some embodiments, the memory in the processor 80 is a cache. This memory may hold instructions or data that the processor 80 has just used or uses cyclically. If the processor 80 needs to use those instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 80, and thereby improves the efficiency of the system.
The above-mentioned memory 81 may, in some embodiments, be an internal storage unit of the electronic device 8, such as a hard disk or memory of the electronic device 8. The memory 81 may also be an external storage device of the electronic device 8, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the electronic device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the electronic device 8. The memory 81 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 81 may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
It should be noted that the structure of the electronic device described above is merely an example; other physical structures may be included depending on the application scenario, and the physical structure of the electronic device is not limited herein.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
Embodiments of the present application further provide a computer program product which, when run on a server, causes the server to perform the steps of the various method embodiments described above.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the present application implements all or part of the flow of the methods of the above embodiments, which may also be completed by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The video encoding apparatus, the electronic device, the computer storage medium, and the computer program product provided by the embodiments of the present application are all used to execute the method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects corresponding to the method provided above, which are not described herein again.
It should be understood that the above description is only intended to help those skilled in the art better understand the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application. It will be apparent to those skilled in the art that various equivalent modifications or variations can be made based on the foregoing examples; for example, certain steps in the various embodiments of the methods described above may be unnecessary, certain steps may be newly added, or any two or more of the above embodiments may be combined. Such modified, varied, or combined solutions also fall within the scope of the embodiments of the present application.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this description.
It should also be understood that the division into manners, cases, categories, and embodiments in the embodiments of the present application is merely for convenience of description and should not be construed as a particular limitation; the features of the various manners, categories, cases, and embodiments may be combined where no contradiction arises.
It is also to be understood that, in the various embodiments of the application, unless otherwise specified or logically conflicting, the terms and descriptions in different embodiments are consistent and may be referenced by one another, and the technical features of different embodiments may be combined, according to their inherent logical relationships, to form new embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Finally, it should be noted that: the foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of video encoding, the method comprising:
acquiring an image frame to be encoded, wherein the image frame comprises an image block corresponding to an object;
based on the maximum code rate of the image frame and the complexity of the image block, allocating an upper limit code rate corresponding to the image block;
determining acceptable quantization parameters corresponding to the image block in each level of quantization parameters based on the mapping relation between the complexity and each level of quantization parameters;
determining an estimated code rate corresponding to the image block according to the acceptable quantization parameter;
and if the estimated code rate is smaller than the upper limit code rate, encoding the image block in the image frame based on the acceptable quantization parameter.
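For orientation only (this sketch is not part of the claims), the claim 1 flow could look as follows in Python; alloc, estimate_rate, and encode are assumed stand-ins for the rate allocator and the coding core, and the else branch anticipates the fallback of claim 8.

```python
from dataclasses import dataclass

@dataclass
class Block:
    pixels: list        # raw samples of the image block
    complexity: int     # complexity measure of the block

def encode_frame(blocks, max_frame_rate, qp_table, alloc, estimate_rate, encode):
    # Allocate a per-block upper-limit code rate from the frame's maximum
    # code rate and the blocks' complexities (alloc is an assumed helper).
    upper = alloc(blocks, max_frame_rate)
    for block, cap in zip(blocks, upper):
        # Acceptable QP from the complexity-to-per-level-QP mapping; here,
        # simply the finest (smallest) QP recorded for that complexity.
        qp = min(qp_table[block.complexity])
        if estimate_rate(block, qp) < cap:
            encode(block, qp=qp)        # quality-first path of claim 1
        else:
            encode(block, rate=cap)     # rate-capped fallback (claim 8)
```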
2. The method of claim 1, wherein prior to said determining acceptable ones of said levels of quantization parameters corresponding to said image block based on a mapping of said complexity to said levels of quantization parameters, said method further comprises:
extracting different object objects in a sample image frame, and acquiring image blocks corresponding to the object objects;
coding each image block based on a preset quantization parameter to obtain a coded image block corresponding to each image block;
obtaining an image quality score of each encoded image block, and determining, based on the image quality scores, the quantization parameters of each level corresponding to each image block from among the preset quantization parameters;
and calculating the complexity of each image block, and establishing a mapping relation between the complexity corresponding to each image block and the quantization parameters of each level.
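As a concrete (hypothetical) reading of claim 2, the mapping could be built offline as sketched below; encode_at and quality_score are assumed helpers, the quality thresholds per level are invented for illustration, and the variance-based complexity is only one possible measure since the application does not fix a formula.

```python
import statistics

def build_qp_mapping(sample_blocks, candidate_qps, encode_at, quality_score,
                     thresholds=(90, 75, 60)):
    """Encode each sample block at every candidate QP, score the results, and
    keep, per quality level, the largest QP whose score clears that level's bar.
    Returns {complexity: [qp_level_0, qp_level_1, ...]}."""
    mapping = {}
    for block in sample_blocks:
        scores = {qp: quality_score(encode_at(block, qp)) for qp in candidate_qps}
        per_level = []
        for bar in thresholds:
            ok = [qp for qp, s in scores.items() if s >= bar]
            # Largest QP (fewest bits) still meeting this level's quality bar;
            # fall back to the finest QP if none qualifies.
            per_level.append(max(ok) if ok else min(candidate_qps))
        mapping[compute_complexity(block)] = per_level
    return mapping

def compute_complexity(block):
    # Placeholder complexity measure: variance of the block's sample values.
    return round(statistics.pvariance(block))
```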
3. The method of claim 1, wherein after said determining an estimated code rate corresponding to said image block based on said acceptable quantization parameter, said method further comprises:
judging whether the image block is a moving image block or not based on the current image frame and the adjacent previous image frame;
if the image block is the moving image block, acquiring the motion sensitivity of the moving image block;
if the estimated code rate is smaller than the upper limit code rate, calculating a target code rate corresponding to the image block based on the motion sensitivity, the estimated code rate and the upper limit code rate;
and calculating a target quantization parameter of the image block based on the target code rate, and encoding the moving image block in the current image frame based on the target quantization parameter.
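The application does not spell out the formulas, so the following is one plausible Python reading: the target code rate moves from the estimated rate toward the upper-limit rate as motion sensitivity grows, and the target QP is then derived from an assumed logarithmic rate-QP model.

```python
import math

def target_rate(estimated, upper, sensitivity):
    # sensitivity in [0, 1]: a more motion-sensitive block is pushed closer
    # to its upper-limit code rate (assumed linear interpolation).
    return estimated + sensitivity * (upper - estimated)

def qp_from_rate(rate, ref_rate, ref_qp, slope=6.0):
    # Assumed model, not from the application: adding ~6 to the QP roughly
    # halves the code rate, so QP falls logarithmically as the rate grows.
    return ref_qp - slope * math.log2(rate / ref_rate)

# e.g. qp_from_rate(target_rate(800, 1200, 0.5), ref_rate=800, ref_qp=32)
# gives about 30.1: a higher target rate maps to a smaller (finer) QP.
```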
4. A method according to claim 3, characterized in that after said calculating the target quantization parameter of the image block based on the target code rate, the method further comprises:
in the current image frame, smoothing the target quantization parameter of the image block based on the quantization parameter of the image block adjacent to the image block and a preset parameter difference threshold value to obtain a smoothed target quantization parameter;
the image block of the current image frame is encoded based on the smoothed target quantization parameter.
5. A method according to claim 3, characterized in that after said calculating the target quantization parameter of the image block based on the target code rate, the method further comprises:
smoothing the target quantization parameter of the image block in the current image frame based on the quantization parameter of the image block in the previous image frame, which is the same as the image block in the current image frame, and a preset parameter difference threshold value, so as to obtain a smoothed target quantization parameter;
the image block of the current image frame is encoded based on the smoothed target quantization parameter.
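Claims 4 and 5 share the same smoothing step, differing only in the reference block (a spatially adjacent block versus the co-located block of the previous frame). A minimal sketch, assuming the smoothing is a plain clamp against the preset parameter difference threshold:

```python
def smooth_qp(target_qp, reference_qp, max_diff):
    # Clamp the target QP so it differs from the reference QP by at most
    # max_diff (the preset parameter difference threshold). The clamp itself
    # is an assumption; the claims only name the threshold.
    return max(reference_qp - max_diff, min(reference_qp + max_diff, target_qp))

# Spatial case (claim 4): reference_qp from an adjacent block in this frame.
# Temporal case (claim 5): reference_qp from the same block, previous frame.
# e.g. smooth_qp(38, reference_qp=30, max_diff=5) returns 35
```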
6. The method of claim 1, wherein the acquiring the image frame to be encoded comprises:
dividing the image frame into initial image blocks;
calculating the complexity of the initial image block to obtain the complexity level of the initial image block;
and clustering the initial image block based on the complexity level and the spatial position relation to obtain the image block corresponding to the object.
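One concrete (but assumed) reading of this clustering is a connected-components pass over the grid of initial blocks, merging 4-adjacent blocks that share a complexity level; other groupings would satisfy the claim equally well.

```python
def cluster_blocks(levels):
    """levels[r][c]: complexity level of the initial block at row r, column c.
    Returns a list of clusters, each a list of (row, col) block coordinates."""
    rows, cols = len(levels), len(levels[0])
    label = [[None] * cols for _ in range(rows)]
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if label[r][c] is not None:
                continue
            label[r][c] = len(clusters)
            stack, members = [(r, c)], []
            while stack:
                y, x = stack.pop()
                members.append((y, x))
                # Merge 4-adjacent initial blocks with the same complexity level.
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and label[ny][nx] is None
                            and levels[ny][nx] == levels[r][c]):
                        label[ny][nx] = len(clusters)
                        stack.append((ny, nx))
            clusters.append(members)
    return clusters

# e.g. cluster_blocks([[1, 1], [2, 1]]) -> [[(0, 0), (0, 1), (1, 1)], [(1, 0)]]
```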
7. The method of claim 1, wherein after the capturing of the image frame to be encoded, the method further comprises:
taking a first image block among the image blocks as a target image block, and, when adjusting a first-level quantization parameter of the target image block to a second-level quantization parameter, adjusting third-level quantization parameters of the other, second image blocks to fourth-level quantization parameters;
the first-level quantization parameter and the second-level quantization parameter are quantization parameters in all levels of quantization parameters corresponding to the first image block, and the first-level quantization parameter is larger than the second-level quantization parameter; the third-level quantization parameter and the fourth-level quantization parameter are quantization parameters in all levels of quantization parameters corresponding to the second image block, and the third-level quantization parameter is smaller than the fourth-level quantization parameter; and the total code rate corresponding to the quantization parameter of each adjusted image block is smaller than the maximum code rate of the image frame.
8. The method according to any one of claims 1 to 7, wherein after said determining an estimated code rate corresponding to said image block based on said acceptable quantization parameter, said method further comprises:
and if the estimated code rate is greater than or equal to the upper limit code rate, encoding the image block in the image frame based on the upper limit code rate.
9. A video encoding apparatus, comprising:
the acquisition unit is used for acquiring an image frame to be encoded, wherein the image frame comprises an image block corresponding to an object;
the allocation unit is used for allocating the upper limit code rate corresponding to the image block based on the maximum code rate of the image frame and the complexity of the image block;
a processing unit, configured to determine an acceptable quantization parameter corresponding to the image block in each level of quantization parameters based on a mapping relationship between the complexity and each level of quantization parameters;
the calculating unit is used for determining an estimated code rate corresponding to the image block according to the acceptable quantization parameter;
and the encoding unit is used for encoding the image blocks in the image frame based on the acceptable quantization parameter if the estimated code rate is smaller than the upper limit code rate.
10. An electronic device comprising a memory storing a computer program and a processor implementing the method of any one of claims 1 to 8 when the computer program is executed by the processor.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 8.
CN202311095360.XA 2023-08-28 2023-08-28 Video encoding method, video encoding device, electronic equipment and computer readable storage medium Pending CN117201792A (en)

Priority Applications (1)

Application: CN202311095360.XA; Priority date: 2023-08-28; Filing date: 2023-08-28; Title: Video encoding method, video encoding device, electronic equipment and computer readable storage medium

Publications (1)

Publication: CN117201792A; Publication date: 2023-12-08

Family

ID=88999147

Country Status (1)

CN: CN117201792A (en)


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination