CN114125459A - Video coding method and device - Google Patents

Video coding method and device

Info

Publication number
CN114125459A
CN114125459A
Authority
CN
China
Prior art keywords
layer
video frame
picture group
target
frame
Prior art date
Legal status
Pending
Application number
CN202010900351.3A
Other languages
Chinese (zh)
Inventor
张海斌
蔡媛
樊鸿飞
张文杰
许道远
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202010900351.3A
Publication of CN114125459A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/177 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Abstract

The application relates to a video coding method and apparatus. The method includes: obtaining distortion information of each layer of video frames in a plurality of layers of an encoded first picture group; determining a target quantization parameter for each layer of video frames in a second picture group, corresponding to each layer of video frames in the first picture group, according to the distortion information of each layer of video frames in the first picture group and the distortion information of the reference-layer video frames of each layer, where the reference-layer video frames of a layer in the first picture group are the video frames referred to when that layer was encoded; and encoding each layer of video frames in the second picture group using the target quantization parameter of that layer. The method and apparatus solve the technical problem in the related art that the quantization parameters used during video coding adapt poorly to the video content.

Description

Video coding method and device
Technical Field
The present application relates to the field of computers, and in particular to a video encoding method and apparatus.
Background
In current video coding technology, a whole video sequence is usually coded in units of GOPs (groups of pictures). Each GOP is divided into different layers according to the reference relationships, and each layer is assigned a fixed frame-level QP (quantization parameter) based on empirical values: the more a layer is referenced within the GOP, the lower the layer and the lower its assigned QP. This approach ignores the influence of each layer on the other layers, so the assigned quantization parameters adapt poorly to the content.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The application provides a video coding method and apparatus to at least solve the technical problem in the related art that the quantization parameters used during video coding adapt poorly to the content.
According to an aspect of an embodiment of the present application, there is provided a video encoding method, including:
obtaining distortion information of each layer of video frames in a plurality of layers of a coded first picture group, wherein the first picture group is a picture group which is coded before a second picture group to be coded in a target video;
determining a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is coded;
encoding each layer video frame in the second group of pictures using the target quantization parameter for each layer video frame in the second group of pictures.
Optionally, determining, according to the distortion information of each layer of video frame in the first picture group and the distortion information of the reference layer of video frame of each layer of video frame in the first picture group, a target quantization parameter of each layer of video frame in the second picture group corresponding to each layer of video frame in the first picture group includes:
calculating an influence factor of each layer of video frame in the second picture group corresponding to each layer of video frame in the first picture group according to the distortion information of each layer of video frame in the first picture group and the distortion information of a reference layer of video frame of each layer of video frame in the first picture group, wherein the influence factor is used for indicating the influence degree of each layer of video frame in the second picture group as a reference layer on a coding layer which refers to each layer of video frame in the second picture group;
and determining the target quantization parameter of each layer of video frame in the second picture group according to the influence factor of each layer of video frame in the second picture group.
Optionally, calculating an impact factor of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to the distortion information of each layer video frame in the first picture group and the distortion information of the reference layer video frame of each layer video frame in the first picture group includes:
calculating an average dependency coefficient of each layer video frame in the second picture group relative to a reference layer video frame according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the average dependency coefficient is used for indicating the dependency of each layer video frame in the second picture group on the reference layer video frame, and the reference layer video frame is a video frame referred to by each layer video frame in the second picture group when encoding;
and determining the influence factor of each layer of video frame in the second picture group according to the corresponding average dependency coefficient when each layer of video frame in the second picture group is used as a reference layer video frame.
Optionally, calculating an average dependency coefficient of each layer video frame in the second picture group relative to a reference layer video frame according to the distortion information of each layer video frame in the first picture group and the distortion information of the reference layer video frame of each layer video frame in the first picture group includes:
calculating the variance of each video frame in the second picture group in motion estimation;
calculating a dependency coefficient corresponding to each reference relation in the second picture group according to the variance of each video frame, the average coding distortion of the layer where each video frame is located and the average reference distortion of the layer where the reference frame referred to by each video frame is located, wherein the dependency coefficient is used for indicating the dependency of the coding frame in each reference relation on the reference frame, and the distortion information comprises the average coding distortion and the average reference distortion;
and calculating the average dependency coefficient of each layer video frame in the second picture group relative to the reference layer video frame according to the dependency coefficient of the reference relationship and the number of the reference relationships included between each layer video frame in the second picture group and the reference layer video frame.
Optionally, determining the influence factor of each layer of video frame in the second picture group according to the corresponding average dependency coefficient when each layer of video frame in the second picture group is used as a reference layer video frame includes:
for a target layer that is not referenced in the second picture group, determining its influence factor to be a preset influence factor;
and for a target layer that is referenced in the second picture group, determining the influence factor of the target layer according to the average dependency coefficient and the influence factor corresponding to the coding layer that references the target layer.
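The two branches above can be sketched in code. The excerpt does not give the closed form that combines the average dependency coefficient with the referencing layer's influence factor, so the formula below (1 + dependency × influence of the referencing layer, over a simple chain of layers) and all names are illustrative assumptions; only the structure — preset factor for the un-referenced top layer, accumulated influence for referenced layers — follows the text.

```python
def impact_factors(num_layers, avg_dep, preset=1.0):
    """Illustrative influence-factor computation for a chain of layers.

    num_layers: layers 0 (lowest) .. num_layers - 1 (highest).
    avg_dep[i]: assumed average dependency coefficient of layer i on its
                reference layer i - 1 (avg_dep[0] is unused).
    """
    impact = [0.0] * num_layers
    impact[num_layers - 1] = preset  # un-referenced target layer: preset factor
    # Walk downward: each referenced layer accumulates the influence of the
    # coding layer that references it (assumed form: 1 + dep * impact).
    for i in range(num_layers - 2, -1, -1):
        impact[i] = 1.0 + avg_dep[i + 1] * impact[i + 1]
    return impact
```

With this assumed form, lower (more referenced) layers end up with larger influence factors, which in the QP step below translates into lower QPs for heavily referenced layers.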
Optionally, determining the target quantization parameter of each layer of video frame in the second picture group according to the influence factor of each layer of video frame in the second picture group includes:
searching, in a table of Lambda parameters and quantization parameters having a corresponding relationship, for the preset Lambda parameter corresponding to a preset quantization parameter;
for a target layer that is not referenced in the second picture group, determining the target quantization parameter of the target layer to be the preset quantization parameter;
for a target layer that is referenced in the second picture group, determining the ratio of the preset Lambda parameter to the influence factor of the target layer as the target Lambda parameter of the target layer; and searching the table for the quantization parameter corresponding to the target Lambda parameter, as the target quantization parameter of the target layer.
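A minimal sketch of this lookup step follows. The patent only requires some monotone table pairing QP with Lambda; the H.264/HEVC-style mapping λ = 0.85·2^((QP−12)/3) used to populate the table is an assumption for illustration, as are the function and variable names.

```python
# Assumed monotone QP <-> Lambda table; the exact mapping is not specified
# in this excerpt (an H.264/HEVC-style relation is used here).
LAMBDA_OF_QP = {qp: 0.85 * 2 ** ((qp - 12) / 3.0) for qp in range(52)}

def target_qp(preset_qp, impact_factor, is_referenced):
    if not is_referenced:
        # Un-referenced target layer keeps the preset QP.
        return preset_qp
    # Referenced layer: divide the preset Lambda by the influence factor,
    # then look up the QP whose Lambda is closest to the target Lambda.
    target_lambda = LAMBDA_OF_QP[preset_qp] / impact_factor
    return min(LAMBDA_OF_QP, key=lambda q: abs(LAMBDA_OF_QP[q] - target_lambda))
```

For example, with preset QP 30 and influence factor 4, the target Lambda is a quarter of the preset Lambda, which this table maps to QP 24: a larger influence factor lowers the QP of a referenced layer, matching the intuition that heavily referenced layers should be coded at higher quality.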
According to another aspect of the embodiments of the present application, there is also provided an apparatus for encoding video, including:
an obtaining module, configured to obtain distortion information of each layer of a video frame in multiple layers of a first picture group that has been encoded, where the first picture group is a picture group that is encoded in a target video before a second picture group to be encoded currently;
a determining module, configured to determine a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, where the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is encoded;
an encoding module for encoding each layer of video frames in the second group of pictures using the target quantization parameter for each layer of video frames in the second group of pictures.
Optionally, the determining module includes:
a calculating unit, configured to calculate an influence factor of each layer of video frames in the second picture group corresponding to each layer of video frames in the first picture group according to distortion information of each layer of video frames in the first picture group and distortion information of reference layer video frames of each layer of video frames in the first picture group, where the influence factor is used to indicate a degree of influence of each layer of video frames in the second picture group as a reference layer on an encoding layer referring to each layer of video frames in the second picture group;
a determining unit, configured to determine the target quantization parameter of each layer of video frame in the second picture group according to an influence factor of each layer of video frame in the second picture group.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiments of the application, distortion information of each layer of video frames in a plurality of layers of an encoded first picture group is obtained, where the first picture group is a picture group encoded before the second picture group currently to be encoded in the target video. A target quantization parameter for each layer of video frames in the second picture group, corresponding to each layer of video frames in the first picture group, is determined according to the distortion information of each layer of video frames in the first picture group and the distortion information of the reference-layer video frames of each layer, where the reference-layer video frames of a layer in the first picture group are the video frames referred to when that layer was encoded. Each layer of video frames in the second picture group is then encoded using the target quantization parameter of that layer. When the second picture group is encoded, the distortion information of the already-encoded first picture group is thus used to determine the target quantization parameters, achieving the goal of dynamically adjusting the quantization parameters of the picture group currently to be encoded according to the distortion information of an encoded picture group. The quantization parameter of each layer of the picture group to be encoded thereby fully reflects the reference relationships between pictures, and the adjusted quantization parameters are better suited to each layer of that picture group. This achieves the technical effect of improving the adaptability of the quantization parameters used during video coding and solves the technical problem that the quantization parameters used in the related art adapt poorly.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic diagram of a hardware environment of an encoding method of video according to an embodiment of the present application;
fig. 2 is a flow chart of an alternative video encoding method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a hierarchy of groups of pictures according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative video encoding apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present application, there is provided an embodiment of a method of encoding video.
Optionally, in this embodiment, the above video encoding method may be applied to a hardware environment formed by the terminal 101 and the server 103 as shown in fig. 1. As shown in fig. 1, the server 103 is connected to the terminal 101 through a network and may provide services (such as game services or application services) for the terminal or for a client installed on the terminal. A database may be provided on the server, or separately from the server, to provide data storage services for the server 103. The network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the terminal 101 is not limited to a PC, a mobile phone, a tablet computer, and the like. The video encoding method according to the embodiment of the present application may be executed by the server 103, by the terminal 101, or by both the server 103 and the terminal 101. The terminal 101 may perform the method through a client installed on it.
Fig. 2 is a flowchart of an alternative video encoding method according to an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:
step S202, obtaining distortion information of each layer video frame in a plurality of layers of a coded first picture group, wherein the first picture group is a picture group which is coded before a second picture group to be coded in a target video;
step S204, determining a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to the distortion information of each layer video frame in the first picture group and the distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is coded;
step S206, encode each layer of video frames in the second group of pictures using the target quantization parameter of each layer of video frames in the second group of pictures.
Through the above steps S202 to S206, when the second picture group currently to be encoded is encoded, the distortion information of each layer of video frames in the plurality of layers of the first picture group, which was encoded before the second picture group, is obtained, and the target quantization parameter of each layer of video frames in the second picture group is determined according to the distortion information of each layer of video frames in the first picture group and the distortion information of the reference-layer video frames of each layer. This achieves the purpose of dynamically adjusting the quantization parameters of the picture group currently to be encoded according to the distortion information of an already-encoded picture group, so that the quantization parameter of each layer fully reflects the reference relationships between pictures and the adjusted quantization parameters are better suited to each layer of the picture group to be encoded. The technical effect of improving the adaptability of the quantization parameters used during video coding is thereby achieved, and the technical problem that the quantization parameters used in the related art adapt poorly is solved.
Optionally, in this embodiment, the above video encoding method may be applied to various encoding modes that use quantization parameters to encode picture groups in video, such as: AV1 encoding.
In the technical solution provided in step S202, a group of pictures (GOP) refers to a set of consecutive pictures in a video. Typically, the first picture in a GOP is an I frame, which is intra-coded without reference to other frames. The other frames in the GOP may be B frames or P frames: P frames are coded with a forward reference, and B frames with bi-directional references.
Optionally, in this embodiment, the first picture group is a picture group that is coded before the second picture group to be currently coded in the target video, for example: the first picture group may be a picture group that is completely coded one before the second picture group to be currently coded in the target video, or the first picture group may be M picture groups that are completely coded before the second picture group to be currently coded in the target video.
Optionally, in this embodiment, the GOP may be layered: pictures in higher layers are usually coded with reference to pictures in lower layers, and pictures in the highest layer are not referenced by other pictures. For example, fig. 3 is a schematic diagram of a hierarchical structure of a group of pictures according to an embodiment of the present application. As shown in fig. 3, the video frames in the group of pictures are divided into 5 layers. The 0th frame is an I frame: it can be encoded without referring to other frames, can serve as a reference frame for other frames, and is located at layer 0; the 1st, 2nd, 4th, 8th and 16th frames all need the 0th frame as a reference frame for encoding. In addition, pictures in the highest layer (layer 4) are not referenced by other frames, while the lower layers (layers 0, 1, 2 and 3) may be referenced by other frames.
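For a dyadic 16-frame hierarchy of this kind, the layer of a frame can be derived from its position within the GOP. The assignment below is one common convention, shown for illustration only; the exact assignment in the patent's fig. 3 may differ.

```python
def gop_layer(frame_idx: int, gop_size: int = 16) -> int:
    # Frames at GOP boundaries (0, 16, ...) are key frames at layer 0.
    pos = frame_idx % gop_size
    if pos == 0:
        return 0
    # Otherwise the layer rises as the position's largest power-of-two
    # divisor shrinks: 8 -> layer 1, 4/12 -> layer 2, 2/6/10/14 -> layer 3,
    # odd positions -> layer 4 (the highest, never-referenced layer).
    trailing_zeros = (pos & -pos).bit_length() - 1
    return gop_size.bit_length() - 1 - trailing_zeros
```

Under this convention the 17 frames of one GOP span exactly 5 layers, and every odd-numbered frame sits in the top layer that no other frame references.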
Optionally, in this embodiment, a frame to be encoded may be referred to as an encoded frame, and a frame referred to when the encoded frame is encoded may be referred to as a reference frame. Similarly, a layer to be encoded in the picture group may be referred to as an encoding layer, and a layer referred to when the encoding layer is encoded may be referred to as a reference layer. For example, in the hierarchical structure shown in fig. 3, when the 2nd frame is to be encoded, the 2nd frame is the encoded frame and its reference frames are the 0th and 4th frames; when the 1st frame is to be encoded, the 1st frame is the encoded frame and its reference frames are the 0th and 2nd frames. When a video frame of layer 2 is to be encoded, layer 2 is the encoding layer and its reference layers are layer 0 and layer 1.
Optionally, in this embodiment, when an encoded frame refers to another frame (a reference frame), the reference frame has already been encoded. In actual encoding, the encoded frame refers to the reconstructed frame of the reference frame, not to the reference frame itself (the original frame). The quality of the reconstructed reference frame therefore directly affects the quality of the encoded frame, and thus the coding quality of the entire video. For example, as shown in fig. 3, the 7th frame references the 6th frame, the 6th frame references the 4th frame, and the 4th frame references the 0th frame. If the coding quality of the 0th frame is poor, it directly degrades the coding quality of the 4th frame, and the degradation propagates layer by layer until the coding quality of the 7th frame is also affected. Since these frames are in turn referenced by other frames, the quality of those other frames may be affected as well.
Optionally, in this embodiment, if the first picture group is the initial picture group in the target video, it may be encoded using the default method of the encoding standard; if the first picture group is not the initial picture group in the target video, it may be encoded using the encoding method provided in this embodiment.
Optionally, in this embodiment, the obtained distortion information of each layer of video frames in the plurality of layers of the encoded first picture group may include, but is not limited to, the average coding distortion of each layer (denoted Dpre_i, where i denotes the i-th layer) and the average reference distortion (denoted Dref_j, where j denotes the j-th layer). The average coding distortion of a layer equals the sum of the coding distortions of all frames in the layer divided by the number of frames in the layer. The average reference distortion equals the sum of the coding distortions of all reference frames of the layer divided by the number of those reference frames. If the first picture group is the initial picture group in the target video, then, since each reference frame is a reconstructed frame whose distortion is the coding distortion of another frame in the GOP, the average reference distortion is equal to the average coding distortion.
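The per-layer statistics described above can be sketched as follows. The input shapes — a dict of frames carrying layer and coding distortion, plus a reference map — are assumptions chosen for illustration, not the patent's data structures.

```python
from collections import defaultdict

def average_distortions(frames, refs):
    """Compute Dpre (average coding distortion per layer) and Dref
    (average distortion of the reference frames used by each layer).

    frames: {frame_idx: (layer, coding_distortion)}   # assumed shape
    refs:   {frame_idx: [reference frame indices]}    # assumed shape
    """
    enc_sum, enc_cnt = defaultdict(float), defaultdict(int)
    ref_sum, ref_cnt = defaultdict(float), defaultdict(int)
    for idx, (layer, dist) in frames.items():
        enc_sum[layer] += dist
        enc_cnt[layer] += 1
        for r in refs.get(idx, []):
            ref_sum[layer] += frames[r][1]  # distortion of the reference frame
            ref_cnt[layer] += 1
    dpre = {l: enc_sum[l] / enc_cnt[l] for l in enc_sum}
    dref = {l: ref_sum[l] / ref_cnt[l] for l in ref_sum}
    return dpre, dref
```

Layers whose frames reference nothing (e.g. the I frame's layer) simply get no Dref entry, mirroring the text's special case for the initial picture group.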
In the technical solution provided in step S204, the quantization parameter (QP), which typically takes values in the range [0, 51], controls the relationship between the distortion introduced by the actual coding process and the file size. In short, the larger the QP, the larger the quantization step used during quantization and the lower the bitrate needed for encoding (i.e., the smaller the encoded file), but the larger the distortion introduced by the coding process.
Optionally, in this embodiment, the influence factor of each layer is computed from the dependency relationships between encoded frames and their reference frames, and the Lambda value and QP value of each layer are adjusted dynamically according to the influence factor, so that the QP of each layer adapts to changes in the video content.
Optionally, in this embodiment, the dependency coefficients between the layers of the current GOP are computed from the actual coding distortion of each frame of the previous GOP, the coding distortion of the corresponding reference frames, and the distortion pre-analyzed by motion estimation for the current GOP; the influence factor of each layer is then derived from these coefficients, and the Lambda value and QP value of each lower layer are adjusted through the relationship between its influence factor and the top layer.
In the above step S204, the target quantization parameter of each layer of video frame in the second picture group can be determined by, but not limited to, the following manners:
s11, calculating an influence factor of each layer of video frame in the second picture group corresponding to each layer of video frame in the first picture group according to the distortion information of each layer of video frame in the first picture group and the distortion information of the reference layer of video frame of each layer of video frame in the first picture group, wherein the influence factor is used to indicate the degree of influence of each layer of video frame in the second picture group as a reference layer on a coding layer referring to each layer of video frame in the second picture group;
s12, determining the target quantization parameter of each layer of video frame in the second picture group according to the influence factor of each layer of video frame in the second picture group.
Optionally, in this embodiment, the influence factor is used to indicate the degree of influence of each layer of video frames in the second picture group, when acting as a reference layer, on the coding layers that refer to that layer; that is, the influence factor reflects the degree to which other layers in the picture group depend on each layer. In other words, the influence factor indicates the degree of influence of the current layer on the layers above it.
Optionally, in this embodiment, the target quantization parameter corresponding to each layer is determined according to the influence factor of the layer, that is, the determination of the target quantization parameter fully considers the reference dependency relationship between layers.
In the above step S11, the influence factor of each layer video frame in the second picture group can be calculated by, but not limited to, the following method:
s21, calculating an average dependency coefficient of each layer video frame in the second group relative to a reference layer video frame according to the distortion information of each layer video frame in the first group and the distortion information of the reference layer video frame of each layer video frame in the first group, where the average dependency coefficient is used to indicate the dependency of each layer video frame in the second group on the reference layer video frame, and the reference layer video frame is a video frame referred to by each layer video frame in the second group when encoding;
s22, determining an influence factor of each layer video frame in the second picture group according to a corresponding average dependency coefficient when each layer video frame in the second picture group is used as a reference layer video frame.
Optionally, in this embodiment, an average dependency coefficient of each layer in the first picture group relative to its reference layer is first calculated according to the distortion information of the layer and the distortion information of its reference layer, where the average dependency coefficient is used to indicate the dependency of each layer video frame in the second picture group on its reference layer video frame. And determining the influence factor according to the average dependence coefficient.
In the above step S21, the average dependency coefficient of each layer video frame in the second picture group may be calculated by, but not limited to:
s31, calculating the variance of each video frame in the second picture group in motion estimation;
s32, calculating a dependency coefficient corresponding to each reference relation in the second picture group according to the variance of each video frame, the average coding distortion of the layer where each video frame is located and the average reference distortion of the layer where the reference frame referred to by each video frame is located, wherein the dependency coefficient is used for indicating the dependency of the coding frame in each reference relation on the reference frame, and the distortion information includes the average coding distortion and the average reference distortion;
s33, calculating an average dependency coefficient of each layer video frame in the second group relative to the reference layer video frame according to the dependency coefficient of the reference relationship included between each layer video frame in the second group and the reference layer video frame and the number of reference relationships.
Optionally, in this embodiment, motion estimation is first performed on all frames of the current GOP (i.e., the second picture group), and the variance variance_ori of each frame in the current GOP during motion estimation is calculated.
Optionally, in this embodiment, according to the variance of each video frame (variance_ori), the average coding distortion of the layer containing each video frame (Dpre_i), and the average reference distortion of the layer containing the reference frame it refers to (Dref_j), the dependency coefficient dep[Li, Lj] corresponding to each reference relation in the second picture group may be calculated by, but is not limited to, the following formula: dep[Li, Lj] = Dpre_i / (Dref_j + variance_ori). That is, in the GOP hierarchy shown in fig. 3, each connecting line represents a reference relation, and a dependency coefficient can be calculated for each connecting line.
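The per-reference-relation formula above can be sketched as follows; Dpre_i, Dref_j, and variance_ori follow the notation in the text, while the helper name and the numeric values in the example are illustrative assumptions.

```python
def dependency_coefficient(Dpre_i, Dref_j, variance_ori):
    # dep[Li, Lj] = Dpre_i / (Dref_j + variance_ori)
    # Dpre_i: average coding distortion of the coding layer Li
    # Dref_j: average reference distortion of the referenced layer Lj
    # variance_ori: motion-estimation variance of the frame
    return Dpre_i / (Dref_j + variance_ori)

# one coefficient per connecting line (reference relation) in the GOP graph
dep = dependency_coefficient(120.0, 80.0, 40.0)  # illustrative values
```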
Optionally, in this embodiment, since the I frame is an intra-frame coding mode frame and does not need to depend on other frames, and the B frame and the P frame are inter-frame coding mode frames and need to be coded with reference to other frames, the dependency coefficients of the B frame and the P frame may be calculated here.
Alternatively, in this embodiment, the average dependency coefficient may be calculated by, but not limited to, dividing the sum of all dep[Li, Lj] values by the total number of <Li, Lj> reference relations. For example, in the GOP hierarchy shown in fig. 3, the average dependency coefficient between two layers can be calculated by dividing the sum of the dependency coefficients of the reference relations between those two layers by the total number of references between them.
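The averaging step described above (step S33) can be sketched as follows; the representation of each reference relation as a ((Li, Lj), dep) pair and the sample values are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

def average_dependencies(links):
    """links: iterable of ((Li, Lj), dep) pairs, one per reference relation.

    Returns, for each layer pair, the sum of its dependency coefficients
    divided by the number of reference relations between the two layers."""
    acc = defaultdict(lambda: [0.0, 0])
    for pair, dep in links:
        acc[pair][0] += dep
        acc[pair][1] += 1
    return {pair: total / count for pair, (total, count) in acc.items()}
```

For example, two links from layer 4 to layer 3 with coefficients 0.4 and 0.6 would average to 0.5 for the pair (L4, L3).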
In the above step S22, the influence factor of each layer video frame in the second picture group can be determined by, but not limited to, the following method:
s41, determining the influence factor corresponding to the target layer as a preset influence factor for the target layer which is not referred to in the second picture group;
s42, for the target layer referred to in the second picture group, determining an influence factor of the target layer according to the average dependency coefficient and the influence factor corresponding to the coding layer referring to the target layer.
Optionally, in this embodiment, the target layer that is not referred to may refer to, but is not limited to, the topmost layer, and an influence factor may be preset for the target layer, for example: the preset impact factor may be, but is not limited to, 1.
Optionally, in this embodiment, for the target layer referred to in the second picture group, the influence factor corresponding to the target layer may be determined, but not limited to, according to the average dependency coefficient and the influence factor corresponding to the coding layer of the reference target layer.
For example, taking the GOP hierarchy shown in fig. 3, the top layer (denoted as layer 4) is not referenced by any other layer, so its influence on other layers is 0 and its influence factor is: D4 = 1;
layer 3 is referenced by the top layer 4, and its influence factor is calculated as: D3 = 1 + ave_dep[L4, L3] × D4;
layer 2 is referenced by layers 4 and 3, and its influence factor is calculated as: D2 = 1 + ave_dep[L4, L2] × D4 + ave_dep[L3, L2] × D3;
layer 1 is referenced by layer 4, layer 3, and layer 2, and its influence factor is calculated as: D1 = 1 + ave_dep[L4, L1] × D4 + ave_dep[L3, L1] × D3 + ave_dep[L2, L1] × D2;
the bottom layer 0 is referenced by all layers above it, and its influence factor is calculated as: D0 = 1 + ave_dep[L4, L0] × D4 + ave_dep[L3, L0] × D3 + ave_dep[L2, L0] × D2 + ave_dep[L1, L0] × D1.
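The top-down recursion from D4 to D0 above can be sketched as follows; the function name and the representation of ave_dep as a dictionary keyed by (referencing layer, referenced layer) are illustrative assumptions.

```python
def impact_factors(num_layers, ave_dep):
    """ave_dep maps (coding_layer, reference_layer) -> average dependency
    coefficient; a missing pair means there is no reference relation."""
    top = num_layers - 1
    D = {top: 1.0}  # preset influence factor for the unreferenced top layer
    # walk downward: each layer accumulates 1 plus contributions from every
    # higher layer that references it, weighted by that layer's own factor
    for layer in range(top - 1, -1, -1):
        D[layer] = 1.0 + sum(
            ave_dep.get((upper, layer), 0.0) * D[upper]
            for upper in range(layer + 1, num_layers)
        )
    return D
```

With all coefficients zero, every layer keeps the preset factor 1; lower layers that are heavily referenced accumulate larger factors.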
In the above step S12, the target quantization parameter may be determined in, but not limited to, the following manner:
s51, searching preset Lambda parameters corresponding to preset quantization parameters from the Lambda parameters and the quantization parameters with corresponding relations;
s52, for an unreferenced target layer in the second picture group, determining a target quantization parameter corresponding to the target layer as the preset quantization parameter;
s53, for the target layer referred to in the second picture group, determining a ratio between the preset Lambda parameter and the impact factor of the target layer as a target Lambda parameter corresponding to the target layer; and searching the quantization parameter corresponding to the target Lambda parameter from the Lambda parameter and the quantization parameter with the corresponding relation to be used as the target quantization parameter of the target layer.
Optionally, in this embodiment, the Lambda parameter and the quantization parameter required in the encoding process have a corresponding relationship, and for a target layer in the second picture group that is not referenced (for example, the top layer), since it is not referred to by any other layer, its target quantization parameter may be set directly to an initial value, i.e., the preset quantization parameter. The target quantization parameters of the other layers are then derived from this preset quantization parameter.
For example, taking the above GOP hierarchy, because the top layer 4 is not referenced by any other layer, the Lambda of top-layer frames is the preset Lambda parameter and their QP is the initial QP value; the preset Lambda parameter can be looked up from the initial QP value (a hash table of the correspondence between Lambda and QP exists in AV1), and the Lambda of each lower layer is calculated as: Lambda_i = Lambda / Di. According to the updated Lambda_i, the updated QP value corresponding to each layer is obtained by table lookup. This QP value is the QP value used to encode all frames of the corresponding layer.
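The lookup procedure of steps S51-S53 can be sketched as follows; the Lambda table here is a made-up monotone stand-in (AV1's actual Lambda-QP correspondence is not reproduced), and only the lookup logic mirrors the text: Lambda_i = Lambda / Di, then the QP whose table Lambda is nearest is taken.

```python
# hypothetical monotone Lambda table over the QP range [0, 51] used in the text
QP_TO_LAMBDA = {q: 0.1 * (1.12 ** q) for q in range(52)}

def qp_for_layer(initial_qp, D_i):
    # Lambda_i = Lambda / D_i : a larger influence factor lowers Lambda,
    # which maps back to a smaller QP (finer quantization for heavily
    # referenced layers)
    target_lambda = QP_TO_LAMBDA[initial_qp] / D_i
    # inverse table lookup: pick the QP whose Lambda is closest to Lambda_i
    return min(QP_TO_LAMBDA, key=lambda q: abs(QP_TO_LAMBDA[q] - target_lambda))
```

An unreferenced top layer (D_i = 1) keeps the initial QP, while layers with larger influence factors receive progressively smaller QP values.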
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided a video encoding apparatus for implementing the above-described video encoding method. Fig. 4 is a schematic diagram of an alternative video encoding apparatus according to an embodiment of the present application, and as shown in fig. 4, the apparatus may include:
an obtaining module 42, configured to obtain distortion information of each layer video frame in multiple layers of a first picture group that has been encoded, where the first picture group is a picture group that is encoded in a target video before a second picture group to be currently encoded;
a determining module 44, configured to determine a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, where the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is encoded;
an encoding module 46, configured to encode each layer video frame in the second group of pictures using the target quantization parameter of each layer video frame in the second group of pictures.
It should be noted that the obtaining module 42 in this embodiment may be configured to execute step S202 in this embodiment, the determining module 44 in this embodiment may be configured to execute step S204 in this embodiment, and the encoding module 46 in this embodiment may be configured to execute step S206 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the above modules, when a second picture group currently to be encoded is encoded, distortion information of each layer of video frames in a plurality of layers of a first picture group that has been encoded before the second picture group is obtained, and a target quantization parameter of each layer of video frames in the second picture group is determined according to the distortion information of each layer of video frames in the first picture group and the distortion information of the reference layer video frames of each layer of video frames in the first picture group. This achieves the purpose of dynamically adjusting the quantization parameters of the picture group currently to be encoded according to the distortion information of the encoded picture group, so that the quantization parameter of each layer of the current picture group fully reflects the reference relationships between layers and the adjusted quantization parameters are better suited to each layer of the current picture group to be encoded, thereby realizing the technical effect of improving the adaptability of the quantization parameters used in the video encoding process, and further solving the technical problem in the related art that quantization parameters used in the video encoding process have low adaptability.
As an alternative embodiment, the determining module includes:
a calculating unit, configured to calculate an influence factor of each layer of video frames in the second picture group corresponding to each layer of video frames in the first picture group according to distortion information of each layer of video frames in the first picture group and distortion information of reference layer video frames of each layer of video frames in the first picture group, where the influence factor is used to indicate a degree of influence of each layer of video frames in the second picture group as a reference layer on an encoding layer referring to each layer of video frames in the second picture group;
a determining unit, configured to determine the target quantization parameter of each layer of video frame in the second picture group according to an influence factor of each layer of video frame in the second picture group.
As an alternative embodiment, the computing unit is configured to:
calculating an average dependency coefficient of each layer video frame in the second picture group relative to a reference layer video frame according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the average dependency coefficient is used for indicating the dependency of each layer video frame in the second picture group on the reference layer video frame, and the reference layer video frame is a video frame referred to by each layer video frame in the second picture group when encoding;
and determining the influence factor of each layer of video frame in the second picture group according to the corresponding average dependency coefficient when each layer of video frame in the second picture group is used as a reference layer video frame.
As an alternative embodiment, the computing unit is configured to:
calculating the variance of each video frame in the second picture group in motion estimation;
calculating a dependency coefficient corresponding to each reference relation in the second picture group according to the variance of each video frame, the average coding distortion of the layer where each video frame is located and the average reference distortion of the layer where the reference frame referred to by each video frame is located, wherein the dependency coefficient is used for indicating the dependency of the coding frame in each reference relation on the reference frame, and the distortion information comprises the average coding distortion and the average reference distortion;
and calculating the average dependency coefficient of each layer video frame in the second picture group relative to the reference layer video frame according to the dependency coefficient of the reference relationship and the number of the reference relationships included between each layer video frame in the second picture group and the reference layer video frame.
As an alternative embodiment, the computing unit is configured to:
determining an influence factor corresponding to a target layer which is not referred to in the second picture group as a preset influence factor;
and determining the influence factor of the target layer for the referred target layer in the second picture group according to the average dependency coefficient and the influence factor corresponding to the coding layer referring to the target layer.
As an alternative embodiment, the determining unit is configured to:
searching a preset Lambda parameter corresponding to a preset quantization parameter from the Lambda parameter and the quantization parameter with the corresponding relation;
for an unreferenced target layer in the second picture group, determining a target quantization parameter corresponding to the target layer as the preset quantization parameter;
for a target layer referred to in the second picture group, determining a ratio between the preset Lambda parameter and the influence factor of the target layer as a target Lambda parameter corresponding to the target layer; and searching the quantization parameter corresponding to the target Lambda parameter from the Lambda parameter and the quantization parameter with the corresponding relation to be used as the target quantization parameter of the target layer.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the video encoding method, as shown in fig. 5, the electronic device includes a memory 502 and a processor 504, the memory 502 stores a computer program therein, and the processor 504 is configured to execute the steps in any one of the method embodiments through the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
obtaining distortion information of each layer of video frames in a plurality of layers of a coded first picture group, wherein the first picture group is a picture group which is coded before a second picture group to be coded in a target video;
determining a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is coded;
encoding each layer video frame in the second group of pictures using the target quantization parameter for each layer video frame in the second group of pictures.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 5 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 5 does not limit the structure of the electronic device; for example, the electronic device may include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 5, or have a different configuration from that shown in fig. 5.
The memory 502 may be used to store software programs and modules, such as program instructions/modules corresponding to the video encoding method and apparatus in the embodiments of the present invention, and the processor 504 executes various functional applications and data processing by running the software programs and modules stored in the memory 502, that is, implements the above-described video encoding method. The memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 502 may further include memory located remotely from the processor 504, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 502 may be, but is not limited to, specifically used for storing information such as the distortion information and the target quantization parameters described above. As an example, as shown in fig. 5, the memory 502 may include, but is not limited to, the obtaining module 5022, the determining module 5024, and the encoding module 5026 of the above video encoding apparatus. In addition, the memory may further include, but is not limited to, other module units of the video encoding apparatus, which are not described again in this example.
Optionally, the transmission device 506 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 506 includes a network adapter (NIC) that can be connected to a router via a network cable and other network devices so as to communicate with the internet or a local area network. In one example, the transmission device 506 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 508; and a connection bus 510 for connecting the respective module components of the electronic device.
Embodiments of the present application also provide a storage medium. Alternatively, in this embodiment, the storage medium may store program code for executing the video encoding method described above.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
obtaining distortion information of each layer of video frames in a plurality of layers of a coded first picture group, wherein the first picture group is a picture group which is coded before a second picture group to be coded in a target video;
determining a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is coded;
encoding each layer video frame in the second group of pictures using the target quantization parameter for each layer video frame in the second group of pictures.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that contributes substantially to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method for encoding video, comprising:
obtaining distortion information of each layer of video frames in a plurality of layers of a coded first picture group, wherein the first picture group is a picture group which is coded before a second picture group to be coded in a target video;
determining a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is coded;
encoding each layer video frame in the second group of pictures using the target quantization parameter for each layer video frame in the second group of pictures.
2. The method of claim 1, wherein determining the target quantization parameter for each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to the distortion information for each layer video frame in the first picture group and the distortion information for the reference layer video frame for each layer video frame in the first picture group comprises:
calculating an influence factor of each layer of video frame in the second picture group corresponding to each layer of video frame in the first picture group according to the distortion information of each layer of video frame in the first picture group and the distortion information of a reference layer of video frame of each layer of video frame in the first picture group, wherein the influence factor is used for indicating the influence degree of each layer of video frame in the second picture group as a reference layer on a coding layer which refers to each layer of video frame in the second picture group;
and determining the target quantization parameter of each layer of video frame in the second picture group according to the influence factor of each layer of video frame in the second picture group.
3. The method of claim 2, wherein calculating the impact factor for each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to the distortion information for each layer video frame in the first picture group and the distortion information for the reference layer video frame of each layer video frame in the first picture group comprises:
calculating an average dependency coefficient of each layer video frame in the second picture group relative to a reference layer video frame according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, wherein the average dependency coefficient is used for indicating the dependency of each layer video frame in the second picture group on the reference layer video frame, and the reference layer video frame is a video frame referred to by each layer video frame in the second picture group when encoding;
and determining the influence factor of each layer of video frame in the second picture group according to the corresponding average dependency coefficient when each layer of video frame in the second picture group is used as a reference layer video frame.
4. The method of claim 3, wherein calculating the average dependency coefficient of each layer video frame in the second picture group relative to a reference layer video frame according to the distortion information of each layer video frame in the first picture group and the distortion information of the reference layer video frame of each layer video frame in the first picture group comprises:
calculating the variance of each video frame in the second picture group in motion estimation;
calculating a dependency coefficient corresponding to each reference relation in the second picture group according to the variance of each video frame, the average coding distortion of the layer where each video frame is located and the average reference distortion of the layer where the reference frame referred to by each video frame is located, wherein the dependency coefficient is used for indicating the dependency of the coding frame in each reference relation on the reference frame, and the distortion information comprises the average coding distortion and the average reference distortion;
and calculating the average dependency coefficient of each layer video frame in the second picture group relative to the reference layer video frame according to the dependency coefficient of the reference relationship and the number of the reference relationships included between each layer video frame in the second picture group and the reference layer video frame.
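The claims name the inputs to the dependency computation (the motion-estimation variance of each frame, the average coding distortion of the frame's layer, and the average reference distortion of the reference layer) but do not fix a formula. The sketch below is a hypothetical illustration of claim 4's last two steps — the function names and the specific combination of the inputs are assumptions, not taken from the patent:

```python
def dependency_coefficient(frame_var, avg_coding_distortion, avg_ref_distortion):
    """Dependency of one coding frame on its reference frame.

    Hypothetical formula: the frame's own activity (variance) relative to
    the accumulated distortion terms; a cleaner reference (low distortion)
    yields a coefficient closer to 1, i.e. stronger usable dependency.
    """
    return frame_var / (frame_var + avg_ref_distortion + avg_coding_distortion)

def average_dependency(coefficients):
    """Average the per-reference-relation coefficients between one layer in
    the second picture group and its reference layer (claim 4, final step:
    sum over the reference relations, divided by their number)."""
    return sum(coefficients) / len(coefficients)
```

With zero distortion on both sides the coefficient is exactly 1, matching the intuition that an undistorted reference is fully relied upon.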
5. The method of claim 3, wherein determining the impact factor of each layer video frame in the second group of pictures according to the corresponding average dependency coefficient when each layer video frame in the second group of pictures is used as a reference layer video frame comprises:
for a target layer that is not referred to in the second picture group, determining the influence factor corresponding to the target layer to be a preset influence factor;
and for a target layer that is referred to in the second picture group, determining the influence factor of the target layer according to the average dependency coefficient and the influence factor corresponding to each coding layer that refers to the target layer.
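Claim 5 describes a propagation: unreferenced layers start from a preset influence factor, and each referenced layer's factor is built from the factors of the layers that refer to it, weighted by the average dependency coefficients. The accumulation rule below (preset plus dependency-weighted sum) is an assumed concretization for illustration, as is the dictionary-based layer representation:

```python
def influence_factors(layers, refs, avg_dep, preset=1.0):
    """layers: ordered base layer first, top (unreferenced) layer last.
    refs[c]: the reference layer of coding layer c (None for the base).
    avg_dep[c]: average dependency coefficient of c on its reference layer.
    Returns one influence factor per layer."""
    factor = {layer: preset for layer in layers}  # unreferenced layers keep the preset
    # Walk from the top layer toward the base so that each coding layer's
    # factor is final before it contributes to its reference layer.
    for c in reversed(layers):
        r = refs.get(c)
        if r is not None:
            factor[r] += avg_dep[c] * factor[c]
    return factor
```

In a three-layer chain (layer 2 references layer 1, which references layer 0), the base layer accumulates the most influence, which is consistent with the claims' intent that heavily referenced layers matter more.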
6. The method of claim 2, wherein determining the target quantization parameter for each layer video frame in the second picture group according to the impact factor for each layer video frame in the second picture group comprises:
searching, among Lambda parameters and quantization parameters having a correspondence relationship, for a preset Lambda parameter corresponding to a preset quantization parameter;
for a target layer that is not referred to in the second picture group, determining the target quantization parameter corresponding to the target layer to be the preset quantization parameter;
and for a target layer that is referred to in the second picture group, determining the ratio of the preset Lambda parameter to the influence factor of the target layer as the target Lambda parameter corresponding to the target layer, and searching, among the Lambda parameters and quantization parameters having the correspondence relationship, for the quantization parameter corresponding to the target Lambda parameter as the target quantization parameter of the target layer.
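Claim 6 only specifies a lookup between corresponding Lambda and quantization parameters; the closed-form relation used below (lambda ≈ 0.85 · 2^((QP−12)/3), common in HEVC-style encoders) stands in for that table and is an assumption, not part of the patent text:

```python
import math

def qp_to_lambda(qp):
    # HEVC-style QP-to-Lambda relation, assumed here in place of the
    # patent's stored correspondence table.
    return 0.85 * 2 ** ((qp - 12) / 3.0)

def target_qp(preset_qp, influence_factor, referenced=True):
    """Claim 6: unreferenced layers keep the preset QP; referenced layers
    get the QP whose Lambda equals preset Lambda / influence factor."""
    if not referenced:
        return preset_qp
    target_lambda = qp_to_lambda(preset_qp) / influence_factor
    # Invert the relation analytically (standing in for the table lookup).
    return 12 + 3.0 * math.log2(target_lambda / 0.85)
```

Under this mapping, doubling a layer's influence factor lowers its QP by 3, i.e. layers that are referenced more heavily are encoded at higher quality, which matches the scheme's purpose.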
7. An apparatus for encoding video, comprising:
an obtaining module, configured to obtain distortion information of each layer of video frames in multiple layers of an encoded first picture group, where the first picture group is a picture group encoded in a target video before a second picture group currently to be encoded;
a determining module, configured to determine a target quantization parameter of each layer video frame in the second picture group corresponding to each layer video frame in the first picture group according to distortion information of each layer video frame in the first picture group and distortion information of a reference layer video frame of each layer video frame in the first picture group, where the reference layer video frame of each layer video frame in the first picture group is a video frame referred to when each layer video frame in the first picture group is encoded;
an encoding module for encoding each layer of video frames in the second group of pictures using the target quantization parameter for each layer of video frames in the second group of pictures.
8. The apparatus of claim 7, wherein the determining module comprises:
a calculating unit, configured to calculate an influence factor of each layer of video frames in the second picture group corresponding to each layer of video frames in the first picture group according to distortion information of each layer of video frames in the first picture group and distortion information of reference layer video frames of each layer of video frames in the first picture group, where the influence factor is used to indicate a degree of influence of each layer of video frames in the second picture group as a reference layer on an encoding layer referring to each layer of video frames in the second picture group;
a determining unit, configured to determine the target quantization parameter of each layer of video frame in the second picture group according to an influence factor of each layer of video frame in the second picture group.
9. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program, when executed, performs the method of any one of claims 1 to 6.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor performs the method of any one of claims 1 to 6 by means of the computer program.
CN202010900351.3A 2020-08-31 2020-08-31 Video coding method and device Pending CN114125459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010900351.3A CN114125459A (en) 2020-08-31 2020-08-31 Video coding method and device


Publications (1)

Publication Number Publication Date
CN114125459A true CN114125459A (en) 2022-03-01

Family

ID=80360295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010900351.3A Pending CN114125459A (en) 2020-08-31 2020-08-31 Video coding method and device

Country Status (1)

Country Link
CN (1) CN114125459A (en)

Similar Documents

Publication Publication Date Title
US9215466B2 (en) Joint frame rate and resolution adaptation
US6873654B1 (en) Method and system for predictive control for live streaming video/audio media
US20060017592A1 (en) Method of context adaptive binary arithmetic coding and apparatus using the same
CN103957341B (en) The method of picture transfer and relevant device thereof
JP2011512047A (en) Method and apparatus for performing lower complexity multi-bitrate video encoding using metadata
CN108063946B (en) Image encoding method and apparatus, storage medium, and electronic apparatus
JP2020518174A (en) Video frame coding method, terminal, and storage medium
CN107872669A (en) Video code rate treating method and apparatus
CN112672149B (en) Video processing method and device, storage medium and server
CN112165620A (en) Video encoding method and device, storage medium and electronic equipment
CN107018406A (en) Video information processing method and device
CN114245196B (en) Screen recording and stream pushing method and device, electronic equipment and storage medium
US8681860B2 (en) Moving picture compression apparatus and method of controlling operation of same
CN112351278A (en) Video encoding method and device and video decoding method and device
CN111464811A (en) Image processing method, device and system
CN114125459A (en) Video coding method and device
CN100581245C (en) Efficient rate control techniques for video encoding
CN102577412A (en) Image coding method and device
CN114125458A (en) Video coding method and device
CN115941972A (en) Image transmission method, device, equipment and storage medium
CN114374841A (en) Optimization method and device for video coding rate control and electronic equipment
CN110582022B (en) Video encoding and decoding method and device and storage medium
CN115022636A (en) Rate distortion optimization quantization method and device
CN110636293B (en) Video encoding and decoding methods and devices, storage medium and electronic device
CN114222127A (en) Video coding method, video decoding method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination