CN109951706B - Video coding method, device and coder - Google Patents

Video coding method, device and coder

Info

Publication number: CN109951706B
Application number: CN201910325030.2A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN109951706A
Prior art keywords: quantization parameter, current frame, frame, background, foreground
Inventor: 杨子盼
Assignee (current and original): Hunan Goke Microelectronics Co Ltd
Legal status: Active (application granted; the legal status is an assumption and is not a legal conclusion)

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the invention provides a video encoding method, relating to the field of video coding. The method includes: obtaining a background frame and a current frame; determining the motion level of the current frame from the background frame and the current frame; deriving a foreground-region quantization parameter and a background-region quantization parameter from the motion level of the current frame; obtaining the quantization parameter of each macroblock from a first quantization parameter table and a second quantization parameter table, where the second quantization parameter table reflects the complexity and bit rate of the frame to be encoded; and encoding the frame to be encoded according to the quantization parameter of each macroblock. The invention divides the image to be encoded into a background region and a foreground region, grades the image's motion level by the proportion of foreground, and selects a suitable quantization parameter for each macroblock according to that motion level, thereby compressing the bit rate without reducing image quality.

Description

Video coding method, device and coder
Technical Field
The present invention relates to the field of video coding, and in particular, to a video coding method, apparatus and encoder.
Background
Rate control in existing encoders is already extremely refined: from the birth of the first encoder to the present it has been improved step by step through iterative product upgrades, so achieving further optimization is very difficult. In judging the quality of an encoder, rate control is undoubtedly a crucial link: a good encoder achieves a better subjective effect at the same bit rate and, conversely, a lower bit rate at the same subjective effect.
Although rate control algorithms do not belong to the video coding standards themselves, the various standards provide recommended rate control algorithms because of the important role rate control plays in a practical video coding system; examples include TM5 for MPEG-2 and TMN8 for H.263.
However, as newer coding standards achieve ever higher compression rates, estimating image complexity becomes increasingly difficult and selecting an appropriate quantization parameter is very hard. Even when a suitable quantization parameter is chosen, the image details within a frame may be complex enough that the current quantization parameter fails to compress the image data effectively during encoding, and image detail is needlessly lost after decoding.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a video encoding method, so as to achieve the purpose of compressing a code rate without reducing image quality.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a video encoding method, which respectively obtains a background frame and a current frame; the background frame is a background model frame of a video to be coded; the current frame is a current frame to be coded;
acquiring the motion grade of the current frame according to the background frame and the current frame; the motion grade of the current frame is represented by the ratio of the foreground area in the current frame; the foreground region is a region with a brightness difference value larger than a preset brightness threshold value; the non-foreground region in the current frame is a background region;
adjusting quantization parameters according to the motion grade of the current frame; and coding the frame to be coded according to the quantization parameter.
In a second aspect, an embodiment of the present invention further provides a video encoding apparatus, including: the device comprises an acquisition module, a processing module and a judgment module. The acquisition module is used for acquiring a background frame and a current frame, acquiring the motion grade of the current frame according to the background frame and the current frame, and adjusting a quantization parameter according to the motion grade of the current frame; the processing module is used for coding the frame to be coded according to the quantization parameter of each macro block; the judging module is used for comparing the brightness difference value with a preset brightness threshold value.
In a third aspect, an embodiment of the present invention further provides an encoder, where the encoder includes a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor, and when the encoder runs, the processor and the memory communicate with each other through the bus, and the processor executes the machine-readable instructions to perform the steps of the video encoding method described above.
The video encoding method provided by the embodiment of the invention comprises the following steps: obtaining a background frame and a current frame, where the background frame is a background model frame of the video to be encoded, the current frame is the frame currently to be encoded, and the current frame comprises at least two macroblocks; obtaining the motion level of the current frame from the background frame and the current frame, where the motion level is the proportion of the foreground region in the current frame, the foreground region is the region whose luminance difference exceeds a preset luminance threshold, and the remaining region of the current frame is the background region; obtaining a foreground-region quantization parameter and a background-region quantization parameter from the motion level, which together form a first quantization parameter table; obtaining the quantization parameter of each macroblock from the first quantization parameter table and a second quantization parameter table, where the second table is generated by the encoder and reflects the complexity and bit rate of the frame to be encoded; and encoding the frame to be encoded according to the quantization parameter of each macroblock. By dividing the image to be encoded into background and foreground regions, grading its motion level by the foreground proportion, and selecting suitable quantization parameters for different macroblocks according to that level, the invention compresses the bit rate without reducing image quality.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 shows a flowchart of a video encoding method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a method for acquiring a background frame according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating a method for obtaining a motion level of a current frame according to an embodiment of the present invention.
Fig. 4 shows a flow chart of sub-steps of step 202 provided by an embodiment of the present invention.
Fig. 5 shows a flow chart of sub-steps of step 202-2 provided by an embodiment of the present invention.
Fig. 6 illustrates a MAP representation provided by an embodiment of the present invention.
Fig. 7 shows a MAP representation after morphological completion according to an embodiment of the present invention.
Fig. 8 illustrates another MAP representation provided by an embodiment of the present invention.
Fig. 9 is a diagram illustrating a first quantization parameter table according to an embodiment of the present invention.
Fig. 10 is a diagram illustrating a second quantization parameter table according to an embodiment of the present invention.
Fig. 11 shows a schematic diagram of a final quantization parameter table provided by an embodiment of the present invention.
Fig. 12 is a schematic functional block diagram of a video encoding apparatus according to an embodiment of the present invention.
Fig. 13 shows a schematic diagram of an encoder provided by an embodiment of the present invention.
Icon: 100-down-sampled image; 110-background frame; 130-current frame; 10-brightness difference table; 12-MAP table; 13-background region; 14-foreground region; 15-motion level; 151-an obtaining module; 153-a judgment module; 155-a processing module; 200-an encoder; 210-a processor; 220-a memory; 230-bus.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 is a flowchart of a video encoding method according to an embodiment of the present invention.
Step 201, respectively acquiring a background frame and a current frame.
Respectively obtaining a background frame and a current frame, wherein the background frame is a background model frame of a video to be coded, the current frame is a current frame to be coded, and the current frame comprises at least two macro blocks.
It should be noted that there are two ways to obtain the background frame, one is to obtain the background frame according to the video to be encoded, and the other is to use the previous frame of the current frame as the background frame.
Fig. 2 is a schematic diagram of a method for obtaining a background frame according to an embodiment of the present invention. As shown in fig. 2, the luma and chroma (YUV) data of each frame in the video to be encoded are down-sampled; in one possible embodiment the down-sampling ratio may be, but is not limited to, 1/256. This yields the down-sampled images 100, such as F1, F2, F3 … FN in fig. 2. A background modeling algorithm is applied to the down-sampled images 100 to generate the corresponding background frame 110. The algorithm may be, but is not limited to, average background modeling, single-Gaussian background modeling, and the like; the specific choice can be made according to the encoder's memory resources, the overhead of other related resources, and similar factors.
It should be noted that, if the memory resource and other related resources of the encoder are limited, the downsampled image of the previous frame of the current frame can be directly used as the background frame 110.
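The background-frame construction described above can be illustrated with a minimal Python sketch. The function names, the stride-based downsampling, and the per-pixel running average are assumptions for illustration (the patent names average background modeling as one option but does not fix its form):

```python
def downsample(frame, factor=16):
    """Downsample a 2D luma array by keeping every `factor`-th pixel in
    each dimension (factor=16 gives the 1/256 area ratio mentioned above)."""
    return [row[::factor] for row in frame[::factor]]

def average_background(frames):
    """Build a background frame 110 as the per-pixel mean of the
    down-sampled frames F1 ... FN (one simple background model)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0.0] * w for _ in range(h)]
    for f in frames:
        for y in range(h):
            for x in range(w):
                bg[y][x] += f[y][x] / n
    return bg
```

When resources are too limited for modeling, the down-sampled previous frame can simply be used as the background frame, as noted below.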
Step 202, obtaining the motion level of the current frame according to the background frame and the current frame.
Fig. 3 is a schematic diagram of a method for obtaining a motion level 15 of a current frame according to an embodiment of the present invention. Step 202 includes five sub-steps, and referring to fig. 4, a flow chart of the sub-steps of step 202 is provided according to an embodiment of the present invention, which will be described in detail below.
Sub-step 202-1, calculating the luminance difference between the background frame and the current frame.
Referring to fig. 3, the luminance difference between the background frame 110 and the current frame 130 is calculated: for each position, the difference d(x, y) between the current-frame pixel value I(x, y) and the background-frame pixel value u(x, y) at the same position is computed, yielding the luminance difference table 10.
Sub-step 202-2, thresholding the luminance difference value in the MAP table of the current frame.
Referring to fig. 3, threshold binarization is performed on the obtained luminance difference table 10 to obtain a current frame threshold binarized MAP table 12. The threshold binarization for the brightness difference table 10 includes three sub-steps, please refer to fig. 5, which is a flow chart of the sub-step of step 202-2 provided in the embodiment of the present invention, as will be described in detail below.
Sub-step 202-2-1, determining whether the brightness difference is greater than a predetermined brightness threshold.
Comparing the brightness difference value in the brightness difference value table 10 with a preset brightness threshold value, judging whether the brightness difference value is greater than the preset brightness threshold value, if so, executing the substep 202-2-2; if not, then sub-step 202-2-3 is performed.
It should be noted that the preset luminance threshold is artificially set, and is different according to the video to be encoded, and is not limited herein.
The sub-step 202-2-2 is to take the position in the MAP table corresponding to the luminance difference value as the foreground region.
When the luminance difference is greater than the preset luminance threshold, the position corresponding to the luminance difference in the MAP table is used as the foreground region, and in a possible embodiment, the luminance difference of the position is set to 255.
And a sub-step 202-2-3 of using the position in the MAP table corresponding to the luminance difference value as a background region.
When the luminance difference is less than or equal to the preset luminance threshold, the position corresponding to the luminance difference in the MAP table is used as a background region, and in a possible embodiment, the luminance difference at the position is set to 0.
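Sub-steps 202-1 through 202-2-3 can be sketched together as one pass over the frame. This is a hypothetical illustration: the function name is invented, and taking the absolute difference is an assumption (the text only says the two pixel values are subtracted):

```python
def binarize_map(current, background, threshold):
    """Compute the luminance difference d(x, y) between current-frame
    pixels I(x, y) and background-frame pixels u(x, y), then threshold:
    positions whose difference exceeds the preset luminance threshold
    become foreground (255), all others background (0)."""
    h, w = len(current), len(current[0])
    return [[255 if abs(current[y][x] - background[y][x]) > threshold else 0
             for x in range(w)] for y in range(h)]
```

The resulting 0/255 grid is the threshold-binarized MAP table 12 of fig. 3.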
In sub-step 202-3, the luminance parameters of the background region surrounded by the foreground region in the MAP table are complemented.
The luminance difference of the foreground region in the MAP table is 255 and the luminance difference of the background region is 0. Referring to fig. 6 and 7, fig. 6 is a MAP representation according to an embodiment of the present invention, and fig. 7 is a MAP representation after morphological completion according to an embodiment of the present invention. For simplicity, fig. 6 and 7 both show 3 × 3 macroblocks, and in the following examples, 3 × 3 macroblocks are used for illustration, and in practical applications, the number of macroblocks in a picture is far more than 3 × 3.
If the MAP table after threshold binarization is as shown in fig. 6, and a background region 13 is surrounded by a foreground region 14, the background region 13 is compensated morphologically, that is, the luminance difference of the background region 13 is 255, and the MAP table after compensation is as shown in fig. 7.
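One common way to realize the morphological completion of figs. 6 and 7 is a flood fill from the MAP table's border: any background cell not reachable from the border is enclosed by foreground and gets promoted to 255. The patent does not specify the operation's implementation, so this sketch (and its 4-connectivity) is an assumption:

```python
def fill_enclosed_background(map_table):
    """Promote background (0) cells fully enclosed by foreground (255)
    cells to foreground, by flood-filling background from the border
    and flipping every unreached 0-cell."""
    h, w = len(map_table), len(map_table[0])
    reach = [[False] * w for _ in range(h)]
    # Seed with every background cell that touches the border.
    stack = [(y, x) for y in range(h) for x in range(w)
             if map_table[y][x] == 0 and (y in (0, h - 1) or x in (0, w - 1))]
    while stack:
        y, x = stack.pop()
        if reach[y][x]:
            continue
        reach[y][x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and map_table[ny][nx] == 0:
                stack.append((ny, nx))
    return [[255 if map_table[y][x] == 255 or not reach[y][x] else 0
             for x in range(w)] for y in range(h)]
```

Applied to a 3 × 3 table with a single enclosed 0 in the center (fig. 6), this yields the all-foreground table of fig. 7.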
And a substep 202-4 of calculating a foreground region proportion value based on the MAP table.
Referring to fig. 8, another MAP representation scheme according to an embodiment of the present invention is shown. If the MAP table after threshold binarization and morphological completion is shown in fig. 8, where there are four foreground region macroblocks, the ratio of the foreground region 14 is 4/9, which means that about 44.4% of the region is in motion.
And a sub-step 202-5 of determining the motion level of the current frame according to the foreground region ratio.
With continued reference to fig. 8, about 44.4% of the MAP table is in motion, assuming that it is now defined to have a motion level of 40.
It should be noted that the motion level ranges from 0 to 100; the higher the proportion of the foreground region, the higher the motion level. There is no foreground region 14 when the motion level is 0 and no background region 13 when the motion level is 100.
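Sub-steps 202-4 and 202-5 reduce to counting foreground cells. Note the mapping from foreground ratio to motion level is implementation-defined (the patent's example treats a 44.4% ratio as level 40); this sketch simply rounds the percentage, which preserves the stated endpoints 0 and 100:

```python
def motion_level(map_table):
    """Foreground ratio of the MAP table expressed on the 0-100 motion
    level scale: 0 means no foreground, 100 means no background."""
    cells = [v for row in map_table for v in row]
    foreground = sum(1 for v in cells if v == 255)
    return round(100 * foreground / len(cells))
```

For the 3 × 3 table of fig. 8 with four foreground macroblocks, this gives round(100 × 4/9) = 44.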
Step 203, obtaining the foreground region quantization parameter and the background region quantization parameter according to the motion level of the current frame.
Referring to fig. 8, when the motion level of a certain frame of picture is 40, the quantization parameters of the foreground and the quantization parameters of the background are configured according to the motion level, and it is assumed that the quantization parameter of the foreground is defined as-2 and the quantization parameter of the background is defined as 3 at this time, so as to obtain the first quantization parameter table shown in fig. 9.
Fig. 9 is a schematic diagram of a first quantization parameter table according to an embodiment of the present invention.
It should be noted that the foreground quantization parameter decreases as the motion level increases, while the background quantization parameter increases with it: the higher the motion level, the larger the moving area and the higher the bit rate it requires, so the background quantization parameter must be raised to keep the overall bit rate under control.
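The monotonic relationship just described could be realized by any decreasing/increasing pair of functions of the motion level. The linear mapping below is purely illustrative (the patent only "defines" the pair (-2, +3) at level 40, which this sketch happens to reproduce):

```python
def qp_offsets(level):
    """Hypothetical mapping from motion level (0-100) to the
    (foreground, background) QP offsets forming the first
    quantization parameter table."""
    fg = -round(level / 20)       # foreground offset falls as motion rises
    bg = round(level * 3 / 40)    # background offset rises as motion rises
    return fg, bg
```

At level 40 this gives a foreground offset of -2 and a background offset of +3, matching the first quantization parameter table of fig. 9.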
Step 204, obtaining the quantization parameter of each macroblock according to the first quantization parameter table and the second quantization parameter table.
It should be noted that the second quantization parameter table is generated automatically by the encoder: the encoder produces the corresponding quantization parameters according to the image complexity and bit rate of the frame to be encoded, with quantization parameter values in the range 0 to 51, giving the second quantization parameter table shown in fig. 10.
Fig. 10 is a schematic diagram of a second quantization parameter table according to an embodiment of the present invention.
The quantization parameter values of corresponding macroblocks in the first and second quantization parameter tables are added together; adding the first quantization parameter table of fig. 9 to the second quantization parameter table of fig. 10 yields the final quantization parameter table for image coding shown in fig. 11.
Fig. 11 is a schematic diagram of a final quantization parameter table according to an embodiment of the present invention.
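The table addition of step 204 can be sketched per macroblock as below. The clamp to the 0-51 range is an assumption (the patent states that range only for the second table and describes a plain addition), added here so offsets cannot push the result outside a valid QP:

```python
def final_qp(first_table, second_table, qp_min=0, qp_max=51):
    """Per-macroblock final QP: the offset from the first quantization
    parameter table plus the encoder-generated QP from the second,
    clamped to the assumed valid range [qp_min, qp_max]."""
    return [[min(qp_max, max(qp_min, a + b)) for a, b in zip(ra, rb)]
            for ra, rb in zip(first_table, second_table)]
```

For example, a foreground macroblock with offset -2 and encoder QP 30 ends up with final QP 28, while a background macroblock with offset +3 ends up with 33.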
Step 205, encoding the frame to be encoded according to the quantization parameter of each macroblock.
Referring to fig. 11, the frame to be encoded is encoded according to the quantization parameter shown in fig. 11.
To sum up, the video encoding method provided by the embodiment of the invention comprises the following steps: obtaining a background frame and a current frame, where the background frame is a background model frame of the video to be encoded, the current frame is the frame currently to be encoded, and the current frame comprises at least two macroblocks; obtaining the motion level of the current frame from the background frame and the current frame, where the motion level is the proportion of the foreground region in the current frame, the foreground region is the region whose luminance difference exceeds a preset luminance threshold, and the remaining region of the current frame is the background region; obtaining a foreground-region quantization parameter and a background-region quantization parameter from the motion level, which together form a first quantization parameter table; obtaining the quantization parameter of each macroblock from the first quantization parameter table and a second quantization parameter table, where the second table is generated by the encoder and reflects the complexity and bit rate of the frame to be encoded; and encoding the frame to be encoded according to the quantization parameter of each macroblock. By dividing the image to be encoded into background and foreground regions, grading its motion level by the foreground proportion, and selecting suitable quantization parameters for different macroblocks according to that level, the invention compresses the bit rate without reducing image quality.
Fig. 12 is a functional module schematic diagram of a video encoding apparatus according to an embodiment of the present invention. It should be noted that the video encoding apparatus 150 according to the embodiment of the present invention is configured to execute the method flowcharts shown in fig. 1, fig. 4, and fig. 5. The basic principle and the technical effect are the same as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts that are not mentioned in this embodiment. The video encoding apparatus 150 includes an obtaining module 151, a determining module 153, and a processing module 155.
The obtaining module 151 is configured to obtain a background frame and a current frame, obtain a motion level of the current frame according to the background frame and the current frame, obtain a foreground region quantization parameter and a background region quantization parameter according to the motion level of the current frame, obtain a quantization parameter of each macroblock according to the first quantization parameter table and the second quantization parameter table, and obtain the background frame according to a code stream to be encoded or use a previous frame of the current frame as the background frame.
It is understood that in a preferred embodiment, the obtaining module 151 can be used for executing step 201, step 202, step 203, and step 204.
The determining module 153 is configured to compare the brightness difference with a preset brightness threshold.
It is to be appreciated that, in a preferred embodiment, the determination module 153 can be utilized to perform the step 202-2-1.
The processing module 155 is configured to encode a frame to be encoded according to the quantization parameter of each macroblock, calculate a luminance difference between a background frame and a current frame, binarize the luminance difference to obtain a MAP table of the current frame, calculate a foreground region ratio according to the MAP table, determine a motion level of the current frame according to the foreground region ratio, and use a position in the MAP table corresponding to the luminance difference as a background region.
It is to be understood that in a preferred embodiment, the processing module 155 is operable to perform step 202 as well as step 205.
The embodiment of the invention also provides an encoder 200. It should be noted that the encoder 200 according to the embodiment of the present invention is configured to execute the method flowcharts shown in fig. 1, fig. 4, and fig. 5. The basic principle and the technical effect are the same as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts that are not mentioned in this embodiment.
Referring to fig. 13, fig. 13 is a schematic diagram of an encoder 200 according to an embodiment of the present invention, where the encoder 200 includes a processor 210, a memory 220, and a bus 230.
The processor 210 and memory 220 may be connected by one or more buses 230;
the processor 210, for reading/writing data or programs stored in the memory 220, performs corresponding functions.
The memory 220 is used for storing data or programs.
It should be noted that the encoder 200 may further include a device for implementing other functions, for example, a radio frequency circuit, a power circuit, and the like, and the encoder 200 may be: desktop computers, tablet computers, notebooks, servers, etc., without limitation.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, device or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The above description is only an alternative embodiment of the present invention and is not intended to limit the present invention, and various modifications and variations of the present invention may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (10)

1. A video encoding method, comprising:
acquiring a background frame and a current frame, respectively; the background frame being a background model frame of a video to be encoded, and the current frame being the frame currently to be encoded;
obtaining a motion level of the current frame according to the background frame and the current frame; the motion level of the current frame being represented by the proportion of a foreground region in the current frame; adjusting quantization parameters according to the motion level of the current frame; the foreground quantization parameter decreasing as the motion level increases, and the background quantization parameter increasing as the motion level increases;
and encoding the frame to be encoded according to the quantization parameters.
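The adjustment in claim 1 can be sketched as follows. The level boundaries and QP values below are illustrative assumptions for the sake of the example — the claims specify only the monotonic relationship (foreground QP falls, background QP rises, as the motion level increases), not concrete numbers.

```python
# Sketch of motion-level-driven QP selection. All thresholds and QP
# values are assumed for illustration; a lower QP means finer
# quantization and thus higher quality for that region.

def motion_level(foreground_ratio: float) -> int:
    """Map the foreground-area proportion (0.0-1.0) to a discrete motion level."""
    if foreground_ratio < 0.05:
        return 0  # nearly static scene
    if foreground_ratio < 0.20:
        return 1
    if foreground_ratio < 0.50:
        return 2
    return 3      # heavy motion

# Per-level QPs: foreground QP decreases with level, background QP
# increases with level, as claim 1 requires.
FOREGROUND_QP = [30, 28, 26, 24]
BACKGROUND_QP = [32, 34, 36, 38]

def region_qps(foreground_ratio: float) -> tuple:
    """Return (foreground QP, background QP) for the current frame."""
    level = motion_level(foreground_ratio)
    return FOREGROUND_QP[level], BACKGROUND_QP[level]
```

A static surveillance scene (ratio near 0) thus spends fewer bits overall, while a busy scene shifts bits from the background to the moving foreground.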
2. The video encoding method of claim 1, wherein the step of adjusting the quantization parameters according to the motion level of the current frame comprises:
obtaining a foreground region quantization parameter and a background region quantization parameter according to the motion level of the current frame; the foreground region quantization parameter and the background region quantization parameter forming a first quantization parameter table;
obtaining the quantization parameters according to the first quantization parameter table and a second quantization parameter table; the second quantization parameter table being generated by an encoder and characterizing the complexity and bit rate of the frame to be encoded.
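Claim 2 combines two per-macroblock tables — the region table from the motion level and the encoder's rate-control table — into one final QP per macroblock. The claim does not say how the tables are merged; the rounded average below is purely an assumed combination rule, clipped to the 0–51 range used by H.264/HEVC.

```python
# Assumed combination of the first (region) and second (rate-control)
# quantization parameter tables, one entry per macroblock.

def per_macroblock_qp(first_table, second_table, lo=0, hi=51):
    """Merge two QP tables of equal shape into the final per-macroblock QPs.

    The rounded average is an illustrative choice; the patent only states
    that both tables contribute to the final QP.
    """
    return [[max(lo, min(hi, (a + b + 1) // 2))
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(first_table, second_table)]
```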
3. The video encoding method of claim 1, wherein the step of acquiring the background frame comprises:
obtaining the background frame from the video to be encoded;
or taking the frame preceding the current frame as the background frame.
4. The video encoding method of claim 1, wherein the step of obtaining the motion level of the current frame according to the background frame and the current frame comprises:
calculating the luminance difference between the background frame and the current frame;
performing threshold binarization on the luminance difference to obtain a MAP table of the current frame, the MAP table comprising the background region and the foreground region;
calculating the foreground region proportion according to the MAP table; the foreground region proportion being the proportion of the foreground region in the current frame;
and determining the motion level of the current frame according to the foreground region proportion.
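The steps of claims 4 and 5 — luminance difference, threshold binarization into a MAP table, and the foreground proportion — can be sketched directly. The threshold value of 20 is an assumption; the claims leave the preset luminance threshold unspecified, and the MAP granularity (pixel or macroblock) is likewise open.

```python
# Sketch of claims 4-5: build a binary MAP table from the absolute
# luminance difference, then measure the foreground proportion.
# 1 marks foreground, 0 marks background; threshold=20 is assumed.

def foreground_map(background_luma, current_luma, threshold=20):
    """Binarize |current - background| per entry into a 0/1 MAP table."""
    return [[1 if abs(c - b) > threshold else 0
             for b, c in zip(brow, crow)]
            for brow, crow in zip(background_luma, current_luma)]

def foreground_ratio(map_table):
    """Fraction of MAP entries marked as foreground."""
    total = sum(len(row) for row in map_table)
    return sum(map(sum, map_table)) / total
```

The resulting ratio feeds the motion-level decision; an entry exactly equal to the threshold is background, matching the "smaller than or equal to" condition of claim 5.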
5. The video encoding method of claim 4, wherein the step of performing threshold binarization on the luminance difference comprises:
comparing the luminance difference with a preset luminance threshold;
when the luminance difference is smaller than or equal to the preset luminance threshold, marking the corresponding position in the MAP table as the background region;
and when the luminance difference is larger than the preset luminance threshold, marking the corresponding position in the MAP table as the foreground region.
6. The video encoding method of claim 4, wherein, after the step of performing threshold binarization on the luminance difference to obtain the MAP table of the current frame, the method further comprises:
filling in the luminance parameters of any background region enclosed by the foreground region in the MAP table.
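One way to realize claim 6 (reclassifying background holes fully enclosed by foreground, as claim 9 makes explicit) is a border flood fill: any background cell unreachable from the frame border must be enclosed. The flood-fill approach is an assumed implementation; the claim only states the result.

```python
# Sketch of claim 6: background cells (0) in the MAP table that are not
# connected to the border become foreground (1), since they are fully
# enclosed by the foreground region.
from collections import deque

def fill_enclosed_background(map_table):
    """Return a MAP table with enclosed background holes marked as foreground."""
    h, w = len(map_table), len(map_table[0])
    reachable = [[False] * w for _ in range(h)]
    dq = deque()
    # Seed the flood fill with every background cell on the border.
    for y in range(h):
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) and map_table[y][x] == 0:
                reachable[y][x] = True
                dq.append((y, x))
    # 4-connected flood fill over background cells.
    while dq:
        y, x = dq.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and map_table[ny][nx] == 0 and not reachable[ny][nx]):
                reachable[ny][nx] = True
                dq.append((ny, nx))
    # Unreachable background is enclosed: promote it to foreground.
    return [[1 if map_table[y][x] == 1 or not reachable[y][x] else 0
             for x in range(w)] for y in range(h)]
```

This keeps a moving object's interior (e.g. a uniformly lit torso that matches the background luminance) inside the low-QP foreground region instead of leaving a coarsely quantized hole.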
7. A video encoding apparatus, comprising:
an obtaining module, configured to obtain a background frame and a current frame,
to obtain a motion level of the current frame according to the background frame and the current frame, the motion level of the current frame being represented by the proportion of a foreground region in the current frame,
and to adjust quantization parameters according to the motion level of the current frame, the foreground quantization parameter decreasing as the motion level increases and the background quantization parameter increasing as the motion level increases;
and a processing module, configured to encode the frame to be encoded according to the quantization parameters.
8. The video encoding apparatus of claim 7, wherein:
the obtaining module is further configured to obtain the background frame from the video to be encoded, or to take the frame preceding the current frame as the background frame,
to obtain a foreground region quantization parameter and a background region quantization parameter according to the motion level of the current frame,
and to obtain the quantization parameter of each macroblock according to the first quantization parameter table and the second quantization parameter table;
and the processing module is further configured to calculate the luminance difference between the background frame and the current frame,
to perform threshold binarization on the luminance difference to obtain a MAP table of the current frame,
to calculate the foreground region proportion according to the MAP table,
and to determine the motion level of the current frame according to the foreground region proportion.
9. The video encoding apparatus of claim 8, further comprising:
a judging module, configured to compare the luminance difference with a preset luminance threshold;
wherein the processing module is further configured to make the luminance parameter of a background region enclosed by the foreground region in the MAP table consistent with that of the foreground region.
10. An encoder, comprising a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the encoder is operating, the processor and the memory communicate over the bus, and the processor executes the machine-readable instructions to perform the steps of the video encoding method of any one of claims 1 to 6.
CN201910325030.2A 2019-04-22 2019-04-22 Video coding method, device and coder Active CN109951706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910325030.2A CN109951706B (en) 2019-04-22 2019-04-22 Video coding method, device and coder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910325030.2A CN109951706B (en) 2019-04-22 2019-04-22 Video coding method, device and coder

Publications (2)

Publication Number Publication Date
CN109951706A CN109951706A (en) 2019-06-28
CN109951706B true CN109951706B (en) 2021-01-01

Family

ID=67015909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910325030.2A Active CN109951706B (en) 2019-04-22 2019-04-22 Video coding method, device and coder

Country Status (1)

Country Link
CN (1) CN109951706B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110769254B (en) * 2019-10-10 2022-04-22 网宿科技股份有限公司 Code rate configuration method, system and equipment for video frame
CN113438468B (en) * 2020-03-23 2023-02-28 浙江宇视科技有限公司 Dynamic control method and device for video quality, storage medium and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1713729A (en) * 2004-06-24 2005-12-28 华为技术有限公司 Video frequency compression
CN102685491A (en) * 2012-03-02 2012-09-19 中兴通讯股份有限公司 Method and system for realizing video coding
CN104243994A (en) * 2014-09-26 2014-12-24 厦门亿联网络技术股份有限公司 Method for real-time motion sensing of image enhancement
WO2015006176A1 (en) * 2013-07-10 2015-01-15 Microsoft Corporation Region-of-interest aware video coding
CN109587495A (en) * 2018-11-05 2019-04-05 深圳威尔视觉传媒有限公司 Method for video coding, device, equipment and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
GB2382940A (en) * 2001-11-27 2003-06-11 Nokia Corp Encoding objects and background blocks
US6763068B2 (en) * 2001-12-28 2004-07-13 Nokia Corporation Method and apparatus for selecting macroblock quantization parameters in a video encoder
DE10300048B4 (en) * 2002-01-05 2005-05-12 Samsung Electronics Co., Ltd., Suwon Image coding method for motion picture expert groups, involves image quantizing data in accordance with quantization parameter, and coding entropy of quantized image data using entropy coding unit

Non-Patent Citations (1)

Title
Image Compression Algorithm Based on Region-of-Interest Coding; Zeng Guoqing; Space Electronic Technology (《空间电子技术》); 2007-02-28; full text *

Also Published As

Publication number Publication date
CN109951706A (en) 2019-06-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant