CN110300302B - Video coding method, device and storage medium - Google Patents

Video coding method, device and storage medium

Info

Publication number
CN110300302B
Authority
CN
China
Prior art keywords
coded
block
coding
mode
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910485750.5A
Other languages
Chinese (zh)
Other versions
CN110300302A
Inventor
许东旭
陈小芬
游乔贝
何耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruijie Networks Co Ltd
Original Assignee
Ruijie Networks Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruijie Networks Co Ltd filed Critical Ruijie Networks Co Ltd
Priority to CN201910485750.5A priority Critical patent/CN110300302B/en
Publication of CN110300302A publication Critical patent/CN110300302A/en
Application granted granted Critical
Publication of CN110300302B publication Critical patent/CN110300302B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: ... using adaptive coding
    • H04N19/102: ... using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/169: ... using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: ... the unit being an image region, e.g. an object
    • H04N19/176: ... the unit being an image region, the region being a block, e.g. a macroblock

Abstract

The application provides a video coding method, a video coding device and a storage medium, which are used to solve the problems of high computational complexity and low speed in video coding, and relates to the technical field of video coding. After a first frame image is coded, each frame of image to be coded other than the first frame image is divided into a plurality of blocks to be coded. For each block to be coded, the coding mode of the block to be coded is determined according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded, and the block to be coded is then coded according to the determined coding mode. Because the coding mode of the block to be coded can be determined from the coding modes of at least two adjacent coded blocks and the coding mode of the co-located coded block, complex intra-frame and inter-frame mode prediction of the block to be coded is not needed, which reduces the computational complexity of the coding technology and improves coding efficiency.

Description

Video coding method, device and storage medium
Technical Field
The present application relates to the field of video coding technologies, and in particular, to a video coding method, apparatus, and storage medium.
Background
Under limited network bandwidth, high-resolution video can only be transmitted over a limited channel after being compressed by a video coding technology. Currently, the mainstream video coding and decoding standards include H.261 (a video coding standard established in 1990), H.263 (a video coding standard oriented to low-bit-rate communication), H.264 (a video codec standard developed by the Joint Video Team), H.265 (the video coding standard established after H.264), and M-JPEG (Motion JPEG, a motion still-image compression technology of the Joint Photographic Experts Group). Among these mainstream coding standards, H.264 is the most prominent: it retains the advantages of the earlier standards, absorbs the experience accumulated in establishing them, and is the video coding standard mainly applied in the market at present.
However, H.264 achieves its higher compression ratio at the cost of relatively large computational complexity. The computational complexity of H.264 coding is approximately 3 times that of H.263, and the decoding complexity is 2 times that of H.263. Therefore, adopting an effective method to reduce the computational complexity of the coding technology without losing video quality or code rate has important practical significance.
Disclosure of Invention
In order to improve video encoding speed, embodiments of the present application provide a video encoding method, a video encoding device, and a storage medium, which solve the problems of high computational complexity and low encoding speed in video encoding in the related art.
In a first aspect, an embodiment of the present application provides a video encoding method. The method comprises the following steps:
after a first frame image is coded, for each frame of image to be coded other than the first frame image, dividing the image to be coded into a plurality of blocks to be coded;
for each block to be coded, determining the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded, the co-located coded block being the coded block at the same position in a reference frame of the current frame image to be coded;
and coding the block to be coded according to the determined coding mode.
Optionally, determining, for each block to be coded, the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded includes:
determining whether the coding mode of the block to be coded is a skip mode according to the coding modes of at least two adjacent coded blocks of the block to be coded or the coding mode of the co-located coded block of the block to be coded;
if the coding mode of the block to be coded is determined not to be the skip mode, determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded;
if the coding mode of the block to be coded is determined not to be the intra-frame mode, respectively determining the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded, and determining that the coding mode of the block to be coded is the coding mode corresponding to the minimum cost value.
Optionally, the at least two adjacent coded blocks are at least two of a left coded block, an upper coded block, an upper-left coded block, and an upper-right coded block of the block to be coded.
Optionally, determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of at least two adjacent coded blocks of the block to be coded includes:
and if the optimal coding modes of the at least two adjacent coded blocks are intra-frame modes, determining that the coding mode of the block to be coded is the intra-frame mode.
Optionally, if the at least two adjacent coded blocks are an upper coded block and an upper-left coded block of the block to be coded, determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of the at least two adjacent coded blocks of the block to be coded and the coding mode of the co-located coded block of the block to be coded includes:
and if the optimal coding modes of the upper coding block and the upper left coding block are both intra-frame modes and the coding mode of the co-located coding block is the intra-frame mode, determining that the coding mode of the block to be coded is the intra-frame mode.
Optionally, the encoding the block to be encoded according to the determined encoding mode includes:
if the image to be coded is divided into 16 × 16 blocks to be coded, the blocks to be coded are coded by adopting one of the following modes:
DC mode, horizontal mode, or vertical mode.
Optionally, the determining, according to the coding mode of the at least two adjacent coded blocks of the to-be-coded block or the coding mode of the collocated coded block of the to-be-coded block, whether the coding mode of the to-be-coded block is a skip mode includes:
if the brightness and the color difference transformation coefficient cbp of the left coding block and the upper coding block are both 0 or the co-located coding block is in a skip mode, determining a predicted motion vector of the block to be coded;
according to the predicted motion vector, performing motion compensation on the block to be coded;
calculating the cbp of the block to be coded after motion compensation;
and if the cbp is 0, determining that the to-be-coded block is in a skip mode.
Optionally, if it is determined that the coding mode of the block to be coded is an inter-frame mode, coding the block to be coded according to the determined coding mode, including:
determining a predicted motion vector and a corresponding cost value of the block to be coded;
determining an average cost value of the prediction motion vector, wherein the average cost value is obtained by dividing a cost value corresponding to the prediction motion vector by the product of the width and the height of the block to be coded;
if the average cost value is less than or equal to a specified threshold value, performing motion compensation on the block to be coded according to the predicted motion vector;
and if the average cost value is larger than a specified threshold value, performing motion search according to the predicted motion vector to obtain a motion vector, taking the motion vector as a new predicted motion vector, and returning to the step of determining the average cost value of the predicted motion vector.
Optionally, each frame of image to be encoded is received in real time from a server, which is configured to perform the conversion of each frame of image to the specified color space.
In a second aspect, an embodiment of the present application further provides a video encoding apparatus. The device includes:
the device comprises a splitting module, a determining module and a coding module, wherein the splitting module is configured to divide, after a first frame image is coded, each frame of image to be coded other than the first frame image into a plurality of blocks to be coded;
the determining module is configured to determine, for each block to be coded, the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded, the co-located coded block being the coded block at the same position in a reference frame of the current frame image to be coded;
and the coding module is configured to code the block to be coded according to the determined coding mode.
In a third aspect, another embodiment of the present application further provides a computing device comprising at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any video coding method provided by the embodiment of the application.
In a fourth aspect, another embodiment of the present application further provides a computer storage medium, where the computer storage medium stores computer-executable instructions for causing a computer to execute any one of the video encoding methods in the embodiments of the present application.
According to the video coding method, device and storage medium provided by the embodiments of the application, the coding mode of the block to be coded can be determined according to the coding modes of at least two adjacent coded blocks and the coding mode of the co-located coded block, so the block to be coded does not need to undergo complex intra-frame and inter-frame mode prediction, which reduces the computational complexity of the coding technology and improves coding efficiency.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a video encoding method in an embodiment of the present application;
FIG. 2 is a flowchart of a video encoding method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another video encoding method according to an embodiment of the present application;
fig. 4 is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application;
fig. 5 is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application;
fig. 6a is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application;
fig. 6b is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a video encoding apparatus according to an embodiment of the present application;
FIG. 8 is a computing device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In order to clearly understand the technical solutions provided by the embodiments of the present application, the following terms appearing in the embodiments of the present application are explained, it should be noted that the terms in the embodiments of the present application are only explained to facilitate understanding of the present application, and are not used to limit the present application, and the terms include:
1. Co-located coded blocks are the coded blocks in the reference frame image that have the same coding block identifiers as the blocks in the image to be coded. For example, if the reference frame is a P frame containing four coding blocks with identifiers No. 1, No. 2, No. 3 and No. 4, then coding block No. 4 in the P frame is the co-located coded block of coding block No. 4 in the image to be coded.
Currently, the video coding standard mainly applied in the market is H.264. H.264 is a digital video coding standard of milestone significance jointly developed by the Joint Video Team (JVT), which is composed of the Video Coding Experts Group (VCEG) of the ITU-T (ITU Telecommunication Standardization Sector) and the Moving Picture Experts Group (MPEG) of ISO (International Organization for Standardization)/IEC (International Electrotechnical Commission).
The standard still follows the earlier block-based hybrid coding framework, which mainly comprises the following parts: prediction (Estimation) in inter-frame and intra-frame modes, transformation (Transform) and inverse transformation, quantization (Quantization) and inverse quantization, loop filtering (Loop Filter), and entropy coding (Entropy Coding). The intra-frame mode is used to eliminate spatial redundancy, the inter-frame mode is used to eliminate temporal redundancy, and the transform, quantization and entropy coding techniques are combined to further eliminate statistical and perceptual redundancy. Compared with the earlier standard H.263, H.264 can save on average 50% of the code rate at the same quality.
However, H.264 achieves this higher compression ratio at the cost of relatively large computational complexity. The computational complexity of H.264 coding is about 3 times that of H.263 and the decoding complexity is about 2 times that of H.263, and among all the coding modules, the prediction of the intra-frame and inter-frame modes is the main computational component of the coding process. Therefore, optimizing the prediction process of the intra-frame and inter-frame modes with an effective method, without losing video quality or code rate, allows H.264 to be applied in scenarios requiring high real-time performance, and has important practical significance.
In order to reduce the computational complexity of the encoding technique, embodiments of the present application provide a video encoding method. In the method, the first frame image of a video is encoded first, mainly in intra-frame mode. Then, for each frame of image to be coded other than the first frame image, the reference frame image of that frame is obtained, the image to be coded is divided into a plurality of blocks to be coded, and an identifier is generated for each block to be coded. For each block to be coded, the co-located coded block of the block in the reference frame image is obtained, the coding mode of the block to be coded is determined according to the coding modes of at least two adjacent coded blocks of the block and/or the coding mode of the co-located coded block, and the block to be coded is coded according to the determined coding mode.
In this way, the coding mode of a block to be coded can be determined from the coding modes of at least two adjacent coded blocks of the block and the coding mode of the co-located coded block, so complex intra-frame and inter-frame mode prediction of the block is not needed, which reduces the computational complexity of the coding technology and improves coding efficiency.
The technical solutions provided by the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of a video encoding method according to an embodiment of the present application. The scene comprises the following steps: a color space conversion device 100, a memory 110, an encoder 120.
The color space conversion device 100 converts the color space of the video to be processed into a specified color space (typically the YUV (luminance, chrominance, saturation) color space) and stores the converted video in the memory 110; the converted video may of course also not be stored in the memory, and a scheme without memory storage is described later. The encoder 120 retrieves each frame image of the video to be processed from the memory 110. After the encoder 120 encodes the first frame image, each frame of image to be encoded other than the first frame image is divided into a plurality of blocks to be encoded. For each block to be encoded, the encoder 120 determines the encoding mode of the block according to the encoding modes of at least two adjacent encoded blocks of the block and/or the encoding mode of the co-located encoded block of the block, and then encodes the block according to the determined encoding mode.
In a specific implementation, the color space converting apparatus may be a physical apparatus, or may be a virtual module located in a server.
Fig. 2 is a flowchart of a video encoding method according to an embodiment of the present application. May include the steps of:
step 201: after a first frame image is coded, dividing the image to be coded into a plurality of blocks to be coded aiming at each frame of image to be coded except the first frame image.
In particular, each frame of image that has been converted to video in the specified color space may be retrieved from memory. In order to further improve the coding speed, each frame of coded image can also be directly acquired from the color space conversion device, and the video after the color space conversion is not required to be stored in a memory, so that the coding speed is further improved by reducing the processes of storage and memory reading.
Specifically, the coding blocks may be divided into 4 × 4 to-be-coded blocks, or may be divided into 4 × 8 to-be-coded blocks, or may be divided into 8 × 8 or 16 × 16 to-be-coded blocks, which is not limited in this application. After dividing an image to be coded into a plurality of blocks to be coded, storing the position information of each coding block according to rows and columns. For example, the image to be encoded is divided into 4 × 4 encoded blocks, a first row of the encoded blocks to be encoded is stored in the first row, a second row of the encoded blocks is stored in the second row, and so on. And the index number of the coding block can be established according to the position information, for example, the index of the first coding block in the first row is 1, the index of the second coding block in the first row is 2, the index of the first coding block in the second row is (1+ image height), and the like. In specific implementation, the height of the image can be represented by pixel points, and can also be identified by the number of the coding blocks.
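As an illustration of this partitioning and numbering step, the following sketch divides a frame into fixed-size blocks and assigns each block an index number derived from its row and column position. The block size, the row-major index formula and all names used here are assumptions made for illustration only, not the exact scheme of the encoder described in this application.

    # Minimal sketch (Python): split a frame into blocks to be coded and
    # number them by position (row-major numbering starting at 1 is an
    # assumption for illustration).
    from dataclasses import dataclass

    @dataclass
    class BlockToCode:
        index: int  # index number of the block to be coded
        row: int    # block row within the frame
        col: int    # block column within the frame
        x: int      # top-left pixel x coordinate
        y: int      # top-left pixel y coordinate

    def split_into_blocks(frame_width, frame_height, block_size=16):
        blocks_per_row = frame_width // block_size
        blocks = []
        for row in range(frame_height // block_size):
            for col in range(blocks_per_row):
                index = row * blocks_per_row + col + 1
                blocks.append(BlockToCode(index, row, col,
                                          col * block_size, row * block_size))
        return blocks

    if __name__ == "__main__":
        for b in split_into_blocks(64, 32, block_size=16):
            print(b.index, (b.row, b.col), (b.x, b.y))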
Step 202: for each block to be coded, determining the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded; the co-located coded block is the coded block at the same position in a reference frame of the current frame to be coded.
Step 203: and coding the block to be coded according to the determined coding mode.
In this way, the coding mode of the block to be coded can be determined according to the coding modes of at least two adjacent coded blocks of the block to be coded and the coding mode of the co-located coded block, so complex intra-frame and inter-frame mode prediction of the block to be coded is not needed, which reduces the computational complexity of the coding technology and improves coding efficiency.
In specific implementation, the step 202 may be specifically implemented as the step shown in fig. 3, and fig. 3 is a flowchart illustrating another video encoding method in this embodiment. The method comprises the following steps:
step 2021: and determining whether the coding mode of the to-be-coded block is a skip mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block or the coding modes of the co-located coded blocks of the to-be-coded block.
Step 2022: if the coding mode of the block to be coded is determined not to be the skip mode, determining whether the coding mode of the block to be coded is the intra-frame mode according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding modes of the co-located coded blocks of the block to be coded.
Step 2023: if the coding mode of the block to be coded is determined not to be the intra-frame mode, respectively determining the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded, and determining that the coding mode of the block to be coded is the coding mode corresponding to the minimum cost value.
In a specific implementation, steps 2021 and 2022 may also be performed together, that is, whether the block to be coded is in skip mode or intra-frame mode may be determined according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded. The steps may also be executed one by one in the above order, which is not specifically limited in this application.
In this way, it can be determined whether a block to be coded is in skip mode, intra-frame mode or inter-frame mode, which reduces the amount of calculation during intra-frame and inter-frame mode prediction and improves the coding speed.
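The three-step decision of steps 2021 to 2023 can be summarized by the following sketch. The cbp values, neighbouring coding modes and cost callbacks are supplied by the caller; the function and parameter names are illustrative assumptions rather than part of any real encoder interface, and the detailed skip and intra-frame checks are described in the remainder of this section.

    # Sketch of the three-step fast mode decision (steps 2021-2023).
    # Neighbour information is passed in as plain values; skip_check,
    # intra_cost and inter_cost are caller-supplied placeholders.
    def decide_mode(left_cbp, up_cbp, colocated_mode, left_mode, up_mode,
                    skip_check, intra_cost, inter_cost):
        # Step 2021: try skip mode when both neighbours have cbp == 0 or
        # the co-located coded block was itself coded in skip mode.
        if (left_cbp == 0 and up_cbp == 0) or colocated_mode == "skip":
            if skip_check():  # motion-compensate and re-check cbp == 0
                return "skip"
        # Step 2022: choose intra mode when the adjacent coded blocks were
        # all coded in intra-frame mode.
        if left_mode == "intra" and up_mode == "intra":
            return "intra"
        # Step 2023: otherwise fall back to comparing the two cost values.
        return "intra" if intra_cost() <= inter_cost() else "inter"

    # Example call with dummy cost callbacks:
    mode = decide_mode(left_cbp=3, up_cbp=0, colocated_mode="inter",
                       left_mode="intra", up_mode="intra",
                       skip_check=lambda: False,
                       intra_cost=lambda: 120, inter_cost=lambda: 95)
    print(mode)  # -> "intra" (both adjacent coded blocks were intra)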
In order to more clearly understand the technical solution provided by the embodiment of the present application, the following detailed implementation of the steps in fig. 3 is further described:
In a specific implementation, step 2021 can be implemented by either of the following two schemes:
Scheme one: determining whether the coding mode of the block to be coded is a skip mode according to the coding modes of at least two adjacent coded blocks of the block to be coded.
Fig. 4 is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application. In this scenario, the block to be coded is coding block (2,2). In a specific implementation, the at least two adjacent coded blocks are the left coded block and the upper coded block (the black coded blocks in the figure). It is determined whether the luma and chroma transform coefficient pattern cbp of the left coded block is 0 and whether the cbp of the upper coded block is 0. If both are 0, the predicted motion vector of the block to be coded is determined.
Specifically, cbp indicates the presence of non-zero luma and chroma transform coefficients and has 6 bits in total, of which 2 bits correspond to the chroma components and 4 bits correspond to the luma component. If all the bits are 0, the residual of the coded block is 0 and the coding mode of the coded block is the skip mode.
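As a small illustration of how such a 6-bit cbp value can be inspected, the following sketch assumes the common H.264 layout in which the low 4 bits flag the luma regions and the next 2 bits flag the chroma components; the exact bit positions are an assumption of this sketch.

    # Sketch of reading a 6-bit cbp value, assuming the low 4 bits flag the
    # four 8x8 luma regions and the next 2 bits flag the chroma components.
    def cbp_is_zero(cbp):
        return cbp == 0  # no non-zero luma or chroma coefficients at all

    def cbp_parts(cbp):
        luma_bits = cbp & 0x0F
        chroma_bits = (cbp >> 4) & 0x03
        return luma_bits, chroma_bits

    print(cbp_is_zero(0))       # -> True (zero residual, skip mode is possible)
    print(cbp_parts(0b110101))  # -> (5, 3)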
In a specific implementation, the predicted motion vector may be obtained by median prediction over the motion vectors of the left, upper and upper-right coded blocks according to the H.264 standard scheme, or it may be a predicted motion vector assigned to the block to be coded by the encoder.
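A minimal sketch of the median prediction mentioned above is given below. It assumes the component-wise median of the motion vectors of the left, upper and upper-right coded blocks, with motion vectors represented as (mvx, mvy) tuples; the function names are illustrative.

    # Component-wise median of the left, upper and upper-right motion
    # vectors, in the spirit of H.264 motion vector prediction.
    def median_mv(mv_left, mv_up, mv_upright):
        def median3(a, b, c):
            return sorted((a, b, c))[1]
        return (median3(mv_left[0], mv_up[0], mv_upright[0]),
                median3(mv_left[1], mv_up[1], mv_upright[1]))

    print(median_mv((2, -1), (0, 3), (1, 1)))  # -> (1, 1)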
Motion compensation is then performed on the block to be coded according to the predicted motion vector, and the cbp of the motion-compensated block is calculated; if the cbp is 0, the block to be coded is determined to be in skip mode.
In this way, whether the block to be coded is in skip mode can be determined from the cbp of at least two adjacent coded blocks, without performing complex intra-frame and inter-frame mode prediction, which reduces the computational complexity.
Scheme two: determining whether the block to be coded is in skip mode according to the coding mode of the co-located coded block.
Fig. 5 is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application. In this scenario, Frame n-1 represents the reference frame and Frame n represents the image to be coded. In a specific implementation, the reference frame may be the frame preceding the image to be coded, as shown in Fig. 5, or it may be an optimal reference frame. Specifically, candidate reference frames may be specified; a candidate may be, for example, the 1st frame, the frame preceding the image to be coded, or a designated frame such as the 5th frame or the 11th frame. The optimal predictive coded block of the current coding block is determined among the candidate reference frames, and the image in which the optimal predictive coded block is located is the optimal reference frame.
If the encoding mode of the co-located encoding block is a skip mode, determining a predicted motion vector of the block to be encoded; according to the predicted motion vector, performing motion compensation on the block to be coded; calculating the cbp of the block to be coded after motion compensation; and if the cbp is 0, determining that the to-be-coded block is in a skip mode.
According to this scheme, whether the block to be coded is in skip mode can be determined from the coding mode of the co-located coded block, without performing high-complexity intra-frame and inter-frame mode prediction, which reduces the amount of calculation and improves the coding speed.
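The skip-mode decision of scheme one and scheme two can be sketched as follows. The motion-compensation step is represented by a placeholder callback, and all names are illustrative assumptions rather than an actual encoder interface.

    # Sketch of the skip-mode decision of schemes one and two: the test is
    # triggered either by both neighbouring coded blocks having cbp == 0 or
    # by the co-located coded block being in skip mode, and it is confirmed
    # by motion-compensating the block and re-checking that its cbp is 0.
    def is_skip_block(left_cbp, up_cbp, colocated_is_skip,
                      predicted_mv, motion_compensate_and_cbp):
        triggered = (left_cbp == 0 and up_cbp == 0) or colocated_is_skip
        if not triggered:
            return False
        cbp_after_mc = motion_compensate_and_cbp(predicted_mv)
        return cbp_after_mc == 0  # zero residual -> skip mode

    # Example with a dummy motion-compensation step reporting zero residual:
    print(is_skip_block(0, 0, False, (0, 0), lambda mv: 0))  # -> True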
In one embodiment, step 2022 can be implemented by either of the following two methods:
Method one: determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of at least two adjacent coded blocks of the block to be coded.
Specifically, if the optimal coding modes of the at least two adjacent coded blocks are all intra-frame modes, it is determined that the coding mode of the block to be coded is the intra-frame mode.
In a specific implementation, the at least two adjacent coded blocks may be at least two of the left coded block, the upper coded block, the upper-left coded block and the upper-right coded block. For example, they may be the left coded block and the upper coded block, or the left coded block and the upper-right coded block, or the upper coded block, the upper-left coded block and the upper-right coded block.
In a specific implementation, a correspondence between the optimal coding mode and the coding block index is maintained, and the optimal coding modes of the at least two adjacent coded blocks can be determined according to this correspondence.
In this way, the coding mode of the block to be coded can be determined according to the coding modes of at least two adjacent coded blocks, without performing intra-frame and inter-frame mode prediction, which reduces the amount of calculation and improves the coding speed.
Method two: determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of at least two adjacent coded blocks of the block to be coded and the coding mode of the co-located coded block of the block to be coded.
In a specific implementation, the at least two adjacent coded blocks are the upper coded block and the upper-left coded block; if the optimal coding modes of the upper coded block and the upper-left coded block are both intra-frame modes and the coding mode of the co-located coded block is also the intra-frame mode, the coding mode of the block to be coded is determined to be the intra-frame mode.
In a specific implementation, if the block to be coded is determined to be in intra-frame mode and the image to be coded is divided into 16 × 16 blocks to be coded, the block to be coded is prohibited from being coded in plane mode. That is, the block to be coded may be coded in one of the DC mode, the horizontal mode and the vertical mode. Specifically, one of these modes may be selected at random, or the cost values of the DC mode, the horizontal mode and the vertical mode may be calculated respectively and the coding mode with the minimum cost value selected to code the block to be coded.
According to this method, because the optimal coding modes of the adjacent coded blocks and of the co-located coded block are intra-frame modes, the coding mode of the block to be coded can be determined to be the intra-frame mode without predicting the intra-frame and inter-frame modes, which reduces the computational complexity and improves the coding efficiency. In addition, for images divided into 16 × 16 blocks to be coded, the plane mode with high computational complexity is prohibited, which further improves the coding efficiency without affecting the code rate of the image to be coded.
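The neighbour-based intra-frame decision of methods one and two, together with the restricted 16 × 16 intra-frame mode set, can be sketched as follows; the per-mode cost callback and the mode names are illustrative assumptions.

    # Sketch of the neighbour-based intra-frame decision (methods one and
    # two) and of the restricted 16x16 intra mode set (plane mode disabled).
    ALLOWED_16X16_INTRA_MODES = ("dc", "horizontal", "vertical")

    def neighbours_force_intra(adjacent_modes, colocated_mode=None):
        # Method one: all adjacent coded blocks were coded in intra mode.
        all_adjacent_intra = all(m == "intra" for m in adjacent_modes)
        if colocated_mode is None:
            return all_adjacent_intra
        # Method two: the adjacent blocks and the co-located block are intra.
        return all_adjacent_intra and colocated_mode == "intra"

    def pick_16x16_intra_mode(mode_cost):
        # Choose the cheapest of the three permitted 16x16 intra modes.
        return min(ALLOWED_16X16_INTRA_MODES, key=mode_cost)

    if __name__ == "__main__":
        print(neighbours_force_intra(["intra", "intra"], colocated_mode="intra"))  # -> True
        costs = {"dc": 40, "horizontal": 35, "vertical": 50}
        print(pick_16x16_intra_mode(lambda m: costs[m]))  # -> "horizontal"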
In a specific implementation, if it cannot be determined from step 2021 and step 2022 that the coding mode of the block to be coded is the skip mode or the intra-frame mode, the intra-frame and inter-frame modes of the block to be coded need to be predicted. Specifically, the H.264 coding standard can be used for intra-frame and inter-frame mode prediction: the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded are determined respectively, and the coding mode of the block to be coded is determined to be the coding mode corresponding to the minimum cost value. In a specific implementation, the coding mode corresponding to the minimum cost value is the optimal coding mode of the block to be coded, and the index number of the block to be coded is stored together with this optimal coding mode.
In one embodiment, if the encoding mode corresponding to the minimum cost value is an inter mode, the steps a 1-a 5 are performed when the block to be encoded is encoded:
step A1: and determining the predicted motion vector of the block to be coded and a corresponding cost value.
Step A2: and determining the average cost value of the prediction motion vector, wherein the average cost value is obtained by dividing the cost value corresponding to the prediction motion vector by the product of the width and the height of the block to be coded.
In a specific implementation, the average cost value of the predicted motion vector is obtained by the following formula:
avg_cost = cost / (w × h)    formula (1)
where avg_cost represents the average cost value of the predicted motion vector, cost represents the cost value of the predicted motion vector as calculated in x264, w is the width of the block to be coded, and h is the height of the block to be coded.
Step A3: judging whether the average cost value of the predicted motion vector is less than or equal to the specified threshold; if so, executing step A4, and otherwise executing step A5.
In a specific implementation, the specified threshold may be set according to actual conditions, for example to 0.5, 1, and so on. Alternatively, the threshold of the first round may be set freely, and the threshold specified for the next round is 0.8 times the average cost value of the last predicted motion vector, i.e. thresh = 0.8 × avg_cost.
Step A4: and if the average cost value is less than or equal to a specified threshold value, performing motion compensation on the block to be coded according to the predicted motion vector.
Step A5: and if the average cost value is larger than a specified threshold value, performing motion search according to the predicted motion vector to obtain a motion vector, taking the motion vector as a new predicted motion vector, and returning to execute the step A3.
In specific implementation, the motion search method may be a diamond search, a hexagon search, a square search, or the like.
According to this method, the iterative motion estimation can be stopped early by means of the threshold, so the motion vector does not need to be iteratively refined many times, which reduces the amount of calculation and improves the coding efficiency.
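Steps A1 to A5 can be sketched as the following early-terminated loop. The motion-search callback, the initial threshold and the 0.8 × avg_cost threshold update follow the description above, but the exact schedule, the iteration cap and all names are assumptions of this sketch.

    # Sketch of the early-terminated inter-mode search (steps A1-A5).
    # search_step(mv) is a placeholder returning (refined_mv, cost), e.g.
    # one round of diamond, hexagon or square search around mv.
    def inter_mode_motion_estimation(pred_mv, pred_cost, width, height,
                                     search_step, first_threshold=1.0,
                                     max_rounds=8):
        mv, cost = pred_mv, pred_cost
        threshold = first_threshold
        for _ in range(max_rounds):
            avg_cost = cost / (width * height)  # formula (1)
            if avg_cost <= threshold:           # step A4: stop and use mv
                return mv                       # for motion compensation
            threshold = 0.8 * avg_cost          # next-round threshold
            mv, cost = search_step(mv)          # step A5: refine and retry
        return mv

    # Example with a dummy search step whose refined cost is zero:
    mv = inter_mode_motion_estimation((0, 0), 2048, 16, 16,
                                      search_step=lambda mv: (mv, 0))
    print(mv)  # -> (0, 0)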
To further understand the technical solutions provided in the embodiments of the present application, a video encoding method provided in the present application is further described below with specific embodiments.
The first embodiment is as follows:
fig. 6a is a schematic view of another application scenario of a video encoding method according to an embodiment of the present application. The scene comprises the following steps: a color space converting device 600 and an encoder 610.
The color space conversion device 600 converts the video from the RGB format to the YUV format, and the encoder 610 acquires each frame image of the video from the color space conversion device 600. The encoder 610 encodes the first frame image in intra-frame mode and then takes the second frame as the image to be coded. The image to be coded is divided into a plurality of blocks to be coded, as shown in Fig. 6b. The encoder 610 generates an index number for each block to be coded; for coding block No. 1, it determines according to the H.264 coding standard that the optimal coding mode of coding block No. 1 is the intra-frame mode, codes coding block No. 1, and stores the index number together with the corresponding optimal coding mode.
The encoder 610 determines that the collocated encoding block of the encoding block 2 is in a skip mode, determines a predicted motion vector of the encoding block 2, performs motion compensation on the encoding block 2 according to the predicted motion vector, calculates cbp of the to-be-encoded block after the motion compensation, and determines that the encoding mode of the encoding block 2 is in the skip mode if the cbp is 0.
In the same manner, the encoder 610 determines that coding block No. 3 is in intra-frame mode and codes coding block No. 3 in horizontal mode. The encoder 610 determines that coding block No. 4 is in inter-frame mode; when coding block No. 4 in inter-frame mode, it first determines the predicted motion vector and the corresponding cost value of coding block No. 4, calculates the average cost value of the predicted motion vector, and determines that the average cost value is less than the specified threshold, so it performs motion compensation on coding block No. 4 according to the predicted motion vector and codes the block.
In the same way, the encoder 610 determines that coding block No. (1+h) is in intra-frame mode. For coding block No. (2+h), the optimal coding modes of the upper-left coded block (coding block No. 1) and the left coded block (coding block No. (1+h)) are both intra-frame modes, so the coding mode of coding block No. (2+h) is determined to be the intra-frame mode and coding block No. (2+h) is coded.
The encoder 610 sequentially encodes the remaining encoded blocks, completing the encoding of the second frame.
Based on the same inventive concept, the embodiment of the present application further provides a video encoding apparatus. Fig. 7 is a block diagram of a video encoding apparatus according to an embodiment of the present application. The device includes:
the splitting module 701 is configured to, after a first frame image is encoded, divide each frame of image to be encoded, except for the first frame image, into a plurality of blocks to be encoded;
a determining module 702, configured to determine, for each block to be coded, the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded; the co-located coded block is the coded block at the same position in a reference frame of the current frame image to be coded;
and an encoding module 703, configured to encode the block to be encoded according to the determined encoding mode.
Optionally, the determining module 702 is specifically configured to execute:
determining whether the coding mode of the to-be-coded block is a skip mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block or the coding modes of the co-located coded blocks of the to-be-coded block;
if the coding mode of the to-be-coded block is determined not to be a skip mode, determining whether the coding mode of the to-be-coded block is an intra-frame mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block and/or the coding modes of the co-located coded blocks of the to-be-coded block;
if the coding mode of the block to be coded is determined not to be the intra-frame mode, respectively determining the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded, and determining that the coding mode of the block to be coded is the coding mode corresponding to the minimum cost value.
Optionally, the at least two adjacent coded blocks are at least two of a left coded block, an upper coded block, and an upper right coded block.
Optionally, the determining module 702 is specifically configured to execute:
and if the optimal coding mode of the at least two adjacent coded blocks is an intra-frame mode, determining that the coding mode of the block to be coded is the intra-frame mode.
Optionally, if the at least two adjacent coded blocks are an upper coded block and a left upper coded block of the block to be coded, the determining module 702 is specifically configured to perform:
and if the optimal coding modes of the upper coding block and the upper left coding block are both intra-frame modes and the coding mode of the co-located coding block is the intra-frame mode, determining that the coding mode of the block to be coded is the intra-frame mode.
Optionally, the encoding module 703 is specifically configured to perform:
if the image to be coded is divided into 16 × 16 blocks to be coded, the blocks to be coded are coded by adopting one of the following modes:
DC mode, horizontal mode, or vertical mode.
Optionally, the at least two adjacent coded blocks are a left coded block and an upper coded block, and the determining module 702 is specifically configured to perform:
if the brightness and the color difference transformation coefficient cbp of the left coding block and the upper coding block are both 0 or the co-located coding block is in a skip mode, determining a predicted motion vector of the block to be coded;
according to the predicted motion vector, performing motion compensation on the block to be coded;
calculating the cbp of the block to be coded after motion compensation;
and if the cbp is 0, determining that the to-be-coded block is in a skip mode.
Optionally, if it is determined that the coding mode of the block to be coded is an inter mode, the coding module 703 is specifically configured to perform:
determining a predicted motion vector and a corresponding cost value of the block to be coded;
determining an average cost value of the prediction motion vector, wherein the average cost value is obtained by dividing a cost value corresponding to the prediction motion vector by the product of the width and the height of the block to be coded;
if the average cost value is less than or equal to a specified threshold value, performing motion compensation on the block to be coded according to the predicted motion vector;
and if the average cost value is larger than a specified threshold value, performing motion search according to the predicted motion vector to obtain a motion vector, taking the motion vector as a new predicted motion vector, and returning to the step of determining the average cost value of the predicted motion vector.
Optionally, each frame of image to be encoded is received in real time from a server, which is configured to perform the conversion of each frame of image to the specified color space.
Having described a video encoding method and apparatus of an exemplary embodiment of the present application, a computing apparatus according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
In some possible implementations, a computing device according to the present application may include at least one processor, and at least one memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps in the video encoding method according to various exemplary embodiments of the present application described above in this specification. For example, the processor may perform steps 201-203 as shown in FIG. 2 or steps 2021-2023 as shown in FIG. 3.
The computing device 130 according to this embodiment of the present application is described below with reference to fig. 8. The computing device 130 shown in fig. 8 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present application.
As shown in FIG. 8, computing device 130 is embodied in the form of a general purpose computing device. Components of computing device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Computing device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with computing device 130, and/or with any devices (e.g., router, modem, etc.) that enable computing device 130 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 135. Also, computing device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 136. As shown, network adapter 136 communicates with other modules for computing device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computing device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In some possible embodiments, aspects of a video encoding method provided herein may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of a video encoding method according to various exemplary embodiments of the present application described above in this specification when the program product is run on a computer device, for example, the computer device may perform steps 201-203 as shown in fig. 2 or steps 2021-2023 as shown in fig. 3.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for video encoding of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user device, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (9)

1. A method of video encoding, the method comprising:
after a first frame image is coded, dividing, for each frame of image to be coded other than the first frame image, the image to be coded into a plurality of blocks to be coded;
for each block to be coded, determining the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded; wherein the co-located coded block is the coded block at the same position in a reference frame of the current frame image to be coded;
coding the block to be coded according to the determined coding mode;
wherein, for each block to be coded, determining the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded, includes:
determining whether the coding mode of the to-be-coded block is a skip mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block or the coding modes of the co-located coded blocks of the to-be-coded block;
if the coding mode of the to-be-coded block is determined not to be a skip mode, determining whether the coding mode of the to-be-coded block is an intra-frame mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block and/or the coding modes of the co-located coded blocks of the to-be-coded block;
if the coding mode of the block to be coded is determined not to be the intra-frame mode, respectively determining the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded, and determining the coding mode of the block to be coded to be the coding mode corresponding to the minimum cost value;
wherein, the determining whether the coding mode of the block to be coded is an intra mode according to the coding modes of at least two adjacent coded blocks of the block to be coded includes:
if the optimal coding modes of the at least two adjacent coded blocks are intra-frame modes, determining that the coding mode of the block to be coded is the intra-frame mode;
if the at least two adjacent coded blocks are an upper coded block and an upper-left coded block of the block to be coded, determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of the at least two adjacent coded blocks of the block to be coded and the coding mode of the co-located coded block of the block to be coded includes:
and if the optimal coding modes of the upper coded block and the upper-left coded block are both intra-frame modes and the coding mode of the co-located coded block is the intra-frame mode, determining that the coding mode of the block to be coded is the intra-frame mode.
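The mode-decision flow recited in claim 1 (skip test, then intra test, then cost comparison) can be summarised by the following minimal Python sketch. The data structures, the placeholder cost functions and the simplified skip test are illustrative assumptions added for readability; they are not part of the claimed method (claim 4 below specifies the actual cbp-based skip test).

from dataclasses import dataclass
from typing import Optional

# Illustrative stand-ins only; a real encoder derives these values from pixel data.
@dataclass
class CodedBlock:
    best_mode: str   # "INTRA", "INTER" or "SKIP"
    cbp: int = 0     # coded block pattern of the luma/chroma transform coefficients

def intra_cost(block) -> float:   # placeholder cost model (assumption)
    return block["intra_cost"]

def inter_cost(block) -> float:   # placeholder cost model (assumption)
    return block["inter_cost"]

def choose_mode(block,
                left: Optional[CodedBlock],
                upper: Optional[CodedBlock],
                upper_left: Optional[CodedBlock],
                co_located: CodedBlock) -> str:
    # 1) Skip test: both neighbours left no residual coefficients (cbp == 0),
    #    or the co-located block of the reference frame was itself skipped.
    if (left and upper and left.cbp == 0 and upper.cbp == 0) or co_located.best_mode == "SKIP":
        return "SKIP"
    # 2) Intra test: with the upper / upper-left pair the co-located block must also be intra;
    #    with other neighbour pairs the neighbours' decisions alone are enough.
    if (upper and upper_left and co_located.best_mode == "INTRA"
            and upper.best_mode == "INTRA" and upper_left.best_mode == "INTRA"):
        return "INTRA"
    if left and upper and left.best_mode == "INTRA" and upper.best_mode == "INTRA":
        return "INTRA"
    # 3) Otherwise compare the two candidate costs and keep the cheaper mode.
    return "INTRA" if intra_cost(block) <= inter_cost(block) else "INTER"

# Made-up numbers purely to exercise the function:
blk = {"intra_cost": 420.0, "inter_cost": 310.0}
print(choose_mode(blk,
                  left=CodedBlock("INTER", cbp=3),
                  upper=CodedBlock("INTER", cbp=1),
                  upper_left=CodedBlock("INTRA"),
                  co_located=CodedBlock("INTER")))   # prints "INTER"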
2. The method of claim 1, wherein the at least two adjacent coded blocks are at least two of a left coded block, an upper-left coded block, an upper coded block, and an upper-right coded block of the block to be coded.
3. The method of claim 1, wherein encoding the block to be encoded according to the determined encoding mode comprises:
if the image to be coded is divided into 16 × 16 blocks to be coded, the blocks to be coded are coded by adopting one of the following modes:
DC (direct current) mode, horizontal mode or vertical mode.
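For orientation, the three 16 x 16 intra prediction modes named in claim 3 build the prediction block from already-reconstructed neighbouring pixels. The sketch below shows one common way such predictors are formed (in the style of H.264 16x16 intra prediction); the exact reference-pixel handling of the claimed encoder may differ, and the neighbouring pixel values in the example are made up.

import numpy as np

def intra_16x16_prediction(left_col: np.ndarray, top_row: np.ndarray, mode: str) -> np.ndarray:
    """Build a 16x16 prediction block from the reconstructed neighbouring pixels."""
    if mode == "VERTICAL":      # every column repeats the pixel directly above it
        return np.tile(top_row, (16, 1))
    if mode == "HORIZONTAL":    # every row repeats the pixel directly to its left
        return np.tile(left_col.reshape(16, 1), (1, 16))
    if mode == "DC":            # flat block at the rounded mean of the 32 neighbouring pixels
        return np.full((16, 16), (top_row.sum() + left_col.sum() + 16) // 32)
    raise ValueError(f"unsupported mode: {mode}")

top = np.arange(100, 116)       # made-up reconstructed pixels of the row above
left = np.full(16, 128)         # made-up reconstructed pixels of the column to the left
print(intra_16x16_prediction(left, top, "DC")[0, 0])   # prints 118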
4. The method of claim 1, wherein the at least two adjacent coded blocks are a left coded block and an upper coded block, and determining whether the coding mode of the to-be-coded block is skip mode according to the coding modes of the at least two adjacent coded blocks of the to-be-coded block or the coding modes of the co-located coded blocks of the to-be-coded block comprises:
if the coded block pattern (cbp) of the luma and chroma transform coefficients of both the left coded block and the upper coded block is 0, or the co-located coded block is in a skip mode, determining a predicted motion vector of the block to be coded;
according to the predicted motion vector, performing motion compensation on the block to be coded;
calculating the cbp of the block to be coded after motion compensation;
and if the cbp is 0, determining that the to-be-coded block is in a skip mode.
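One possible reading of the skip-mode test of claim 4 is sketched below. The motion compensation is reduced to an integer-pixel block copy, and the cbp computation is replaced by a toy quantisation test with a made-up step size; both, like the function and parameter names, are assumptions added for illustration only.

import numpy as np

def try_skip_mode(cur_block, ref_frame, block_pos, left_cbp, upper_cbp,
                  co_located_is_skip, predicted_mv):
    """Return True when the block to be coded can be coded in skip mode."""
    # Precondition: both neighbours left no coefficients (cbp == 0),
    # or the co-located block of the reference frame is in skip mode.
    if not ((left_cbp == 0 and upper_cbp == 0) or co_located_is_skip):
        return False
    # Motion-compensate the block with the predicted motion vector (integer pixels).
    (y, x), (dy, dx) = block_pos, predicted_mv
    h, w = cur_block.shape
    pred = ref_frame[y + dy:y + dy + h, x + dx:x + dx + w]
    # Toy stand-in for the real cbp: cbp is 0 when the residual quantises to all zeros.
    residual = cur_block.astype(int) - pred.astype(int)
    cbp = int(np.any(np.abs(residual) // 16 != 0))
    return cbp == 0

ref = np.random.randint(0, 256, (64, 64))     # made-up reference frame
cur = ref[8:24, 8:24].copy()                  # current block matches content one block away
print(try_skip_mode(cur, ref, (16, 16), 0, 0, False, (-8, -8)))   # prints True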
5. The method of claim 1, wherein if the coding mode of the block to be coded is determined to be inter mode, coding the block to be coded according to the determined coding mode comprises:
determining a predicted motion vector and a corresponding cost value of the block to be coded;
determining an average cost value of the predicted motion vector, wherein the average cost value is obtained by dividing the cost value corresponding to the predicted motion vector by the product of the width and the height of the block to be coded;
if the average cost value is less than or equal to a specified threshold value, performing motion compensation on the block to be coded according to the predicted motion vector;
and if the average cost value is larger than a specified threshold value, performing motion search according to the predicted motion vector to obtain a motion vector, taking the motion vector as a new predicted motion vector, and returning to the step of determining the average cost value of the predicted motion vector.
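The average-cost iteration of claim 5 can be illustrated with the following sketch. Here the cost is taken to be the sum of absolute differences (SAD), the motion search is reduced to a one-pixel local search, and the threshold and iteration limit are arbitrary; all of these are assumptions, since the claim only fixes the normalisation by width times height and the iterate-until-threshold structure.

import numpy as np

def sad(cur, ref, pos, mv):
    """Sum of absolute differences between the block and its motion-compensated reference."""
    (y, x), (dy, dx) = pos, mv
    h, w = cur.shape
    return int(np.abs(cur.astype(int)
                      - ref[y + dy:y + dy + h, x + dx:x + dx + w].astype(int)).sum())

def refine_motion_vector(cur, ref, pos, predicted_mv, threshold=1.0, max_iters=8):
    """Refine the predicted motion vector until the per-pixel cost drops below the threshold."""
    h, w = cur.shape
    mv = predicted_mv
    for _ in range(max_iters):
        avg_cost = sad(cur, ref, pos, mv) / (w * h)   # cost value divided by width * height
        if avg_cost <= threshold:
            return mv                                  # good enough: motion-compensate with mv
        # One step of a (deliberately tiny) local search around the current vector.
        candidates = [(mv[0] + dy, mv[1] + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        mv = min(candidates, key=lambda c: sad(cur, ref, pos, c))
    return mv

ref = np.add.outer(2 * np.arange(64), np.arange(64))    # smooth made-up reference frame
cur = ref[26:42, 25:41].copy()                          # true displacement from (24, 24) is (2, 1)
print(refine_motion_vector(cur, ref, (24, 24), (0, 0))) # prints (2, 1)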
6. The method of claim 1, wherein each frame of images to be encoded is received in real-time from a server that is configured to perform the conversion of each frame of images to a specified color space.
7. A video encoding apparatus, characterized in that the apparatus comprises:
a splitting module, configured to divide, after a first frame image is coded, each frame of image to be coded other than the first frame image into a plurality of blocks to be coded;
a determining module, configured to determine, for each block to be coded, the coding mode of the block to be coded according to the coding modes of at least two adjacent coded blocks of the block to be coded and/or the coding mode of the co-located coded block of the block to be coded; wherein the co-located coded block is the coded block at the same position in a reference frame of the current frame image to be coded;
the coding module is used for coding the block to be coded according to the determined coding mode;
the determining module is specifically configured to determine whether the coding mode of the block to be coded is a skip mode according to the coding modes of at least two adjacent coded blocks of the block to be coded, or the coding mode of the co-located coded block of the block to be coded;
if the coding mode of the to-be-coded block is determined not to be a skip mode, determining whether the coding mode of the to-be-coded block is an intra-frame mode or not according to the coding modes of at least two adjacent coded blocks of the to-be-coded block and/or the coding modes of the co-located coded blocks of the to-be-coded block;
if the coding mode of the block to be coded is determined not to be the intra-frame mode, respectively determining the cost value of the intra-frame mode and the cost value of the inter-frame mode of the block to be coded, and determining the coding mode of the block to be coded to be the coding mode corresponding to the minimum cost value;
wherein, the determining whether the coding mode of the block to be coded is an intra mode according to the coding modes of at least two adjacent coded blocks of the block to be coded includes:
if the optimal coding modes of the at least two adjacent coded blocks are intra-frame modes, determining that the coding mode of the block to be coded is the intra-frame mode;
if the at least two adjacent coded blocks are an upper coded block and an upper-left coded block of the block to be coded, determining whether the coding mode of the block to be coded is an intra-frame mode according to the coding modes of the at least two adjacent coded blocks of the block to be coded and the coding mode of the co-located coded block of the block to be coded includes:
and if the optimal coding modes of the upper coded block and the upper-left coded block are both intra-frame modes and the coding mode of the co-located coded block is the intra-frame mode, determining that the coding mode of the block to be coded is the intra-frame mode.
8. A computer-readable medium having stored thereon computer-executable instructions for performing the method of any one of claims 1-6.
9. A computing device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
CN201910485750.5A 2019-06-05 2019-06-05 Video coding method, device and storage medium Active CN110300302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910485750.5A CN110300302B (en) 2019-06-05 2019-06-05 Video coding method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910485750.5A CN110300302B (en) 2019-06-05 2019-06-05 Video coding method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110300302A CN110300302A (en) 2019-10-01
CN110300302B true CN110300302B (en) 2021-11-12

Family

ID=68027571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910485750.5A Active CN110300302B (en) 2019-06-05 2019-06-05 Video coding method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110300302B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112383774B (en) * 2020-10-30 2023-10-03 网宿科技股份有限公司 Encoding method, encoder and server
CN114584768A (en) * 2022-02-17 2022-06-03 百果园技术(新加坡)有限公司 Video coding control method, device, equipment and storage medium
CN115134526A (en) * 2022-06-28 2022-09-30 润博全景文旅科技有限公司 Image coding method, device and equipment based on cloud control

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883275A (en) * 2009-05-04 2010-11-10 青岛海信数字多媒体技术国家重点实验室有限公司 Video coding method
CN104618725A (en) * 2015-01-15 2015-05-13 华侨大学 Multi-view video coding algorithm combining quick search and mode optimization
KR20150095254A (en) * 2014-02-13 2015-08-21 한국전자통신연구원 Intra prediction skip method and apparatus for video coding
CN107071479A (en) * 2017-04-28 2017-08-18 南京理工大学 3D video depth image predicting mode selecting methods based on dependency

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101813189B1 (en) * 2010-04-16 2018-01-31 에스케이 텔레콤주식회사 Video coding/decoding apparatus and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101883275A (en) * 2009-05-04 2010-11-10 青岛海信数字多媒体技术国家重点实验室有限公司 Video coding method
KR20150095254A (en) * 2014-02-13 2015-08-21 한국전자통신연구원 Intra prediction skip method and apparatus for video coding
CN104618725A (en) * 2015-01-15 2015-05-13 华侨大学 Multi-view video coding algorithm combining quick search and mode optimization
CN107071479A (en) * 2017-04-28 2017-08-18 南京理工大学 3D video depth image predicting mode selecting methods based on dependency

Also Published As

Publication number Publication date
CN110300302A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
KR100751670B1 (en) Image encoding device, image decoding device and image encoding/decoding method
JP4565010B2 (en) Image decoding apparatus and image decoding method
US20180205970A1 (en) Deblocking Filtering
US9888240B2 (en) Video processors for preserving detail in low-light scenes
US20160309184A1 (en) Method, apparatus, and system for encoding and decoding image
US20130028322A1 (en) Moving image prediction encoder, moving image prediction decoder, moving image prediction encoding method, and moving image prediction decoding method
CN110300302B (en) Video coding method, device and storage medium
US11240503B2 (en) Method for optimizing two-pass coding
TW201945988A (en) Method and apparatus of neural network for video coding
JP2022539768A (en) Image prediction method, encoder, decoder and storage medium
EP3706421A1 (en) Method and apparatus for video encoding and decoding based on affine motion compensation
US20230239464A1 (en) Video processing method with partial picture replacement
US9565404B2 (en) Encoding techniques for banding reduction
KR20210015810A (en) Method and apparatus for video encoding and decoding using partially shared luma and chroma coding trees
KR20170125154A (en) Method and apparatus of video decoder using curve intra prediction
CN110913215B (en) Method and device for selecting prediction mode and readable storage medium
JP2009089267A (en) Method and device for intra predictive coding, and program
JP2001251627A (en) Coder, coding method and recording medium recorded with program
CN117616751A (en) Video encoding and decoding of moving image group
TW201640897A (en) Video prediction encoding device, video prediction encoding method, video prediction encoding program, video prediction decoding device, video prediction decoding method, and video prediction decoding program
KR20170077621A (en) Method and Apparatus of removal of Flickering artifact for Video compression
RU2808075C1 (en) Method for image coding and decoding, coding and decoding device and corresponding computer programs
CN116760976B (en) Affine prediction decision method, affine prediction decision device, affine prediction decision equipment and affine prediction decision storage medium
RU2772813C1 (en) Video encoder, video decoder, and corresponding methods for encoding and decoding
US10992942B2 (en) Coding method, decoding method, and coding device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant