CN108449599B - Video coding and decoding method based on planar perspective transformation - Google Patents

Video coding and decoding method based on planar perspective transformation

Info

Publication number
CN108449599B
CN108449599B (application CN201810247888.7A)
Authority
CN
China
Prior art keywords
motion vector
pixel
block
coding
calculated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810247888.7A
Other languages
Chinese (zh)
Other versions
CN108449599A (en)
Inventor
徐加飞
杨超
李透
王啟军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201810247888.7A priority Critical patent/CN108449599B/en
Publication of CN108449599A publication Critical patent/CN108449599A/en
Application granted granted Critical
Publication of CN108449599B publication Critical patent/CN108449599B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video coding and decoding method based on planar perspective transformation, which comprises the following steps: carrying out vanishing point detection on the image to be coded; forming the normal vector of a plane from the parallel lines corresponding to the vanishing points, and calculating the perspective transformation matrix H_m of the plane; at the encoding end, searching for the best matching block in the reconstructed image area, setting a motion vector identifying the candidate position, and applying the planar perspective transformation to it; obtaining the coordinates of the predicted pixel; at the decoding end, inputting the code stream to be decoded and parsing the perspective transformation matrix; parsing the motion vector from the code stream and applying the planar perspective transformation; obtaining the coordinates of the predicted pixel. By applying the planar perspective transformation to the motion vector to obtain the predicted pixel coordinates, the invention eliminates the correlation between similar image contents that differ by a perspective deformation, greatly improves coding/decoding efficiency, and can be applied to coding methods such as H.264/H.265/H.266.

Description

Video coding and decoding method based on planar perspective transformation
Technical Field
The invention belongs to the technical field of video compression coding and decoding, and relates to a video coding and decoding method based on planar perspective transformation.
Background
The idea of intra prediction was adopted earliest by the MPEG-4 standard, mainly to predict the current coding block from information inside the picture, and was extended in H.263++. In H.264/AVC it was designed in more detail, specifically comprising 4x4 intra prediction with 9 prediction modes, 16x16 intra prediction with 4 prediction modes, and 8x8 chroma intra prediction with 4 prediction modes. HEVC raises the number of intra prediction modes to 35, with finer directions and higher efficiency. However, all of these intra prediction methods extrapolate from the boundary pixels surrounding the current coding block, so they eliminate only the correlation between the coding block and its surrounding boundary pixels; such prediction of "two-dimensional" data from "one-dimensional" data is far less efficient than inter-frame prediction, so that at the same image quality the bit rate of an I frame is usually at least twice that of a P frame and at least four times that of a B frame.
Improvements to intra-frame coding have been explored, and Intra Block Copy (IBC) is a representative work. To exploit the correlation between pixels that are far apart within a frame, IBC applies the idea of motion estimation/motion compensation from inter-frame prediction: it performs motion estimation in the image area already reconstructed within the frame, obtains the best motion vector according to a certain criterion, and encodes the motion-compensated residual together with the motion vector.
Image content on a building facade contains many repeated patterns, but because the optical axis of the camera is not perpendicular to the facade, that content appears under a perspective deformation. Intra block copy is designed on a translation model, so it can hardly eliminate the correlation between similar image contents that differ by such a deformation. Building on the intra block copy technique, the invention introduces a planar perspective transformation and changes the way the prediction is generated, which solves this problem.
Disclosure of Invention
The invention aims to provide a video coding and decoding method based on planar perspective transformation, which solves the problem that the correlation between similar but deformed image contents is difficult to eliminate, and thus greatly improves coding/decoding efficiency.
The purpose of the invention can be realized by the following technical scheme:
A video coding and decoding method based on planar perspective transformation comprises the following steps:
S1, at the encoding end, inputting the image to be coded and detecting its vanishing points, each vanishing point being represented by three-dimensional homogeneous coordinates;
S2, selecting two different vanishing points as a group, calculating the normal vector l_m = (l_1, l_2, l_3)^T of the plane formed by the parallel lines corresponding to the two vanishing points, and constructing the perspective transformation matrix H_m of the plane, the matrix H_m being:

H_m = [ 1    0    0
        0    1    0
        l_1  l_2  l_3 ]    (2)
S3, during encoding, inputting the current coding block, the coding block comprising a group of pixel positions and the original pixel values at those positions, a prediction value being obtained in the reconstructed image area for each position in the coding block; setting an image-domain motion vector mv = (mvx, mvy) that records the difference between the current block position and the position from which the prediction is generated, mvx being the horizontal difference and mvy the vertical difference; applying the planar perspective transformation to mv yields its representation (d_x, d_y) in the correction space:

d_x = (x_0 + mvx) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - x_0 / (l_1·x_0 + l_2·y_0 + l_3)
d_y = (y_0 + mvy) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - y_0 / (l_1·x_0 + l_2·y_0 + l_3)    (3)

where (x_0, y_0) is the position of the current coding block in the image;
S4, the prediction identified by mv is generated as follows: let a pixel in the current coding block be t_i = (x_i, y_i)^T; from the motion vector mv, its predicted pixel position s_i = (x_s, y_s)^T is calculated in the reconstructed image area. Their homogeneous coordinate forms are t̃_i = (x_i, y_i, 1)^T and s̃_i = (x_s, y_s, 1)^T. Let h_1, h_2, h_3 be the three row vectors of the matrix H_m in step S2. Then t̃_i and s̃_i are converted to their representations t'_i = (t'_x, t'_y)^T and s'_i = (s'_x, s'_y)^T in the correction space as follows:

t'_i = ( h_1·t̃_i / h_3·t̃_i , h_2·t̃_i / h_3·t̃_i )^T    (4)
s'_i = ( h_1·s̃_i / h_3·s̃_i , h_2·s̃_i / h_3·s̃_i )^T    (5)

With the motion vector (d_x, d_y) in the correction space obtained in step S3, s'_i is calculated by:

s'_i = t'_i + (d_x, d_y)^T    (6)

From equation (5), s̃_i is recovered from s'_i as s̃_i ∝ H_m^{-1}·(s'_x, s'_y, 1)^T; bringing equation (6) into this relation and using the H_m of equation (2) gives the coordinates of the pixel that produces the prediction:

x_s = l_3·s'_x / (1 - l_1·s'_x - l_2·s'_y)
y_s = l_3·s'_y / (1 - l_1·s'_x - l_2·s'_y)    (7)

where (s'_x, s'_y) = (t'_x + d_x, t'_y + d_y);
S5, with the given image-domain motion vector and the predicted pixel coordinates determined by S3 and S4, obtaining prediction values for all pixels in the block; determining the optimal image-domain motion vector from the candidate set of image-domain motion vectors according to the performance of the prediction generated for the coding block, and writing the optimal image-domain motion vector into the code stream;
S6, at the decoding end, inputting the code stream to be decoded, parsing l_m = (l_1, l_2, l_3)^T from it, and constructing the perspective transformation matrix H_m of the plane according to equation (2);
S7, at the decoding end, parsing the motion vector mv = (mvx, mvy) from the code stream, and calculating the motion vector (d_x, d_y) in the correction space according to equation (3);
S8, assuming that a pixel position in the current coding block is t_i, its predicted position s_i in the reconstructed image area is calculated by equation (7), s_i being the first two dimensions of the homogeneous coordinate s̃_i;
S9, if the target pixel position s_i obtained in step S8 is at a sub-pixel location, obtaining the pixel value at the sub-pixel position by interpolation.
Further, the normal vector of the plane formed by the parallel lines corresponding to the two vanishing points is calculated as:

l_m = vp_i × vp_j    (1)

where vp_i and vp_j are two distinct vanishing points; their cross product gives the vanishing line connecting the two vanishing points, i.e. the normal vector l_m of the plane determined by the two groups of parallel lines.
Further, in step S5, all pixel positions in the coding block are assumed to share the same motion vector, and the optimal motion vector mv is determined for the current coding block among all candidate motion vectors. The decision criteria include the sum of squared errors SSD, the mean squared error MSE, the mean absolute error MAD, and the sum of absolute errors SAD between the original pixel values of the coding block and the prediction values calculated from the motion vector mv through step S4. Let the original pixel values of the current coding block form the matrix orig, let the matrix of prediction values determined in step S4 for each pixel of orig be pred, and let the coding block have height h and width w. Then:

the sum of squared errors is
SSD = Σ_{i=1..h} Σ_{j=1..w} (orig(i,j) - pred(i,j))^2

the mean squared error is
MSE = SSD / (h·w)

the sum of absolute errors is
SAD = Σ_{i=1..h} Σ_{j=1..w} |orig(i,j) - pred(i,j)|

the mean absolute error is
MAD = SAD / (h·w)

where the matrices orig and pred are both two-dimensional matrices.
Further, the interpolation mode is one of bilinear interpolation, cubic interpolation and bicubic interpolation.
The invention has the beneficial effects that:
the invention obtains the predicted pixel coordinate by carrying out the surface transmission transformation on the motion vector through the video coding/decoding method based on the surface transmission transformation, eliminates the correlation among similar image contents which are expressed as deformation, greatly improves the coding/decoding efficiency, and can be applied to coding methods such as H.264/H.265/H.266 and the like.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flowchart of the encoding end of the video coding and decoding method based on planar perspective transformation according to the present invention;
FIG. 2 is a flowchart of the decoding end of the video coding and decoding method based on planar perspective transformation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1 and FIG. 2, the present invention is a video coding and decoding method based on planar perspective transformation, comprising the following steps:
S1, at the encoding end, inputting the image to be encoded, detecting the vanishing points in the image, and selecting three of them, denoted vp_i, vp_j and vp_k, each represented by three-dimensional homogeneous coordinates;
S2, selecting two vanishing points as a group, and calculating the normal vector of the plane formed by the parallel lines corresponding to the two vanishing points:

l_m = vp_i × vp_j    (1)

where vp_i and vp_j are two different vanishing points. Their cross product gives the vanishing line connecting the two vanishing points, i.e. the normal vector l_m = (l_1, l_2, l_3)^T of the plane (denoted m) determined by the two groups of parallel lines corresponding to vp_i and vp_j. Since l_m^T·vp_i = l_m^T·vp_j = 0, both vanishing points lie on l_m; the physical meaning of the construction is that the vanishing points are mapped to infinity, so that image line segments which are parallel in three-dimensional space become parallel again in the corrected image. The perspective transformation matrix H_m of the plane determined by the two groups of parallel lines is then:

H_m = [ 1    0    0
        0    1    0
        l_1  l_2  l_3 ]    (2)

During encoding, the three floating-point components of l_m are recorded, or l_m is normalized by l_3 so that only two floating-point numbers need to be written into the code stream;
S3, during encoding, inputting the current coding block, the coding block comprising a group of pixel positions and the original pixel values at those positions; searching for the best matching block in the reconstructed image area, and setting a motion vector mv = (mvx, mvy) identifying a candidate position, the motion vector mv recording the difference between the current block position and the candidate position, mvx being the horizontal difference and mvy the vertical difference; applying the planar perspective transformation to mv yields its representation (d_x, d_y) in the correction space:

d_x = (x_0 + mvx) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - x_0 / (l_1·x_0 + l_2·y_0 + l_3)
d_y = (y_0 + mvy) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - y_0 / (l_1·x_0 + l_2·y_0 + l_3)    (3)

where (x_0, y_0) is the position of the current coding block in the image;
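A hedged sketch of equation (3): the original equation is reproduced here only as an image, so the reading that the image-domain vector is converted at the block anchor (x_0, y_0) is our reconstruction, and the function names are ours:

```python
def rectify(x, y, lm):
    """Map an image point into the correction space (cf. eqs. (4)-(5)):
    x' = x / (l1*x + l2*y + l3), y' = y / (l1*x + l2*y + l3)."""
    l1, l2, l3 = lm
    w = l1*x + l2*y + l3
    return x / w, y / w

def mv_to_correction_space(mv, block_pos, lm):
    """Eq. (3) as we read it: the correction-space displacement is the
    difference between the rectified candidate position and the rectified
    current position, both evaluated at the block anchor `block_pos`."""
    x0, y0 = block_pos
    mvx, mvy = mv
    tx, ty = rectify(x0, y0, lm)
    sx, sy = rectify(x0 + mvx, y0 + mvy, lm)
    return sx - tx, sy - ty
```

With l_m = (0, 0, 1), i.e. no perspective, the rectification is the identity and the correction-space vector equals the image-domain vector, which is a useful sanity check.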
S4, the prediction identified by mv is generated as follows: suppose a pixel in the current coding block is t_i = (x_i, y_i)^T; from the motion vector mv, its predicted pixel position s_i = (x_s, y_s)^T is calculated in the reconstructed image area. The extended homogeneous coordinates of t_i and s_i are t̃_i = (x_i, y_i, 1)^T and s̃_i = (x_s, y_s, 1)^T respectively. Let h_1, h_2, h_3 be the three row vectors of the matrix H_m in step S2. Then t̃_i and s̃_i are converted to their representations t'_i = (t'_x, t'_y)^T and s'_i = (s'_x, s'_y)^T in the correction space as follows:

t'_i = ( h_1·t̃_i / h_3·t̃_i , h_2·t̃_i / h_3·t̃_i )^T    (4)
s'_i = ( h_1·s̃_i / h_3·s̃_i , h_2·s̃_i / h_3·s̃_i )^T    (5)

With the motion vector (d_x, d_y) in the correction space obtained in step S3, s'_i is calculated by:

s'_i = t'_i + (d_x, d_y)^T    (6)

From equation (5), s̃_i is recovered from s'_i as s̃_i ∝ H_m^{-1}·(s'_x, s'_y, 1)^T; bringing equation (6) into this relation and using the H_m of equation (2), the coordinates of the pixel that produces the prediction can be calculated:

x_s = l_3·s'_x / (1 - l_1·s'_x - l_2·s'_y)
y_s = l_3·s'_y / (1 - l_1·s'_x - l_2·s'_y)    (7)

where (s'_x, s'_y) = (t'_x + d_x, t'_y + d_y).
If the target pixel position s_i obtained in step S4 falls at a sub-pixel location, the pixel value at the sub-pixel position is obtained by interpolation.
S5, with the image-domain motion vector and the predicted pixel coordinates determined in S3 and S4, prediction values are obtained for the whole block. In the encoding process the basic coding unit is a rectangular coding block: a two-dimensional matrix in which each coordinate stores a pixel value. All pixel positions in the coding block are assumed to share the same motion vector, and for the current coding block the optimal mv must be determined among all candidate mv. The decision criteria include the sum of squared errors SSD, the mean squared error MSE, the mean absolute error MAD, and the sum of absolute errors SAD between the original pixel values of the coding block and the prediction values calculated from mv through step S4, or variants thereof. Let the original pixel values of the current coding block form the two-dimensional matrix orig, let the matrix of prediction values determined in step S4 for each pixel of orig be the two-dimensional matrix pred, and let the coding block have height h and width w. Then:

the sum of squared errors is
SSD = Σ_{i=1..h} Σ_{j=1..w} (orig(i,j) - pred(i,j))^2

the mean squared error is
MSE = SSD / (h·w)

the sum of absolute errors is
SAD = Σ_{i=1..h} Σ_{j=1..w} |orig(i,j) - pred(i,j)|

the mean absolute error is
MAD = SAD / (h·w)
When the prediction of the coding block has been generated in this way, the optimal mv is written into the code stream.
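The motion-vector selection of step S5 can be illustrated with a small sketch (function names are ours, not the patent's; SAD is used as the example criterion, but any of the four metrics would do):

```python
def block_metrics(orig, pred):
    """SSD, MSE, SAD and MAD between an original block and its
    prediction, both given as h x w lists of pixel values."""
    h, w = len(orig), len(orig[0])
    diffs = [orig[i][j] - pred[i][j] for i in range(h) for j in range(w)]
    ssd = sum(d*d for d in diffs)
    sad = sum(abs(d) for d in diffs)
    return {"SSD": ssd, "MSE": ssd/(h*w), "SAD": sad, "MAD": sad/(h*w)}

def best_motion_vector(candidates, orig, predict):
    """Pick the candidate mv minimizing SAD; `predict(mv)` stands in
    for the prediction block produced by steps S3-S4 for that mv."""
    return min(candidates,
               key=lambda mv: block_metrics(orig, predict(mv))["SAD"])
```

Since MSE and MAD are the SSD and SAD divided by the same constant h·w, minimizing either member of a pair selects the same motion vector.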
S6, at the decoding end, inputting the code stream to be decoded, parsing l_m = (l_1, l_2, l_3)^T from it, and constructing the perspective transformation matrix H_m of the plane according to equation (2);
S7, at the decoding end, parsing the motion vector mv = (mvx, mvy) from the code stream, and calculating the motion vector (d_x, d_y) in the correction space according to equation (3);
S8, at the decoding end, assuming that a pixel position in the current decoding block is t_i, and combining S6 and S7, its predicted position s_i in the reconstructed image area is calculated according to equation (7), s_i being the first two dimensions of the homogeneous coordinate s̃_i;
S9, if the target pixel position s_i obtained in step S8 is at a sub-pixel location, obtaining the pixel value at the sub-pixel position by interpolation.
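Of the interpolation modes named in S9, bilinear interpolation is the simplest; a minimal sketch (ours, not the patent's) that weights the four surrounding integer-grid pixels:

```python
import math

def bilinear(img, x, y):
    """Bilinear interpolation of a sub-pixel position (x, y) in `img`,
    a list of rows; assumes (x, y) leaves room for a 2x2 neighborhood."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    ax, ay = x - x0, y - y0        # fractional offsets in [0, 1)
    p00 = img[y0][x0]
    p01 = img[y0][x0 + 1]
    p10 = img[y0 + 1][x0]
    p11 = img[y0 + 1][x0 + 1]
    return ((1-ax)*(1-ay)*p00 + ax*(1-ay)*p01 +
            (1-ax)*ay*p10 + ax*ay*p11)
```

Cubic and bicubic interpolation use larger neighborhoods (4 and 4x4 samples respectively) and give smoother results at higher cost.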
By applying the planar perspective transformation to the motion vector to obtain the predicted pixel coordinates, the video coding/decoding method of the invention eliminates the correlation between similar image contents that differ by a perspective deformation, greatly improves coding/decoding efficiency, and can be applied to coding methods such as H.264/H.265/H.266.
The foregoing merely illustrates and describes the principles of the present invention; those skilled in the art may make various modifications, additions and substitutions to the specific embodiments described herein without departing from the principles of the invention or exceeding the scope set forth in the claims.

Claims (3)

1. A video coding and decoding method based on planar perspective transformation, characterized by comprising the following steps:
S1, at the encoding end, inputting the image to be coded and detecting its vanishing points, each vanishing point being represented by three-dimensional homogeneous coordinates;
S2, selecting two different vanishing points as a group, and calculating the normal vector l_m = (l_1, l_2, l_3)^T of the plane formed by the parallel lines corresponding to the two vanishing points, the normal vector being calculated by:

l_m = vp_i × vp_j    (1)

where vp_i and vp_j are two distinct vanishing points; their cross product gives the vanishing line connecting the two vanishing points, i.e. the normal vector l_m of the plane determined by the two groups of parallel lines; and constructing the perspective transformation matrix H_m of the plane, the matrix H_m being:

H_m = [ 1    0    0
        0    1    0
        l_1  l_2  l_3 ]    (2)
S3, during encoding, inputting the current coding block, the coding block comprising a group of pixel positions and the original pixel values at those positions, a prediction value being obtained in the reconstructed image area for each position in the coding block; setting an image-domain motion vector mv = (mvx, mvy) that records the difference between the current block position and the position from which the prediction is generated, mvx being the horizontal difference and mvy the vertical difference; applying the planar perspective transformation to mv yields its representation (d_x, d_y) in the correction space:

d_x = (x_0 + mvx) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - x_0 / (l_1·x_0 + l_2·y_0 + l_3)
d_y = (y_0 + mvy) / (l_1·(x_0 + mvx) + l_2·(y_0 + mvy) + l_3) - y_0 / (l_1·x_0 + l_2·y_0 + l_3)    (3)

where (x_0, y_0) is the position of the current coding block in the image;
S4, the prediction identified by mv is generated as follows: let a pixel in the current coding block be t_i = (x_i, y_i)^T; from the motion vector mv, its predicted pixel position s_i = (x_s, y_s)^T is calculated in the reconstructed image area. Their homogeneous coordinate forms are t̃_i = (x_i, y_i, 1)^T and s̃_i = (x_s, y_s, 1)^T. Let h_1, h_2, h_3 be the three row vectors of the matrix H_m in step S2. Then t̃_i and s̃_i are converted to their representations t'_i = (t'_x, t'_y)^T and s'_i = (s'_x, s'_y)^T in the correction space as follows:

t'_i = ( h_1·t̃_i / h_3·t̃_i , h_2·t̃_i / h_3·t̃_i )^T    (4)
s'_i = ( h_1·s̃_i / h_3·s̃_i , h_2·s̃_i / h_3·s̃_i )^T    (5)

With the motion vector (d_x, d_y) in the correction space obtained in step S3, s'_i is calculated by:

s'_i = t'_i + (d_x, d_y)^T    (6)

From equation (5), s̃_i is recovered from s'_i as s̃_i ∝ H_m^{-1}·(s'_x, s'_y, 1)^T; bringing equation (6) into this relation and using the H_m of equation (2) gives the coordinates of the pixel that produces the prediction:

x_s = l_3·s'_x / (1 - l_1·s'_x - l_2·s'_y)
y_s = l_3·s'_y / (1 - l_1·s'_x - l_2·s'_y)    (7)

where (s'_x, s'_y) = (t'_x + d_x, t'_y + d_y);
S5, with the given image-domain motion vector and the predicted pixel coordinates determined by S3 and S4, obtaining prediction values for all pixels in the block; determining the optimal image-domain motion vector from the candidate set of image-domain motion vectors according to the performance of the prediction generated for the coding block, and writing the optimal image-domain motion vector into the code stream;
S6, at the decoding end, inputting the code stream to be decoded, parsing l_m = (l_1, l_2, l_3)^T from it, and constructing the perspective transformation matrix H_m of the plane according to equation (2);
S7, at the decoding end, parsing the motion vector mv = (mvx, mvy) from the code stream, and calculating the motion vector (d_x, d_y) in the correction space according to equation (3);
S8, assuming that a pixel position in the current coding block is t_i, its predicted position s_i in the reconstructed image area is calculated by equation (7), s_i being the first two dimensions of the homogeneous coordinate s̃_i;
S9, if the target pixel position s_i obtained in step S8 is at a sub-pixel location, obtaining the pixel value at the sub-pixel position by interpolation.
2. The video coding and decoding method based on planar perspective transformation according to claim 1, characterized in that: in step S5, all pixel positions in the coding block are assumed to share the same motion vector, and the optimal motion vector mv is determined for the current coding block among all candidate motion vectors; the decision criteria comprise the sum of squared errors SSD, the mean squared error MSE, the mean absolute error MAD and the sum of absolute errors SAD between the original pixel values of the coding block and the prediction values calculated from the motion vector mv through step S4; letting the original pixel values of the current coding block form the matrix orig, the matrix of prediction values determined in step S4 for each pixel of orig be pred, the height of the coding block be h and the width be w:

the sum of squared errors is
SSD = Σ_{i=1..h} Σ_{j=1..w} (orig(i,j) - pred(i,j))^2

the mean squared error is
MSE = SSD / (h·w)

the sum of absolute errors is
SAD = Σ_{i=1..h} Σ_{j=1..w} |orig(i,j) - pred(i,j)|

the mean absolute error is
MAD = SAD / (h·w)
3. The video coding and decoding method based on planar perspective transformation according to claim 2, characterized in that: the interpolation mode is one of bilinear interpolation, cubic interpolation and bicubic interpolation.
CN201810247888.7A 2018-03-23 2018-03-23 Video coding and decoding method based on planar perspective transformation Active CN108449599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810247888.7A CN108449599B (en) 2018-03-23 2018-03-23 Video coding and decoding method based on planar perspective transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810247888.7A CN108449599B (en) 2018-03-23 2018-03-23 Video coding and decoding method based on planar perspective transformation

Publications (2)

Publication Number Publication Date
CN108449599A CN108449599A (en) 2018-08-24
CN108449599B true CN108449599B (en) 2021-05-18

Family

ID=63197019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810247888.7A Active CN108449599B (en) 2018-03-23 2018-03-23 Video coding and decoding method based on surface transmission transformation

Country Status (1)

Country Link
CN (1) CN108449599B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11570447B2 (en) 2018-12-28 2023-01-31 Hangzhou Hikvision Digital Technology Co., Ltd. Video coding and video decoding

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020244569A1 (en) * 2019-06-04 2020-12-10 Beijing Bytedance Network Technology Co., Ltd. Conditional implementation of motion candidate list construction process
CN110363724B (en) * 2019-07-22 2022-05-17 安徽大学 Non-local low-rank image denoising method based on in-plane perspective and regularity
CN114095727B (en) * 2021-11-17 2023-08-04 安徽大学 JPEG image coding optimization method based on evolution calculation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015519016A (en) * 2012-05-14 2015-07-06 ロッサト、ルカ Encoding and reconstruction of residual quantity data based on support information
US9769473B2 (en) * 2012-06-08 2017-09-19 Apple Inc. Predictive video coder with low power reference picture transformation
EP2952003B1 (en) * 2013-01-30 2019-07-17 Intel Corporation Content adaptive partitioning for prediction and coding for next generation video
CN108600749B (en) * 2015-08-29 2021-12-28 华为技术有限公司 Image prediction method and device
CN111556323B (en) * 2016-02-06 2022-05-13 华为技术有限公司 Image coding and decoding method and device


Also Published As

Publication number Publication date
CN108449599A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN108449599B (en) Video coding and decoding method based on planar perspective transformation
KR100977255B1 (en) Video encoding method, decoding method, device thereof, program thereof, and storage medium containing the program
CN112887716B (en) Encoding and decoding method, device and equipment
JP5266342B2 (en) Video intra prediction method and apparatus
WO2016050051A1 (en) Image prediction method and relevant device
CN108141606B (en) Method and system for global motion estimation and compensation
JP5098081B2 (en) Image processing method and image processing apparatus
JP6636615B2 (en) Motion vector field encoding method, decoding method, encoding device, and decoding device
CN107027025B (en) A kind of light field image compression method based on macro block of pixels adaptive prediction
CN110933426B (en) Decoding and encoding method and device thereof
WO2020133115A1 (en) Coding prediction method and apparatus, and computer storage medium
WO2020181428A1 (en) Prediction method, encoder, decoder, and computer storage medium
CN110832854B (en) Method and apparatus for intra prediction using interpolation
JP4786612B2 (en) Predicted motion vector generation apparatus for moving picture encoding apparatus
CN112887732B (en) Method and device for inter-frame and intra-frame joint prediction coding and decoding with configurable weight
CN110475116B (en) Motion vector deriving method and device and electronic equipment
JP6390275B2 (en) Encoding circuit and encoding method
JP2011091696A (en) Motion vector predicting method
CN113508595A (en) Motion vector refined search area
CN117560494B (en) Encoding method for rapidly enhancing underground low-quality video
KR20200134302A (en) Image processing apparatus and method
CN110062243B (en) Light field video motion estimation method based on neighbor optimization
Xiang et al. A high efficient error concealment scheme based on auto-regressive model for video coding
KR20130105402A (en) Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant