CN102547265B - Interframe prediction method and device - Google Patents

Interframe prediction method and device

Publication number: CN102547265B
Application number: CN201010610022.1A
Authority: CN (China)
Other versions: CN102547265A
Other languages: Chinese (zh)
Inventor: 舒倩
Assignee: Shenzhen Yunzhou Multimedia Technology Co., Ltd.
Legal status: Expired - Fee Related
Priority applications: CN201010610022.1A (CN102547265B); PCT/CN2011/076246 (WO2012088848A1)

Classifications

    • H04N19/527 — Global motion vector estimation
    • H04N19/105 — Selection of the reference unit for prediction within a chosen coding or prediction mode
    • H04N19/137 — Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/172 — the coding unit being a picture, frame or field
    • H04N19/503 — Predictive coding involving temporal prediction

Abstract

The invention provides an interframe prediction method comprising the following steps. Step 1: determine the relationship between a first reference frame and the current frame to be encoded. Step 2: if the lens zooms in, process the first reference frame to obtain a second reference frame and set the current reference frame to the second reference frame, then go to step 3; if the lens zooms out, process the first reference frame to obtain a fourth reference frame and set the current reference frame to the fourth reference frame, then go to step 3; otherwise, keep the first reference frame as the current reference frame and go to step 3. Step 3: perform interframe prediction on the current frame using the current reference frame.

Description

Inter-frame prediction method and device
Technical field
The present invention relates to the field of video coding, and in particular to an inter-frame prediction method and device.
Background technology
At present, video coding typically removes the spatial redundancy within an image with intra-frame prediction and removes temporal redundancy with inter-frame prediction. Because the temporal redundancy between frames of a video source is far greater than the spatial redundancy within a frame, inter-frame prediction is especially important in video coding.
By prediction direction, inter prediction is divided into P-frame prediction and B-frame prediction. Mainstream P-frame prediction uses an already-encoded earlier frame as the reference frame for the current frame and exploits the similarity between the two to compress the current frame's information. This works well when the reference frame and the current frame are highly similar, but as their similarity decreases, compression efficiency drops sharply. The problem is especially pronounced when encoding sources with low frame rates or global camera motion.
Summary of the invention
The embodiments of the present invention aim to provide an inter-frame prediction method that addresses a problem of the prior art: when the reference frame and the current frame are not very similar, especially when encoding sources with low frame rates or global camera motion, compression of the current frame suffers.
To this end, the invention provides an inter-frame prediction method comprising:
Step 1: determining the relationship between a first reference frame and the current frame to be encoded;
Step 2: if the lens zooms in, processing the first reference frame to obtain a second reference frame, and setting the current reference frame to the second reference frame; entering step 3;
if the lens zooms out, processing the first reference frame to obtain a fourth reference frame, and setting the current reference frame to the fourth reference frame; entering step 3;
otherwise, the current reference frame being the first reference frame, entering step 3;
Step 3: performing inter prediction on the current frame using the current reference frame.
The invention also provides an inter prediction device, comprising:
a judging unit, for determining the relationship between the first reference frame and the current frame to be encoded;
a zoom-in unit, for processing the first reference frame to obtain the second reference frame when the lens zooms in, and setting the current reference frame to the second reference frame;
a zoom-out unit, for processing the first reference frame to obtain the fourth reference frame when the lens zooms out, and setting the current reference frame to the fourth reference frame;
a prediction unit, for performing inter prediction on the current frame using the current reference frame.
The proposed method and device determine the relationship between the current reference frame and the current frame and up- or down-sample the reference frame according to whether the lens zooms in or out. This improves the similarity between the current reference frame and the frame being encoded, thereby optimizing its compression. At low frame rates, the shot change between consecutive frames is larger, so the compression gains of the proposed prediction method are even more pronounced.
Brief description of the drawings
Fig. 1 is a flow chart of the method of Embodiment 1;
Fig. 2 is a flow chart of the method of Embodiment 2;
Fig. 3 is a structural diagram of the device of Embodiment 3.
Embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments; for ease of explanation, only the parts relevant to the embodiments are shown. It should be understood that the specific embodiments described here are intended only to explain the invention, not to limit it.
The present invention proposes a new P-frame inter prediction method. By determining the relationship between the current reference frame and the current frame and up- or down-sampling according to whether the lens zooms in or out, the method improves the similarity between the current reference frame and the frame being encoded, thereby optimizing its compression. At low frame rates, the shot change between frames is larger, so the compression gains are even more pronounced.
Embodiment 1. Referring to Fig. 1, the method applies mainly to P-frame prediction and proceeds as follows:
Step 101: determine the relationship between the first reference frame and the current frame to be encoded, i.e. determine the relationship between the i-th reference frame ref_i and the current frame, and select the P-frame prediction method accordingly: if the lens zooms in, go to step 102; if the lens zooms out, go to step 103; if the lens neither zooms in nor out, the first reference frame is the current reference frame and go to step 104:
if (lens zooms in) go to step 102;
else if (lens zooms out) go to step 103;
else curr_ref_i = ref_i, go to step 104;
Here the first reference frame is the i-th reference frame ref_i, and curr_ref_i is the updated current reference frame.
Step 102: if the lens zooms in, process the first reference frame to obtain the second reference frame, and set the current reference frame to the second reference frame; go to step 104.
Specifically: up-sample ref_i to obtain a new reference frame, the second reference frame ref_i'; set curr_ref_i = ref_i'.
Step 103: if the lens zooms out, process the first reference frame to obtain the fourth reference frame, and set the current reference frame to the fourth reference frame; go to step 104.
Specifically: down-sample ref_i to obtain a new reference frame, the fourth reference frame ref_i'''; set curr_ref_i = ref_i'''.
Step 104: perform inter prediction on the current frame using the current reference frame.
By determining the relationship between the current reference frame and the current frame and up- or down-sampling according to whether the lens zooms in or out, the method improves the similarity between the current reference frame and the frame being encoded, thereby optimizing its compression. At low frame rates, the shot change between frames is larger, so the compression gains are even more pronounced.
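As a rough, non-normative sketch of the step 101-104 dispatch above, the following Python assumes the zoom decision has already been made (the patent does not specify how it is detected) and substitutes nearest-neighbor resampling with a hypothetical factor of 2 for the unspecified up-/down-sampling filters:

```python
def resample(frame, scale):
    """Nearest-neighbor resample of a list-of-lists image by `scale`.
    Stand-in for the patent's unspecified up/down-sampling filter."""
    rows, cols = len(frame), len(frame[0])
    new_rows, new_cols = int(rows * scale), int(cols * scale)
    return [[frame[min(int(r / scale), rows - 1)][min(int(c / scale), cols - 1)]
             for c in range(new_cols)]
            for r in range(new_rows)]

def select_reference(ref_i, zoom):
    """Steps 101-104 sketch: derive curr_ref_i from ref_i given the zoom decision.
    The factor 2 is an illustrative assumption, not taken from the patent."""
    if zoom == "in":        # step 102: up-sample -> second reference frame ref_i'
        return resample(ref_i, 2.0)
    elif zoom == "out":     # step 103: down-sample -> fourth reference frame ref_i'''
        return resample(ref_i, 0.5)
    return ref_i            # no zoom: keep the first reference frame

frame = [[1, 2], [3, 4]]
up = select_reference(frame, "in")     # 4x4 up-sampled reference
down = select_reference(frame, "out")  # 1x1 down-sampled reference
```

Step 104 (the actual motion search against curr_ref_i) is the encoder's ordinary inter prediction and is omitted here.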
Embodiment 2. Referring to Fig. 2, the method applies mainly to P-frame prediction and proceeds as follows:
Step 201: determine the relationship between the first reference frame and the current frame to be encoded: if the lens zooms in, go to step 202; if the lens zooms out, go to step 203; if the lens neither zooms in nor out, the first reference frame is the current reference frame and go to step 204:
if (lens zooms in) go to step 202;
else if (lens zooms out) go to step 203;
else curr_ref_i = ref_i, go to step 204;
Here the first reference frame is the i-th reference frame ref_i, and curr_ref_i is the updated current reference frame.
Step 202: if the lens zooms in, process the first reference frame to obtain the second reference frame, then process the second reference frame to obtain the third reference frame; set the current reference frame to the third reference frame.
Specifically:
Step 2021: up-sample the first reference frame ref_i to obtain a new reference frame, the second reference frame ref_i'.
Step 2022: delete boundary pixels of the second reference frame ref_i' to obtain the third reference frame ref_i'', so that the third reference frame ref_i'' has the same resolution as the first reference frame ref_i. (The second and third reference frames have different resolutions; the third reference frame is obtained from the second by boundary-pixel deletion.)
Boundary pixels of the second reference frame ref_i' are deleted as follows:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
where o_width and o_height are the numbers of columns and rows of ref_i; m_width' and m_height' are the numbers of columns and rows of ref_i'; m and n are the row and column indices of a reference-frame pixel; and
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2.
Step 2023: curr_ref_i = ref_i''.
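A minimal Python sketch of the boundary-pixel deletion of step 2022, with plain lists standing in for frame buffers: the up-sampled frame is cropped symmetrically back to the original o_height x o_width resolution using the d_height'/d_width' offsets defined above.

```python
def crop_center(ref_up, o_height, o_width):
    """Step 2022 sketch: delete boundary pixels of the up-sampled frame so the
    result has the original resolution, i.e.
    ref_i''(m, n) = ref_i'(m + d_height', n + d_width')."""
    m_height, m_width = len(ref_up), len(ref_up[0])
    d_height = (m_height - o_height) // 2
    d_width = (m_width - o_width) // 2
    return [[ref_up[m + d_height][n + d_width] for n in range(o_width)]
            for m in range(o_height)]

# 6x6 up-sampled frame cropped back to 4x4: the one-pixel outer border is dropped.
ref_up = [[r * 10 + c for c in range(6)] for r in range(6)]
ref3 = crop_center(ref_up, 4, 4)
```

Keeping the cropped frame at the original resolution is what lets the encoder reuse the existing reference buffer, as the description notes.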
Step 203: if the lens zooms out, process the first reference frame to obtain the fourth reference frame, then process the fourth reference frame to obtain the fifth reference frame; set the current reference frame to the fifth reference frame.
Specifically:
Step 2031: down-sample the first reference frame ref_i to obtain a new reference frame, the fourth reference frame ref_i'''.
Step 2032: fill and expand the boundary pixels of the fourth reference frame ref_i''' to obtain the fifth reference frame ref_i'''', so that the fifth reference frame ref_i'''' has the same resolution as the first reference frame ref_i.
Boundary pixels of the fourth reference frame ref_i''' are filled as follows:
Column padding:
ref_i''''(m, n) = ref_i'''(m, 0),            for 0 <= n < d_width'''
ref_i''''(m, n) = ref_i'''(m, n),            for d_width''' <= n < o_width - d_width'''
ref_i''''(m, n) = ref_i'''(m, o_width - 1),  for o_width - d_width''' <= n < o_width
Row padding:
ref_i''''(m, n) = ref_i'''(0, n),             for 0 <= m < d_height'''
ref_i''''(m, n) = ref_i'''(m, n),             for d_height''' <= m < o_height - d_height'''
ref_i''''(m, n) = ref_i'''(o_height - 1, n),  for o_height - d_height''' <= m < o_height
where o_width and o_height are the numbers of columns and rows of ref_i; m_width''' and m_height''' are the numbers of columns and rows of ref_i'''; m and n are the row and column indices of a reference-frame pixel; and
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2.
Step 2033: curr_ref_i = ref_i''''.
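The column/row padding of step 2032 amounts to centering the down-sampled frame and replicating its edge pixels outward. The Python sketch below implements that reading, with one normalization: the formulas as printed index the smaller frame at full-frame coordinates, while the code shifts the interior indices by d_height'''/d_width''' so they stay in bounds.

```python
def pad_replicate(ref_dn, o_height, o_width):
    """Step 2032 sketch: expand the down-sampled frame to the original
    resolution by centering it and replicating its boundary pixels
    (edge padding), following the column-fill/row-fill scheme above."""
    m_height, m_width = len(ref_dn), len(ref_dn[0])
    d_height = (o_height - m_height) // 2
    d_width = (o_width - m_width) // 2
    out = []
    for m in range(o_height):
        # Row padding: rows above/below the centered frame replicate its
        # first/last row (clamp the source row index).
        src_r = min(max(m - d_height, 0), m_height - 1)
        row = []
        for n in range(o_width):
            # Column padding: columns left/right of the centered frame
            # replicate its first/last column (clamp the source column).
            src_c = min(max(n - d_width, 0), m_width - 1)
            row.append(ref_dn[src_r][src_c])
        out.append(row)
    return out

ref_dn = [[1, 2], [3, 4]]
ref5 = pad_replicate(ref_dn, 4, 4)  # 4x4; each corner region replicates 1, 2, 3 or 4
```

As with the crop in step 2022, the padded frame matches the first reference frame's resolution, so no reference buffer needs to be reallocated.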
Step 204: perform inter prediction on the current frame using the current reference frame.
By determining the relationship between the current reference frame and the current frame, up- or down-sampling according to the zoom direction, and additionally deleting pixels from the second reference frame or expanding the fourth, the method gives the current reference frame the same resolution as the first reference frame, avoiding memory reallocation and easing code compatibility. This further improves the similarity between the current reference frame and the frame being encoded, thereby optimizing its compression. At low frame rates, the shot change between frames is larger, so the compression gains are even more pronounced.
Embodiment 3. Corresponding to Embodiment 1, the invention also provides a P-frame inter prediction device. Referring to Fig. 3, the device comprises:
a judging unit 301, for determining the relationship between the first reference frame and the current frame to be encoded;
a zoom-in unit 302, for processing the first reference frame to obtain the second reference frame when the lens zooms in, and setting the current reference frame to the second reference frame;
a zoom-out unit 303, for processing the first reference frame to obtain the fourth reference frame when the lens zooms out, and setting the current reference frame to the fourth reference frame;
a prediction unit 304, for performing inter prediction on the current frame using the current reference frame.
The zoom-in unit processes the first reference frame to obtain the second reference frame by up-sampling the first reference frame.
Corresponding to Embodiment 2, the zoom-in unit is further configured to, after obtaining the second reference frame, process the second reference frame to obtain the third reference frame, with the current reference frame set to the third reference frame.
The zoom-in unit processes the second reference frame to obtain the third reference frame as follows:
boundary pixels of the second reference frame are deleted to obtain the third reference frame, so that the third reference frame has the same resolution as the first reference frame:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
where ref_i' is the second reference frame, ref_i'' is the third reference frame, o_width and o_height are the numbers of columns and rows of the first reference frame ref_i, m_width' and m_height' are the numbers of columns and rows of ref_i', and m and n are the row and column indices of a reference-frame pixel; and
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2.
The zoom-out unit processes the first reference frame to obtain the fourth reference frame by down-sampling the first reference frame.
Corresponding to Embodiment 2, the zoom-out unit is further configured to, after obtaining the fourth reference frame, process the fourth reference frame to obtain the fifth reference frame, with the current reference frame set to the fifth reference frame.
The zoom-out unit processes the fourth reference frame to obtain the fifth reference frame as follows: boundary pixels of the fourth reference frame are filled and expanded to obtain the fifth reference frame, so that the fifth reference frame has the same resolution as the first reference frame.
Boundary pixels of the fourth reference frame are filled as follows:
Column padding:
ref_i''''(m, n) = ref_i'''(m, 0),            for 0 <= n < d_width'''
ref_i''''(m, n) = ref_i'''(m, n),            for d_width''' <= n < o_width - d_width'''
ref_i''''(m, n) = ref_i'''(m, o_width - 1),  for o_width - d_width''' <= n < o_width
Row padding:
ref_i''''(m, n) = ref_i'''(0, n),             for 0 <= m < d_height'''
ref_i''''(m, n) = ref_i'''(m, n),             for d_height''' <= m < o_height - d_height'''
ref_i''''(m, n) = ref_i'''(o_height - 1, n),  for o_height - d_height''' <= m < o_height
where ref_i''' is the fourth reference frame, o_width and o_height are the numbers of columns and rows of the first reference frame ref_i, m_width''' and m_height''' are the numbers of columns and rows of the fourth reference frame ref_i''', and m and n are the row and column indices of a reference-frame pixel; and
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2.
Those of ordinary skill in the art will appreciate that all or part of the steps in the above embodiments can be carried out by hardware under program control; the program may be stored in a computer-readable storage medium such as ROM, RAM, a magnetic disk, or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (14)

1. An inter-frame prediction method, characterized in that the method comprises:
Step 1: determining the relationship between a first reference frame and the current frame to be encoded;
Step 2: if the lens zooms in, processing the first reference frame to obtain a second reference frame, and setting the current reference frame to the second reference frame; entering step 3;
if the lens zooms out, processing the first reference frame to obtain a fourth reference frame, and setting the current reference frame to the fourth reference frame; entering step 3;
otherwise, the current reference frame being the first reference frame, entering step 3;
Step 3: performing inter prediction on the current frame using the current reference frame;
wherein, if the lens zooms in, processing the first reference frame to obtain the second reference frame specifically comprises:
up-sampling the first reference frame to obtain the second reference frame.
2. The inter-frame prediction method according to claim 1, characterized in that, after the second reference frame is obtained, the second reference frame is further processed to obtain a third reference frame;
accordingly, the current reference frame is set to the third reference frame and step 3 is entered.
3. The inter-frame prediction method according to claim 2, characterized in that further processing the second reference frame to obtain the third reference frame specifically comprises:
deleting boundary pixels of the second reference frame to obtain the third reference frame, so that the third reference frame has the same resolution as the first reference frame.
4. The inter-frame prediction method according to claim 3, characterized in that the boundary pixels of the second reference frame are deleted as follows:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
wherein ref_i' is the second reference frame, ref_i'' is the third reference frame, o_width and o_height are the numbers of columns and rows of the first reference frame ref_i, m_width' and m_height' are the numbers of columns and rows of ref_i', and m and n are the row and column indices of a reference-frame pixel; and
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2.
5. The inter-frame prediction method according to claim 1, characterized in that processing the first reference frame to obtain the fourth reference frame specifically comprises:
down-sampling the first reference frame to obtain the fourth reference frame.
6. The inter-frame prediction method according to claim 1, characterized in that, after the fourth reference frame is obtained, the fourth reference frame is further processed to obtain a fifth reference frame;
accordingly, the current reference frame is set to the fifth reference frame and step 3 is entered.
7. The inter-frame prediction method according to claim 6, characterized in that processing the fourth reference frame to obtain the fifth reference frame specifically comprises:
filling and expanding the boundary pixels of the fourth reference frame to obtain the fifth reference frame, so that the fifth reference frame has the same resolution as the first reference frame.
8. The inter-frame prediction method according to claim 7, characterized in that the boundary pixels of the fourth reference frame are filled and expanded as follows:
Column padding:
ref_i''''(m, n) = ref_i'''(m, 0),            for 0 <= n < d_width'''
ref_i''''(m, n) = ref_i'''(m, n),            for d_width''' <= n < o_width - d_width'''
ref_i''''(m, n) = ref_i'''(m, o_width - 1),  for o_width - d_width''' <= n < o_width
Row padding:
ref_i''''(m, n) = ref_i'''(0, n),             for 0 <= m < d_height'''
ref_i''''(m, n) = ref_i'''(m, n),             for d_height''' <= m < o_height - d_height'''
ref_i''''(m, n) = ref_i'''(o_height - 1, n),  for o_height - d_height''' <= m < o_height
wherein ref_i''' is the fourth reference frame, o_width and o_height are the numbers of columns and rows of the first reference frame ref_i, m_width''' and m_height''' are the numbers of columns and rows of the fourth reference frame ref_i''', and m and n are the row and column indices of a reference-frame pixel; and
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2.
9. An inter prediction device, characterized in that the device comprises:
a judging unit, for determining the relationship between the first reference frame and the current frame to be encoded;
a zoom-in unit, for processing the first reference frame to obtain the second reference frame when the lens zooms in, and setting the current reference frame to the second reference frame;
a zoom-out unit, for processing the first reference frame to obtain the fourth reference frame when the lens zooms out, and setting the current reference frame to the fourth reference frame;
a prediction unit, for performing inter prediction on the current frame using the current reference frame;
wherein the zoom-in unit processes the first reference frame to obtain the second reference frame by up-sampling the first reference frame.
10. The inter prediction device according to claim 9, characterized in that the zoom-in unit is further configured to, after obtaining the second reference frame, process the second reference frame to obtain a third reference frame, the current reference frame being set to the third reference frame.
11. The inter prediction device according to claim 10, characterized in that the zoom-in unit processes the second reference frame to obtain the third reference frame as follows:
boundary pixels of the second reference frame are deleted to obtain the third reference frame, so that the third reference frame has the same resolution as the first reference frame;
wherein the boundary pixels of the second reference frame are deleted as follows:
ref_i''(m, n) = ref_i'(m + d_height', n + d_width')
wherein ref_i' is the second reference frame, ref_i'' is the third reference frame, o_width and o_height are the numbers of columns and rows of the first reference frame ref_i, m_width' and m_height' are the numbers of columns and rows of ref_i', and m and n are the row and column indices of a reference-frame pixel; and
d_width' = (m_width' - o_width)/2,
d_height' = (m_height' - o_height)/2.
12. inter prediction devices according to claim 9, is characterized in that, described camera lens extension unit, for the first reference frame is processed, obtains the 4th reference frame and is specially: the first reference frame is carried out to down-sampling, obtain the 4th reference frame.
13. inter prediction devices according to claim 9, is characterized in that, described camera lens extension unit is further used for obtaining, after the 4th described reference frame, further the 4th reference frame being processed, and obtains the 5th reference frame; Accordingly, current reference frame is set to the 5th reference frame.
14. The inter-frame prediction device according to claim 13, wherein the zoom-out unit processes the fourth reference frame to obtain the fifth reference frame specifically by: expanding the fourth reference frame by boundary-pixel padding to obtain the fifth reference frame, such that the fifth reference frame has the same resolution as the first reference frame;
wherein the boundary-pixel padding expansion of the fourth reference frame is specifically:
Column filling:
ref_i''''(m, n) = ref_i'''(m, 0),             if 0 ≤ n < d_width'''
ref_i''''(m, n) = ref_i'''(m, n),             if d_width''' ≤ n < o_width - d_width'''
ref_i''''(m, n) = ref_i'''(m, o_width - 1),   if o_width - d_width''' ≤ n < o_width
Row filling:
ref_i''''(m, n) = ref_i'''(0, n),             if 0 ≤ m < d_height'''
ref_i''''(m, n) = ref_i'''(m, n),             if d_height''' ≤ m < o_height - d_height'''
ref_i''''(m, n) = ref_i'''(o_height - 1, n),  if o_height - d_height''' ≤ m < o_height
where ref_i''' is the fourth reference frame, ref_i'''' is the fifth reference frame, o_width and o_height are the numbers of columns and rows of the current reference frame ref_i, m_width''' and m_height''' are the numbers of columns and rows of the fourth reference frame ref_i''', and m and n are the row and column indices of a reference-frame pixel;
d_width''' = (o_width - m_width''')/2,
d_height''' = (o_height - m_height''')/2.
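As an illustrative sketch only (not the patent's implementation), the column-filling and row-filling rules of claim 14 correspond to edge-replication padding: the outermost pixels of the downsampled frame are repeated until the original resolution is reached. The function name and the list-of-rows frame representation are assumptions for illustration:

```python
def pad_boundary_pixels(ref4, o_height, o_width):
    """Expand the fourth reference frame ref4 (a list of pixel rows) to
    o_height x o_width by replicating boundary pixels, per the claim's
    column-filling and row-filling rules."""
    m_height = len(ref4)       # m_height''' in the claim
    m_width = len(ref4[0])     # m_width''' in the claim
    d_height = (o_height - m_height) // 2
    d_width = (o_width - m_width) // 2
    # Column filling: replicate the first/last pixel of each row.
    rows = [[row[0]] * d_width + list(row)
            + [row[-1]] * (o_width - m_width - d_width)
            for row in ref4]
    # Row filling: replicate the first/last (already widened) rows.
    top = [list(rows[0]) for _ in range(d_height)]
    bottom = [list(rows[-1]) for _ in range(o_height - m_height - d_height)]
    return top + rows + bottom
```

For example, expanding a 3x4 frame to 5x6 adds one replicated column on each side and one replicated row on top and bottom, matching d_width''' = (6 - 4)/2 = 1 and d_height''' = (5 - 3)/2 = 1.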
CN201010610022.1A 2010-12-28 2010-12-28 Interframe prediction method and device Expired - Fee Related CN102547265B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201010610022.1A CN102547265B (en) 2010-12-28 2010-12-28 Interframe prediction method and device
PCT/CN2011/076246 WO2012088848A1 (en) 2010-12-28 2011-06-24 Interframe prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010610022.1A CN102547265B (en) 2010-12-28 2010-12-28 Interframe prediction method and device

Publications (2)

Publication Number Publication Date
CN102547265A CN102547265A (en) 2012-07-04
CN102547265B true CN102547265B (en) 2014-09-03

Family

ID=46353072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010610022.1A Expired - Fee Related CN102547265B (en) 2010-12-28 2010-12-28 Interframe prediction method and device

Country Status (2)

Country Link
CN (1) CN102547265B (en)
WO (1) WO2012088848A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510726B (en) * 2019-01-30 2023-01-24 杭州海康威视数字技术股份有限公司 Coding and decoding method and equipment thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288337A * 1999-09-10 2001-03-21 NTT DoCoMo, Inc. Method and device for automatic data conversion coding of video image data
CN101252692A * 2008-03-07 2008-08-27 炬力集成电路设计有限公司 Apparatus and method for predicting between frames and video encoding and decoding equipment
CN101578879A (en) * 2006-11-07 2009-11-11 三星电子株式会社 Method and apparatus for video interprediction encoding/decoding

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783949B (en) * 2010-02-22 2012-08-01 深圳市融创天下科技股份有限公司 Mode selection method of skip block

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288337A * 1999-09-10 2001-03-21 NTT DoCoMo, Inc. Method and device for automatic data conversion coding of video image data
CN101578879A (en) * 2006-11-07 2009-11-11 三星电子株式会社 Method and apparatus for video interprediction encoding/decoding
CN101252692A * 2008-03-07 2008-08-27 炬力集成电路设计有限公司 Apparatus and method for predicting between frames and video encoding and decoding equipment

Also Published As

Publication number Publication date
WO2012088848A1 (en) 2012-07-05
CN102547265A (en) 2012-07-04

Similar Documents

Publication Publication Date Title
KR101199498B1 (en) Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
CN102282851A (en) Image processing device, decoding method, intra-frame decoder, intra-frame decoding method, and intra-frame encoder
US11831919B2 (en) Encoding device and encoding method
US11902505B2 (en) Video decoding device and video decoding method
CN102611885B (en) Encoding and decoding method and device
CN103109535A (en) Image reproduction method, image reproduction device, image reproduction program, imaging system, and reproduction system
CN102547265B (en) Interframe prediction method and device
CN102572419B (en) Interframe predicting method and device
CN103327340B (en) A kind of integer searches method and device
CN103108183A (en) Skip mode and Direct mode motion vector predicting method in three-dimension video
TWI540883B (en) Dynamic image predictive decoding method, dynamic image predictive decoding device, dynamic image predictive decoding program, dynamic image predictive coding method, dynamic image predictive coding device and dynamic image predictive coding program
KR101602871B1 (en) Method and apparatus for data encoding, method and apparatus for data decoding
KR101357755B1 (en) Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
KR101313223B1 (en) Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
CN102595125B (en) A kind of bi-directional predicted method of P frame and device
KR20160022726A (en) Apparatus and method for encoding
WO2012120910A1 (en) Moving image coding device and moving image coding method
KR101313224B1 (en) Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
CN103024327B (en) Video recording method and video recording device
KR101343576B1 (en) Apparatus for encoding or generation of multi-view video by using a camera parameter, and a method thereof, and a recording medium having a program to implement thereof
TWI554114B (en) Method of seamless recording for continuous video string file
CN107820086A (en) Semiconductor device, mobile image processing system, the method for controlling semiconductor device
CN103986937A (en) H.264 interframe encoding storage management method for high-resolution video
CN103379348A (en) Viewpoint synthetic method, device and encoder during depth information encoding
CN103716633A (en) Scaled code stream processing method, scaled code stream processing device and scaled code stream encoder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHENZHEN TEMOBI SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 20140801

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20140801

Address after: Unit B4 9 building 518057 Guangdong city of Shenzhen province Nanshan District high in the four EVOC Technology Building No. 31

Applicant after: Shenzhen Yunzhou Multimedia Technology Co., Ltd.

Address before: 19, building 18, Changhong technology building, 518057 South twelve Road, South tech Zone, Nanshan District hi tech Zone, Guangdong, Shenzhen

Applicant before: Shenzhen Temobi Science & Tech Development Co.,Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: The central Shenzhen city of Guangdong Province, 518057 Keyuan Road, Nanshan District science and Technology Park No. 15 Science Park Sinovac A Building 1 unit 403, No. 405 unit

Patentee after: Shenzhen Yunzhou Multimedia Technology Co., Ltd.

Address before: Unit B4 9 building 518057 Guangdong city of Shenzhen province Nanshan District high in the four EVOC Technology Building No. 31

Patentee before: Shenzhen Yunzhou Multimedia Technology Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140903

Termination date: 20191228