CN107231559A - Storage method for decoded video data - Google Patents

Storage method for decoded video data

Info

Publication number
CN107231559A
Authority
CN
China
Prior art keywords
frame
reference frame
data
information
motion vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710403597.8A
Other languages
Chinese (zh)
Other versions
CN107231559B (en)
Inventor
邢春悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Wisdom Electronic Technology Co Ltd
Original Assignee
Zhuhai Wisdom Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Wisdom Electronic Technology Co Ltd
Priority to CN201710403597.8A
Publication of CN107231559A
Application granted
Publication of CN107231559B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • H04N19/433Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a storage method for decoded video data. The method stores the forward and backward reference frame lists in static memory SRAM in table form, each unit of the table storing the index of the corresponding current reference frame in the decode buffer and top/bottom field information. The data information of all frames in the decode buffer is also stored in static memory SRAM; the data information of each frame in the decode buffer includes the POC of the frame, the picture structure of the frame, whether the frame is used for reference, and the base addresses in dynamic memory DDR at which the luma data, the chroma data and the motion vector data of the frame are stored. In addition, the motion vector data and related information of each macroblock are stored in a fixed amount of space per macroblock, a layout that does not change with the macroblock partition type.

Description

Storage method for decoded video data
Technical field
The present invention relates to the field of video algorithm processing, and in particular to a method for storing reference frame list data and for storing motion vector data and related information in the video decoding process, and to a method for deriving motion vector data based on this storage method.
Background technology
With people's ever-increasing demands on video image quality, resolution keeps growing, from the traditional D1 and SD to today's HD, UHD and 4K/8K, and frame rates have risen from the traditional 24 fps, 25 fps and 30 fps to today's 60 fps and 120 fps. This is a stern challenge for video decoding performance.
At present, the video standard that is popular and widely used on networks is essentially H.264. To greatly increase the compression ratio and reduce the bit rate, the H.264 standard algorithmically adopts many complex techniques. First, in terms of picture structure, besides the traditional frame structure it also uses the field structure and the frame/field adaptive MBAFF structure: with the field structure the top field and bottom field are coded and decoded separately, while the MBAFF structure codes and decodes in macroblock pairs, overturning the previous row-by-row macroblock coding structure and requiring the hardware design to be re-planned. Second, the H.264 standard uses three slice types, namely I slice, P slice and B slice. An I slice can only be decoded with intra prediction, whereas P slices and B slices may contain both intra-predicted and inter-predicted modes. Intra prediction mainly removes spatial redundancy, predicting the current macroblock from neighbouring macroblocks. Inter prediction mainly removes temporal redundancy, performing block matching against earlier and later frames along the time axis. For effective block matching with preceding and following frames, the H.264 standard uses two reference frame lists, forward and backward, holding at most 16 frames each (32 in total), and these two lists can be updated dynamically during decoding. Because there are far more reference frames than in earlier standards, this brings great difficulty to the hardware design. Finally, regarding macroblock types, to make block matching more accurate the H.264 standard adopts smaller partition types, including 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4. Since the H.264 algorithm needs information such as the motion vector of the co-located block when decoding direct and skip macroblocks of B frames, the motion vectors and related information of every macroblock of the currently decoded frame must be stored in dynamic memory DDR. Because there are so many macroblock partition types, how to store this data so that it can subsequently be read back efficiently is a problem the hardware design must consider.
Because the H.264 algorithm itself is very complex and must decode video streams of very high resolution and high frame rate, how to simplify the hardware design and increase the hardware decoding speed is, for a hardware decoder, a problem in urgent need of a solution.
The content of the invention
The main object of the present invention is to provide a method for storing and accessing reference frame list data.
A further object of the present invention is to provide a method for storing and accessing motion vector data and related information.
To achieve the main object above, the method for storing and accessing reference frame list data provided by the present invention writes the forward reference frame list List0[n] and the backward reference frame list List1[n] into static memory SRAM in a prescribed format. Each unit in the reference frame list table stores the index of the corresponding reference frame in the decode buffer and a flag identifying top or bottom field. The data information of all frames in the decode buffer is also stored in static memory SRAM. The data information of each frame in the decode buffer includes: the POC of the frame or frame/field adaptive MBAFF picture, or, in the case of a field, the POCs of the top and bottom fields; the picture structure of the frame (frame, field, or frame/field adaptive MBAFF); whether the frame or the top/bottom field is a long-term reference frame, a short-term reference frame, or not used for reference; and the base addresses in dynamic memory DDR at which the luma data, the chroma data, and the motion vector data of the frame or top field are stored. The base address of the bottom field's motion vector data is obtained by adding a fixed offset to the top field's base address.
With this scheme, the reference frame index obtained by entropy decoding is used to look up the reference frame list stored in static memory SRAM to obtain the index of the reference frame in the decode buffer; this index is then used to access the decode buffer to obtain the reference frame's POC, picture structure, reference usage, and the base addresses in dynamic memory DDR of its luma, chroma and motion vector data. Because the reference frame lists and the decode buffer information are stored in static memory SRAM, hardware access is very fast, and this two-level access greatly simplifies the hardware design and increases hardware decoding speed. A sketch of this two-level access follows.
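The two-level access described above can be modelled in C as a minimal sketch. The structure and function names (RefListEntry, DpbEntry, lookup_reference_frame) and the exact struct fields are illustrative assumptions of a software model; the patent fixes which information is kept in SRAM, not an API.

```c
#include <stdint.h>

/* One 8-bit entry of List0/List1 as stored in SRAM (see Fig. 2):
 * bit 7 = bottom-field flag (valid for field pictures), bits 4:0 = DPB_Idx. */
typedef struct {
    uint8_t bottom_field; /* 0 = top field, 1 = bottom field */
    uint8_t dpb_idx;      /* index of this reference in the decode buffer */
} RefListEntry;

/* Per-frame information kept in SRAM for every decode-buffer entry (see Fig. 3). */
typedef struct {
    int32_t  poc_top, poc_bottom;              /* POC of frame/top field and of bottom field */
    uint8_t  pic_struct;                       /* 0 = frame, 1 = field, 2 = MBAFF */
    uint8_t  ref_usage_top, ref_usage_bottom;  /* not used / short-term / long-term */
    uint32_t luma_base_ddr;                    /* base address of luma samples in DDR */
    uint32_t chroma_base_ddr;                  /* base address of chroma samples in DDR */
    uint32_t mv_base_ddr;                      /* base address of motion-vector data in DDR */
} DpbEntry;

/* Two-level access: entropy-decoded ref_idx -> list entry -> DPB entry. */
static const DpbEntry *lookup_reference_frame(const RefListEntry list[],
                                              const DpbEntry dpb[],
                                              int ref_idx,
                                              uint8_t *bottom_field_out)
{
    const RefListEntry *e = &list[ref_idx];   /* first level: reference list in SRAM */
    *bottom_field_out = e->bottom_field;
    return &dpb[e->dpb_idx];                  /* second level: decode-buffer info in SRAM */
}
```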
To achieve the other object above, the method for storing and accessing motion vector data and related information provided by the present invention allocates a fixed amount of storage space to each macroblock according to the syntax element direct_8x8_inference_flag, a layout that does not change with the macroblock partition type. The information stored for each macroblock is: an MB info word containing whether the current macroblock is intra, whether it is a field macroblock, and whether the reference frame index refIdxLx of each of Block0 to Block15 is 0; plus, for each 4x4 block, its motion vector data and the index of the current block's reference frame in the decode buffer.
With this scheme, when decoding direct and skip macroblocks of B frames, the decoder must access the motion vector of the co-located block at the corresponding position in the backward reference frame, whether it is an intra block, whether its reference frame index refIdxLx is 0, and the index of its reference frame in the decode buffer. The index of the backward reference frame List1[0] in the decode buffer is first obtained from the reference frame list stored in static memory SRAM; this index is then used to access the decode buffer to obtain the base address in dynamic memory DDR of the reference frame's motion vector data; the co-located block at the position corresponding to the current block is then located in the backward reference frame List1[0], and its motion vector, intra flag, refIdxLx-is-zero flag and decode buffer index can be read. With this storage scheme, all the needed information is stored together and can be fetched by the hardware in a single access, which greatly increases hardware decoding speed. Because motion vector data is stored in a fixed space per macroblock according to direct_8x8_inference_flag, the hardware can read the related information of the co-located block at the corresponding position very conveniently, without traversing the partition information of the whole macroblock, which greatly simplifies the hardware design.
Brief description of the drawings
Fig. 1 shows the structure in which the present invention stores the forward and backward reference frame lists.
Fig. 2 shows the structure of each unit of the stored forward and backward reference frame lists.
Fig. 3 shows the structure of the stored decode buffer data information.
Fig. 4 is a schematic diagram of the storage order of the 16 4x4 blocks when direct_8x8_inference_flag is 1.
Fig. 5 shows the structure of the stored motion vector and related information data when direct_8x8_inference_flag is 1.
Fig. 6 is a schematic diagram of the storage order of the 16 4x4 blocks when direct_8x8_inference_flag is 0.
Fig. 7 shows the structure of the stored motion vector and related information data when direct_8x8_inference_flag is 0.
Fig. 8 is a schematic diagram of motion vector prediction from neighbouring macroblocks.
Fig. 9 is a schematic diagram of luma interpolation.
Fig. 10 is a schematic diagram of chroma interpolation.
Fig. 11 is a schematic diagram of finding, in the backward reference frame List1[0], the co-located block at the position corresponding to the current block.
Fig. 12 is a schematic diagram of the temporal prediction mode used when decoding b_direct or b_skip macroblocks.
The present invention is described further below with reference to the drawings and embodiments.
Embodiment
In the H.264 decoding process, when the current slice being decoded is a P slice, the forward reference frame list List0[n] is needed, and when decoding a B slice both the forward reference frame list List0[n] and the backward reference frame list List1[n] are needed. When the picture structure being decoded is a frame or frame/field adaptive (MBAFF), there are at most 16 reference frames, i.e. n takes the values 0 to 15; when the picture structure being decoded is a field, there are at most 32 reference fields, i.e. n takes the values 0 to 31.
When decoding the slice header, the corresponding syntax elements indicate the reference frame reordering information. Software reorders the forward and backward reference frame lists List0[n] and List1[n] according to this information. When this operation is complete, software writes the reordered List0[n] and List1[n] into static memory SRAM in the arrangement of Fig. 1. If the current picture structure is a frame or frame/field adaptive (MBAFF), forward reference list entries List0[0] to List0[15] are written to offset addresses 0x0 to 0x3, offsets 0x4 to 0x7 are left empty, backward reference list entries List1[0] to List1[15] are written to offset addresses 0x8 to 0xb, and offsets 0xc to 0xf are left empty. If the current picture structure is a field, List0[0] to List0[31] are written to offset addresses 0x0 to 0x7, and List1[0] to List1[31] are written to offset addresses 0x8 to 0xf. This design greatly facilitates the reading of reference frame data by the H.264 hardware decoder; because the data is stored in static memory, access is very fast, which greatly speeds up hardware decoding. A sketch of this write path follows.
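A minimal sketch of the write path is shown below. The packing of four 8-bit entries into each 32-bit SRAM word is inferred from the 16-entry/4-word layout described above and, like the names sram and write_list_entry, is an assumption of this sketch rather than something stated in the text.

```c
#include <stdint.h>

/* 32-bit SRAM words at offsets 0x0..0xf; assumed: four 8-bit list entries per word. */
static uint32_t sram[16];

/* Write one 8-bit list entry (Fig. 2 format) into the layout of Fig. 1.
 * is_list1: 0 = List0, 1 = List1; n: entry index; field_pic: current picture is a field. */
static void write_list_entry(int is_list1, int n, int field_pic, uint8_t entry)
{
    /* List0 starts at offset 0x0, List1 at offset 0x8; frame/MBAFF pictures use
     * 16 entries (offsets 0x0-0x3 / 0x8-0xb), field pictures use 32 (0x0-0x7 / 0x8-0xf). */
    int base = is_list1 ? 0x8 : 0x0;
    int max_entries = field_pic ? 32 : 16;
    if (n < 0 || n >= max_entries)
        return;
    int word  = base + n / 4;        /* assumed: four entries packed per 32-bit word */
    int shift = (n % 4) * 8;
    sram[word] = (sram[word] & ~(0xFFu << shift)) | ((uint32_t)entry << shift);
}
```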
Each unit in the forward and backward reference frame lists List0[n] and List1[n] occupies 8 bits, with the data stored as shown in Fig. 2. The most significant bit, bit 7, is valid only when the current picture structure is a field: 0 means the current reference List0[x] or List1[x] is a top field, 1 means it is a bottom field. The low 5 bits, DPB_Idx, are the index of the current reference List0[x] or List1[x] in the decoded picture buffer. A given DPB_Idx stays bound to the same frame for a period of time, i.e. it remains bound to a particular reference frame until that frame's data is flushed out for display and removed from the decode buffer. Thus, although the reference lists are re-sorted after every decoded frame or field and the reference index ref_Idx may change, the DPB_Idx associated with each reference frame does not change; only the contents of the corresponding units of the List0[n] and List1[n] tables need to be updated. For example, when decoding frame i, if List0[0] corresponds to frame m of the decode buffer, the DPB_Idx of frame m is written into bits 4:0 and the top/bottom field information into bit 7; when decoding frame i+1, List0[0] may, after reordering, correspond to frame n of the decode buffer, in which case the DPB_Idx of frame n is written into bits 4:0 and the field information into bit 7. A packing sketch for one unit follows.
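Packing and unpacking one 8-bit list unit then reduces to a few shifts and masks, following the bit positions given above; the helper names are illustrative.

```c
#include <stdint.h>

/* Pack a list unit: bit 7 = bottom-field flag, bits 4:0 = DPB_Idx (Fig. 2). */
static inline uint8_t pack_ref_list_unit(int is_bottom_field, unsigned dpb_idx)
{
    return (uint8_t)(((is_bottom_field ? 1u : 0u) << 7) | (dpb_idx & 0x1Fu));
}

static inline unsigned unit_dpb_idx(uint8_t unit)         { return unit & 0x1Fu; }
static inline int      unit_is_bottom_field(uint8_t unit) { return (unit >> 7) & 1u; }
```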
When decoding the current frame, field, or frame/field adaptive MBAFF picture, the related data information of the reference frames is needed. For each reference frame, the following four categories of information are used:
1) the POC of the reference frame;
2) the attributes of the reference frame, including its picture structure and whether it is a long-term or short-term reference frame;
3) the base addresses in dynamic memory DDR at which the luma and chroma data of the reference frame are stored;
4) the base address in dynamic memory DDR at which the motion vector data and related information of the reference frame are stored.
For convenience of hardware design, in the present invention the four categories of information of all reference frames are packed and stored together in one block of static memory SRAM, with the storage format shown in Fig. 3.
In the H.264 decoding process there are at most 16 reference frames plus the current output frame being decoded, so there are 17 decode buffers (DPBs). Each decode buffer DPB occupies 5 words. Offset address 0 stores the POC information: if the current picture structure is a frame or frame/field adaptive MBAFF, only bits 31:16 are used, storing the POC of the current frame or MBAFF picture; if the current picture structure is a field, bits 31:16 store the POC of the top field and bits 15:0 store the POC of the bottom field. Offset address 1 stores the attribute information of the reference frame: bits 1:0 indicate whether the top field is used for short-term reference, long-term reference, or not used for reference; bits 3:2 indicate the same for the bottom field; bits 5:4 indicate the picture structure of the reference frame (frame, field, or frame/field adaptive MBAFF). Offset address 2 stores the base address in dynamic memory DDR of the reference frame's luma data. Offset address 3 stores the base address in dynamic memory DDR of the reference frame's chroma data. Offset address 4 stores the base address in dynamic memory DDR of the reference frame's motion vector and related information data; if the picture structure of the reference frame is a field, the base address of the bottom field's motion vector and related information is obtained by adding a fixed offset to the top field's base address. A sketch of this layout follows.
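Under the same assumptions as the earlier sketches, the five-word DPB entry of Fig. 3 can be modelled with a few accessors. The bit positions follow the text; treating the 16-bit POC fields as signed, and the helper names themselves, are assumptions.

```c
#include <stdint.h>

#define DPB_ENTRIES 17   /* 16 reference frames + the frame currently being decoded */
#define DPB_WORDS    5   /* POC, attributes, luma base, chroma base, MV base */

/* dpb[] models the packed SRAM block of Fig. 3. */
static uint32_t dpb[DPB_ENTRIES][DPB_WORDS];

static int32_t  dpb_poc_top(int i)       { return (int16_t)(dpb[i][0] >> 16); }
static int32_t  dpb_poc_bottom(int i)    { return (int16_t)(dpb[i][0] & 0xFFFFu); }
static unsigned dpb_top_ref_usage(int i) { return  dpb[i][1]       & 0x3u; } /* none/short/long */
static unsigned dpb_bot_ref_usage(int i) { return (dpb[i][1] >> 2) & 0x3u; }
static unsigned dpb_pic_struct(int i)    { return (dpb[i][1] >> 4) & 0x3u; } /* frame/field/MBAFF */
static uint32_t dpb_luma_base(int i)     { return dpb[i][2]; }
static uint32_t dpb_chroma_base(int i)   { return dpb[i][3]; }

/* Motion-vector base: the bottom field's data sits at a fixed offset above the top field's. */
static uint32_t dpb_mv_base(int i, int bottom_field, uint32_t fixed_field_offset)
{
    return dpb[i][4] + (bottom_field ? fixed_field_offset : 0u);
}
```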
Embodiment 1
This embodiment implements the interpolation computation for inter macroblock types other than b_direct and b_skip.
During video image decoding, entropy decoding is performed first, yielding the reference frame index of the current block: for a forward macroblock, ref0_idx is decoded; for a backward macroblock, ref1_idx; for a bidirectional macroblock, both ref0_idx and ref1_idx. At the same time, the motion vector residuals of the x and y components of the current macroblock are decoded according to the macroblock type: for a forward macroblock, mvd0_x and mvd0_y; for a backward macroblock, mvd1_x and mvd1_y; for a bidirectional macroblock, both the forward and backward residuals mvd0_x, mvd0_y and mvd1_x, mvd1_y. For convenience, the description below takes the current block type to be a forward macroblock.
After entropy decoding has produced the reference frame index ref0_idx and the motion vector residuals mvd0_x and mvd0_y of the forward macroblock, the motion vector predictor mvp_x and mvp_y of the current macroblock is derived from the neighbouring macroblocks according to the H.264 motion vector prediction algorithm, see Fig. 8: the macroblock Left A to the left of the current MB, the macroblock Top B above it, the macroblock Top_right C above and to the right, and the macroblock Top_left D above and to the left. The residuals mvd0_x and mvd0_y are then added to obtain the final motion vector mv_x and mv_y of the current macroblock:
mv_x = mvd0_x + mvp_x (Formula 1)
mv_y = mvd0_y + mvp_y (Formula 2)
At the same time, the reference frame index ref0_idx obtained from entropy decoding is used to find List0[ref0_idx] in the reference frame list of Fig. 1, from which the DPB_Idx of the decode buffer entry corresponding to this reference index is obtained as in Fig. 2. Then, according to DPB_Idx, the base addresses in dynamic memory DDR of the reference frame's luma data and chroma data are obtained from Fig. 3. With the current block's motion vector mv_x and mv_y, the reference block data of the current block can then be located, and the luma and chroma interpolation algorithms of Fig. 9 and Fig. 10 compute the interpolation result of the current block. A sketch of the motion vector reconstruction follows.
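A brief sketch of this step: median_mvp stands in for the H.264 median prediction from the neighbouring blocks (availability handling and partition special cases omitted), and reconstruct_mv applies Formulas 1 and 2. The names are illustrative.

```c
#include <stdint.h>

typedef struct { int16_t x, y; } Mv;

/* Median of the left (A), top (B) and top-right (C) neighbour motion vectors,
 * as used by H.264 MV prediction; median = sum - min - max for each component. */
static Mv median_mvp(Mv a, Mv b, Mv c)
{
    Mv p;
    p.x = a.x + b.x + c.x
        - (a.x < b.x ? (a.x < c.x ? a.x : c.x) : (b.x < c.x ? b.x : c.x))   /* min */
        - (a.x > b.x ? (a.x > c.x ? a.x : c.x) : (b.x > c.x ? b.x : c.x));  /* max */
    p.y = a.y + b.y + c.y
        - (a.y < b.y ? (a.y < c.y ? a.y : c.y) : (b.y < c.y ? b.y : c.y))
        - (a.y > b.y ? (a.y > c.y ? a.y : c.y) : (b.y > c.y ? b.y : c.y));
    return p;
}

/* Formulas 1 and 2: final MV = residual + predictor. */
static Mv reconstruct_mv(Mv mvd0, Mv mvp)
{
    Mv mv = { (int16_t)(mvd0.x + mvp.x), (int16_t)(mvd0.y + mvp.y) };
    return mv;
}
```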
Once the base address in dynamic memory DDR of the reference frame's motion vector and related information data has been obtained from DPB_Idx, the motion vector and related information of the reference frame can be accessed. The motion vector and related information data of the reference frame are stored in one of the following two formats, depending on whether the syntax element direct_8x8_inference_flag decoded from the sequence parameter set SPS is 1.
1) When direct_8x8_inference_flag is 1, to compress the motion vector and related information data of the reference frame, only the motion vectors and related information of the 4x4 blocks in the leftmost column and the rightmost column of each macroblock MB are stored, in the order shown in Fig. 4. Each macroblock then only needs to store the motion vectors and related information of 8 4x4 blocks; compared with storing all 16 4x4 blocks per macroblock, this saves half the storage space and greatly saves bandwidth. The storage format of the motion vector and related information data is shown in Fig. 5.
Each macroblock occupies 9 words: the first word stores the MB info data, and the following 8 words store the motion vector data of the 8 4x4 blocks. The MB info word is interpreted as follows: bit 0 indicates whether the current macroblock is inter (set to 0 if the macroblock type is inter, otherwise set to 1); bit 1 indicates whether the current macroblock is a field macroblock (0 for a frame macroblock, 1 for a field macroblock); bits 16 to 31 indicate, for Block0 to Block15 respectively, whether the reference frame index refIdxLx is 0 (set to 1 if refIdxLx is 0, otherwise 0). The motion vector data of the 8 4x4 blocks all use the same format: the low 5 bits, bits 4:0, store DPB_idx, the index in the decode buffer DPB of the current block's reference frame; bit 5 further distinguishes whether the reference is a top field or a bottom field; bits 19:6 store the x component mv_x of the current 4x4 block's motion vector; and bits 31:20 store the y component mv_y. A packing sketch follows case 2) below.
2) When direct_8x8_inference_flag is 0, the motion vectors and related information of all 16 4x4 blocks of each macroblock must be stored, in the order shown in Fig. 6. The storage format of the motion vector and related information data is shown in Fig. 7.
Each macroblock occupies 17 words: the first word stores the MB info data, and the following 16 words store the motion vector and related information data of the 16 blocks. The meaning of each word is the same as above and is not repeated.
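The per-macroblock record of Fig. 5 and Fig. 7 can be packed and unpacked as sketched below, following the bit positions listed above. Treating the 14-bit and 12-bit motion vector fields as two's-complement signed values, and the helper names, are assumptions of this sketch.

```c
#include <stdint.h>

/* MB info word (Fig. 5 / Fig. 7): bit 0 = 1 when the MB is NOT inter, bit 1 = field MB,
 * bits 31:16 = per-Block0..15 flag "refIdxLx is 0". */
static uint32_t pack_mb_info(int is_inter, int is_field_mb, uint16_t refidx_is_zero_mask)
{
    return (is_inter ? 0u : 1u)
         | ((is_field_mb ? 1u : 0u) << 1)
         | ((uint32_t)refidx_is_zero_mask << 16);
}

/* One 4x4-block word: bits 4:0 = DPB_idx, bit 5 = bottom-field reference,
 * bits 19:6 = mv_x (14 bits), bits 31:20 = mv_y (12 bits). */
static uint32_t pack_block_mv(unsigned dpb_idx, int ref_is_bottom, int mv_x, int mv_y)
{
    return  (dpb_idx & 0x1Fu)
          | ((ref_is_bottom ? 1u : 0u) << 5)
          | (((uint32_t)mv_x & 0x3FFFu) << 6)
          | (((uint32_t)mv_y & 0xFFFu) << 20);
}

/* Sign-extend the 14-bit and 12-bit fields (assumes two's-complement storage). */
static int unpack_mv_x(uint32_t w) { return (int32_t)(w << 12) >> 18; }
static int unpack_mv_y(uint32_t w) { return (int32_t)w >> 20; }
static unsigned unpack_dpb_idx(uint32_t w) { return w & 0x1Fu; }
```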
In H.264 P slices and B slices, the macroblock partition types are numerous, including 16x16, 16x8, 8x16, 8x8, 8x4, 4x8 and 4x4. If motion vectors and related information were stored according to the partition type of each macroblock, not only would the partition type of every macroblock have to be stored, but, since partition types differ from macroblock to macroblock, the space occupied by each macroblock would not be fixed. During decoding, when the macroblock type is b_direct or b_skip and the motion vector and related information of the 4x4 block at the corresponding position must be accessed, this would be very troublesome: the partition type of the whole macroblock would have to be traversed before the position of the corresponding 4x4 block could be located and its motion vector data and related information read, significantly increasing the design difficulty of hardware decoding. With the storage format above, each macroblock is fixed at 9 words when direct_8x8_inference_flag is 1 and at 17 words when it is 0, and the partition type of each macroblock need not be considered. When the decoded macroblock type is b_direct or b_skip, the hardware decoder can, according to direct_8x8_inference_flag, directly access the motion vector data and related information of the 4x4 block at the corresponding position (see the address-computation sketch below), which greatly simplifies the hardware design and avoids traversing the partition information of the whole macroblock.
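Under the assumption that macroblock records are laid out consecutively in raster order with one 32-bit word per stored entry (the text does not state the surrounding layout explicitly), the DDR address of a co-located 4x4 block's word reduces to the arithmetic sketched below.

```c
#include <stdint.h>

/* Words occupied by one macroblock record: 1 MB-info word + 8 or 16 block words. */
static unsigned mb_record_words(int direct_8x8_inference_flag)
{
    return direct_8x8_inference_flag ? 9u : 17u;
}

/* Byte address in DDR of the MV word of one 4x4 block of the co-located macroblock.
 * mv_base_ddr comes from the DPB entry (Fig. 3); mb_x/mb_y address the macroblock
 * (assumed raster order); slot indexes the stored 4x4 block (0..7 or 0..15, Fig. 4/Fig. 6). */
static uint32_t colocated_block_addr(uint32_t mv_base_ddr,
                                     unsigned pic_width_in_mbs,
                                     unsigned mb_x, unsigned mb_y,
                                     unsigned slot,
                                     int direct_8x8_inference_flag)
{
    unsigned words = mb_record_words(direct_8x8_inference_flag);
    unsigned mb_index = mb_y * pic_width_in_mbs + mb_x;
    return mv_base_ddr + 4u * (mb_index * words + 1u + slot); /* +1 skips the MB-info word */
}
```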
Embodiment 2
This embodiment implements, for the case where the sequence parameter set SPS syntax element direct_spatial_mv_pred_flag is 1, the derivation of motion vectors in spatial direct mode and the interpolation computation for macroblocks of type b_direct or b_skip.
Spatial direct mode predicts the reference frame indices and motion vectors of the current macroblock from the information of the neighbouring macroblocks. First, refIdxL0_temp and refIdxL1_temp are computed from the information of the macroblocks A, B, C and D around the current macroblock in Fig. 8, with the following formulas:
refIdxL0_temp = MinPositive( refIdxL0A, MinPositive( refIdxL0B, refIdxL0C ) ) (Formula 3)
refIdxL1_temp = MinPositive( refIdxL1A, MinPositive( refIdxL1B, refIdxL1C ) ) (Formula 4)
Wherein:
In the formulas above, refIdxL0A, refIdxL0B and refIdxL0C are the forward reference frame indices of the neighbouring macroblocks A, B and C respectively, and refIdxL1A, refIdxL1B and refIdxL1C are their backward reference frame indices.
Then the reference frame indices refIdxL0 and refIdxL1 of the current macroblock are obtained from the computed refIdxL0_temp and refIdxL1_temp, with the following formulas (a code sketch follows Formula 6):
If(refIdxL0_temp < 0 && refIdxL1_temp < 0)
refIdxL0 = 0; refIdxL1 = 0;(Formula 5)
else
refIdxL0 = refIdxL0_temp; refIdxL1 = refIdxL1_temp (Formula 6)
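A sketch of Formulas 3 to 6 is given below; MinPositive follows its usual H.264 definition, returning the smaller of the two indices when both are non-negative and otherwise the larger (i.e. the non-negative one, if any). The function names are illustrative.

```c
/* MinPositive as used by H.264 spatial direct mode. */
static int min_positive(int x, int y)
{
    if (x >= 0 && y >= 0)
        return x < y ? x : y;   /* both valid: take the smaller index */
    return x > y ? x : y;       /* otherwise prefer the non-negative one */
}

/* Formulas 3-6: derive the reference indices of a b_direct/b_skip macroblock from the
 * forward/backward indices of the neighbouring macroblocks A, B, C. The intermediate
 * values l0 and l1 are refIdxL0_temp and refIdxL1_temp, reused later by Formulas 7-9. */
static void spatial_direct_ref_idx(int refIdxL0A, int refIdxL0B, int refIdxL0C,
                                   int refIdxL1A, int refIdxL1B, int refIdxL1C,
                                   int *refIdxL0, int *refIdxL1)
{
    int l0 = min_positive(refIdxL0A, min_positive(refIdxL0B, refIdxL0C)); /* Formula 3 */
    int l1 = min_positive(refIdxL1A, min_positive(refIdxL1B, refIdxL1C)); /* Formula 4 */
    if (l0 < 0 && l1 < 0) {   /* Formula 5 */
        *refIdxL0 = 0;
        *refIdxL1 = 0;
    } else {                  /* Formula 6 */
        *refIdxL0 = l0;
        *refIdxL1 = l1;
    }
}
```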
Next, the motion vector data of the current macroblock is computed. First, the co-located block at the position corresponding to the current block is found in the backward reference frame List1[0], as shown in Fig. 11. At the same time, the index DPB_Idx of the backward reference frame List1[0] in the decode buffer and its top/bottom field information are obtained from Fig. 2; then, according to DPB_Idx, the base address in dynamic memory DDR of the motion vector data of List1[0] is obtained from Fig. 3, together with whether List1[0] is a short-term reference frame. From the motion vector base address of reference frame List1[0] and the co-located block at the corresponding position, the mv_x, mv_y and refIdxLx-is-zero information of the co-located block can be obtained from Fig. 5 or Fig. 7. From the information obtained above, moving_block can be computed; moving_block is set to 1 only when all three of the following conditions are met (see the sketch after this list):
1) List1[0] is a short-term reference frame;
2) the motion vector mv_x and mv_y of the co-located block at the corresponding position are both within the interval [-1/4, +1/4];
3) the reference frame index refIdxLx of the co-located block at the corresponding position is 0.
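The three conditions translate directly into code; the sketch assumes the motion vectors are held in quarter-pel units, so the interval [-1/4, +1/4] corresponds to the integer values -1, 0 and +1.

```c
/* moving_block: set only when all three conditions listed above hold. */
static int compute_moving_block(int list1_0_is_short_term,
                                int col_mv_x, int col_mv_y,
                                int col_refidx_is_zero)
{
    return list1_0_is_short_term
        && col_mv_x >= -1 && col_mv_x <= 1   /* quarter-pel units: [-1/4, +1/4] */
        && col_mv_y >= -1 && col_mv_y <= 1
        && col_refidx_is_zero;
}
```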
In addition, the forward motion vector predictor mvp0_x and mvp0_y and the backward motion vector predictor mvp1_x and mvp1_y of the current macroblock are obtained by prediction from the neighbouring macroblocks A, B, C and D. Finally, the forward motion vector mv0_x and mv0_y of the current block is obtained with the formulas below.
If(refIdxL0_temp < 0)
{ mv0_x = 0; mv0_y= 0;} (Formula 7)
else if(refIdxL0_temp == 0 & moving_block )
{ mv0_x = 0; mv0_y= 0;} (Formula 8)
else
{ mv0_x = mvp0_x; mv0_y= mvp0_y;} (Formula 9)
The backward motion vector mv1_x and mv1_y is obtained with the same formulas, simply replacing refIdxL0_temp in the formulas above with refIdxL1_temp; a code sketch of this selection follows.
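Formulas 7 to 9 and their List1 counterparts amount to the following selection; mvp is the spatial predictor obtained from neighbours A, B, C and D as described above, and the type and function names are illustrative.

```c
typedef struct { int x, y; } MvI;

/* Formulas 7-9 for the forward vector; the backward vector uses the same rule
 * with refIdxL1_temp in place of refIdxL0_temp. */
static MvI spatial_direct_mv(int refIdxLx_temp, int moving_block, MvI mvp)
{
    MvI zero = { 0, 0 };
    if (refIdxLx_temp < 0)                   /* Formula 7 */
        return zero;
    if (refIdxLx_temp == 0 && moving_block)  /* Formula 8 */
        return zero;
    return mvp;                              /* Formula 9 */
}
```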
The forward and backward reference frame indices refIdxL0 and refIdxL1 of the current block obtained by the computation above are used to obtain List0[refIdxL0] and List1[refIdxL1] from Fig. 1, from which the DPB_Idx of the decode buffer entries corresponding to refIdxL0 and refIdxL1 are obtained as in Fig. 2. Then, according to DPB_Idx, the base addresses in dynamic memory DDR of the luma and chroma data of the forward and backward reference frames are obtained from Fig. 3. With the current block's forward motion vector mv0_x, mv0_y and backward motion vector mv1_x, mv1_y, the forward and backward reference block data of the current block can be located, and the luma and chroma interpolation algorithms of Fig. 9 and Fig. 10 compute the forward and backward interpolation results of the current block. Finally, the final interpolation result of the current block is obtained according to the H.264 standard interpolation algorithm.
Embodiment 3
This embodiment implements, for the case where the sequence parameter set SPS syntax element direct_spatial_mv_pred_flag is 0, the derivation of motion vectors in temporal direct mode and the interpolation computation for macroblocks of type b_direct or b_skip.
Temporal direct mode derives the reference frame indices and motion vectors of the current macroblock along the time axis from the related information of the backward and forward reference frames. First, the co-located block at the position corresponding to the current block is found in the backward reference frame List1[0], as shown in Fig. 11. At the same time, the index DPB_Idx of the backward reference frame List1[0] in the decode buffer and its top/bottom field information are obtained from Fig. 2; then, according to DPB_Idx, the base address in dynamic memory DDR of the motion vector data of List1[0] is obtained from Fig. 3, together with the POC of reference frame List1[0] (if the current picture structure is a frame, a frame/field adaptive MBAFF picture, or a field with bit 7 of List1[0] equal to 0, the POC of the frame or top field is taken; if the current picture structure is a field and bit 7 of List1[0] is 1, the POC of the bottom field is taken), which is labelled poc1. From the motion vector base address of reference frame List1[0] and the co-located block found at the corresponding position, the related information of the co-located block can be obtained from Fig. 5 or Fig. 7, including whether it is an intra macroblock, whether it is a field macroblock, its motion vector data mv_x and mv_y, and the decode buffer index DPB_idx of its reference together with the top/bottom field information. If the co-located block at the corresponding position is an intra macroblock, the forward and backward reference frame indices refIdxL0 and refIdxL1 of the current block are both 0, and the forward motion vector mv0_x, mv0_y and backward motion vector mv1_x, mv1_y are also 0. Otherwise, if the co-located block is not an intra macroblock, the DPB_idx and top/bottom field information of the co-located block already obtained are first used to obtain, from Fig. 3, the POC of the co-located block's reference frame, labelled refIdxCol_poc, together with whether that reference frame is a long-term reference frame; the forward reference frame list List0[32] is then searched for the reference frame whose POC equals refIdxCol_poc, and its reference frame index is labelled refIdx_map. At this point, the forward reference frame index refIdxL0 of the current block is refIdx_map, and the backward reference frame index refIdxL1 is 0.
Next, the forward motion vector mv0_x, mv0_y and the backward motion vector mv1_x, mv1_y of the current block are computed. As described above, the POC of the backward reference frame refIdxL1 is poc1, the POC of the forward reference frame refIdxL0 is refIdxCol_poc, and whether the forward reference frame refIdxL0 is a long-term reference frame has also been obtained. If the difference between poc1 and refIdxCol_poc is 0, or the forward reference frame refIdxL0 is a long-term reference frame, then the forward motion vector mv0_x, mv0_y equals the motion vector data mv_x, mv_y of the co-located block at the corresponding position, and the backward motion vector mv1_x, mv1_y is 0. Otherwise, the forward and backward motion vectors are computed as in Fig. 12 with the following formulas:
tx = ( 16384 + Abs( td / 2 ) ) / td (Formula 10)
DistScaleFactor = Clip3( -1024, 1023, ( tb * tx + 32 ) >> 6 ) (Formula 11)
mv0_x = ( DistScaleFactor * mvCol_x + 128 ) >> 8 (Formula 12)
mv1_x = mv0_x - mvCol_x (Formula 13)
mv0_y = ( DistScaleFactor * mvCol_y + 128 ) >> 8 (Formula 14)
mv1_y = mv0_y - mvCol_y (Formula 15)
Wherein:
tb = Clip3( -128, 127, (currPic_poc - refIdxCol_poc) ) (Formula 16)
td = Clip3( -128, 127, (poc1 - refIdxCol_poc)) (Formula 17)
mvCol_x and mvCol_y are the motion vector data of the co-located block at the corresponding position, and currPic_poc is the POC of the currently decoded picture. A code sketch of this computation follows.
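Formulas 10 to 17, together with the long-term/zero-distance special case described above, can be collected into one function; Clip3 and Abs follow their standard definitions, and the function name is illustrative.

```c
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }
static int iabs(int v)                  { return v < 0 ? -v : v; }

/* Temporal direct mode (Formulas 10-17). Inputs: the co-located block's motion vector
 * (mvCol_x, mvCol_y), currPic_poc, poc1 (POC of List1[0]), refIdxCol_poc (POC of the
 * mapped forward reference), and whether that forward reference is long-term. */
static void temporal_direct_mv(int mvCol_x, int mvCol_y,
                               int currPic_poc, int poc1, int refIdxCol_poc,
                               int l0_is_long_term,
                               int *mv0_x, int *mv0_y, int *mv1_x, int *mv1_y)
{
    if (poc1 - refIdxCol_poc == 0 || l0_is_long_term) {
        /* Special case: take the co-located vector directly, backward vector is zero. */
        *mv0_x = mvCol_x;  *mv0_y = mvCol_y;
        *mv1_x = 0;        *mv1_y = 0;
        return;
    }
    int tb = clip3(-128, 127, currPic_poc - refIdxCol_poc);        /* Formula 16 */
    int td = clip3(-128, 127, poc1 - refIdxCol_poc);               /* Formula 17 */
    int tx = (16384 + iabs(td / 2)) / td;                          /* Formula 10 */
    int dist_scale = clip3(-1024, 1023, (tb * tx + 32) >> 6);      /* Formula 11 */
    *mv0_x = (dist_scale * mvCol_x + 128) >> 8;                    /* Formula 12 */
    *mv0_y = (dist_scale * mvCol_y + 128) >> 8;                    /* Formula 14 */
    *mv1_x = *mv0_x - mvCol_x;                                     /* Formula 13 */
    *mv1_y = *mv0_y - mvCol_y;                                     /* Formula 15 */
}
```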
The forward and backward reference frame indices refIdxL0 and refIdxL1 of the current block obtained by the computation above are used to obtain List0[refIdxL0] and List1[refIdxL1] from Fig. 1, from which the DPB_Idx of the decode buffer entries corresponding to refIdxL0 and refIdxL1 are obtained as in Fig. 2. Then, according to DPB_Idx, the base addresses in dynamic memory DDR of the luma and chroma data of the forward and backward reference frames are obtained from Fig. 3. With the current block's forward motion vector mv0_x, mv0_y and backward motion vector mv1_x, mv1_y, the forward and backward reference block data of the current block can be located, and the luma and chroma interpolation algorithms of Fig. 9 and Fig. 10 produce the forward and backward interpolation results of the current block. Finally, the final interpolation result of the current block is obtained according to the H.264 standard interpolation algorithm.
Finally, it should be emphasized that the invention is not limited to the implementations above: changes such as a different storage format of the reference frame lists, a different storage format of the reference frame information data, or a different storage format of the motion vector data should all be included within the scope of protection of the claims of the present invention.

Claims (2)

1. A storage method for decoded video data, characterised by comprising:
storing the forward and backward reference frame lists in static memory SRAM, each unit of said forward and backward reference frame lists storing the index of the corresponding current reference frame in the decode buffer and top/bottom field information;
also storing in static memory SRAM the data information of all frames in the decode buffer, the data information of each frame in the decode buffer comprising: the POC of the frame or frame/field adaptive MBAFF picture, or, in the case of a field, the POCs of the top and bottom fields; the picture structure of the frame (frame, field, or frame/field adaptive MBAFF); whether the frame or the top/bottom field is a long-term reference frame, a short-term reference frame, or not used for reference; and the base address in dynamic memory DDR at which the luma data of the frame is stored, the base address in dynamic memory DDR at which the chroma data is stored, and the base address in dynamic memory DDR at which the motion vector data of the frame or top field is stored;
storing motion vector data and related information using a fixed amount of storage space for each macroblock.
2. The decoded data storage method according to claim 1, characterised in that:
in the storage of said motion vector data and related information, the motion vector data and related information of each macroblock are stored together, the data stored for each macroblock comprising: the current macroblock type and reference frame index information, and the motion vector data of each block together with the index in the decode buffer of the current block's reference frame.
CN201710403597.8A 2017-06-01 2017-06-01 Storage method for decoded video data Active CN107231559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710403597.8A CN107231559B (en) 2017-06-01 2017-06-01 Storage method for decoded video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710403597.8A CN107231559B (en) 2017-06-01 2017-06-01 Storage method for decoded video data

Publications (2)

Publication Number Publication Date
CN107231559A true CN107231559A (en) 2017-10-03
CN107231559B CN107231559B (en) 2019-11-22

Family

ID=59934617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710403597.8A Active CN107231559B (en) 2017-06-01 2017-06-01 Storage method for decoded video data

Country Status (1)

Country Link
CN (1) CN107231559B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257625A (en) * 2008-04-01 2008-09-03 海信集团有限公司 Method for indexing position in video decoder and video decoder
CN101783958A (en) * 2010-02-10 2010-07-21 中国科学院计算技术研究所 Computation method and device of time domain direct mode motion vector in AVS (audio video standard)
CN102223543A (en) * 2011-06-13 2011-10-19 四川虹微技术有限公司 Reference pixel read and storage system
CN104811721A (en) * 2015-05-26 2015-07-29 珠海全志科技股份有限公司 Video decoding data storage method and calculation method of motion vector data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110868599A (en) * 2019-12-06 2020-03-06 杭州顺网科技股份有限公司 Video compression method of remote desktop
CN110868599B (en) * 2019-12-06 2021-11-19 杭州顺网科技股份有限公司 Video compression method of remote desktop
CN111355962A (en) * 2020-03-10 2020-06-30 珠海全志科技股份有限公司 Video decoding caching method suitable for multiple reference frames, computer device and computer readable storage medium

Also Published As

Publication number Publication date
CN107231559B (en) 2019-11-22

Similar Documents

Publication Publication Date Title
TWI736907B (en) Improved pmmvd
CN111147850B (en) Table maintenance for history-based motion vector prediction
US20210266537A1 (en) Using inter prediction with geometric partitioning for video processing
JP7295230B2 (en) Reset lookup table per slice/tile/LCU row
CN107113424B (en) With the Video coding and coding/decoding method of the block of inter-frame forecast mode coding
KR102662024B1 (en) Gradient calculation of different motion vector refinements
US20220007047A1 (en) Interaction between merge list construction and other tools
TW201933866A (en) Improved decoder-side motion vector derivation
TWI538489B (en) Motion vector coding and bi-prediction in hevc and its extensions
TW202025776A (en) Selected mvd precision without mvp truncation
KR20210025538A (en) Update of lookup table: FIFO, constrained FIFO
CN113170167A (en) Flag indication method in intra-block copy mode
KR20190127884A (en) Constraints of Motion Vector Information Derived by Decoder-side Motion Vector Derivation
CN109565590A (en) The motion vector based on model for coding and decoding video derives
JP2022511914A (en) Motion vector prediction in merge (MMVD) mode by motion vector difference
TW202013974A (en) Methods and apparatus for encoding/decoding video data
TW201639368A (en) Deriving motion information for sub-blocks in video coding
TWI719522B (en) Symmetric bi-prediction mode for video coding
CN105531999A (en) Method and apparatus for video coding involving syntax for signalling motion information
CN104584549A (en) Method and apparatus for video coding
JP2009135994A (en) Method of deriving direct mode motion vector
JP2022533056A (en) Adaptive motion vector difference decomposition for affine mode
CN104488271A (en) P frame-based multi-hypothesis motion compensation method
CN104769947A (en) P frame-based multi-hypothesis motion compensation encoding method
CN114009037A (en) Motion candidate list construction for intra block copy mode

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant