CN110337810A - Method for video processing and equipment - Google Patents
- Publication number
- CN110337810A CN110337810A CN201880012518.3A CN201880012518A CN110337810A CN 110337810 A CN110337810 A CN 110337810A CN 201880012518 A CN201880012518 A CN 201880012518A CN 110337810 A CN110337810 A CN 110337810A
- Authority
- CN
- China
- Prior art keywords
- reconstructed image
- image block
- motion vector
- block
- sampled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
Abstract
Embodiments of the present application provide a method and device for video processing that can reduce hardware resource consumption and the storage space occupied in the process of obtaining a motion vector. The method comprises: in the process of obtaining the motion vector of a current image block, before matching is performed with reconstructed image blocks used for matching, downsampling the reconstructed image data; performing matching using the downsampled reconstructed image data of the reconstructed image blocks, to obtain a matching result; and obtaining the motion vector of the current image block based on the matching result.
Description
Copyright notice
The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records.
Technical field
The present application relates to the field of video processing, and more particularly, to a method and device for video processing.
Background
Prediction is an important module in mainstream video coding frameworks, and inter prediction is realized by means of motion compensation. A frame of a video may first be divided into coding tree units (Coding Tree Unit, CTU) of equal size, for example 64x64 or 128x128 pixels. Each CTU may be further divided into square or rectangular coding units (Coding Unit, CU), and for each CU the most similar block is searched for in a reference frame as the prediction block of the current CU. The relative displacement between the current block and the similar block is the motion vector (Motion Vector, MV). The process of finding the similar block in the reference frame as the predicted value of the current block is motion compensation.
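The block-matching search that yields a motion vector can be sketched as follows. This is an illustrative sketch only, not part of the patent disclosure: it assumes an exhaustive search with a sum-of-absolute-differences (SAD) cost, and the frame contents, block size, and search range are made-up values.

```python
import random

def sad(ref, cur, rx, ry, cx, cy, size):
    """Sum of absolute differences between a reference block at (rx, ry)
    and the current block at (cx, cy)."""
    return sum(
        abs(ref[ry + i][rx + j] - cur[cy + i][cx + j])
        for i in range(size) for j in range(size)
    )

def find_motion_vector(ref, cur, cx, cy, size, search):
    """Return the displacement (dx, dy) in `ref` that best matches
    the size x size block at (cx, cy) in `cur`."""
    best, best_cost = (0, 0), float("inf")
    h, w = len(ref), len(ref[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx <= w - size and 0 <= ry <= h - size:
                cost = sad(ref, cur, rx, ry, cx, cy, size)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Build a current frame that is the reference shifted by (dx, dy) = (1, 2),
# so the search should recover exactly that motion vector.
random.seed(0)
ref = [[random.randint(0, 255) for _ in range(12)] for _ in range(12)]
cur = [[ref[(y + 2) % 12][(x + 1) % 12] for x in range(12)] for y in range(12)]
print(find_motion_vector(ref, cur, 2, 2, 4, 2))  # -> (1, 2)
```

Every candidate displacement costs one full SAD evaluation over the block, which is why the patent is concerned with the volume of matching-cost calculations.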
Decoder-side motion information derivation is a recently emerged technique, mainly used at the decoding end to refine a decoded motion vector; it can improve coding quality, and thus encoder performance, without increasing the bit rate. However, obtaining the motion vector involves a large number of matching cost calculations, and storing the reconstructed blocks required for calculating the matching cost consumes a large amount of hardware resources and occupies a large amount of storage space.
Summary of the invention
Embodiments of the present application provide a method and device for video processing that can reduce hardware resource consumption and the storage space occupied in the process of obtaining a motion vector.
In a first aspect, a method for video processing is provided, comprising:
in the process of obtaining the motion vector of a current image block, before matching is performed with reconstructed image blocks used for matching, downsampling the reconstructed image data;
performing matching using the downsampled reconstructed image data of the reconstructed image blocks, to obtain a matching result; and
obtaining the motion vector of the current image block based on the matching result.
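The three steps above can be sketched as follows. This is a minimal illustration, assuming a SAD matching cost and interval-2 downsampling; the function names and toy values are not taken from the claims.

```python
def downsample(block, step=2):
    """Step 1: keep every `step`-th pixel in both directions."""
    return [row[::step] for row in block[::step]]

def matching_cost(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def best_match(reconstructed_blocks, current_block):
    """Steps 2-3: match downsampled reconstructed blocks against the
    downsampled current block; return the index of the best match."""
    cur_ds = downsample(current_block)
    costs = [matching_cost(downsample(b), cur_ds) for b in reconstructed_blocks]
    return min(range(len(costs)), key=costs.__getitem__)

candidates = [[[10] * 4 for _ in range(4)], [[0] * 4 for _ in range(4)]]
current = [[1] * 4 for _ in range(4)]
print(best_match(candidates, current))  # -> 1 (the all-zero block is closer)
```

Because each SAD is computed on the downsampled blocks (2x2 instead of 4x4 here), the per-candidate cost drops by a factor of four, which is the resource saving the first aspect targets.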
In a second aspect, a device for video processing is provided, comprising:
a downsampling unit, configured to downsample reconstructed image data in the process of obtaining the motion vector of a current image block, before matching is performed with reconstructed image blocks used for matching;
a matching unit, configured to perform matching using the downsampled reconstructed image data of the reconstructed image blocks, to obtain a matching result; and
an acquiring unit, configured to obtain the motion vector of the current image block based on the matching result.
In a third aspect, a computer system is provided, comprising: a memory for storing computer-executable instructions; and a processor for accessing the memory and executing the computer-executable instructions to perform the operations of the method of the first aspect.
In a fourth aspect, a computer storage medium is provided, in which program code is stored, the program code being usable to instruct execution of the method of the first aspect.
In a fifth aspect, a computer program product is provided, the computer program product comprising program code usable to instruct execution of the method of the first aspect.
Therefore, in embodiments of the present application, in the process of obtaining the motion vector MV of a current image block, the reconstructed image is downsampled before matching is performed with the reconstructed image blocks used for matching, and the matching cost is calculated after downsampling. This can reduce the amount of data processed, thereby reducing the hardware resources consumed and the storage space occupied during data processing.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Evidently, the accompanying drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a coding/decoding system according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a method for video processing according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a method for video processing according to an embodiment of the present application.
Fig. 4 is a schematic diagram of obtaining a bidirectional template according to an embodiment of the present application.
Fig. 5 is a schematic diagram of obtaining a motion vector based on bidirectional template matching according to an embodiment of the present application.
Fig. 6 is a schematic diagram of obtaining a motion vector based on template matching according to an embodiment of the present application.
Fig. 7 is a schematic diagram of obtaining a motion vector based on bilateral matching according to an embodiment of the present application.
Fig. 8 is a schematic flowchart of a method for video processing according to an embodiment of the present application.
Fig. 9 is a schematic block diagram of a device for video processing according to an embodiment of the present application.
Figure 10 is a schematic block diagram of a computer system according to an embodiment of the present application.
Detailed description of embodiments
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. Evidently, the described embodiments are some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the present application are merely for the purpose of describing specific embodiments and are not intended to limit the scope of the present application.
Fig. 1 is an architecture diagram of a system applying the technical solutions of the embodiments of the present application.
As shown in Fig. 1, the system 100 can receive data to be processed 102, process it, and generate processed data 108. For example, the system 100 may receive data to be encoded and encode it to generate encoded data, or the system 100 may receive data to be decoded and decode it to generate decoded data. In some embodiments, the components in the system 100 may be implemented by one or more processors; a processor may be a processor in a computing device, or a processor in a mobile device (such as an unmanned aerial vehicle). The processor may be any kind of processor, which is not limited in the embodiments of the present invention. In some possible designs, the processor may include an encoder, a decoder, a codec, or the like. The system 100 may also include one or more memories. The memory may be used to store instructions and data, for example computer-executable instructions implementing the technical solutions of the embodiments of the present invention, the data to be processed 102, the processed data 108, and so on. The memory may be any kind of memory, which is likewise not limited in the embodiments of the present invention.
The data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded. In some cases, the data to be encoded may include sensor data from a sensor, which may be a visual sensor (for example, a camera or an infrared sensor), a microphone, a near-field sensor (for example, an ultrasonic sensor or a radar), a position sensor, a temperature sensor, a touch sensor, and so on. In some cases, the data to be encoded may include information from a user, for example biological information, which may include facial features, fingerprint scans, retina scans, voice recordings, DNA samples, and so on.
When encoding each image, the image may first be divided into multiple image blocks. In some embodiments, the image may be divided into multiple image blocks, which in some coding standards are referred to as macroblocks or largest coding units (LCU, Largest Coding Unit). The image blocks may or may not have overlapping parts. The image may be divided into any number of image blocks; for example, it may be divided into an m × n array of image blocks. An image block may have a rectangular shape, a square shape, a circular shape, or any other shape, and may have any size, for example p × q pixels. In modern video coding standards, images of different resolutions may be encoded by first dividing the image into multiple small blocks. For H.264, an image block is called a macroblock and may be 16 × 16 pixels in size; for HEVC, an image block is called a largest coding unit and may be 64 × 64 in size. The image blocks may all have the same size and/or shape; alternatively, two or more image blocks may have different sizes and/or shapes. In some embodiments, an image block may not be a macroblock or largest coding unit, but may instead comprise part of a macroblock or largest coding unit, or at least two complete macroblocks (or largest coding units), or at least one complete macroblock (or largest coding unit) plus part of another macroblock (or largest coding unit), or at least two complete macroblocks (or largest coding units) plus parts of some macroblocks (or largest coding units). After the image is divided into multiple image blocks, the image blocks in the image data can be encoded separately.
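The partitioning described above can be sketched as follows. This is an illustrative sketch using a tiny frame; real macroblocks would be 16 × 16 (H.264) or LCUs 64 × 64 (HEVC).

```python
def partition(frame, bh, bw):
    """Split a 2-D frame into an m x n array of non-overlapping bh x bw blocks."""
    h, w = len(frame), len(frame[0])
    return [
        [[row[x:x + bw] for row in frame[y:y + bh]] for x in range(0, w, bw)]
        for y in range(0, h, bh)
    ]

frame = [[y * 4 + x for x in range(4)] for y in range(4)]  # toy 4x4 "image"
blocks = partition(frame, 2, 2)
print(len(blocks), len(blocks[0]))  # -> 2 2 (a 2 x 2 array of blocks)
print(blocks[0][0])                 # -> [[0, 1], [4, 5]]
```

Each entry of the returned array is one block that can then be predicted and encoded on its own.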
In the encoding process, an image may be predicted in order to remove redundancy. Different images within a video may use different prediction modes. According to the prediction modes they use, images can be divided into intra-predicted images and inter-predicted images, where inter-predicted images include forward-predicted images and bidirectionally predicted images. An I image is an intra-predicted image, also called a key frame; a P image is a forward-predicted image, that is, it uses a previously encoded P image or I image as a reference image; a B image is a bidirectionally predicted image, that is, it uses preceding and following images as reference images. In one implementation, the encoding end encodes multiple images to generate a group of pictures (group of picture, GOP), consisting of one I image and multiple B images (or bidirectionally predicted images) and/or P images (or forward-predicted images). During playback, the decoding end reads the GOP, decodes it, and then reads out the pictures for rendering and display.
In inter prediction, for each image block, the most similar block can be found in a reference frame (generally a temporally nearby reconstructed frame) as the prediction block of the current image block. The relative displacement between the current block and the prediction block is the motion vector (Motion Vector, MV).
In order to reduce the bit rate between the encoding end and the decoding end, the motion information may be left out of the bitstream, in which case the decoding end needs to derive the motion information, namely the motion vector. When deriving the motion information, the decoding end may face an excessive data throughput, which causes the decoding end to occupy a large amount of hardware resources and storage space.
For this reason, embodiments of the present application propose a method for video processing that can reduce the amount of processing required when the decoding end derives motion information, thereby avoiding the problem of the decoding end occupying a large amount of hardware resources and space. Likewise, when the method of the embodiments of the present application is used at the encoding end, the hardware resources and space occupied by the encoding end can be reduced.
Fig. 2 is a schematic flowchart of a method for video processing according to an embodiment of the present application. The following method may optionally be implemented by the decoding end, or may also be implemented by the encoding end.
When the method is implemented by the decoding end, the current image block mentioned below may be an image block to be decoded (which may also be referred to as an image block to be reconstructed). Alternatively, when the method is implemented by the encoding end, the current image block mentioned below may be an image block to be encoded.
In 210, in the process of obtaining the motion vector MV of a current image block, the processing device downsamples reconstructed image data before matching is performed with the reconstructed image blocks used for matching.
The processing device may be a device at the encoding end or a device at the decoding end.
The MV of the current image block can be understood as the MV between the current image block and the selected prediction block.
Optionally, in embodiments of the present application, a reconstructed image block may also be referred to as a reference block.
Optionally, in embodiments of the present application, the downsampling of the reconstructed image data can be realized in the following two implementations.
In one implementation, the reconstructed image data is downsampled by a sampling pattern that skips a certain number of pixels. The sampling pattern that skips a certain number of pixels may skip a certain number of pixels in the horizontal direction and in the vertical direction respectively.
For example, assuming the object to be downsampled is a reconstructed image block of 128 × 128, the pixels of certain columns or certain rows may be taken as the downsampled reconstructed image block.
Optionally, the reconstructed image data may be downsampled using a sampling pattern that skips the same number of pixels, that is, the same number of pixels is skipped in the horizontal direction and/or the vertical direction.
For example, assuming the object to be downsampled is a reconstructed image block, the reconstructed image block is downsampled with an interval of 2 in both the horizontal and vertical directions; the top-left pixel of each group of four pixels may be taken as the downsampled result, or of course any of the other three of the four pixels may be taken as the downsampled result.
For example, assuming the object to be downsampled is a reconstructed image block, the reconstructed image block is downsampled with an interval of 2 in the horizontal direction, with no downsampling in the vertical direction.
For example, assuming the object to be downsampled is a reconstructed image block, the reconstructed image block is downsampled with an interval of 2 in the vertical direction, with no downsampling in the horizontal direction.
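The interval-sampling variants above can be sketched as follows. This is an illustrative sketch; the parameter names `step_x` and `step_y` are made up, and keeping the first pixel of each group is one of the choices the text allows.

```python
def downsample_interval(block, step_x=2, step_y=2):
    """Interval sampling: keep one pixel every step_x columns and step_y rows.
    step_x=2, step_y=1 downsamples horizontally only; step_x=1, step_y=2
    downsamples vertically only."""
    return [row[::step_x] for row in block[::step_y]]

block = [[y * 4 + x for x in range(4)] for y in range(4)]  # toy 4x4 block
print(downsample_interval(block))        # both directions -> [[0, 2], [8, 10]]
print(downsample_interval(block, 2, 1))  # horizontal only, 4 rows of 2 pixels
```

Slicing with a step keeps the top-left pixel of each group, matching the "take the top-left pixel" example in the text.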
In another implementation, the reconstructed image data is downsampled by averaging multiple pixels, where the multiple pixels may be adjacent pixels.
For example, assuming the object to be downsampled is a 12 × 12 reconstructed image block, the reconstructed image block may be downsampled by averaging groups of four pixels, where the four pixels may be adjacent pixels, for example the pixels in a 2 × 2 image block.
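The averaging variant can be sketched as follows. This is an illustrative sketch; the use of integer division is an assumption, since the text does not specify how the average is rounded.

```python
def downsample_average(block, k=2):
    """Replace each k x k group of adjacent pixels by its (integer) average."""
    h, w = len(block), len(block[0])
    return [
        [
            sum(block[y + i][x + j] for i in range(k) for j in range(k)) // (k * k)
            for x in range(0, w, k)
        ]
        for y in range(0, h, k)
    ]

block = [[0, 2, 4, 6], [2, 0, 6, 4], [8, 10, 12, 14], [10, 8, 14, 12]]
print(downsample_average(block))  # -> [[1, 5], [9, 13]]
```

Compared with interval sampling, averaging uses every input pixel, so it acts as a crude low-pass filter before the resolution reduction.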
Optionally, the downsampled reconstructed image data may include the downsampled reconstructed image data of the reconstructed image blocks used for matching.
In one implementation, the whole image frame to which the reconstructed image blocks used for matching belong may be downsampled. That is, when downsampling, the individual reconstructed image blocks are not distinguished; in this case, the downsampled reconstructed image data includes the reconstructed image data of the reconstructed image blocks used for matching.
In another implementation, the reconstructed image blocks used for matching may be determined first, and the determined reconstructed image blocks then downsampled.
The following specifically introduces how the reconstructed image blocks used for matching are downsampled.
Optionally, in embodiments of the present application, the reconstructed image data of a reconstructed image block is downsampled according to the content of the reconstructed image block. Downsampling the reconstructed image data of a reconstructed image block may also be referred to as downsampling the reconstructed image block.
Specifically, the processing device may determine a downsampling ratio according to the content of the reconstructed image block, and downsample the reconstructed image data of the reconstructed image block using the downsampling ratio.
The downsampling ratio mentioned in the embodiments of the present application may refer to the ratio between the number of pixels included in the image block after downsampling and the number of pixels included in the image block before sampling.
The higher the complexity of the reconstructed image block, the smaller the sampling interval (that is, the larger the downsampling ratio); the lower the complexity of the image block, the larger the sampling interval (that is, the smaller the downsampling ratio). Adaptive downsampling according to the image content in this way can reduce the performance loss caused by downsampling the data.
Optionally, the content of a reconstructed image block mentioned in the embodiments of the present application may include at least one of: the number of pixels included in the reconstructed image block, pixel grayscale, and edge features.
Specifically, the processing device may determine the downsampling ratio according to at least one of the number of pixels included in the reconstructed image block, pixel grayscale, and edge features, and downsample the reconstructed image block using the downsampling ratio.
Optionally, in embodiments of the present application, the pixel grayscale of a reconstructed image block can be characterized by the variance of the gray-level histogram of the reconstructed image block.
Optionally, in embodiments of the present application, the edge features of a reconstructed image block can be characterized by the number of pixels among the pixels included in the reconstructed image block that are edge points belonging to texture.
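The two characterizations above can be sketched as follows. This is an illustrative sketch: the simple gradient-threshold test stands in for whatever texture-edge detector an implementation would actually use, and the threshold value is made up.

```python
def histogram_variance(block, bins=256):
    """Characterize pixel grayscale by the variance of the gray-level histogram."""
    hist = [0] * bins
    for row in block:
        for p in row:
            hist[p] += 1
    mean = sum(hist) / bins
    return sum((h - mean) ** 2 for h in hist) / bins

def edge_pixel_count(block, threshold=8):
    """Characterize edge features by counting pixels whose horizontal or
    vertical neighbor difference exceeds a threshold (a stand-in edge test)."""
    h, w = len(block), len(block[0])
    return sum(
        1
        for y in range(h - 1) for x in range(w - 1)
        if abs(block[y][x + 1] - block[y][x]) > threshold
        or abs(block[y + 1][x] - block[y][x]) > threshold
    )

flat = [[5] * 4 for _ in range(4)]           # uniform block: no edges
edgy = [[0, 100, 0, 100] for _ in range(4)]  # strong vertical stripes
print(edge_pixel_count(flat), edge_pixel_count(edgy))  # -> 0 9
print(histogram_variance(flat) > histogram_variance(edgy))  # -> True
```

Note that a uniform block concentrates all histogram mass in one bin, so its histogram variance is the larger of the two; either statistic can then be mapped to a downsampling ratio by the processing device's own policy.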
Optionally, in embodiments of the present application, when the reconstructed image blocks used for matching include at least two reconstructed image blocks, the reconstructed image data of the at least two reconstructed image blocks is downsampled according to the same downsampling ratio.
Specifically, in the process of determining one MV, if at least two reconstructed image blocks are needed in the matching process, the same downsampling ratio can be used to downsample the reconstructed image data of the at least two reconstructed image blocks.
For example, when it is determined, according to the pixel grayscale of the at least two reconstructed image blocks and/or the number of included pixels that are edge points belonging to texture, that different downsampling ratios would need to be used for the at least two reconstructed image blocks, the different downsampling ratios can be averaged and the average used for the downsampling of the at least two reconstructed image blocks; alternatively, the highest downsampling ratio or the lowest downsampling ratio can be used to downsample the reconstructed image data of the at least two reconstructed image blocks.
For example, when the values characterizing the pixel grayscale of the at least two reconstructed image blocks and/or the values characterizing their edge features differ, these values can be averaged (if the values characterizing pixel grayscale and the values characterizing edge features are used at the same time, the values characterizing pixel grayscale and the values characterizing edge features can each be averaged separately), a single downsampling ratio calculated from the averaged values, and that downsampling ratio used to downsample the reconstructed image data of the at least two reconstructed image blocks. Alternatively, the maximum of these values can be taken (if the values characterizing pixel grayscale and the values characterizing edge features are used at the same time, the maximum among the values characterizing pixel grayscale and the maximum among the values characterizing edge features can be taken), or the minimum (if the values characterizing pixel grayscale and the values characterizing edge features are used at the same time, the minimum among the values characterizing pixel grayscale and the minimum among the values characterizing edge features can be taken), a single downsampling ratio calculated, and that downsampling ratio used to downsample the reconstructed image data of the at least two reconstructed image blocks.
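Reconciling per-block ratios into one shared ratio can be sketched as follows. This is an illustrative sketch covering the three policies the text names (average, maximum, minimum); the ratio values are made up.

```python
def shared_ratio(ratios, policy="mean"):
    """Combine per-block downsampling ratios into the single ratio applied
    to all reconstructed image blocks in one matching process."""
    if policy == "mean":
        return sum(ratios) / len(ratios)
    if policy == "max":
        return max(ratios)
    return min(ratios)

ratios = [0.25, 0.5]                  # hypothetical per-block ratios
print(shared_ratio(ratios))           # -> 0.375
print(shared_ratio(ratios, "max"))    # -> 0.5
print(shared_ratio(ratios, "min"))    # -> 0.25
```

Using one shared ratio keeps the two downsampled blocks comparable pixel-for-pixel, which is what the matching cost requires.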
It should be understood that, in embodiments of the present application, the number of pixels included in a reconstructed image block used for matching may be the same as that of the current image block; in that case, determining the downsampling ratio according to the number of pixels included in the reconstructed image blocks used for matching can be realized by determining the downsampling ratio according to the number of pixels included in the current image block.
Optionally, in the embodiment of the present application, when meeting at least one in the following conditions, processing equipment determine to
It is carried out with the block of reconstructed image in the process down-sampled:
The pixel quantity that reconstructed image block has included is greater than or equal to first predetermined value;
This grey level histogram of reconstructed image block variance be greater than or equal to second predetermined value;
The quantity for belonging to the edge pixel of texture in the pixel that reconstructed image block has included is predetermined more than or equal to third
Value.
That is, the reconstructed image block is down-sampled only when the above conditions are met; otherwise it is not down-sampled. This avoids the degradation of coding and decoding performance caused by down-sampling blindly.
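The three conditions above can be sketched as a single predicate; the thresholds here are hypothetical stand-ins for the first, second and third predetermined values, and the edge count uses a crude horizontal-gradient test purely for illustration.

```python
def should_downsample(block, min_pixels=64, min_var=100.0, min_edges=16):
    """Return True when at least one of the three conditions holds.

    `block` is a 2-D list of 8-bit grey values; thresholds are invented.
    """
    pixels = [p for row in block for p in row]
    # grey-level histogram and its variance
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    mean = sum(hist) / 256.0
    var = sum((h - mean) ** 2 for h in hist) / 256.0
    # crude texture-edge count: pixels with a large horizontal gradient
    edges = sum(1 for row in block
                for a, b in zip(row, row[1:]) if abs(a - b) > 30)
    return len(pixels) >= min_pixels or var >= min_var or edges >= min_edges
```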
When the block matched against the reconstructed image includes at least two reconstructed image blocks, either the pixel count, grey-level-histogram variance, and number of texture edge pixels of every reconstructed image block satisfy the above conditions, or the averages of these quantities over the at least two reconstructed image blocks satisfy the above conditions.
It should be understood that, in the embodiments of the present application, the block matched against the reconstructed image may include the same number of pixels as the current image block. In that case, deciding according to the pixel count of the matched block whether to down-sample the reconstructed image block can be implemented by deciding according to the pixel count of the current image block.
The above determines, from the content of a reconstructed image block, whether to down-sample it and at what ratio. It should be understood that the embodiments of the present application are not limited to this: when down-sampling a reconstructed image frame, the processing device may likewise determine, from the content of the reconstructed image frame, whether to down-sample the frame and/or the down-sampling ratio.

Specifically, the down-sampling ratio may be determined from at least one of the pixel count, pixel grayscale, and edge features of the reconstructed image frame, and the reconstructed image frame down-sampled with that ratio.
Alternatively, before the reconstructed image frame is down-sampled, the following conditions must be met:

the number of pixels included in the reconstructed image frame is greater than or equal to a particular value;

the variance of the grey-level histogram of the reconstructed image frame is greater than or equal to a particular value;

the number of texture edge pixels among the pixels included in the reconstructed image frame is greater than or equal to a particular value.
In 220, the processing device performs matching using the down-sampled reconstructed image data of the block matched against the reconstructed image, to obtain a matching result.

Optionally, in the embodiments of the present application, the matching may also be called distortion matching, and the matching result may be the matching cost obtained from distortion matching between reconstructed image blocks.
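A minimal sketch of such a matching cost, assuming the common sum-of-absolute-differences (SAD) measure and a simple decimation-style down-sampling (both are illustrative choices, not the only ones compatible with the embodiments):

```python
def downsample_2x(block):
    """Keep every second pixel in both directions (simple 2x down-sampling)."""
    return [row[::2] for row in block[::2]]

def sad_cost(block_a, block_b):
    """Distortion-matching cost between two equally sized (down-sampled)
    blocks: the sum of absolute per-pixel differences."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))
```

Because both operands are down-sampled before the cost is computed, the number of per-pixel comparisons shrinks by the square of the ratio.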
In 230, the processing device obtains the MV of the current image block based on the matching result.

Optionally, in the embodiments of the present application, when the processing device is a device at the encoding end, the MV may be used to encode or reconstruct the current image block.

Specifically, the encoding end may take the block of the reconstructed image corresponding to the MV as a prediction block, and encode or reconstruct the current image block based on that prediction block.
In one implementation, the encoding end may directly take the pixels of the prediction block as the reconstructed pixels of the current image block. Such a mode may be called skip mode; its characteristic is that the reconstructed pixel values of the current image block equal the pixel values of the prediction block. When the encoding end uses skip mode, it may transmit a flag in the bitstream to indicate to the decoding end that the mode used is skip mode.
In another implementation, the encoding end may subtract the pixels of the prediction block from the pixels of the current image block to obtain a pixel residual, and transmit the pixel residual to the decoding end in the bitstream.

It should be understood that, after the MV is obtained, the encoding end may encode and reconstruct the current image block in other ways; the embodiments of the present application place no specific limitation on this.
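The two encoder-side implementations above can be sketched as one function; the dictionary "bitstream" representation and the function name are hypothetical and stand in for actual entropy coding.

```python
def encode_block(current, prediction, skip=False):
    """Encoder-side use of the prediction block (sketch).

    In skip mode only a flag is signalled and the reconstruction equals
    the prediction; otherwise the pixel residual (current minus
    prediction) is what gets sent in the bitstream.
    """
    if skip:
        return {"skip_flag": True}
    residual = [[c - p for c, p in zip(cr, pr)]
                for cr, pr in zip(current, prediction)]
    return {"skip_flag": False, "residual": residual}
```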
Optionally, the embodiments of the present application may be used in Advanced Motion Vector Prediction (AMVP) mode. That is, the result of the matching may be a motion vector predictor (MVP). After obtaining the MVP, the encoding end may take it as the starting point of motion estimation, perform a motion search around that starting point, and obtain the optimal MV when the search finishes. The MV determines the position of the reference block in the reference image; the reference block minus the current block gives the residual block; the MV minus the MVP gives the motion vector difference (MVD); and the MVD is transmitted to the decoding end in the bitstream.
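The AMVP signalling relationship just described (MVD = MV - MVP at the encoder, MV = MVP + MVD at the decoder) can be shown in a few lines; the tuple MV representation is illustrative.

```python
def amvp_signalling(best_mv, mvp):
    """AMVP-style signalling sketch: only the motion vector difference
    MVD = MV - MVP is transmitted; the decoder recovers MV = MVP + MVD."""
    mvd = (best_mv[0] - mvp[0], best_mv[1] - mvp[1])
    recovered = (mvp[0] + mvd[0], mvp[1] + mvd[1])
    return mvd, recovered
```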
Optionally, the embodiments of the present application may be used in Merge mode. That is, the result of the matching may be the MVP, and the encoding end may directly take the MVP as the MV; in other words, the result of the matching is the MV. For the encoding end, after obtaining the MVP (namely the MV), no MVD needs to be transmitted, because the MVD defaults to 0.
Optionally, in the embodiments of the present application, when the processing device is a device at the decoding end, the MV may be used to decode the current image block.

Specifically, the decoding end may take the block of the reconstructed image corresponding to the MV as a prediction block, and decode the current image block based on that prediction block.
In one implementation, the decoding end may directly take the pixels of the prediction block as the pixels of the current image block. Such a mode may be called skip mode; its characteristic is that the reconstructed pixel values of the current image block equal the pixel values of the prediction block. When the encoding end uses skip mode, a flag may be transmitted in the bitstream to indicate to the decoding end that the mode used is skip mode.
In another implementation, the decoding end may obtain the pixel residual from the bitstream transmitted by the encoding end, and add the pixels of the prediction block to the pixel residual to obtain the pixels of the current image block.

It should be understood that, after the MV is obtained, the current image block may be decoded in other ways; the embodiments of the present application place no specific limitation on this.
Optionally, the embodiments of the present application may be used in AMVP mode. That is, the result of the matching may be the MVP, and the decoding end may obtain the MV of the current image block by combining the MVP with the MVD in the bitstream transmitted by the encoding end.

Optionally, the embodiments of the present application may be used in Merge mode. That is, the result of the matching is the MVP, and the decoding end may directly take the MVP as the MV; in other words, the result of the matching is the MV.
Optionally, in the embodiments of the present application, an initial MV of the current image block is corrected based on the matching result to obtain the MV of the current image block.

That is, the processing device may obtain an initial MV, but the initial MV may not be the optimal MV or MVP; the processing device may correct the initial MV to obtain the MV of the current image block.

For the encoding end, the index of the initial MV may be encoded and passed to the decoding end; this index allows the decoding end to select the initial MV from an initial MV list. The index points to the following information: the index of the reference frame, and the spatial offset of the reference block relative to the current image block. Based on this information, the decoding end can select the initial MV.
For the decoding end, the initial MV may be obtained from the bitstream transmitted by the encoding end; the bitstream may include an index, and based on that index the decoding end can obtain the initial MV.

Optionally, there may be multiple initial MVs, and the multiple initial MVs may belong to different frames. Here, the frame to which an initial MV belongs refers to the frame to which the block of the reconstructed image corresponding to that MV belongs.
Assuming the multiple initial MVs include a first MV and a second MV, the frame to which the first MV belongs and the frame to which the second MV belongs are different frames.

For example, the block of the reconstructed image corresponding to the first MV belongs to a forward frame of the current image block, and the block of the reconstructed image corresponding to the second MV belongs to a backward frame of the current image block.

Alternatively, the block of the reconstructed image corresponding to the first MV belongs to a forward frame of the current image block, and the block of the reconstructed image corresponding to the second MV belongs to another forward frame of the current image block.

Of course, the blocks of the reconstructed image corresponding to the first MV and to the second MV may also belong to different backward frames of the current image block; the embodiments of the present application place no specific limitation on this.
For a clearer understanding of the present application, how the initial MVs are corrected is explained below with reference to implementation A.

Implementation A
Specifically, the processing device may generate a template from the down-sampled reconstructed image data of the blocks of the reconstructed image corresponding to the multiple initial MVs (for example, by averaging pixels), and use the generated template to correct each of the multiple initial MVs.

It should be understood that, besides generating the template from the down-sampled reconstructed image data of the multiple blocks of the reconstructed image, the template may also be generated from the non-down-sampled reconstructed image data of the blocks corresponding to the multiple initial MVs and then itself down-sampled; the embodiments of the present application place no specific limitation on this.
Specifically, assume the initial MVs include a first MV and a second MV, the block of the reconstructed image corresponding to the first MV is a first reconstructed image block belonging to a first frame, and the block of the reconstructed image corresponding to the second MV is a second reconstructed image block belonging to a second frame. The template is generated from the down-sampled reconstructed image data of the first reconstructed image block and the down-sampled reconstructed image data of the second reconstructed image block. Such a template may be called a bilateral (two-way) template.
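The pixel-averaging mentioned for implementation A can be sketched directly; integer averaging with floor division is an illustrative choice, and both input blocks are assumed to be already down-sampled to the same size.

```python
def bilateral_template(block_fwd, block_bwd):
    """Bilateral (two-way) template: per-pixel average of the down-sampled
    reconstructed blocks pointed to by the two initial MVs."""
    return [[(a + b) // 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(block_fwd, block_bwd)]
```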
The down-sampled reconstructed image data of N third reconstructed image blocks (which may be called N down-sampled third reconstructed image blocks) can then be matched against the template, where the N third reconstructed image blocks correspond to N third MVs; and the down-sampled reconstructed image data of M fourth reconstructed image blocks (which may be called M down-sampled fourth reconstructed image blocks) matched against the template, where the M fourth reconstructed image blocks correspond to M fourth MVs. Based on the matching results, one third MV is selected from the N third MVs, and one fourth MV is selected from the M fourth MVs.
Optionally, the selected third MV may be the MV with the smallest distortion cost, or an MV whose distortion cost is below a particular value.

Optionally, the selected fourth MV may likewise be the MV with the smallest distortion cost, or an MV whose distortion cost is below a particular value.
The selected third MV and fourth MV may serve as the MVs of the current image block; in that case, the blocks of the reconstructed image corresponding to the selected third MV and fourth MV may be weighted and averaged to obtain the prediction block. Alternatively, the selected third MV and fourth MV may be used to determine the MV of the current image block, that is, each may serve as an MVP; in that case, motion search and motion compensation may be performed based on the third MVP and the fourth MVP respectively to obtain the final MV.
Optionally, in the embodiments of the present application, the N third reconstructed image blocks may belong to the first frame and the M fourth reconstructed image blocks may belong to the second frame.

Optionally, N and M may be equal.
Optionally, the N third MVs include the first MV and the M fourth MVs include the second MV. That is, the block of the reconstructed image corresponding to the first MV and the block corresponding to the second MV, which were used to generate the template, also need to be matched against the template.
Optionally, in the embodiments of the present application, at least some of the N third MVs are obtained by offsetting the first MV, and at least some of the M fourth MVs are obtained by offsetting the second MV.

For example, the MVs among the N third MVs other than the first MV may be obtained by offsetting the first MV. For instance, N may equal 9, and the other 8 MVs may be obtained by offsetting the first MV in eight directions, or by offsetting it by different numbers of pixels in the vertical or horizontal direction.

Similarly, the MVs among the M fourth MVs other than the second MV may be obtained by offsetting the second MV. For instance, M may equal 9, and the other 8 MVs may be obtained by offsetting the second MV in eight directions, or by offsetting it by different numbers of pixels in the vertical or horizontal direction.
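The N = 9 case above (the initial MV plus 8 offsets in the eight surrounding directions) can be sketched as follows; the one-pixel step is an illustrative default.

```python
def offset_candidates(mv, step=1):
    """Nine candidate MVs: the initial MV plus offsets of `step` pixels
    in the eight surrounding directions."""
    x, y = mv
    return [(x + dx * step, y + dy * step)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```

Each candidate's reconstructed block would then be down-sampled and matched against the template, keeping the candidate with the smallest cost.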
Optionally, the MV selection method in implementation A may be called the bilateral template matching method.

For a clearer understanding of the present application, implementation A is described in detail below with reference to Fig. 3 to Fig. 5.
In 310, it is determined whether the width and height of the current image block are each smaller than 8 pixels (other pixel counts are of course also possible). In 321, if so, the blocks of the reconstructed image corresponding to MV0 in reference list 0 and MV1 in reference list 1 are down-sampled and averaged to obtain the bilateral template. Here, an MV in reference list 0 may be a motion vector between the current image block and a block of the reconstructed image in a forward reference frame, and an MV in reference list 1 may be a motion vector between the current image block and a block of the reconstructed image in a backward reference frame.
Specifically, as shown in Fig. 4, for the current image block, reference block 0 (a reconstructed image block) corresponding to MV0 and reference block 1 (a reconstructed image block) corresponding to MV1 are down-sampled, and the two down-sampled reference blocks are then averaged to obtain the down-sampled bilateral template.
In 322, the down-sampled block of the reconstructed image corresponding to MV0 in list 0 is matched against the template. In 323, MV0 is offset to obtain multiple MV0'. In 324, the blocks of the reconstructed image corresponding to the multiple MV0' are down-sampled and each matched against the template.
For example, as shown in Fig. 5, the surrounding pixels of the reference block corresponding to MV0 (specifically, the pixels included in a reference block corresponding to an MV0') may be down-sampled. Specifically, as shown in Fig. 5, the pixel values around the reference block corresponding to MV0 may be filled in to obtain the reference block corresponding to MV0' (the offset reference block), and the offset reference block down-sampled. Finally, when calculating the matching cost, the down-sampled bilateral template and the down-sampled reference blocks are used.
In 325, the MV0' with the smallest matching cost is obtained; this MV0' may be MV0 itself.
In 331, the down-sampled block of the reconstructed image corresponding to MV1 in list 1 is matched against the template. In 332, MV1 is offset to obtain multiple MV1'. In 333, the blocks of the reconstructed image corresponding to the multiple MV1' are down-sampled and each matched against the template. In 334, the MV1' with the smallest matching cost is obtained; this MV1' may be MV1 itself.
For example, as shown in Fig. 5, the surrounding pixels of the reference block corresponding to MV1 (specifically, the pixels included in a reference block corresponding to an MV1') may be down-sampled. Specifically, as shown in Fig. 5, the pixel values around the reference block corresponding to MV1 may be filled in to obtain the reference block corresponding to MV1' (the offset reference block), and the offset reference block down-sampled. Finally, when calculating the matching cost, the down-sampled bilateral template and the down-sampled reference blocks are used.
In 335, the prediction block is generated from the reconstructed image blocks corresponding to the MV0' and MV1' with the smallest matching costs. In 336, the current image block is decoded based on the prediction block.

The realization of the bilateral template matching method of the embodiments of the present application is not limited to the description above.

Optionally, implementation A and its optional implementations may be realized by DMVR (decoder-side motion vector refinement) technology.
Optionally, in the embodiments of the present application, the processing device obtains initial motion vectors (MVs) corresponding to the current image block, and determines, for the initial MVs, the reconstructed image blocks used for matching.

Here, the initial MVs may be MVs to be selected from. Optionally, these MVs to be selected from may be called an MV candidate list.

How an MV is selected from the candidate MVs is described below with reference to implementation B and implementation C.
Implementation B
Specifically, the candidate MVs include K fifth MVs. The down-sampled reconstructed image data of the neighbouring reconstructed image blocks of K fifth reconstructed image blocks is matched against the down-sampled reconstructed image data of the neighbouring reconstructed image blocks of the current image block, to obtain the matching result, where the K fifth reconstructed image blocks correspond one-to-one to the K fifth MVs, and K is an integer greater than or equal to 1. One fifth MV is selected from the K fifth MVs based on the matching result.
Optionally, the selected fifth MV may be the MV with the smallest distortion cost, or an MV whose distortion cost is below a particular value.
The selected fifth MV may serve as the MV of the current image block; in that case, the block of the reconstructed image corresponding to that fifth MV may serve as the prediction block of the current image block.

Alternatively, the selected fifth MV may be used to determine the MV of the current image block.

For example, the fifth MV may serve as an MVP; in that case, motion search and motion compensation may be performed further based on the MVP to obtain the final MV, and the block of the reconstructed image corresponding to the refined MV taken as the prediction block.

As another example, the fifth MV may first be the coding unit (CU) level MV mentioned below, and then be used to determine the sub-CU (Sub-CU) level MV.
Optionally, the K fifth MVs may be called an MV candidate list.

Optionally, the neighbouring reconstructed image blocks of the current image block may be called the template of the current image block. Implementation B may accordingly be called MV selection based on the template matching method.
Optionally, as shown in Fig. 6, the neighbouring reconstructed image blocks of a fifth reconstructed image block may include an above neighbouring block and/or a left neighbouring block, and the neighbouring reconstructed image blocks of the current image block may include an above neighbouring block and/or a left neighbouring block.
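A minimal sketch of implementation B's template matching, assuming SAD as the distortion measure; `fetch_neighbours` is a hypothetical helper standing in for reading the (down-sampled) above/left neighbour blocks of the block an MV points to.

```python
def template_match_cost(ref_top, ref_left, cur_top, cur_left):
    """Compare a candidate block's neighbour template with the current
    block's neighbour template using sum of absolute differences."""
    sad = lambda a, b: sum(abs(x - y) for ra, rb in zip(a, b)
                           for x, y in zip(ra, rb))
    return sad(ref_top, cur_top) + sad(ref_left, cur_left)

def select_mv(candidates, cur_top, cur_left, fetch_neighbours):
    """Pick the candidate MV whose neighbour template best matches the
    current block's template."""
    return min(candidates,
               key=lambda mv: template_match_cost(*fetch_neighbours(mv),
                                                  cur_top, cur_left))
```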
Implementation C
Specifically, the candidate MVs include W sixth MVs, where W is an integer greater than or equal to 1. For each of W MV pairs, corresponding to two reconstructed image blocks, the down-sampled reconstructed image data of one of the reconstructed image blocks is matched against the down-sampled reconstructed image data of the other, to obtain the matching result, where each MV pair includes one sixth MV and one seventh MV determined based on that sixth MV. Based on the matching results corresponding to the W MV pairs, one MV pair is selected.
The sixth MV of the selected MV pair may be determined as the MV of the current image block; in that case, the block of the reconstructed image corresponding to the sixth MV of the selected pair may serve as the prediction block of the current image block.

Alternatively, the sixth MV of the selected MV pair may be used to determine the MV of the current image block.

For example, the sixth MV may serve as an MVP; in that case, motion search and motion compensation may be performed further based on the MVP to obtain the final MV, and the block of the reconstructed image corresponding to the final MV taken as the prediction block.

As another example, the sixth MV may first be the CU-level MV mentioned below, and then be used to determine the Sub-CU-level MV.
Optionally, in the embodiments of the present application, the seventh MV is determined based on the sixth MV under the assumption that the motion trajectory is continuous.

Optionally, the W sixth MVs may serve as an MV candidate list.
Optionally, in the embodiments of the present application, the sixth reconstructed image block belongs to a forward frame of the frame to which the current image block belongs, and the seventh reconstructed image block belongs to a backward frame of the frame to which the current image block belongs.

Optionally, the temporal distance between the sixth reconstructed image block and the current image block may equal the temporal distance between the current image block and the seventh reconstructed image block.
Optionally, in implementation C, each of the W sixth MVs may serve as input, and one MV pair obtained based on the assumption underlying the bilateral matching method. For example, a valid MVa in the MV candidate list corresponds to a reference block belonging to reference frame a in reference list A, while the reference frame b containing the reference block corresponding to the paired MVb is in reference list B; reference frame a and reference frame b then lie on opposite sides of the current frame in the temporal domain. If no such reference frame b exists in reference list B, reference frame b is taken as the reference frame in list B that differs from reference frame a and has the smallest temporal distance to the current frame. Once reference frame b is determined, MVb can be obtained by scaling MVa according to the temporal distances of the current frame to reference frame a and reference frame b respectively.
For example, as shown in Fig. 7, the bilateral matching method may generate an MV pair for each candidate MV and calculate the distortion between the two reference blocks corresponding to the two MVs (MV0 and MV1) of each pair. In the embodiments of the present application, both reference blocks may be down-sampled and the distortion calculated on the two down-sampled reference blocks. The candidate MV (MV0) whose pair yields the smallest distortion is the final MV.
Implementation C may accordingly be called MV selection based on the bilateral matching method.
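The temporal-distance scaling that pairs MVa with MVb can be sketched in one function; `td_a` and `td_b` are assumed to be signed temporal distances from the current frame to reference frame a and reference frame b (opposite signs when the frames lie on opposite sides of the current frame).

```python
def paired_mv(mva, td_a, td_b):
    """Bilateral-matching pairing sketch: under the continuous-motion-
    trajectory assumption, scale MVa by the ratio of signed temporal
    distances to obtain the paired MVb."""
    scale = td_b / td_a
    return (mva[0] * scale, mva[1] * scale)
```

With equal distances on opposite sides (td_b = -td_a), the paired MV is simply the mirrored vector.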
Optionally, implementations B and C above may be used in AMVP mode, and may also be used in merge mode; specifically, the pattern-matched motion vector derivation (PMMVD) technique may be used, where PMMVD is a special merge mode based on frame rate up-conversion (FRUC) technology. In this mode, the motion information of a block is not encoded in the bitstream but is generated directly at the decoding end.
The encoding end may select among multiple coding modes. Specifically, it may first perform ordinary merge-mode encoding and obtain the smallest rate-distortion cost (RD-Cost), cost0. It then encodes with PMMVD mode and obtains the RD-Costs: the RD-Cost of the MV obtained by the bilateral matching method is cost1, the RD-Cost of the MV obtained by the template matching method is cost2, and cost3 = min(cost1, cost2).
If cost0 < cost3, the FRUC flag is false; otherwise, the FRUC flag is true, and an additional FRUC mode flag is used to indicate which method is used (the bilateral matching method or the template matching method).
Here, RD-Cost is a criterion used in the encoder to decide which mode to use; it considers both video quality and bitrate: RD-Cost = cost + lambda * bitrate, where cost denotes the loss of video quality, obtained by computing the similarity between the original pixel block and the reconstructed pixel block (indices such as SAD and SSD), and bitrate denotes the number of bits the mode needs to consume.
Since calculating RD-Cost requires the original pixel values, and the original pixel values are unavailable at the decoding end, an additional FRUC mode flag needs to be transmitted to indicate which method is used to obtain the motion information.
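The mode decision described above can be condensed into a small sketch; the dictionary return value is illustrative, and ties between cost1 and cost2 are broken in favour of the bilateral method as an arbitrary assumption.

```python
def fruc_decision(cost0, cost1, cost2):
    """Sketch of the FRUC flag decision: cost0 is the ordinary merge
    RD-Cost, cost1/cost2 the RD-Costs of the bilateral-matching and
    template-matching MVs."""
    cost3 = min(cost1, cost2)
    if cost0 < cost3:
        return {"fruc": False}
    mode = "bilateral" if cost1 <= cost2 else "template"
    return {"fruc": True, "mode": mode}
```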
Optionally, in the embodiments of the present application, the derivation of motion information in FRUC merge mode can be divided into two steps: the first is the CU-level motion information derivation, and the second is the Sub-CU-level motion information derivation.
In the CU-level motion information derivation, the initial MV of the entire CU, namely a CU-level MV candidate list, can be derived; this MV candidate list may include:

1) if the current CU uses AMVP mode, the original AMVP candidate MVs: specifically, if the current CU uses AMVP mode, the original AMVP candidate MVs may be added to the CU-level MV candidate list;

2) if the current CU uses merge mode, all merge candidate MVs;

3) MVs in the interpolated motion vector field, of which there may be 4, optionally located at positions (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of the current CU;

4) the above and left neighbouring MVs.
Optionally, the construction of the AMVP-mode candidate list (whose length is optionally 2) may include building a spatial list and building a temporal list.

In building the AMVP spatial list, assume the bottom-left of the current PU is A0, the left is A1, the top-left is B2, the top is B1, and the top-right is B0. The left side and the top of the current PU can each produce one candidate MV. For screening the left-side candidate MVs, the processing order is A0 -> A1 -> scaled A0 -> scaled A1, where scaled A0 denotes scaling the MV of A0 and scaled A1 denotes scaling the MV of A1. For screening the top-side candidate MVs, the processing order is B0 -> B1 -> B2 (and, if none of these exists, continuing with -> scaled B0 -> scaled B2), where scaled B0 denotes scaling the MV of B0 and scaled B2 denotes scaling the MV of B2. For the left (or top) side, as soon as one candidate MV is found, the subsequent candidates are not processed. In building the AMVP temporal list, the temporal candidate cannot directly use the motion information of the candidate block; a corresponding scaling adjustment can be made according to the temporal positional relationship between the current frame and the reference frame. The temporal domain can provide at most one candidate MV. If the number of candidate MVs in the candidate list is still fewer than 2, zero vectors can be used as filler.
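The screening order and zero-vector padding just described can be sketched as follows; the position-name dictionaries and helper names are hypothetical, and MV scaling is abstracted away behind the "scaled" entries.

```python
def pick_side_candidate(available, order):
    """Walk the processing order and stop at the first available candidate.

    `available` maps position names (e.g. 'A0', 'scaled A1') to MVs for
    positions that exist and carry motion information.
    """
    for pos in order:
        if pos in available:
            return available[pos]
    return None

def build_amvp_list(left_avail, above_avail, temporal_mv=None, size=2):
    """Assemble the length-2 AMVP list: one left candidate, one above
    candidate, then the scaled temporal candidate, padded with zeros."""
    cands = []
    left = pick_side_candidate(left_avail,
                               ["A0", "A1", "scaled A0", "scaled A1"])
    if left is not None:
        cands.append(left)
    above = pick_side_candidate(above_avail,
                                ["B0", "B1", "B2",
                                 "scaled B0", "scaled B2"])
    if above is not None:
        cands.append(above)
    if temporal_mv is not None and len(cands) < size:
        cands.append(temporal_mv)
    while len(cands) < size:
        cands.append((0, 0))  # pad with zero vectors
    return cands[:size]
```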
Optionally, the construction of the merge-mode candidate list (whose length is optionally 5) may include building a spatial list and building a temporal list.

In building the merge-mode spatial list, assume the bottom-left of the current PU is A0, the left is A1, the top-left is B2, the top is B1, and the top-right is B0. The spatial domain can provide at most 4 candidate MVs, in the candidate order A1 -> B1 -> B0 -> A0 -> B2: the first four are processed with priority, and B2 is processed only if one or more of the first four are absent. In building the merge-mode temporal list, the temporal candidate cannot directly use the motion information of the candidate block; a corresponding scaling adjustment can be made according to the positional relationship between the current frame and the reference frame. The temporal domain can provide at most one candidate MV. This means that if, after the spatial and temporal domains have been processed, the number of MVs in the list has not yet reached five, zero vectors can be used as filler.
In other words, the selection of merge candidate MVPs may traverse the MVs of the spatially neighbouring CUs in the order left -> top -> top-right -> bottom-left -> top-left, then process the prediction MVs referenced in the temporal domain, and finally arrange and merge them.
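The spatial part of that traversal, including the conditional use of B2, can be sketched as follows; the position-name dictionary is hypothetical.

```python
def build_merge_spatial(available, max_spatial=4):
    """Merge-mode spatial candidates sketch: traverse A1 -> B1 -> B0 -> A0,
    consulting B2 only when one of the first four is missing; at most
    four spatial candidates enter the list."""
    order = ["A1", "B1", "B0", "A0"]
    cands = [available[p] for p in order if p in available]
    if len(cands) < 4 and "B2" in available:
        cands.append(available["B2"])
    return cands[:max_spatial]
```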
In the Sub-CU-level motion information derivation, the MV obtained at the CU level serves as a starting point, and the motion information is further refined at the Sub-CU level. The MV after Sub-CU-level refinement is the MV of the entire CU. The Sub-CU-level MV candidate list may include:
1) MV obtained based on CU grades.
2) the adjacent MV of the upper, left of MV, upper left and the upper right that should be obtained based on CU grades.
3) resulting MV after the MV scaling of the correspondence time domain adjacent C U in reference frame, wherein time domain phase is corresponded in reference frame
The scaling MV of adjacent CU can be obtained as follows: all reference frames in two reference listings all traverse one time, will refer to
The MV of the CU adjacent with Sub-CU time domain is zoomed in the reference frame where the MV obtained based on CU grades in frame.
4) at most 4 optional time domain motion-vector prediction (alternative temporal motion vector
Prediction, ATMVP) candidate MV, wherein ATMVP allows each CU to be less than the multiple of current CU size from reference frame
Multiple motion information collection are obtained in block.
5) at most 4 spatiotemporal motion vector predictions (spatial temporal motion vector prediction,
STMVP) candidate MV, wherein in STMVP, the motion vector of sub- CU is by reusing time domain prediction motion vector and airspace
Adjacent motion vector obtains.
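The temporal scaling mentioned in item 3) above (and in the temporal list construction earlier) can be sketched as a ratio of temporal distances; the picture-order-count (POC) formulation and the names below are assumptions for illustration:

```python
# Hedged sketch of temporal MV scaling: a collocated MV is scaled by the
# ratio of POC distances, in the style of HEVC-like TMVP derivation.
def scale_mv(mv, cur_poc, cur_ref_poc, col_poc, col_ref_poc):
    tb = cur_poc - cur_ref_poc   # distance current frame -> target reference
    td = col_poc - col_ref_poc   # distance collocated frame -> its reference
    if td == 0:
        return mv                # degenerate case: no scaling possible
    scale = tb / td
    return (round(mv[0] * scale), round(mv[1] * scale))
```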
Optionally, implementation B and implementation C above can be used for the acquisition of the CU-level MV, and can also be used for the acquisition of the sub-CU-level MV.
In order to understand the PMMVD technology more clearly, it is illustrated below with reference to Fig. 8.
In 410, it is determined whether the current CU uses merge mode; if not, AMVP mode is used (not shown).
In 420, it is determined whether the current CU uses the bilateral matching method; if so, 431 is executed, and if not, 441 is executed.
In 431, an MV candidate list is generated.
In 432, the optimal MV is selected from the candidate list, where the bilateral matching method can be used for the selection; for details, refer to the description in implementation C above.
In 433, a local search is carried out around the optimal MV to further refine it. Specifically, the optimal MV can be offset to obtain multiple initial MVs, and one MV is selected from the multiple initial MVs, where the bilateral matching method can be used for the selection; for details, refer to the description in implementation C above.
In 434, if the CU-level MV has been obtained, the bilateral matching method in implementation C above can be used to further refine the MV at the sub-CU level.
In 441, an MV candidate list is generated.
In 442, the optimal MV is selected from the candidate list, where the template matching method can be used for the selection; for details, refer to the description in implementation B above.
In 443, a local search is carried out around the optimal MV to further refine it. Specifically, the optimal MV can be offset to obtain multiple initial MVs, and one MV is selected from the multiple initial MVs, where the template matching method can be used for the selection; for details, refer to the description in implementation B above.
In 444, if the CU-level MV has been obtained, the template matching method in implementation B above can be used to further refine the MV at the sub-CU level.
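The flow of steps 410 to 444 can be condensed into the following sketch; every helper passed in is a placeholder for an operation described in the text, not an API from the patent:

```python
# Condensed sketch of the PMMVD flow in Fig. 8. The cost function stands
# for the bilateral-matching cost (431-434 branch) or the template-matching
# cost (441-444 branch), as chosen at step 420.
def pmmvd(cu, use_merge, build_candidates, cost, local_search, sub_cu_refine):
    if not use_merge:
        return None                       # AMVP path (not shown in Fig. 8)
    candidates = build_candidates(cu)     # 431 / 441: build MV candidate list
    best = min(candidates, key=cost)      # 432 / 442: pick the lowest-cost MV
    best = local_search(best, cost)       # 433 / 443: refine around the best MV
    return sub_cu_refine(best, cost)      # 434 / 444: sub-CU-level refinement
```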
As can be seen, the data sampling method of the embodiments of the present application, applied to the decoder-side motion vector refinement (Decoder-side Motion Vector Refinement, DMVR) technique and the pattern matched motion vector derivation (Pattern Matched Motion Vector Derivation, PMMVD) technique, can greatly reduce the hardware resource consumption and space occupation in a decoder while introducing only a small coding performance loss.
Therefore, in the embodiments of the present application, during the process of obtaining the motion vector MV of the current image block, the reconstructed image block used for matching is down-sampled before matching, and the matching cost is calculated after the down-sampling, which can reduce the amount of data to be processed and greatly reduce the hardware resource consumption and space occupation.
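A minimal illustration of the saving: a sum-of-absolute-differences (SAD) matching cost over a 2:1 down-sampled block pair touches a quarter of the pixels of the full-resolution comparison. The SAD cost and the list-of-rows block layout are assumptions for illustration:

```python
# Matching cost as SAD over two equally sized blocks (lists of rows).
def sad(a, b):
    return sum(abs(x - y)
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb))

# Same cost after keeping every k-th pixel in both dimensions: with k=2,
# only a quarter of the pixels are read and compared.
def downsampled_sad(a, b, k=2):
    a2 = [row[::k] for row in a[::k]]
    b2 = [row[::k] for row in b[::k]]
    return sad(a2, b2)
```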
Fig. 9 is a schematic block diagram of a device 500 for video processing according to an embodiment of the present application. The device 500 includes:
a down-sampling unit 510, configured to down-sample reconstructed image data before a reconstructed image block used for matching is matched, during the process of obtaining the motion vector of a current image block;
a matching unit 520, configured to perform matching using the down-sampled reconstructed image data of the reconstructed image block, to obtain a matching result;
an acquiring unit 530, configured to obtain the motion vector of the current image block based on the matching result.
Optionally, in an embodiment of the present application, the device 500 is used at a decoding end, and the device 500 further includes:
a decoding unit, configured to decode the current image block based on the motion vector of the current image block.
Optionally, the device 500 is used at an encoding end, and the device 500 further includes:
an encoding unit, configured to encode the current image block based on the motion vector of the current image block.
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
determine the reconstructed image block used for matching;
down-sample the reconstructed image data of the reconstructed image block.
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
down-sample the reconstructed image data of the reconstructed image block according to the content of the reconstructed image block.
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
down-sample the reconstructed image data of the reconstructed image block according to at least one of the pixel quantity included in the reconstructed image block, the pixel grey scale, and the edge features.
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
determine a down-sampling ratio according to at least one of the pixel quantity included in the reconstructed image block, the pixel grey scale, and the edge features;
down-sample the reconstructed image data of the reconstructed image block using the down-sampling ratio.
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
determine that the pixel quantity included in the reconstructed image block is greater than or equal to a first predetermined value; and/or
determine that the variance of the grey-level histogram of the reconstructed image block is greater than or equal to a second predetermined value; and/or
determine that the quantity of pixels that are edge points of texture among the pixels included in the reconstructed image block is greater than or equal to a third predetermined value.
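The three checks above can be sketched as follows; the histogram computation, the flat pixel list and all threshold names are illustrative assumptions:

```python
# Variance of the grey-level histogram of a block (pixels given as a flat
# list of integer grey values in [0, bins)).
def hist_variance(pixels, bins=256):
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    mean = sum(hist) / bins
    return sum((h - mean) ** 2 for h in hist) / bins

# Down-sample only when at least one of the three conditions holds:
# enough pixels, high histogram variance, or enough texture edge points.
def should_downsample(pixels, edge_count, first_val, second_val, third_val):
    return (len(pixels) >= first_val
            or hist_variance(pixels) >= second_val
            or edge_count >= third_val)
```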
Optionally, in an embodiment of the present application, the down-sampling unit 510 is further configured to:
down-sample the reconstructed image data using a sampling mode in which the same quantity of pixels is skipped at each interval; or,
down-sample the reconstructed image data by averaging multiple pixels.
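The two modes can be sketched on a block stored as a list of rows; the 2:1 factor and the helper names are illustrative:

```python
# Mode (a): keep every k-th pixel in both dimensions (equal-interval sampling).
def downsample_skip(block, k=2):
    return [row[::k] for row in block[::k]]

# Mode (b): replace each k-by-k neighbourhood by the mean of its pixels.
# Assumes the block dimensions are multiples of k.
def downsample_average(block, k=2):
    out = []
    for i in range(0, len(block), k):
        row = []
        for j in range(0, len(block[0]), k):
            vals = [block[i + di][j + dj]
                    for di in range(k) for dj in range(k)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```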
Optionally, in an embodiment of the present application, the reconstructed image block used for matching includes at least two reconstructed image blocks;
the down-sampling unit 510 is further configured to:
down-sample the reconstructed image data of the at least two reconstructed image blocks according to the same down-sampling ratio.
Optionally, in an embodiment of the present application, the acquiring unit 530 is further configured to:
modify the initial motion vector of the current image block based on the matching result, to obtain the motion vector of the current image block.
Optionally, in an embodiment of the present application, the acquiring unit 530 is further configured to:
obtain the initial motion vector corresponding to the current image block;
determine, for the initial motion vector, the reconstructed image block used for matching.
Optionally, in an embodiment of the present application, the initial motion vector includes a first motion vector and a second motion vector;
the matching unit 520 is further configured to:
generate a template based on the down-sampled reconstructed image data of a first reconstructed image block and the down-sampled reconstructed image data of a second reconstructed image block, where the first reconstructed image block corresponds to the first motion vector and belongs to a first frame, and the second reconstructed image block corresponds to the second motion vector and belongs to a second frame;
perform matching based on the template and the down-sampled reconstructed image data, to obtain the matching result.
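One natural way to generate such a template, sketched below, is per-pixel averaging of the two down-sampled blocks, as is common in bilateral-template schemes; the averaging rule itself is an assumption, since the text does not fix the combination rule here:

```python
# Form a template from the two down-sampled reconstructed blocks
# (corresponding to the first and second motion vectors) by averaging
# co-located pixels; blocks are equally sized lists of rows.
def make_template(block0, block1):
    return [[(p0 + p1) / 2 for p0, p1 in zip(r0, r1)]
            for r0, r1 in zip(block0, block1)]
```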
Optionally, in an embodiment of the present application, the matching unit 520 is further configured to:
perform matching with the template using the down-sampled reconstructed image data of N third reconstructed image blocks, respectively, where the N third reconstructed image blocks correspond to N third motion vectors and belong to the first frame;
perform matching with the template using the down-sampled reconstructed image data of M fourth reconstructed image blocks, respectively, where the M fourth reconstructed image blocks correspond to M fourth motion vectors and belong to the second frame;
the acquiring unit 530 is further configured to:
select, based on the matching result, one third motion vector from the N third motion vectors and one fourth motion vector from the M fourth motion vectors, the one third motion vector and the one fourth motion vector serving as the motion vectors of the current image block, or being used to determine the motion vector of the current image block.
Optionally, in an embodiment of the present application, the N third motion vectors include the first motion vector, and the M fourth motion vectors include the second motion vector.
Optionally, in an embodiment of the present application, at least some of the N third motion vectors are obtained by offsetting the first motion vector, and at least some of the M fourth motion vectors are obtained by offsetting the second motion vector.
Optionally, in an embodiment of the present application, N is equal to M.
Optionally, in an embodiment of the present application, the first frame is a forward frame of the current image block and the second frame is a backward frame of the current image block; or,
the first frame is a forward frame of the current image block and the second frame is a forward frame of the current image block.
Optionally, in an embodiment of the present application, the initial motion vector includes K fifth motion vectors, and the matching unit 520 is further configured to:
perform matching using the down-sampled reconstructed image data of the neighbouring reconstructed image blocks of K fifth reconstructed image blocks, respectively, with the down-sampled reconstructed image data of the neighbouring reconstructed image block of the current image block, to obtain the matching result, where the K fifth reconstructed image blocks correspond one-to-one to the K fifth motion vectors;
the acquiring unit 530 is further configured to:
select, based on the matching result, one fifth motion vector from the K fifth motion vectors as the motion vector of the current image block, or for determining the motion vector of the current image block.
Optionally, in an embodiment of the present application, the initial motion vector includes W sixth motion vectors;
the matching unit 520 is further configured to:
for each of the W motion vector pairs and the two reconstructed image blocks corresponding to it, match the down-sampled reconstructed image data of one of the reconstructed image blocks with the down-sampled reconstructed image data of the other reconstructed image block, to obtain the matching result, where each motion vector pair includes one sixth motion vector and one seventh motion vector determined based on the sixth motion vector;
the acquiring unit 530 is further configured to:
select one motion vector pair based on the matching results corresponding to the W motion vector pairs, where the sixth motion vector of the selected motion vector pair serves as the motion vector of the current image block, or is used to determine the motion vector of the current image block.
Optionally, in an embodiment of the present application, the seventh motion vector is determined based on the sixth motion vector under the assumption that the motion trajectory is continuous.
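Under that assumption, the seventh motion vector can be obtained by scaling the sixth along the trajectory through the current frame; with equal temporal distances this reduces to a sign flip. The distance-based formulation below is an assumption for illustration:

```python
# Derive the paired motion vector on a continuous motion trajectory:
# the sixth MV points dist_fwd frames forward, and the derived seventh MV
# points dist_bwd frames the opposite way along the same trajectory.
def mirror_mv(mv, dist_fwd, dist_bwd):
    if dist_fwd == 0:
        return (0, 0)            # degenerate case: no trajectory direction
    s = -dist_bwd / dist_fwd     # opposite direction, scaled by distance
    return (round(mv[0] * s), round(mv[1] * s))
```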
Optionally, in an embodiment of the present application, the sixth reconstructed image block belongs to a forward frame of the frame to which the current image block belongs, and the seventh reconstructed image block belongs to a backward frame of the frame to which the current image block belongs.
Optionally, the device 500 can implement the operations of the processing device in the above methods, which, for brevity, are not repeated here.
It should be understood that the device for video processing of the above embodiments of the present application can be a chip, which can specifically be implemented by a circuit; the embodiments of the present application do not limit the specific implementation form.
The embodiments of the present application further provide an encoder, the encoder being configured to realize the functions of the encoding end in the embodiments of the present application, and the encoder may include the modules used for the encoding end in the device for video processing of the above embodiments of the present application.
The embodiments of the present application further provide a decoder, the decoder being configured to realize the functions of the decoding end in the embodiments of the present application, and the decoder may include the modules used for the decoding end in the device for video processing of the above embodiments of the present application.
The embodiments of the present application further provide a codec, the codec including the device for video processing of the above embodiments of the present application.
Fig. 10 shows a schematic block diagram of a computer system 600 of an embodiment of the present application.
As shown in Fig. 10, the computer system 600 may include a processor 610 and a memory 620.
It should be understood that the computer system 600 may further include components usually included in other computer systems, for example, input-output devices, communication interfaces and the like; the embodiments of the present application do not limit this.
The memory 620 is configured to store computer executable instructions.
The memory 620 may be of various kinds; for example, it may include a high-speed random access memory (Random Access Memory, RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory; the embodiments of the present application do not limit this.
The processor 610 is configured to access the memory 620 and execute the computer executable instructions, so as to carry out the operations in the methods for video processing of the above embodiments of the present application.
The processor 610 may include a microprocessor, a field-programmable gate array (Field-Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU) and the like; the embodiments of the present application do not limit this.
The device for video processing and the computer system of the embodiments of the present application may correspond to the executing subject of the method for video processing of the embodiments of the present application, and the above and other operations and/or functions of the modules in the device for video processing and in the computer system respectively serve to realize the corresponding processes of the foregoing methods, which, for brevity, are not repeated here.
The embodiments of the present application further provide an electronic device, which may include the device for video processing or the computer system of the various embodiments of the present application described above.
The embodiments of the present application further provide a computer storage medium, the computer storage medium storing program code, and the program code can be used to instruct execution of the method for video processing of the above embodiments of the present application.
It should be understood that, in the embodiments of the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three kinds of relationships may exist. For example, "A and/or B" can indicate three situations: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Those of ordinary skill in the art may realize that the units and algorithm steps described in conjunction with the examples disclosed in the embodiments of the present disclosure can be implemented with electronic hardware, computer software, or a combination of the two. In order to clearly illustrate the interchangeability of hardware and software, the compositions and steps of each example have been described generally in terms of function in the above description. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary; the division of the units is only a division of logical functions, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to realize the purpose of the solutions of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed in the present application, and these modifications or substitutions shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (44)
1. A method for video processing, characterized by comprising:
during the process of obtaining the motion vector of a current image block, down-sampling reconstructed image data before a reconstructed image block used for matching is matched;
performing matching using the down-sampled reconstructed image data of the reconstructed image block, to obtain a matching result;
obtaining the motion vector of the current image block based on the matching result.
2. The method according to claim 1, characterized in that the method is used at a decoding end, and the method further comprises:
decoding the current image block based on the motion vector of the current image block.
3. The method according to claim 1, characterized in that the method is used at an encoding end, and the method further comprises:
encoding the current image block based on the motion vector of the current image block.
4. The method according to any one of claims 1 to 3, characterized in that the down-sampling of the reconstructed image data comprises:
determining the reconstructed image block used for matching;
down-sampling the reconstructed image data of the reconstructed image block.
5. The method according to claim 4, characterized in that the down-sampling of the reconstructed image data of the reconstructed image block comprises:
down-sampling the reconstructed image data of the reconstructed image block according to the content of the reconstructed image block.
6. The method according to claim 5, characterized in that the down-sampling of the reconstructed image data of the reconstructed image block according to the content of the reconstructed image block comprises:
down-sampling the reconstructed image data of the reconstructed image block according to at least one of the pixel quantity included in the reconstructed image block, the pixel grey scale, and the edge features.
7. The method according to claim 6, characterized in that the down-sampling of the reconstructed image data of the reconstructed image block according to at least one of the pixel quantity included in the reconstructed image block, the pixel grey scale, and the edge features comprises:
determining a down-sampling ratio according to at least one of the pixel quantity included in the reconstructed image block, the pixel grey scale, and the edge features;
down-sampling the reconstructed image data of the reconstructed image block using the down-sampling ratio.
8. The method according to any one of claims 1 to 7, characterized in that, before the down-sampling of the reconstructed image data of the reconstructed image block, the method further comprises:
determining that the pixel quantity included in the reconstructed image block is greater than or equal to a first predetermined value; and/or
determining that the variance of the grey-level histogram of the reconstructed image block is greater than or equal to a second predetermined value; and/or
determining that the quantity of pixels that are edge points of texture among the pixels included in the reconstructed image block is greater than or equal to a third predetermined value.
9. The method according to any one of claims 1 to 8, characterized in that the down-sampling of the reconstructed image data comprises:
down-sampling the reconstructed image data using a sampling mode in which the same quantity of pixels is skipped at each interval; or,
down-sampling the reconstructed image data by averaging multiple pixels.
10. The method according to any one of claims 1 to 9, characterized in that the reconstructed image block used for matching comprises at least two reconstructed image blocks;
the down-sampling of the reconstructed image data comprises:
down-sampling the reconstructed image data of the at least two reconstructed image blocks according to the same down-sampling ratio.
11. The method according to any one of claims 1 to 10, characterized in that the obtaining of the motion vector of the current image block based on the matching result comprises:
modifying the initial motion vector of the current image block based on the matching result, to obtain the motion vector of the current image block.
12. The method according to any one of claims 1 to 10, characterized in that the obtaining of the motion vector of the current image block further comprises:
obtaining the initial motion vector corresponding to the current image block;
determining, for the initial motion vector, the reconstructed image block used for matching.
13. The method according to claim 11, characterized in that the initial motion vector comprises a first motion vector and a second motion vector;
the performing of matching using the down-sampled reconstructed image data of the reconstructed image block comprises:
generating a template based on the down-sampled reconstructed image data of a first reconstructed image block and the down-sampled reconstructed image data of a second reconstructed image block, wherein the first reconstructed image block corresponds to the first motion vector and belongs to a first frame, and the second reconstructed image block corresponds to the second motion vector and belongs to a second frame;
performing matching based on the template and the down-sampled reconstructed image data, to obtain the matching result.
14. The method according to claim 13, characterized in that the performing of matching based on the template and the down-sampled reconstructed image data to obtain the matching result comprises:
performing matching with the template using the down-sampled reconstructed image data of N third reconstructed image blocks, respectively, wherein the N third reconstructed image blocks correspond to N third motion vectors and belong to the first frame;
performing matching with the template using the down-sampled reconstructed image data of M fourth reconstructed image blocks, respectively, wherein the M fourth reconstructed image blocks correspond to M fourth motion vectors and belong to the second frame;
the modifying of the initial motion vector based on the matching result comprises:
selecting, based on the matching result, one third motion vector from the N third motion vectors and one fourth motion vector from the M fourth motion vectors, the one third motion vector and the one fourth motion vector serving as the motion vectors of the current image block, or being used to determine the motion vector of the current image block.
15. The method according to claim 14, characterized in that the N third motion vectors comprise the first motion vector, and the M fourth motion vectors comprise the second motion vector.
16. The method according to claim 14 or 15, characterized in that at least some of the N third motion vectors are obtained by offsetting the first motion vector, and at least some of the M fourth motion vectors are obtained by offsetting the second motion vector.
17. The method according to any one of claims 14 to 16, characterized in that N is equal to M.
18. The method according to any one of claims 13 to 17, characterized in that the first frame is a forward frame of the current image block and the second frame is a backward frame of the current image block; or,
the first frame is a forward frame of the current image block and the second frame is a forward frame of the current image block.
19. The method according to claim 12, characterized in that the initial motion vector comprises K fifth motion vectors, and the performing of matching using the down-sampled reconstructed image data of the reconstructed image block comprises:
performing matching using the down-sampled reconstructed image data of the neighbouring reconstructed image blocks of K fifth reconstructed image blocks, respectively, with the down-sampled reconstructed image data of the neighbouring reconstructed image block of the current image block, to obtain the matching result, wherein the K fifth reconstructed image blocks correspond one-to-one to the K fifth motion vectors;
the obtaining of the motion vector of the current image block based on the matching result comprises:
selecting, based on the matching result, one fifth motion vector from the K fifth motion vectors as the motion vector of the current image block, or for determining the motion vector of the current image block.
20. The method according to claim 12, characterized in that the initial motion vector comprises W sixth motion vectors;
the performing of matching using the down-sampled reconstructed image data comprises:
for each of the W motion vector pairs and the two reconstructed image blocks corresponding to it, matching the down-sampled reconstructed image data of one of the reconstructed image blocks with the down-sampled reconstructed image data of the other reconstructed image block, to obtain the matching result, wherein each motion vector pair comprises one sixth motion vector and one seventh motion vector determined based on the sixth motion vector;
the obtaining of the motion vector of the current image block based on the matching result comprises:
selecting one motion vector pair based on the matching results corresponding to the W motion vector pairs, wherein the sixth motion vector of the selected motion vector pair serves as the motion vector of the current image block, or is used to determine the motion vector of the current image block.
21. The method according to claim 20, characterized in that the seventh motion vector is determined based on the sixth motion vector under the assumption that the motion trajectory is continuous.
22. The method according to claim 20 or 21, characterized in that the sixth reconstructed image block belongs to a forward frame of the frame to which the current image block belongs, and the seventh reconstructed image block belongs to a backward frame of the frame to which the current image block belongs.
23. A device for video processing, characterized by comprising:
a down-sampling unit, configured to down-sample reconstructed image data before a reconstructed image block used for matching is matched, during the process of obtaining the motion vector of a current image block;
a matching unit, configured to perform matching using the down-sampled reconstructed image data of the reconstructed image block, to obtain a matching result;
an acquiring unit, configured to obtain the motion vector of the current image block based on the matching result.
24. The device according to claim 23, wherein the device is used at a decoding end, and the device further comprises:
a decoding unit, configured to decode the current image block based on the motion vector of the current image block.
25. The device according to claim 23, wherein the device is used at an encoding end, and the device further comprises:
an encoding unit, configured to encode the current image block based on the motion vector of the current image block.
26. The device according to any one of claims 23 to 25, wherein the down-sampling unit is further configured to:
determine the reconstructed image blocks used for matching; and
down-sample the reconstructed image data of the reconstructed image blocks.
27. The device according to claim 26, wherein the down-sampling unit is further configured to:
down-sample the reconstructed image data of the reconstructed image block according to the content of the reconstructed image block.
28. The device according to claim 27, wherein the down-sampling unit is further configured to:
down-sample the reconstructed image data of the reconstructed image block according to at least one of the number of pixels included in the reconstructed image block, the pixel gray levels, and the edge features.
29. The device according to claim 28, wherein the down-sampling unit is further configured to:
determine a down-sampling ratio according to at least one of the number of pixels included in the reconstructed image block, the pixel gray levels, and the edge features; and
down-sample the reconstructed image data of the reconstructed image block using the down-sampling ratio.
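Claims 28 and 29 can be illustrated with a short sketch. Assumptions not in the claims: blocks are NumPy arrays of gray levels in [0, 255], and the threshold values and the mapping from block statistics to a down-sampling ratio are hypothetical examples of the kind of content-dependent policy the claims describe.

```python
import numpy as np

def choose_downsample_ratio(block: np.ndarray,
                            pixel_thresh: int = 256,
                            var_thresh: float = 100.0) -> int:
    """Pick a down-sampling ratio from the block's content.

    Illustrative policy: large blocks whose gray-level histogram has
    high variance (concentrated/strongly textured content) tolerate
    a coarser ratio; small blocks are matched at full resolution.
    """
    num_pixels = block.size
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    hist_var = float(np.var(hist))
    if num_pixels >= pixel_thresh and hist_var >= var_thresh:
        return 4   # coarse ratio: keep every 4th pixel per dimension
    if num_pixels >= pixel_thresh:
        return 2   # moderate ratio for large but bland blocks
    return 1       # block too small: no down-sampling

def downsample(block: np.ndarray, ratio: int) -> np.ndarray:
    """Down-sample by keeping pixels at identical intervals."""
    return block[::ratio, ::ratio]
```

In practice an edge-point count (claim 30's third condition) could be folded into the same decision; it is omitted here to keep the sketch small.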
30. The device according to any one of claims 23 to 29, wherein the down-sampling unit is further configured to:
determine that the number of pixels included in the reconstructed image block is greater than or equal to a first predetermined value; and/or
determine that the variance of the gray-level histogram of the reconstructed image block is greater than or equal to a second predetermined value; and/or
determine that the number of texture edge points among the pixels included in the reconstructed image block is greater than or equal to a third predetermined value.
31. The device according to any one of claims 23 to 30, wherein the down-sampling unit is further configured to:
down-sample the reconstructed image data by sampling pixels separated by an identical interval; or
down-sample the reconstructed image data by averaging multiple pixels.
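The two down-sampling modes of claim 31, taking pixels at identical intervals versus averaging groups of pixels, might look like this in NumPy (a sketch; the function names are mine, not the patent's):

```python
import numpy as np

def downsample_by_interval(block: np.ndarray, step: int = 2) -> np.ndarray:
    """Keep one pixel out of every `step` along each dimension
    (sampling pixels separated by an identical interval)."""
    return block[::step, ::step]

def downsample_by_averaging(block: np.ndarray, step: int = 2) -> np.ndarray:
    """Replace each step-by-step neighbourhood with its mean
    (averaging multiple pixels); block sides must be divisible by step."""
    h, w = block.shape
    return block.reshape(h // step, step, w // step, step).mean(axis=(1, 3))
```

Interval sampling is the cheaper of the two; averaging acts as a low-pass filter and is more robust to noise at the cost of extra arithmetic, which is presumably why the claim leaves both options open.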
32. The device according to any one of claims 23 to 31, wherein the reconstructed image blocks used for matching include at least two reconstructed image blocks; and
the down-sampling unit is further configured to:
down-sample the reconstructed image data of the at least two reconstructed image blocks using an identical down-sampling ratio.
33. The device according to any one of claims 23 to 32, wherein the obtaining unit is further configured to:
correct an initial motion vector of the current image block based on the matching result, to obtain the motion vector of the current image block.
34. The device according to any one of claims 23 to 32, wherein the obtaining unit is further configured to:
obtain initial motion vectors corresponding to the current image block; and
determine, for the initial motion vectors, the reconstructed image blocks used for matching.
35. The device according to claim 33, wherein the initial motion vectors include a first motion vector and a second motion vector; and
the matching unit is further configured to:
generate a template based on the down-sampled reconstructed image data of a first reconstructed image block and the down-sampled reconstructed image data of a second reconstructed image block, wherein the first reconstructed image block corresponds to the first motion vector and belongs to a first frame, and the second reconstructed image block corresponds to the second motion vector and belongs to a second frame; and
perform matching based on the template and the down-sampled reconstructed image data, to obtain a matching result.
36. The device according to claim 35, wherein the matching unit is further configured to:
match the down-sampled reconstructed image data of N third reconstructed image blocks against the template respectively, wherein the N third reconstructed image blocks correspond to N third motion vectors and belong to the first frame; and
match the down-sampled reconstructed image data of M fourth reconstructed image blocks against the template respectively, wherein the M fourth reconstructed image blocks correspond to M fourth motion vectors and belong to the second frame; and
the obtaining unit is further configured to:
select, based on the matching results, one third motion vector from the N third motion vectors and one fourth motion vector from the M fourth motion vectors, wherein the one third motion vector and the one fourth motion vector serve as the motion vectors of the current image block, or are used for determining the motion vector of the current image block.
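Claims 35 and 36 describe template matching: a template is built from the two initially predicted blocks, then matched against candidate blocks in each reference frame. A sketch under stated assumptions: the template is a plain average and the cost is SAD, neither of which the claims fix; all names are mine.

```python
import numpy as np

def make_template(first_block_ds: np.ndarray,
                  second_block_ds: np.ndarray) -> np.ndarray:
    """Template from the down-sampled first and second reconstructed
    blocks (claim 35); a plain average is one possible combination."""
    return (first_block_ds + second_block_ds) / 2.0

def best_match(template: np.ndarray, candidates_ds: list) -> int:
    """Index of the down-sampled candidate block with the lowest
    sum of absolute differences against the template (claim 36)."""
    costs = [float(np.abs(template - c).sum()) for c in candidates_ds]
    return int(np.argmin(costs))
```

With N candidates in the first frame and M in the second, `best_match` runs once per frame, and the two winning indices select the third and fourth motion vectors of claim 36.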
37. The device according to claim 36, wherein the N third motion vectors include the first motion vector, and the M fourth motion vectors include the second motion vector.
38. The device according to claim 36 or 37, wherein at least some of the N third motion vectors are obtained by offsetting the first motion vector, and at least some of the M fourth motion vectors are obtained by offsetting the second motion vector.
39. The device according to any one of claims 36 to 38, wherein N is equal to M.
40. The device according to any one of claims 35 to 39, wherein the first frame is a forward frame of the current image block and the second frame is a backward frame of the current image block; or
the first frame is a forward frame of the current image block and the second frame is a forward frame of the current image block.
41. The device according to claim 34, wherein the initial motion vectors include K fifth motion vectors, and the matching unit is further configured to:
match the down-sampled reconstructed image data of the reconstructed image blocks neighboring K fifth reconstructed image blocks against the down-sampled reconstructed image data of the reconstructed image blocks neighboring the current image block respectively, to obtain the matching result, wherein the K fifth reconstructed image blocks correspond one-to-one to the K fifth motion vectors; and
the obtaining unit is further configured to:
select, based on the matching result, one fifth motion vector from the K fifth motion vectors as the motion vector of the current image block, or for determining the motion vector of the current image block.
42. The device according to claim 34, wherein the initial motion vectors include W sixth motion vectors; and
the matching unit is further configured to:
for each motion vector pair of the W motion vector pairs and the corresponding two reconstructed image blocks, match the down-sampled reconstructed image data of one of the reconstructed image blocks against the down-sampled reconstructed image data of the other reconstructed image block, to obtain the matching result, wherein each motion vector pair comprises one sixth motion vector and one seventh motion vector determined based on the sixth motion vector; and
the obtaining unit is further configured to:
select one motion vector pair based on the matching results corresponding to the W motion vector pairs, wherein the sixth motion vector of the selected motion vector pair serves as the motion vector of the current image block, or is used for determining the motion vector of the current image block.
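Claims 42 and 43 describe bilateral matching: each candidate sixth motion vector is paired with a seventh motion vector derived from it, and the pair whose two down-sampled blocks agree best wins. A sketch, with two illustrative choices the claims do not mandate: SAD as the cost, and a mirrored seventh vector as the continuous-trajectory derivation (which assumes equal temporal distances to the two reference frames).

```python
import numpy as np

def mirror_mv(mv6: tuple) -> tuple:
    """Seventh motion vector under a continuous motion trajectory:
    the sixth vector mirrored (equal reference distances assumed)."""
    return (-mv6[0], -mv6[1])

def select_pair(pairs: list) -> tuple:
    """pairs: (mv6, fwd_block_ds, bwd_block_ds) triples, one triple per
    motion vector pair. Returns the mv6 of the pair whose forward and
    backward down-sampled blocks match best (lowest SAD, claim 42)."""
    costs = [float(np.abs(f - b).sum()) for _, f, b in pairs]
    return pairs[int(np.argmin(costs))][0]
```

Because both blocks of a pair are down-sampled with the same ratio (claim 32), their pixel grids stay aligned and the cost remains comparable across candidate pairs.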
43. The device according to claim 42, wherein the seventh motion vector is determined based on the sixth motion vector under the assumption that the motion trajectory is continuous.
44. The device according to claim 42 or 43, wherein the sixth reconstructed image block belongs to a forward frame of the frame to which the current image block belongs, and the seventh reconstructed image block belongs to a backward frame of the frame to which the current image block belongs.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/081651 WO2019191889A1 (en) | 2018-04-02 | 2018-04-02 | Method and device for video processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110337810A true CN110337810A (en) | 2019-10-15 |
CN110337810B CN110337810B (en) | 2022-01-14 |
Family
ID=68099798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880012518.3A Expired - Fee Related CN110337810B (en) | 2018-04-02 | 2018-04-02 | Method and apparatus for video processing |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110337810B (en) |
WO (1) | WO2019191889A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111462190B (en) * | 2020-04-20 | 2023-11-17 | 海信集团有限公司 | Intelligent refrigerator and food material input method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6968011B2 (en) * | 2001-11-08 | 2005-11-22 | Renesas Technology Corp. | Motion vector detecting device improved in detection speed of motion vectors and system employing the same devices |
CN101605262A (en) * | 2009-07-09 | 2009-12-16 | 杭州士兰微电子股份有限公司 | Variable block size motion prediction method and apparatus |
CN102067601A (en) * | 2008-04-11 | 2011-05-18 | 汤姆森特许公司 | Methods and apparatus for template matching prediction (TMP) in video encoding and decoding |
US20140176740A1 (en) * | 2012-12-21 | 2014-06-26 | Samsung Techwin Co., Ltd. | Digital image processing apparatus and method of estimating global motion of image |
WO2015009132A1 (en) * | 2013-07-19 | 2015-01-22 | Samsung Electronics Co., Ltd. | Hierarchical motion estimation method and apparatus based on adaptive sampling |
CN106454349A (en) * | 2016-10-18 | 2017-02-22 | 哈尔滨工业大学 | Motion estimation block matching method based on H.265 video coding |
CN107431820A (en) * | 2015-03-27 | 2017-12-01 | 高通股份有限公司 | Motion vector derivation in video coding |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010016010A1 (en) * | 2000-01-27 | 2001-08-23 | Lg Electronics Inc. | Apparatus for receiving digital moving picture |
EP1662800A1 (en) * | 2004-11-30 | 2006-05-31 | Humax Co., Ltd. | Image down-sampling transcoding method and device |
CN101459842B (en) * | 2008-12-17 | 2011-05-11 | 浙江大学 | Decoding method and apparatus for spatial down-sampling |
CN102647594B (en) * | 2012-04-18 | 2014-08-20 | 北京大学 | Integer pixel precision motion estimation method and system for same |
CN102790884B (en) * | 2012-07-27 | 2016-05-04 | 上海交通大学 | Search method based on hierarchical motion estimation and system for implementing the same |
CN106210449B (en) * | 2016-08-11 | 2020-01-07 | 上海交通大学 | Multi-information fusion frame rate up-conversion motion estimation method and system |
2018
- 2018-04-02 CN CN201880012518.3A patent/CN110337810B/en not_active Expired - Fee Related
- 2018-04-02 WO PCT/CN2018/081651 patent/WO2019191889A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
ZHAO WANG; JUNCHENG MA; FALEI LUO; SIWEI MA: "Adaptive motion vector resolution prediction in block-based video coding", 2015 Visual Communications and Image Processing (VCIP) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113329228A (en) * | 2021-05-27 | 2021-08-31 | Hangzhou Langhe Technology Co., Ltd. | Video encoding method, decoding method, device, electronic device and storage medium |
CN113329228B (en) * | 2021-05-27 | 2024-04-26 | Hangzhou NetEase Zhiqi Technology Co., Ltd. | Video encoding method, decoding method, device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019191889A1 (en) | 2019-10-10 |
CN110337810B (en) | 2022-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6977068B2 (en) | Motion vector refinement for multi-reference prediction | |
CN113612994B (en) | Method for video coding and decoding with affine motion compensation | |
CN107925758B (en) | Inter-frame prediction method and apparatus in video coding system | |
WO2017148345A1 (en) | Method and apparatus of video coding with affine motion compensation | |
CA3079646C (en) | Predictive encoding method, predictive encoding device, and predictive encoding program of motion vector, and, predictive decoding method, predictive decoding device, and predictive decoding program of motion vector | |
TW201739252A (en) | Method and apparatus of video coding with affine motion compensation | |
CN110651477B (en) | Apparatus and method for determining motion vector of prediction block | |
JP6945654B2 (en) | Methods and Devices for Encoding or Decoding Video Data in FRUC Mode with Reduced Memory Access | |
CN102223542A (en) | Method for performing localized multi-hypothesis prediction during video coding of a coding unit, and associated apparatus | |
GB2519514A (en) | Method and apparatus for displacement vector component prediction in video coding and decoding | |
CN111246212B (en) | Geometric partitioning mode prediction method and device based on encoding and decoding end, storage medium and terminal | |
US20150271516A1 (en) | Video coding apparatus and video coding method | |
CN110337810A (en) | Method for video processing and equipment | |
CN110149512A (en) | Inter-prediction accelerated method, control device, electronic device, computer storage medium and equipment | |
AU2018267557B2 (en) | Predictive encoding method, predictive encoding device, and predictive encoding program of motion vector, and, predictive decoding method, predictive decoding device, and predictive decoding program of motion vector |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220114 ||