CN102164284A - Video decoding method and system - Google Patents

Video decoding method and system

Info

Publication number
CN102164284A
CN102164284A (application CN2010101148474A)
Authority
CN
China
Prior art keywords
value
pixel
brightness
macro block
chroma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010101148474A
Other languages
Chinese (zh)
Inventor
谭志明
白向晖
洲镰康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to CN2010101148474A
Publication of CN102164284A
Pending legal-status Critical Current

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides a video decoding method and system. The video decoding method comprises the following steps: performing variable-length decoding and inverse scanning on the coded video data of one frame picture by using a central processing unit (CPU), so as to obtain variable-length-decoded and inverse-scanned video data; and then performing inverse quantization, inverse discrete cosine transform, motion compensation and color space conversion on the variable-length-decoded and inverse-scanned video data by using a programmable graphics processing unit (GPU), so as to obtain fully decoded video data.

Description

Video decoding method and system
Technical field
The present invention relates to the field of video processing, and more particularly to a video decoding method and system that use a programmable graphics processing unit to perform video decoding.
Background art
In a conventional desktop or handheld device, video decoding is usually performed by a central processing unit (CPU). For a system without a powerful CPU, decoding high-definition (HD) video is very difficult. One solution is to add a dedicated video decoding chip to the system, but this is too costly; another solution is to provide a graphics processing unit (GPU) in the system and let the GPU carry out part of the decoding task.
Using a GPU to accelerate HD video decoding has some history: graphics chip vendors provide GPU-based HD video decoding support for desktop devices. However, there is currently no method or system that accelerates HD video decoding for handheld embedded devices.
Summary of the invention
In view of one or more of the above problems, the invention provides a novel video decoding method and system.
A video decoding method according to an embodiment of the invention comprises: performing variable-length decoding and inverse scanning on the coded video data of one frame picture by using a CPU, to obtain variable-length-decoded and inverse-scanned video data; and performing inverse quantization, inverse discrete cosine transform, motion compensation and color space conversion on the variable-length-decoded and inverse-scanned video data by using a programmable GPU, to obtain fully decoded video data.
A video decoding system according to an embodiment of the invention comprises: a CPU configured to perform variable-length decoding and inverse scanning on the video data of one frame picture, to obtain variable-length-decoded and inverse-scanned video data; and a programmable GPU configured to perform inverse quantization, inverse discrete cosine transform, motion compensation and color space conversion on the variable-length-decoded and inverse-scanned video data, to obtain fully decoded video data.
The invention uses the programmable GPU to carry out the IQ, IDCT, MC and CSC processing of the video data to be decoded, and leaves only the variable-length decoding (VLD) and inverse scanning (IS) to the CPU. This greatly relieves the computational load of the CPU and thus increases the video decoding speed.
Brief description of the drawings
The invention can be better understood from the following description of specific embodiments of the invention in conjunction with the accompanying drawings, in which:
Fig. 1 shows a logical diagram of a video decoding method and system according to an embodiment of the invention;
Fig. 2 shows a programmable graphics pipeline conforming to OpenGL ES 2.0 (an embedded 3D graphics standard) according to an embodiment of the invention;
Fig. 3 shows a logical diagram of the IQ shader implemented by the fragment shader in Fig. 2;
Fig. 4 shows a logical diagram of the IDCT shader implemented by the fragment shader in Fig. 2;
Fig. 5 shows a schematic diagram of the IDCT processing performed by the IDCT shader;
Fig. 6 shows a logical diagram of the MC shader implemented by the fragment shader in Fig. 2;
Fig. 7 shows a logical diagram of the CSC shader implemented by the fragment shader in Fig. 2.
Detailed description of the embodiments
Features and exemplary embodiments of various aspects of the invention are described in detail below. The following description contains numerous specific details in order to provide a thorough understanding of the invention. It will be apparent to those skilled in the art, however, that the invention can be practiced without some of these details. The description of the embodiments is given only to provide a clearer understanding of the invention by way of example. The invention is in no way limited to any specific configuration or algorithm proposed below, but covers any modification, replacement and improvement of the relevant elements, components and algorithms without departing from the spirit of the invention.
Since MPEG-2 video decoding processes macroblocks (MBs) in left-to-right, top-to-bottom order, the coordinate system of an MPEG-2 video decoding system has its x axis pointing right and its y axis pointing down. In OpenGL ES 2.0 (an industry-wide application programming interface defined by the Khronos Group in March 2007, which greatly improves the 3D graphics rendering speed of various consumer electronics devices and brings fully programmable 3D graphics to embedded systems), the x axis points right and the y axis points up.
Fig. 1 shows a logical diagram of a video decoding method and system according to an embodiment of the invention. As shown in Fig. 1, the video decoding method and system logically comprise variable-length decoding (VLD), inverse scanning (IS), inverse quantization (IQ), inverse discrete cosine transform (IDCT), motion compensation (MC) and color space conversion (CSC) processing 102~114. All of these processes except CSC follow the MPEG-2 standard. Specifically, the video decoding method and system decode the video data of one frame (or field) picture to be decoded, and send the decoded video data to a display to show the picture. In other words, the video decoding method and system decode the video data stream picture by picture. In Fig. 1, the VLD and IS processing is done in the CPU 116, and the VLD- and IS-processed video data is stored in the application memory 120. The application memory may also be called CPU memory, user-space memory or client memory. After the VLD- and IS-processed video data has been transferred from the application memory to the graphics memory 122, the IQ, IDCT, MC and CSC processing is done in the GPU 118 (mainly by the fragment shader in the GPU). In graphics terminology, the graphics memory is commonly referred to as host memory.
Because the IQ, IDCT, MC and CSC processing is performed on a per-pixel-block or per-pixel basis, and all pixel blocks or pixels of a frame picture undergo the same processing, the IQ, IDCT, MC and CSC processing can be implemented in the GPU. In addition, since these processes are based on vectors or matrices, the decoding speed can be increased.
Fig. 2 shows a programmable graphics pipeline conforming to OpenGL ES 2.0 according to an embodiment of the invention. The pipeline comprises a vertex shader 202, a primitive assembly unit 204, a rasterization unit 206, a fragment shader 208 and a per-fragment operations unit 210. The vertex shader and the fragment shader are programmable, while the other units have fixed functions. A GPU implemented with programmable shaders is called a programmable GPU.
The processing of the programmable graphics pipeline shown in Fig. 2 is described in detail below. In Fig. 2, the coordinates of a frame picture (the frame picture can be regarded as a rectangular block), that is, the position coordinates and texture coordinates of the four vertices of the frame picture, are sent to the vertex buffer (used by the vertex shader 202). The vertex shader processes the four vertices of the frame picture one by one. In OpenGL ES 2.0, the processing carried out by the vertex shader can include geometric operations on the vertex coordinates such as translation, rotation and perspective transformation, computing lighting values for the vertices, generating texture coordinates, and so on; in the present invention, however, these operations are not used, and the only thing to be done is to keep the vertex position coordinates unchanged. The operations performed by the primitive assembly unit include clipping, perspective division and viewport transformation, which can scale the frame picture to the desired size. The rasterization unit rasterizes the primitives: the two triangle primitives corresponding to the frame picture are filled with fragments, where a fragment comprises a pixel and its associated information. After the processing of the primitive assembly unit and the rasterization unit, the graphics pipeline generates the fragments to be processed by the fragment shader. When the fragment shader is used, the video data of the frame picture is sent as texture objects to the texture memory (used by the fragment shader). The fragment shader performs the IQ, IDCT, MC and CSC processing on each fragment, and each fragment is then sent to the per-fragment operations unit. The per-fragment operations unit performs the scissor test, stencil test, depth test, blending and dithering on each fragment; these operations turn the fragments into visible pixels for display.
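As a concrete illustration of the pass-through role of the vertex shader described above, the following GLSL ES 1.00 sketch simply forwards each vertex position unchanged, together with its texture coordinate; the attribute and varying names are assumptions and are not taken from the patent.

// Minimal pass-through vertex shader sketch (GLSL ES 1.00); names are assumptions.
attribute vec4 aPosition;   // position of one of the four corners of the picture quad
attribute vec2 aTexCoord;   // texture coordinate of that corner
varying vec2 vTexCoord;     // handed on to the fragment shader

void main() {
    vTexCoord = aTexCoord;
    gl_Position = aPosition;   // keep the vertex position unchanged, as described above
}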
At each stage of the above graphics pipeline, the generated results are stored in a framebuffer. A framebuffer may be a graphics buffer associated with the surface to be drawn or with a texture object. The texture objects in the framebuffer can also be used as objects in the texture memory. The OpenGL ES 2.0 API controls the vertex buffer objects, the shaders in the pipeline, the texture objects in the texture memory and the framebuffers.
Because the fragment shader can access the graphics memory more flexibly and has more computational resources, in the video decoding processing according to the embodiment of the invention the IQ, IDCT, MC and CSC processing is done by the fragment shader. Each frame picture is processed as a rectangle. Each rectangle comprises two triangles, and the two triangles can be drawn as a triangle fan or a triangle strip.
Fig. 3 shows a logical diagram of the IQ shader implemented by the fragment shader in Fig. 2. When the fragment shader acts as the IQ shader and performs IQ processing on the luma/chroma value of a pixel of the frame picture to be decoded, the required information includes: the luma value, chroma value, quantization matrix value and IQ sign value of the pixel, and the MB type indication value and quantization scale value of the MB containing the pixel. This information required by the IQ shader is contained in the VLD- and IS-processed video data and is transferred from the application memory to the graphics memory via the RGBA (red, green, blue, alpha) channels (each channel has 8 bits, and these 8-bit data are unsigned, i.e. always non-negative). The luma value and chroma value of each pixel of the frame picture to be decoded can thus be sent to the IQ shader in RGBA format.
When the luma or chroma value of a pixel of the frame picture to be decoded is greater than 255 (all 8 bits set to 1), so that it cannot be transferred from the application memory to the graphics memory via an RGBA channel, the luma or chroma value of the pixel can be split into a high value and a low value, where the high value is the integer part of the quotient of the luma or chroma value divided by 255, and the low value is the remainder of the luma or chroma value divided by 255; each of the high value and the low value is then no greater than 255. In the present embodiment, the luma or chroma value, quantization matrix value and IQ sign value of any pixel of the frame picture to be decoded can be packed in RGBA format into the following vector and transferred from the application memory to the graphics memory: (high value, low value, quantization matrix value, IQ sign value), where each component occupies 8 bits. For the IQ sign value, 0 means the IQ sign is positive and 1 means the IQ sign is negative. The MB type indication value and quantization scale value of the MB containing any pixel of the frame picture to be decoded can be packed into the following vector and transferred from the application memory to the graphics memory: (MB type indication value, quantization scale value), where each component occupies 8 bits. For the MB type indication value, 0 indicates an intra-coded MB and 1 indicates an MB coded in another way.
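As a minimal worked example of the high/low split just described (assuming, as above, that the shader sees the components rescaled back to the 0~255 range): a value of 300 is packed as high value 1 (integer quotient of 300/255) and low value 45 (remainder), and is rebuilt as 1 × 255 + 45 = 300.

// Rebuild a value from its (high, low) pair, following the packing described above.
float unpackHighLow(float high, float low) {
    return high * 255.0 + low;   // e.g. high = 1.0, low = 45.0 -> 300.0
}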
In the present embodiment, the luma value (Y), red chroma value (Cr) and blue chroma value (Cb) of each pixel of the frame picture to be decoded are packed in RGBA format and transferred from the application memory to the graphics memory. For each pixel, its Y value, Cr value and Cb value are processed separately. At the same time, the MB type indication value and quantization scale value of the MB containing each pixel are also transferred from the application memory to the graphics memory. For ease of explanation, the luma value, red chroma value and blue chroma value of a pixel are denoted here as (Y0, Cr0, Cb0).
Specifically, for any pixel of the frame picture to be decoded, the IQ shader samples the Y value, Cr value and Cb value of the pixel from the graphics memory. As described above, what is sampled is the vector (high value, low value, quantization matrix value, IQ sign value) associated with the Y value, Cr value or Cb value of the pixel.
The inverse quantization process is described below taking the Y value of a pixel as an example. The process by which the IQ shader inverse-quantizes the Y value of the pixel comprises: 1) combining the high value and low value of the vector associated with the Y value of the pixel to compute the absolute value of the Y value of the pixel, i.e. |Y| = high value × 255 + low value; 2) when the quantization matrix value of the pixel equals 0, taking the absolute value of the Y value directly as the absolute value of the inverse-quantized Y value of the pixel, and when the quantization matrix value of the pixel is not 0, solving the absolute value of the inverse-quantized Y value of the pixel from the following equation: tempData = ((data * 2.0 + mb_type) * qmatVal * scale) / 32.0, where tempData denotes the absolute value of the inverse-quantized Y value of the pixel, data denotes the absolute value of the Y value of the pixel, mb_type denotes the MB type indication value of the MB containing the pixel, qmatVal denotes the quantization matrix value of the pixel, and scale denotes the quantization scale value of the MB containing the pixel; 3) taking the IQ sign value of the pixel as the sign of the inverse-quantized Y value of the pixel. Because the absolute value of the inverse-quantized Y value of the pixel may be greater than 255, the inverse-quantized Y value of the pixel is packed here into the following vector: (high value, low value, 0.0, IQ sign value), where the high value is the integer part of the quotient of the absolute value of the inverse-quantized Y value divided by 255, and the low value is the remainder of the absolute value of the inverse-quantized Y value divided by 255.
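The following GLSL ES 1.00 fragment shader is a minimal sketch of the IQ step just described. The texture and uniform names, the exact layout of the per-macroblock texture, and the rescaling of the normalized 8-bit RGBA texels back to the 0~255 range are assumptions; only the arithmetic follows the text.

precision mediump float;

varying vec2 vTexCoord;          // texture coordinate of the current pixel
uniform sampler2D uCoeffTex;     // packed (high, low, qmatVal, iqSign) for Y, Cr or Cb
uniform sampler2D uMbTex;        // packed (mbType, scale, 0, 0) per macroblock (assumed layout)

void main() {
    vec4 c  = texture2D(uCoeffTex, vTexCoord) * 255.0;   // back to 0..255 per component
    vec4 mb = texture2D(uMbTex, vTexCoord) * 255.0;

    float data    = c.r * 255.0 + c.g;   // |Y| = high * 255 + low
    float qmatVal = c.b;
    float iqSign  = c.a;                 // 0 = positive, 1 = negative
    float mbType  = mb.r;                // 0 = intra-coded MB, 1 = other
    float scale   = mb.g;

    float tempData = data;               // quantization matrix value 0: keep the value unchanged
    if (qmatVal != 0.0) {
        tempData = ((data * 2.0 + mbType) * qmatVal * scale) / 32.0;
    }

    // Repack as (high, low, 0.0, IQ sign), each component normalized back to [0, 1].
    float high = floor(tempData / 255.0);
    float low  = tempData - high * 255.0;
    gl_FragColor = vec4(high, low, 0.0, iqSign) / 255.0;
}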
It should be noted that the inverse quantization of the Cr and Cb values of a pixel can also be carried out by the above process. After finishing the inverse quantization of the Y value, Cr value and Cb value of a pixel, the IQ shader stores the inverse-quantized Y value, Cr value and Cb value of the pixel in the framebuffer. For ease of explanation, the inverse-quantized Y value, Cr value and Cb value of a pixel are denoted here as (Y1, Cr1, Cb1).
In the present embodiment, for convenience of processing, for any pixel the IQ shader packs the absolute value of the inverse-quantized luma/chroma value of the pixel and the inverse-quantization sign value of the pixel into the following form: (high value, low value, 0.0, inverse-quantization sign value), where the high value is the integer part of the quotient of the absolute value of the inverse-quantized luma/chroma value divided by 255, and the low value is the remainder of the absolute value of the inverse-quantized luma/chroma value divided by 255.
Fig. 4 shows a logical diagram of the IDCT shader implemented by the fragment shader in Fig. 2. The IDCT shader has a logical structure similar to that of the IQ shader. When the fragment shader acts as the IDCT shader, it uses the inverse-quantized luma and chroma values of each pixel that were generated by the IQ shader and stored in the framebuffer. For each pixel, its Y1 value, Cr1 value and Cb1 value are processed separately.
In the present embodiment, the frame picture to be decoded consists of a plurality of 8x8 pixel blocks. The position of any pixel of any one of the 8x8 pixel blocks within that 8x8 pixel block is represented by the position coordinates of the pixel, and the position coordinates of a pixel within one 8x8 pixel block are independent of the position coordinates of pixels within the other 8x8 pixel blocks.
For example, the inverse-quantized luma/chroma values of the pixels of any one 8x8 pixel block A form, according to the position of each pixel within the 8x8 pixel block A, an 8x8 luma/chroma value matrix D, and the transposed matrix of the IDCT coefficient matrix for the 8x8 pixel block A is denoted C^T. (Both matrices appear only as images in the original document; they are written there in 2x2 block form with sub-blocks D0~D3 and C0~C3 respectively.) Each of D0~D3 and C0~C3 has 4x4 elements.
For example, the IDCT shader can obtain the inverse-DCT-transformed luma value of the pixel at position coordinates (dx, dy) in the 8x8 pixel block A by the following processing (see Fig. 5): 1) computing the dot product of each row (i.e. row 0 to row 7) of the 8x8 luma value matrix D, formed by the inverse-quantized Y values of the pixels of the 8x8 pixel block A, with each column of the transposed matrix C^T of the IDCT coefficient matrix C for the 8x8 pixel block A, to obtain an intermediate matrix TR; 2) computing the dot product of column dy of the intermediate matrix TR with row dx of the transposed matrix C^T of the IDCT coefficient matrix C, to obtain the inverse-DCT-transformed luma/chroma value of the pixel at position coordinates (dx, dy). For ease of explanation, the inverse-DCT-transformed luma value, red chroma value and blue chroma value of a pixel are denoted here as (Y2, Cr2, Cb2).
The inverse-DCT-transformed luma/chroma value of each pixel of the frame picture to be decoded is limited to the range [-256, 255]. In the present embodiment, when the inverse-DCT-transformed luma/chroma value of a pixel is any value other than -256, the IDCT shader represents the inverse-DCT-transformed luma/chroma value of the pixel as: (absolute value of the inverse-DCT-transformed luma/chroma value, 0.0, 0.0, sign of the inverse-DCT-transformed luma/chroma value). When the inverse-DCT-transformed luma/chroma value of a pixel is -256, the IDCT shader represents it as: (1.0, 1.0, 0.0, 1.0).
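The following GLSL ES 1.00 fragment shader is a minimal sketch of this IDCT step. It assumes that the IQ pass wrote its output to a texture in the (high, low, 0.0, sign) packing described above, and that the transposed coefficient matrix C^T is supplied as an 8x8 single-channel texture; those choices, and all names, are assumptions rather than details given in the patent.

precision highp float;

varying vec2 vTexCoord;
uniform sampler2D uIqTex;    // IQ output, packed as (high, low, 0.0, sign) per texel
uniform sampler2D uCtTex;    // 8x8 transposed IDCT coefficient matrix C^T, one texel per element
uniform vec2 uPicSize;       // picture size in pixels

// Rebuild a signed inverse-quantized value from the (high, low, 0, sign) packing.
float unpackIq(vec2 coord) {
    vec4 t = texture2D(uIqTex, coord) * 255.0;
    float v = t.r * 255.0 + t.g;
    return (t.a > 0.5) ? -v : v;
}

// Element (row, col) of C^T, stored one element per texel of an 8x8 texture.
float ct(float row, float col) {
    return texture2D(uCtTex, (vec2(col, row) + 0.5) / 8.0).r;
}

void main() {
    vec2 pixel   = floor(vTexCoord * uPicSize);
    vec2 inBlock = mod(pixel, 8.0);          // (dx, dy) inside the 8x8 block
    vec2 origin  = pixel - inBlock;          // top-left pixel of the block

    // value(dx, dy) = dot(column dy of TR, row dx of C^T), with TR = rows of D . columns of C^T
    float acc = 0.0;
    for (int r = 0; r < 8; r++) {
        float tr = 0.0;                      // TR(r, dy)
        for (int j = 0; j < 8; j++) {
            float d = unpackIq((origin + vec2(float(j), float(r)) + 0.5) / uPicSize);  // D(r, j)
            tr += d * ct(float(j), inBlock.y);                                         // C^T(j, dy)
        }
        acc += tr * ct(inBlock.x, float(r));                                           // C^T(dx, r)
    }

    // Limit to [-256, 255] and repack as described above.
    acc = clamp(acc, -256.0, 255.0);
    if (acc == -256.0) {
        gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);
    } else {
        gl_FragColor = vec4(abs(acc) / 255.0, 0.0, 0.0, (acc < 0.0) ? 1.0 / 255.0 : 0.0);
    }
}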
Fig. 6 shows a logical diagram of the MC shader implemented by the fragment shader in Fig. 2. The logical structure of the MC shader is more complex than that of the IQ shader and the IDCT shader. When the fragment shader acts as the MC shader, it uses the inverse-DCT-transformed luma and chroma values of each pixel that were generated by the IDCT shader and stored in the framebuffer. For each pixel, its Y2 value, Cr2 value and Cb2 value are processed separately. For ease of explanation, the motion-compensated luma value, red chroma value and blue chroma value of a pixel are denoted here as (Y3, Cr3, Cb3).
When the fragment shader acts as the MC shader and performs MC processing on the luma/chroma value of a pixel of the frame picture to be decoded, the required information includes: information on whether each macroblock of the frame picture to be decoded is a predictive-coded macroblock, the reference direction information of each predictive-coded macroblock of the frame picture to be decoded, the motion vector information of each macroblock of the frame picture to be decoded, information on whether each macroblock of the frame picture to be decoded is a field-DCT-coded macroblock or a frame-DCT-coded macroblock, the field selection information of the frame picture to be decoded, and the field selection information of the reference frame of each macroblock of the frame picture to be decoded. This information required by the MC shader is contained in the VLD- and IS-processed video data and is transferred from the application memory to the graphics memory via the RGBA channels.
For any pixel of the frame picture to be decoded, if the macroblock containing the pixel is not a predictive-coded macroblock, the MC shader obtains the motion-compensated luma/chroma value of the pixel by adding 128 to the inverse-DCT-transformed luma/chroma value of the pixel.
For any pixel of the frame picture to be decoded, if the macroblock containing the pixel is a predictive-coded macroblock, the MC shader can motion-compensate the inverse-DCT-transformed luma/chroma value of the pixel by the following processing: 1) according to the reference direction information, the motion vector information, the field selection information and the motion-vector field selection information of the macroblock containing the pixel, obtaining the already-decoded luma/chroma value of another pixel that lies in the reference frame of the macroblock containing the pixel and corresponds to the pixel; 2) adding the already-decoded luma/chroma value of the other pixel to the inverse-DCT-transformed luma/chroma value of the pixel, to obtain the motion-compensated luma/chroma value of the pixel.
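The following GLSL ES 1.00 fragment shader sketches both MC branches just described. How the per-macroblock information (prediction flag, motion vector, field selection) is laid out in a texture is not specified by the patent, so the layout below — a prediction flag in the red channel and a biased 8-bit motion vector in the green/blue channels — is purely an assumption, and field/frame DCT handling and field selection are omitted for brevity.

precision mediump float;

varying vec2 vTexCoord;
uniform sampler2D uIdctTex;   // IDCT output, packed as (|value|, 0, 0, sign) per texel
uniform sampler2D uRefTex;    // already-decoded reference picture (normalized 0..255 values)
uniform sampler2D uMbTex;     // assumed per-macroblock layout: (isPredicted, mvX+128, mvY+128, 0)
uniform vec2 uPicSize;        // picture size in pixels

// Rebuild a signed IDCT value from the packing described above.
float unpackIdct(vec4 t) {
    t *= 255.0;
    if (t.g > 0.5) return -256.0;          // special encoding (1.0, 1.0, 0.0, 1.0)
    return (t.a > 0.5) ? -t.r : t.r;
}

void main() {
    float residual = unpackIdct(texture2D(uIdctTex, vTexCoord));
    vec4 mb = texture2D(uMbTex, vTexCoord) * 255.0;

    float value;
    if (mb.r < 0.5) {
        // Not a predictive-coded macroblock: add 128 to the IDCT value.
        value = residual + 128.0;
    } else {
        // Predictive-coded macroblock: fetch the motion-shifted pixel of the reference
        // picture that corresponds to this pixel and add the IDCT value to it.
        vec2 mv = mb.gb - 128.0;                                   // assumed bias of 128
        float ref = texture2D(uRefTex, vTexCoord + mv / uPicSize).r * 255.0;
        value = ref + residual;
    }

    gl_FragColor = vec4(vec3(clamp(value, 0.0, 255.0) / 255.0), 1.0);
}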
Fig. 7 shows a logical diagram of the CSC shader implemented by the fragment shader in Fig. 2. The CSC shader has a logical structure similar to that of the IQ shader and the IDCT shader. Since the MC result may not be the picture to be drawn directly (for example, for I frames and P frames), the MC result needs to be reordered and updated in the graphics memory. If a picture is an I frame (intra-coded frame) or a P frame (forward-predicted frame), then the I frame or P frame preceding it is used as the frame to be drawn; if a picture is a B frame (bidirectionally predicted frame), then it is itself set as the frame to be drawn. For the frame to be drawn, the luma/chroma values of each of its pixels need to undergo color space conversion by the following processing: the vector composed of the motion-compensated luma value and chroma values of a pixel and 1 is multiplied by the color space conversion matrix to obtain the color components of the pixel. It should be noted that the color space conversion matrix used for all pixels of a picture is one and the same matrix.
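The following GLSL ES 1.00 fragment shader is a minimal sketch of the CSC step: the vector (Y, Cr, Cb, 1) of a motion-compensated pixel is multiplied by a color space conversion matrix supplied as a uniform. The patent gives neither the matrix entries nor the texture layout, so the per-plane textures and all names below are assumptions.

precision mediump float;

varying vec2 vTexCoord;
uniform sampler2D uYTex;      // motion-compensated luma plane
uniform sampler2D uCrTex;     // motion-compensated red chroma plane
uniform sampler2D uCbTex;     // motion-compensated blue chroma plane
uniform mat4 uCscMatrix;      // color space conversion matrix, identical for every pixel of the picture

void main() {
    float y  = texture2D(uYTex,  vTexCoord).r;
    float cr = texture2D(uCrTex, vTexCoord).r;
    float cb = texture2D(uCbTex, vTexCoord).r;

    // Color components = conversion matrix * (Y, Cr, Cb, 1).
    vec4 rgba = uCscMatrix * vec4(y, cr, cb, 1.0);
    gl_FragColor = vec4(rgba.rgb, 1.0);
}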
In summary, the invention provides a computer-graphics-based technique for decoding MPEG-2 video with a programmable graphics processing unit (GPU) on the basis of OpenGL ES 2.0 (an embedded 3D graphics standard). The invention can be used in handheld embedded devices equipped with a GPU that supports OpenGL ES 2.0. With the video decoding method and system provided by the invention, most of the high-definition video decoding task can be carried out on the GPU, and the computational load of the host CPU can be relieved.
The invention has been described above with reference to specific embodiments thereof, but those skilled in the art will understand that various modifications, combinations and changes can be made to these specific embodiments without departing from the spirit and scope of the invention as defined by the appended claims or their equivalents.
The steps may be executed in hardware or software as required. Note that steps may be added to, removed from, or modified in the flowcharts given in this specification without departing from the scope of the invention. In general, a flowchart is merely used to indicate one possible sequence of basic operations for realizing a function.
Embodiments of the invention may be realized using a programmed general-purpose digital computer, application-specific integrated circuits, programmable logic devices, field-programmable gate arrays, or optical, chemical, biological, quantum or nano-engineered systems, components and mechanisms. In general, the functions of the invention can be realized by any means known in the art. Distributed or networked systems, components and circuits may be used. The communication or transfer of data may be wired, wireless, or by any other means.
It will also be appreciated that, according to the needs of a particular application, one or more of the elements shown in the drawings may be realized in a more separated or more integrated manner, or even be removed or disabled in some cases. It is also within the spirit and scope of the invention to store a program or code that realizes the above-described methods in a machine-readable medium, so as to allow a computer to execute any of the above methods.
In addition, any signal arrows in the drawings should be considered exemplary rather than limiting, unless otherwise specifically indicated. Combinations of components or steps will also be considered as having been described where the terminology leaves the ability to separate or combine unclear.

Claims (26)

1. A video decoding method, comprising:
performing variable-length decoding and inverse scanning on the coded video data of one frame picture by using a central processing unit, to obtain variable-length-decoded and inverse-scanned video data; and
performing inverse quantization, inverse discrete cosine transform, motion compensation and color space conversion on the variable-length-decoded and inverse-scanned video data by using a programmable graphics processing unit, to obtain fully decoded video data.
2. The video decoding method according to claim 1, wherein the variable-length-decoded and inverse-scanned video data comprises the following information: the luma value, chroma value, quantization matrix value and inverse-quantization sign value of each pixel of the frame picture, and the macroblock type indication value and quantization scale value of each macroblock of the frame picture.
3. The video decoding method according to claim 2, wherein the luma/chroma value, quantization matrix value and inverse-quantization sign value of each pixel of the frame picture are packed into the following form: (first high value, first low value, quantization matrix value, inverse-quantization sign value), wherein the first high value is the integer part of the quotient of the luma/chroma value of the pixel divided by 255, and the first low value is the remainder of the luma/chroma value of the pixel divided by 255.
4. The video decoding method according to claim 2, wherein the macroblock type indication value and quantization scale value of each macroblock of the frame picture are packed into the following form: (macroblock type indication value, quantization scale value, 0.0, 0.0).
5. The video decoding method according to claim 2, wherein performing inverse quantization on the luma/chroma value of any pixel of the frame picture comprises:
judging whether the quantization matrix value of the pixel equals 0;
if it equals 0, keeping the luma/chroma value of the pixel unchanged, and otherwise performing inverse quantization on the absolute value of the luma/chroma value of the pixel according to the following equation: tempData = ((data * 2.0 + mb_type) * qmatVal * scale) / 32.0, wherein
tempData denotes the absolute value of the inverse-quantized luma/chroma value of the pixel, data denotes the absolute value of the luma/chroma value of the pixel, mb_type denotes the macroblock type indication value of the macroblock containing the pixel, qmatVal denotes the quantization matrix value of the pixel, and scale denotes the quantization scale value of the macroblock containing the pixel, a macroblock type indication value of 0 indicating an intra-coded macroblock and a macroblock type indication value of 1 indicating a macroblock coded in a manner other than intra coding.
6. The video decoding method according to claim 5, wherein performing inverse quantization on the luma/chroma value of any pixel of the frame picture further comprises:
taking the inverse-quantization sign value of the pixel as the sign of the inverse-quantized luma/chroma value of the pixel, and packing the absolute value of the inverse-quantized luma/chroma value of the pixel and the inverse-quantization sign value of the pixel into the following form: (second high value, second low value, 0.0, inverse-quantization sign value), wherein the second high value is the integer part of the quotient of the absolute value of the inverse-quantized luma/chroma value of the pixel divided by 255, and the second low value is the remainder of the absolute value of the inverse-quantized luma/chroma value of the pixel divided by 255.
7. The video decoding method according to claim 2, wherein the frame picture consists of a plurality of 8x8 pixel blocks, the position of any pixel of any one of the plurality of 8x8 pixel blocks within that 8x8 pixel block is represented by the position coordinates of the pixel, and the position coordinates of a pixel within the 8x8 pixel block are independent of the position coordinates of pixels within the other 8x8 pixel blocks of the plurality of 8x8 pixel blocks.
8. The video decoding method according to claim 7, wherein the inverse-quantized luma/chroma values of the pixels of the 8x8 pixel block form, according to the position of each pixel within the 8x8 pixel block, an 8x8 luma/chroma value matrix D, and wherein obtaining the inverse-DCT-transformed luma/chroma value of the pixel at position coordinates (dx, dy) in the 8x8 pixel block comprises:
computing the dot product of each row of the 8x8 luma/chroma value matrix D with each column of the transposed matrix C^T of the inverse discrete cosine transform coefficient matrix C, to obtain an intermediate matrix TR;
computing the dot product of column dy of the intermediate matrix TR with row dx of the transposed matrix C^T of the inverse discrete cosine transform coefficient matrix C, to obtain the inverse-DCT-transformed luma/chroma value of the pixel at position coordinates (dx, dy).
9. The video decoding method according to claim 8, wherein
when the inverse-DCT-transformed luma/chroma value of the pixel is any value other than -256, the inverse-DCT-transformed luma/chroma value of the pixel is represented as: (absolute value of the inverse-DCT-transformed luma/chroma value, 0.0, 0.0, sign of the inverse-DCT-transformed luma/chroma value), and
when the inverse-DCT-transformed luma/chroma value of the pixel is -256, the inverse-DCT-transformed luma/chroma value of the pixel is represented as: (1.0, 1.0, 0.0, 1.0).
10. The video decoding method according to claim 2, wherein the variable-length-decoded and inverse-scanned video data further comprises the following information: information on whether each macroblock of the frame picture is a predictive-coded macroblock, the reference direction information of each predictive-coded macroblock of the frame picture, the motion vector information of each macroblock of the frame picture, information on whether each macroblock of the frame picture is a field-DCT-coded macroblock or a frame-DCT-coded macroblock, the field selection information of the frame picture, and the field selection information of the reference frame of each macroblock of the frame picture.
11. The video decoding method according to claim 10, wherein, for any pixel of the frame picture, if the macroblock containing the pixel is not a predictive-coded macroblock, the motion-compensated luma/chroma value of the pixel is obtained by adding 128 to the inverse-DCT-transformed luma/chroma value of the pixel.
12. The video decoding method according to claim 10, wherein, for any pixel of the frame picture, if the macroblock containing the pixel is a predictive-coded macroblock, performing motion compensation on the inverse-DCT-transformed luma/chroma value of the pixel comprises:
obtaining, according to the reference direction information of the macroblock containing the pixel, the motion vector information of the macroblock containing the pixel, the field selection information of the frame picture and the field selection information of the reference frame of the macroblock containing the pixel, the already-decoded luma/chroma value of another pixel that lies in the reference frame of the macroblock containing the pixel and corresponds to the pixel;
adding the already-decoded luma/chroma value of the other pixel to the inverse-DCT-transformed luma/chroma value of the pixel, to obtain the motion-compensated luma/chroma value of the pixel.
13. The video decoding method according to claim 2, wherein performing color space conversion on the motion-compensated luma/chroma value of any pixel of the frame picture comprises:
multiplying the vector composed of the motion-compensated luma value and chroma values of the pixel and 1 by the color space conversion matrix, to obtain the color components of the pixel.
14. A video decoding system, comprising:
a central processing unit configured to perform variable-length decoding and inverse scanning on the coded video data of one frame picture, to obtain variable-length-decoded and inverse-scanned video data; and
a programmable graphics processing unit configured to perform inverse quantization, inverse discrete cosine transform, motion compensation and color space conversion on the variable-length-decoded and inverse-scanned video data, to obtain fully decoded video data.
15. The video decoding system according to claim 14, wherein the variable-length-decoded and inverse-scanned video data comprises the following information: the luma value, chroma value, quantization matrix value and inverse-quantization sign value of each pixel of the frame picture, and the macroblock type indication value and quantization scale value of each macroblock of the frame picture.
16. The video decoding system according to claim 15, wherein the central processing unit packs the luma/chroma value, quantization matrix value and inverse-quantization sign value of each pixel of the frame picture into the following form: (first high value, first low value, quantization matrix value, inverse-quantization sign value), wherein the first high value is the integer part of the quotient of the luma/chroma value of the pixel divided by 255, and the first low value is the remainder of the luma/chroma value of the pixel divided by 255.
17. The video decoding system according to claim 15, wherein the central processing unit packs the macroblock type indication value and quantization scale value of each macroblock of the frame picture into the following form: (macroblock type indication value, quantization scale value, 0.0, 0.0).
18. The video decoding system according to claim 15, wherein the programmable graphics processing unit performs inverse quantization on the luma/chroma value of any pixel of the frame picture by the following processing:
judging whether the quantization matrix value of the pixel equals 0;
if it equals 0, keeping the luma/chroma value of the pixel unchanged, and otherwise performing inverse quantization on the absolute value of the luma/chroma value of the pixel according to the following equation: tempData = ((data * 2.0 + mb_type) * qmatVal * scale) / 32.0, wherein
tempData denotes the absolute value of the inverse-quantized luma/chroma value of the pixel, data denotes the absolute value of the luma/chroma value of the pixel, mb_type denotes the macroblock type indication value of the macroblock containing the pixel, qmatVal denotes the quantization matrix value of the pixel, and scale denotes the quantization scale value of the macroblock containing the pixel, a macroblock type indication value of 0 indicating an intra-coded macroblock and a macroblock type indication value of 1 indicating a macroblock coded in a manner other than intra coding.
19. The video decoding system according to claim 18, wherein the programmable graphics processing unit takes the inverse-quantization sign value of the pixel as the sign of the inverse-quantized luma/chroma value of the pixel, and packs the absolute value of the inverse-quantized luma/chroma value of the pixel and the inverse-quantization sign value of the pixel into the following form: (second high value, second low value, 0.0, inverse-quantization sign value), wherein the second high value is the integer part of the quotient of the absolute value of the inverse-quantized luma/chroma value of the pixel divided by 255, and the second low value is the remainder of the absolute value of the inverse-quantized luma/chroma value of the pixel divided by 255.
20. The video decoding system according to claim 15, wherein the frame picture consists of a plurality of 8x8 pixel blocks, the position of any pixel of any one of the plurality of 8x8 pixel blocks within that 8x8 pixel block is represented by the position coordinates of the pixel, and the position coordinates of a pixel within the 8x8 pixel block are independent of the position coordinates of pixels within the other 8x8 pixel blocks of the plurality of 8x8 pixel blocks.
21. The video decoding system according to claim 20, wherein the inverse-quantized luma/chroma values of the pixels of the 8x8 pixel block form, according to the position of each pixel within the 8x8 pixel block, an 8x8 luma/chroma value matrix D, and wherein the programmable graphics processing unit obtains the inverse-DCT-transformed luma/chroma value of the pixel at position coordinates (dx, dy) in the 8x8 pixel block by the following processing:
computing the dot product of each row of the 8x8 luma/chroma value matrix D with each column of the transposed matrix C^T of the inverse discrete cosine transform coefficient matrix C, to obtain an intermediate matrix TR;
computing the dot product of column dy of the intermediate matrix TR with row dx of the transposed matrix C^T of the inverse discrete cosine transform coefficient matrix C, to obtain the inverse-DCT-transformed luma/chroma value of the pixel at position coordinates (dx, dy).
22. The video decoding system according to claim 21, wherein
when the inverse-DCT-transformed luma/chroma value of the pixel is any value other than -256, the programmable graphics processing unit represents the inverse-DCT-transformed luma/chroma value of the pixel as: (absolute value of the inverse-DCT-transformed luma/chroma value, 0.0, 0.0, sign of the inverse-DCT-transformed luma/chroma value), and
when the inverse-DCT-transformed luma/chroma value of the pixel is -256, the programmable graphics processing unit represents the inverse-DCT-transformed luma/chroma value of the pixel as: (1.0, 1.0, 0.0, 1.0).
23. The video decoding system according to claim 15, wherein the variable-length-decoded and inverse-scanned video data further comprises the following information: information on whether each macroblock of the frame picture is a predictive-coded macroblock, the reference direction information of each predictive-coded macroblock of the frame picture, the motion vector information of each macroblock of the frame picture, information on whether each macroblock of the frame picture is a field-DCT-coded macroblock or a frame-DCT-coded macroblock, the field selection information of the frame picture, and the field selection information of the reference frame of each macroblock of the frame picture.
24. The video decoding system according to claim 23, wherein, for any pixel of the frame picture, if the macroblock containing the pixel is not a predictive-coded macroblock, the programmable graphics processing unit obtains the motion-compensated luma/chroma value of the pixel by adding 128 to the inverse-DCT-transformed luma/chroma value of the pixel.
25. The video decoding system according to claim 23, wherein, for any pixel of the frame picture, if the macroblock containing the pixel is a predictive-coded macroblock, the programmable graphics processing unit performs motion compensation on the inverse-DCT-transformed luma/chroma value of the pixel by the following processing:
obtaining, according to the reference direction information of the macroblock containing the pixel, the motion vector information of the macroblock containing the pixel, the field selection information of the frame picture and the field selection information of the reference frame of the macroblock containing the pixel, the already-decoded luma/chroma value of another pixel that lies in the reference frame of the macroblock containing the pixel and corresponds to the pixel;
adding the already-decoded luma/chroma value of the other pixel to the inverse-DCT-transformed luma/chroma value of the pixel, to obtain the motion-compensated luma/chroma value of the pixel.
26. The video decoding system according to claim 15, wherein the programmable graphics processing unit performs color space conversion on the motion-compensated luma/chroma value of any pixel of the frame picture by the following processing:
multiplying the vector composed of the motion-compensated luma value and chroma values of the pixel and 1 by the color space conversion matrix, to obtain the color components of the pixel.
CN2010101148474A 2010-02-24 2010-02-24 Video decoding method and system Pending CN102164284A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101148474A CN102164284A (en) 2010-02-24 2010-02-24 Video decoding method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101148474A CN102164284A (en) 2010-02-24 2010-02-24 Video decoding method and system

Publications (1)

Publication Number Publication Date
CN102164284A true CN102164284A (en) 2011-08-24

Family

ID=44465211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101148474A Pending CN102164284A (en) 2010-02-24 2010-02-24 Video decoding method and system

Country Status (1)

Country Link
CN (1) CN102164284A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1868211A (en) * 2003-03-28 2006-11-22 微软公司 Accelerating video decoding using a graphics processing unit
CN101068364A (en) * 2006-06-16 2007-11-07 威盛电子股份有限公司 Video encoder and graph processing unit
CN101123723A (en) * 2006-08-11 2008-02-13 北京大学 Digital video decoding method based on image processor
CN101271680A (en) * 2008-05-16 2008-09-24 华硕电脑股份有限公司 Video serial data processing method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108347604A (en) * 2012-04-26 2018-07-31 索尼公司 Video compression method, video-frequency compression method and non-transitory computer-readable media
CN108347604B (en) * 2012-04-26 2022-03-04 索尼公司 Video decompression method, video compression method, and non-transitory computer-readable medium
CN103841389A (en) * 2014-04-02 2014-06-04 北京奇艺世纪科技有限公司 Video playing method and player
CN103841389B (en) * 2014-04-02 2015-10-21 北京奇艺世纪科技有限公司 A kind of video broadcasting method and player
CN107864678A (en) * 2015-06-26 2018-03-30 亚马逊技术公司 Detection and interpretation to visual detector
CN107864678B (en) * 2015-06-26 2021-09-28 亚马逊技术公司 Detection and interpretation of visual indicators
CN106611432A (en) * 2015-10-21 2017-05-03 深圳市腾讯计算机系统有限公司 Picture format conversion method, device and system
CN106611432B (en) * 2015-10-21 2020-09-04 深圳市腾讯计算机系统有限公司 Picture format conversion method, device and system
CN108268226A (en) * 2016-12-30 2018-07-10 乐视汽车(北京)有限公司 The picture of synchronous terminal screen to vehicle device method, terminal and system
CN113994677A (en) * 2019-06-07 2022-01-28 佳能株式会社 Image encoding device, image decoding device, method, and program


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110824