WO2008029550A1 - Image data processing method, program for image data processing method, recording medium with recorded program for image data processing method and image data processing device - Google Patents
- Publication number
- WO2008029550A1 (PCT application PCT/JP2007/062833)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- reference image
- cache memory
- stored
- memory
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
- H04N19/433—Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- Image data processing method, program for image data processing method, recording medium storing program for image data processing method, and image data processing apparatus
- the present invention relates to an image data processing method, a program for the image data processing method, a recording medium on which the program for the image data processing method is recorded, and an image data processing apparatus.
- the present invention can be applied to an encoding device and a decoding device for moving image data in the MPEG-4 AVC/ITU-T H.264 format.
- the present invention issues address data so as to specify areas each spanning a plurality of readout units in the horizontal and vertical directions, and stores reference image data in a cache memory, so that the memory bus access frequency can be reduced while reducing the cache memory capacity.
- in encoding, moving image data for which encoding processing has been completed is decoded and held in a frame memory, and the moving image data of subsequent frames is encoded with reference to the moving image data held in the frame memory.
- in decoding, the decoded moving image data is likewise held in the frame memory, and the moving image data of subsequent frames is decoded by referring to it.
- FIG. 1 is a block diagram showing an MPEG-4 AVC/ITU-T H.264 encoding apparatus.
- the encoding device 1 encodes input image data D1 to generate an output stream D2.
- the encoding device 1 sequentially inputs the input image data D 1 to the subtraction unit 2 in the order according to the GOP structure.
- the subtraction unit 2 subtracts the prediction value output from the selection unit 3 from the input image data D 1 and outputs a prediction error value.
- the discrete cosine transform unit 4 subjects the prediction error value to discrete cosine transform processing and outputs coefficient data.
- the quantization unit 5 quantizes the coefficient data and outputs it.
- the entropy coding unit 6 performs variable length coding processing on the output data of the quantization unit 5 and outputs the result.
- the encoding device 1 adds various control codes, a motion vector MV, etc. to the output data of the entropy encoding unit 6 to generate an output stream D2.
- the inverse quantization unit 7 performs an inverse quantization process on the output data of the quantization unit 5 and decodes the output data of the discrete cosine transform unit 4.
- the inverse discrete cosine transform unit 8 performs an inverse discrete cosine transform process on the output data of the inverse quantization unit 7 and decodes the output data of the subtraction unit 2.
- the addition unit 9 adds the predicted value output from the selection unit 3 to the output data of the inverse discrete cosine transform unit 8, and decodes the input image data D1.
- the deblocking filter 10 removes block distortion from the input image data D 1 decoded by the adder 9 and outputs the result.
- the reference image memory 11 holds the image data D 3 output from the deblocking filter 10 as reference image data, and outputs it to the motion compensation unit 12.
- the motion vector detection unit 13 detects and outputs the motion vector MV from the input image data D 1 in the inter-frame encoding process.
- the motion compensation unit 12 compensates the motion of the reference image data with the motion vector MV and outputs it in the inter-frame coding process.
- the weighted prediction unit 14 weights and adds the image data output from the motion compensation unit 12 to generate a prediction value in the interframe coding process.
- the intra-screen prediction unit 15 generates a predicted value in the intra-frame encoding process from the output data of the addition unit 9 and outputs it.
- the selection unit 3 selects either the prediction value output from the weighted prediction unit 14 or the prediction value output from the intra-screen prediction unit 15, and outputs the selected prediction value to the subtraction unit 2.
- the encoding control unit 17 controls the operation of each unit so that the code amount of the output stream D 2 becomes a predetermined target value.
- Japanese Patent Application Laid-Open No. 2006-31480 discloses a configuration in which, when transferring data from a main memory to a cache memory in a sub-processor that decodes a compressed image, the memory area to be cached is adaptively changed based on a parameter representing the characteristics of the image, thereby increasing the probability of a cache hit in subsequent processing.
- the present invention has been made in view of the above points.
- An object of the invention is to propose an image data processing method capable of reducing the access frequency of the memory bus while reducing the capacity of the cache memory, a program for the image data processing method, a recording medium on which the program is recorded, and an image data processing apparatus.
- the present invention is applied to an image data processing method that generates a predicted value from reference image data held in a reference image memory using a cache memory, and encodes and/or decodes moving image data using the predicted value.
- the method identifies a region on the screen of the reference image data by one-dimensional address data of the reference image memory, and includes: a reference image data request step of requesting from the cache memory the reference image data used for generating the predicted value; a cache memory search step of searching the reference image data stored in the cache memory for reference image data corresponding to the request; a first reference image data output step of outputting, in response to the request, the reference image data stored in the cache memory when the corresponding reference image data is stored in the cache memory; and a second reference image data output step of, when the corresponding reference image data is not stored in the cache memory, storing the corresponding reference image data held in the reference image memory into the cache memory and outputting it in response to the request.
- the reference image memory outputs the reference image data in readout units of a plurality of pixels continuous in the horizontal or vertical direction, and the region spans a plurality of readout units in each of the horizontal and vertical directions.
- the present invention also provides an image data processing method that generates a predicted value from reference image data held in a reference image memory using a cache memory, and encodes and/or decodes moving image data using the predicted value.
- the method identifies a region on the screen of the reference image data by one-dimensional address data of the reference image memory, and requests from the cache memory the reference image data used for generating the predicted value; it includes a first reference image data output step of outputting the stored reference image data in response to the request when the corresponding reference image data is stored in the cache memory, and a second reference image data output step of storing the corresponding reference image data held in the reference image memory into the cache memory and outputting it in response to the request when it is not stored in the cache memory.
- the reference image memory outputs the reference image data in readout units of a plurality of pixels continuous in the horizontal or vertical direction, and the region spans a plurality of readout units in each of the horizontal and vertical directions.
- the present invention also provides an image data processing method for generating a predicted value from reference image data held in a reference image memory using a cache memory, and encoding and/or decoding moving image data using the predicted value.
- the image data processing method identifies a region on the screen of the reference image data by one-dimensional address data of the reference image memory, and includes a reference image data request step of requesting from the cache memory the reference image data used for generating the predicted value, a cache memory search step of searching the reference image data stored in the cache memory for reference image data corresponding to the request, and a first reference image data output step of outputting the reference image data in response to the request when the corresponding reference image data is stored in the cache memory.
- the present invention further provides an image data processing apparatus that uses a cache memory to generate a predicted value from reference image data held in a reference image memory, and encodes and/or decodes moving image data using the predicted value.
- the apparatus identifies a region on the screen of the reference image data by one-dimensional address data of the reference image memory, requests from the cache memory the reference image data used for generating the predicted value, and includes: a first reference image data output unit that outputs the reference image data stored in the cache memory in response to the request; and a second reference image data output unit that, when the corresponding reference image data is not stored in the cache memory, stores the corresponding reference image data held in the reference image memory into the cache memory and outputs it in response to the request.
- the reference image memory outputs the reference image data in readout units of a plurality of pixels continuous in the horizontal or vertical direction, so that the region spans a plurality of readout units in each of the horizontal and vertical directions.
- since the shape of the requested area can be devised in various ways, the frequency of cache misses is reduced even when the cache memory is reduced in size. According to the present invention, it is therefore possible to reduce the access frequency of the memory bus while reducing the capacity of the cache memory.
- FIG. 1 is a block diagram showing a conventional encoding device.
- FIG. 2 is a block diagram showing a decoding apparatus according to Embodiment 1 of the present invention.
- FIG. 3 is a block diagram showing in detail a decoding unit of the decoding apparatus of FIG.
- FIG. 4 is a schematic diagram showing a configuration of a reference image memory in the decoding apparatus of FIG.
- FIG. 5 is a schematic diagram for explaining a cache memory in the decoding apparatus of FIG.
- FIG. 6 is a schematic diagram for explaining the index of the cache memory in the decoding device of FIG.
- FIG. 7 is a schematic diagram showing a configuration of a cache memory in the decoding apparatus of FIG.
- FIG. 8 is a schematic diagram showing areas to be stored in the cache memory of FIG.
- FIG. 9 is a schematic diagram for explaining the access to the cache memory in the decoding apparatus of FIG.
- FIG. 10 is a schematic diagram for explaining the access to the cache memory according to an example different from FIG.
- FIG. 11 is a schematic diagram for explaining the cache memory access in the conventional decoding apparatus.
- FIG. 12 is a schematic diagram showing a configuration of a cache memory in the decoding apparatus according to the second embodiment of the present invention.
- FIG. 13 is a schematic diagram for explaining an access to the cache memory in the decoding apparatus according to the second embodiment of the present invention.
- FIG. 14 is a schematic diagram for explaining the cache memory access in the conventional decoding apparatus.
- FIG. 15 is a schematic diagram for explaining the access to the cache memory according to an example different from FIG.
- FIG. 16 is a schematic diagram for explaining a cache memory in the encoding apparatus according to the third embodiment of the present invention.
- FIGS. 17 (A) and 17 (B) are schematic diagrams for explaining a cache memory in the encoding apparatus according to the fourth embodiment of the present invention.
- FIG. 18 is a schematic diagram for explaining an index in the encoding apparatus of FIG.
- FIG. 19 is a schematic diagram showing a specific configuration of FIG.
- FIG. 20 is a flowchart showing the processing procedure of the decoding unit core in the decoding apparatus according to Embodiment 5 of the present invention.
- FIG. 21 is a flowchart showing the continuation of FIG.
- FIGS. 22 (A) and 22 (B) are plan views showing macroblocks of MPEG-2.
- FIGS. 23 (A) to 23 (G) are plan views showing MPEG-4 AVC macroblocks.
- FIGS. 24 (A) and 24 (B) are schematic diagrams for explaining region switching in the decoding apparatus according to the sixth embodiment of the present invention.
- FIG. 25 is a flowchart showing the processing procedure of the decoding unit core in the decoding apparatus according to Embodiment 6 of the present invention.
- FIG. 2 is a block diagram showing the decoding apparatus according to Embodiment 1 of the present invention.
- This decoding device 21 is an MPEG-4 AVC/ITU-T H.264 decoding device, which reproduces the input bit stream D11 from the recording medium 22, decodes it into moving image data D12, and outputs it to the monitor device 23.
- the recording medium 22 is, for example, a hard disk device, a DVD (Digital Versatile Disk) or the like.
- the data reading unit 24 reproduces the input bit stream D11 from the recording medium 22. It also analyzes the packet header of the reproduced input bit stream D11, detects information necessary for decoding control such as the picture type and outputs it to the reproduction control unit 25, and outputs the bit stream D13 to the decoding unit 26.
- the reproduction control unit 25 controls the operation of each unit of the decoding device 21 based on information such as the picture type notified from the data reading unit 24.
- the cache control unit 25 A sets the reference image data area to be stored in the cache memory 27.
- the decoding unit 26 sequentially processes the bit stream D13 output from the data reading unit 24 using the reference image data D16 held in the reference image memory 28B, and decodes the moving image data D14. In this process, the decoding unit 26 accesses the reference image data D16 held in the reference image memory 28B through the cache memory 27.
- the frame buffer 28 is formed of, for example, DRAM, and temporarily stores the moving image data D14 decoded by the decoding unit 26 in the decoded image memory 28A under the control of the GUI controller 29.
- the stored moving image data is output to the monitor device 23 in the display order.
- the frame buffer 28 also temporarily stores and holds the moving image data D 14 output from the decoding unit 26 as reference image data D 16 in the reference image memory 28 B, and controls the decoding unit 26. Thus, the held reference image data D 16 is output to the decoding unit 26.
- the GUI controller 29 adjusts the read timing of the moving image data D 14 held in the decoded image memory 28 A.
- FIG. 3 is a block diagram showing the decoding unit 26 in detail along with related components.
- the decoding unit 26 has a decoding unit core 31 formed of an arithmetic processing circuit. By executing a predetermined processing program, the decoding unit core 31 configures various functional blocks that decode the moving image data D14 from the bit stream D13.
- this processing program is installed and provided in advance in this decoding device 21. Instead of this prior installation, it may be provided recorded on various recording media such as an optical disk, a magnetic disk, or a memory card, or may be provided by downloading via a network such as the Internet.
- the decoding unit core 31 detects the motion vector from the bit stream D13 output from the data reading unit 24 by the motion vector detection unit 33. In addition, the decoding unit core 31 sequentially subjects the bit stream D13 to variable length decoding, inverse quantization, and inverse discrete cosine transform processing by a variable length decoding unit, an inverse quantization unit, and an inverse discrete cosine transform unit (not shown), and decodes the prediction error values.
- the decoding unit core 31 uses the motion compensation unit 34 to obtain, based on the motion vector detected by the motion vector detection unit 33, the position of the reference image data D16 used for generating the predicted value, and requests the cache memory 27 to output the reference image data D16 at the obtained position.
- when the requested reference image data D16 is held in the cache memory 27, the decoding unit core 31 outputs it from the cache memory 27 to the motion compensation unit 34.
- when it is not held, the decoding unit core 31 requests the reference image data D16 from the reference image memory 28B, stores the acquired reference image data D16 in the cache memory 27, and outputs it to the motion compensation unit 34.
- the decoding unit core 31 uses the reference image data D 16 to generate a prediction value used for decoding the image data.
- the prediction value is added to the prediction error value, and the moving image data is decoded.
- the decoding unit core 31 processes the decoded moving image data with a deblocking filter and outputs it to the frame buffer 28.
- FIG. 4 shows the address map of the reference image memory 28B in comparison with the reference image data D16 for one frame.
- the most appropriate reference frame is selected from a plurality of reference frames and a predicted value is generated.
- the reference image memory 28 B is formed so as to hold a plurality of reference frames. Therefore, the reference image memory 28 B is assigned an address so as to hold the plurality of reference frames, and the cache memory 27 can selectively store the reference image data D 16 for the plurality of frames. become.
- the reference image memory 28B will be described as storing reference image data D16 for one frame.
- the reference image memory 28B is formed so that it can output the reference image data D16 of an 8 pixel x 1 pixel region, 8 pixels continuous in the horizontal direction, in a single access. Therefore, in the reference image memory 28B, the readout unit of the reference image data is 8 pixels continuous in the horizontal direction.
- multiple types of prediction value generation units are prepared in the interframe coding process, and prediction values are generated using the optimal generation unit.
- these multiple types of generation units have horizontal and vertical sizes of 16 pixels x 16 pixels, 16 pixels x 8 pixels, 8 pixels x 16 pixels, 8 pixels x 8 pixels, 8 pixels x 4 pixels, 4 pixels x 8 pixels, and 4 pixels x 4 pixels, respectively.
- the pixels output from the reference image memory 28B in a single access are 8 consecutive pixels in the horizontal direction, which is smaller than the horizontal size of the generation units with the largest horizontal dimension (16 pixels x 16 pixels and 16 pixels x 8 pixels), namely half their number of pixels in the horizontal direction.
- one address is assigned to each 8 pixel x 1 pixel area that can be output in a single access.
- with this area as a unit, the reference image data D16 of horizontal size FMh x vertical size FMv is assigned one-dimensional addresses from 0 to FMv x FMh - 1 by repeating, in order, vertical scanning from the raster scan start side, as indicated by the arrows in FIG. 4.
- therefore, the upper predetermined bits of this address indicate the order, from the raster scan start side, of the vertically long areas obtained by dividing one screen of the reference image data D16 into strips of 8 pixels in the horizontal direction.
- the lower bits of the address indicate the order from the raster scan start side within each vertically long area.
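- the address map described above can be sketched as follows. This is an illustrative model, not the patented implementation: the frame dimensions `FMH` and `FMV` and the function name are assumptions, and one byte per pixel is assumed.

```python
# Hypothetical sketch of the one-dimensional address map of the reference
# image memory: the screen is divided into vertical strips 8 pixels wide,
# and each 8x1-pixel read unit gets one address, counting down each strip
# before moving on to the next strip (strip-major ordering, as in FIG. 4).

FMH = 240   # number of 8-pixel read units per line (assumed, e.g. 1920 / 8)
FMV = 1088  # number of lines per frame (assumed)

def linear_address(x: int, y: int) -> int:
    """Map a pixel position (x, y) to the 1-D address of the 8x1 read
    unit that contains it."""
    strip = x // 8            # which 8-pixel-wide vertical strip (upper bits)
    return strip * FMV + y    # line within the strip occupies the lower bits

# Horizontally adjacent read units are FMV addresses apart, while
# vertically adjacent units differ by 1:
assert linear_address(8, 0) - linear_address(0, 0) == FMV
assert linear_address(0, 1) - linear_address(0, 0) == 1
```

the upper part of such an address therefore identifies the vertical strip, and the lower part the line within it, matching the bit split described above.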
- FIG. 5 is a diagram showing the configuration of the cache memory 27.
- the cache memory 27 is a memory to which a one-dimensional address is assigned.
- the horizontal size is set to the 8 pixels that can be output from the reference image memory 28B at one time, and the number of indexes is set to a predetermined number Mv. Accordingly, the cache memory 27 is configured to output the reference image data D16 of 8 pixels continuous in the horizontal direction all at once.
- the cache memory 27 receives the reference image data of the predetermined area AR extracted from one screen of the reference image data D 16 from the reference image memory 28 B and holds it.
- this area AR is a rectangular area whose horizontal size WSh is set to at least twice the 8 pixels that can be read from the reference image memory 28B in one access.
- the vertical size WSv is set to a predetermined number of lines; in the example of FIG. 6, this number of lines is 16. Therefore, in this embodiment, with respect to the readout unit of the reference image data from the reference image memory 28B, the area AR to be cut out spans a plurality of readout units in each of the horizontal and vertical directions.
- the horizontal and vertical sizes WSh and WSv of the region AR to be cut out are equal to or larger than the maximum size of the predicted value generation units in this embodiment.
- the number of indexes Mv of the cache memory 27 is set according to the product WSh x WSv of the horizontal and vertical sizes of the predetermined area AR.
- the decoding unit core 31 determines the position of this area AR based on the position of the reference image data D16 used for generating the predicted value calculated by the motion compensation unit 34, and requests the cache memory 27 to output the reference image data D16 in this area AR. At this time, address data AD is issued to the cache memory 27 in the order indicated by the arrows in FIG. 6. If the requested reference image data D16 is not held in the cache memory 27, the corresponding reference image data D16 is acquired from the reference image memory 28B, a predicted value is generated, and the acquired reference image data D16 is stored in the cache memory 27.
- from the address data ADFM issued to the reference image memory 28B, the decoding unit core 31 cuts out the upper predetermined bits A, which indicate the order, from the raster scan start side, of the vertically long areas obtained by dividing one screen of the reference image data D16 into strips of 8 pixels in the horizontal direction, and the lower predetermined bits B, which indicate the order from the raster scan start side within each vertically long area, and combines them to generate, for each readout unit of 8 horizontally continuous pixels, a two-dimensional address indicating the position of that reference image data D16 on the screen.
- the decoding unit core 31 sets this two-dimensional address as the index of each piece of reference image data held in the cache memory 27.
- the size of the area AR to be cut out can be varied by changing the numbers M and N of the upper bits A and the lower bits B; for example, the cached area can be widened in the horizontal direction by increasing the number M of the upper bits A and correspondingly reducing the number N of the lower bits B.
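- the bit-cutting above can be sketched as follows, under assumed values: the strip-major address map from FIG. 4, M = 1 bit of strip number (2 strips = 16 pixels wide) and N = 4 bits of line number (16 lines), chosen so the index covers a 16 pixel x 16 line area. Names and widths are illustrative, not taken from the patent.

```python
# Hedged sketch of index generation: from the 1-D address, the low bits of
# the strip number (bits "A") and the low bits of the line number (bits
# "B") are cut out and combined into a cache index, so that a rectangular
# WSh x WSv region maps onto distinct cache entries.

FMV = 1088          # lines per frame (assumed, same as in the address map)
M = 1               # bits taken from the strip number: 2**M strips = 16 px wide
N = 4               # bits taken from the line number: 2**N = 16 lines

def cache_index(addr: int) -> int:
    strip = addr // FMV          # upper part of the address: strip order
    line = addr % FMV            # lower part: line within the strip
    a = strip & ((1 << M) - 1)   # bits "A": horizontal position in the area
    b = line & ((1 << N) - 1)    # bits "B": vertical position in the area
    return (a << N) | b          # combined two-dimensional index
```

enlarging M while shrinking N widens the cached area horizontally while keeping the total number of indexes, 2**(M+N), unchanged, which mirrors the trade-off described above.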
- in this embodiment, the cache memory 27 is composed of one way of 256 bytes.
- 256 bytes is the size corresponding to the reference image data amount of the largest predicted value generation unit.
- the size of the area AR to be cut out is set by the cache control unit 25A to the size of the largest predicted value generation unit. Therefore, in the area AR to be cut out, the horizontal size WSh is set to 16 pixels and the vertical size WSv is set to 16 pixels.
- the decoding unit core 31 detects the motion vector MV1 with the motion vector detection unit 33, and the motion compensation unit 34 calculates the area ARA by displacing the area of this macroblock MB in the direction opposite to the motion vector MV1.
- the address data AD of this area ARA is then issued sequentially to the cache memory 27, and the output of the reference image data D16 of this area ARA is requested from the cache memory 27.
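- locating the area ARA can be sketched as follows. The sign convention (displacement opposite to MV1) follows the description above; the frame size, the clamping to the frame boundary, and all names are illustrative assumptions rather than details from the patent.

```python
# Sketch of locating the reference area ARA for one macroblock: the
# 16x16 macroblock area is displaced opposite to the motion vector MV1
# and kept inside the frame.

FRAME_W, FRAME_H = 1920, 1088   # assumed frame size

def reference_area(mb_x: int, mb_y: int, mv_x: int, mv_y: int,
                   w: int = 16, h: int = 16):
    """Return (x, y, w, h) of the area ARA read for motion compensation."""
    x = min(max(mb_x - mv_x, 0), FRAME_W - w)   # displace opposite to MV1
    y = min(max(mb_y - mv_y, 0), FRAME_H - h)   # clamp to the frame
    return (x, y, w, h)
```

the addresses of the read units covering this rectangle would then be issued to the cache, as described above.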
- FIG. 9 shows a case where a predicted value is generated for a macroblock MB of 16 pixels ⁇ 16 pixels.
- the decoding unit core 31 creates two-dimensional address data from the upper bit A and the lower bit B of the address data AD in the same manner as described above with reference to FIG.
- the index set in the cache memory 27 is sequentially searched with the address data, and it is judged whether or not the requested reference image data is held in the cache memory 27.
- if the requested reference image data is held, the decoding unit core 31 acquires it from the cache memory 27 and generates a predicted value.
- if it is not held, the address data ADFM of the missing reference image data D16 is issued, the output of the corresponding reference image data D16 is requested from the reference image memory 28B, the reference image data is acquired from the reference image memory 28B, and a predicted value is generated.
- the reference image data held in the cache memory 27 is updated with the acquired reference image data, and the index is updated so as to correspond to the update of the reference image data.
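- the request, search, output, and update steps above can be summarized in a minimal direct-mapped cache sketch. The reference image memory 28B is modelled here as a simple mapping from 1-D addresses to read units; the class name, the explicit index argument, and the miss counter are illustrative assumptions.

```python
# Minimal direct-mapped cache sketch of the hit/miss flow: on a hit the
# stored read unit is output; on a miss the unit is fetched from the
# reference image memory, stored in the cache line, and the index (tag)
# is updated.

class ReferenceCache:
    def __init__(self, num_lines, backing):
        self.tags = [None] * num_lines    # index -> 1-D address stored there
        self.data = [None] * num_lines    # index -> 8-pixel read unit
        self.backing = backing            # models reference image memory 28B
        self.misses = 0

    def read(self, addr, index):
        if self.tags[index] == addr:      # hit: output from the cache
            return self.data[index]
        self.misses += 1                  # miss: fetch from memory 28B,
        unit = self.backing[addr]         # store it in the cache line, and
        self.tags[index] = addr           # update the tag for this index
        self.data[index] = unit
        return unit
```

repeated requests for the same address then hit the cache, so only the first access for each read unit reaches the memory bus.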
- in FIG. 9, the storage locations in the cache memory 27 of each 8 pixel x 8 pixel block of the reference image data D16 are indicated by numbers corresponding to those in FIG. 7.
- the cache memory 27 stores, for example as shown in FIG. 10, the reference image data referred to by 16 pixel x 8 pixel and 8 pixel x 16 pixel macroblocks MB1 and MB2 that are processed in sequence.
- each macroblock and sub-macroblock is stored in this manner.
- the input bit stream D11 reproduced from the recording medium 22 (FIG. 2) is decoded into moving image data D14 by the decoding unit 26, and this moving image data D14 is output to the monitor device 23 via the decoded image memory 28A and the GUI controller 29.
- the decoded moving image data D14 is held as reference image data D16 in the reference image memory 28B, and is used for generating prediction values when the decoding unit 26 decodes the moving image data D14.
- a motion vector is detected from the input bit stream D11 by the motion vector detection unit 33 configured in the decoding unit core 31 of the decoding unit 26. Further, based on the detected motion vector, the address data AD of the reference image data D16 used to generate the predicted value is obtained by the motion compensation unit 34, and the reference image data D16 is acquired from the address data AD.
- the decoding device 21 acquires the reference image data used for generating the predicted value from the cache memory 27.
- when the corresponding reference image data is not held in the cache memory 27, the decoding device 21 acquires the corresponding reference image data D16 from the reference image memory 28B, and this reference image data D16 is stored and held in the cache memory 27.
- the reference image data held in the cache memory 27 is reused when predicted values are generated continuously from the same reference image data in successive prediction value generation units, so the predicted values can be generated and the moving image data D14 decoded at a higher speed than when no cache memory is used.
- however, when the cache memory 27 is managed with a one-dimensional address in units of the plurality of pixels that can be read out at one time, simply applying the conventional method increases cache misses; the reference image memory 28B must then be accessed frequently, and the access frequency of the memory bus increases.
- FIG. 11 is a schematic diagram showing, in comparison with FIG. 7 and FIG. 9, a case where reference image data is stored in the cache memory 27 by the conventional method.
- as shown in FIG. 11, when the held reference image data is managed with a one-dimensional address, the cache memory can only store image data that is one-dimensionally continuous on the screen of the reference image data D16. As a result, with a cache memory of the size of this embodiment, reference image data can be stored only in a vertically long area of 8 pixels in the horizontal direction and 32 lines in the vertical direction.
- consequently, even if an attempt is made to read from the cache memory the reference image data of the 16 pixel × 16 pixel area ARA corresponding to this macroblock, a cache miss eventually results. That is, in the example of FIG. 11, the reference image data of the right half of the area ARA must be read again from the reference image memory, so the access frequency of the memory bus increases.
- one way to solve this problem is to enlarge the horizontal size of the cache memory so as to increase the number of pixels that can be read at the same time, so that a 16 pixel × 16 pixel area can be accessed even with one-dimensional address management. With this method, however, the capacity of the cache memory becomes large and the configuration of the memory bus becomes complicated.
- a method of using a 2-way cache memory is also conceivable, but this also increases the capacity of the cache memory and complicates the memory bus configuration.
- in contrast, in this embodiment, 8 pixels continuous in the horizontal direction, which is the unit for reading the reference image data from the reference image memory 28B, are taken as a readout unit, and the reference image data is stored and held in the cache memory 27 in areas each having a plurality of readout units in both the horizontal and vertical directions. Further, the reference image data is requested with one-dimensional address data that specifies such an area.
- thereby, compared with the conventional case where the reference image data is held in the cache memory in areas simply continuous in the vertical direction in readout units, cache misses can be reduced while keeping the capacity of the cache memory small.
- from the one-dimensional address data requesting the reference image data D16, predetermined bits A on the high-order side indicating the horizontal position on the screen of the reference image data D16 and predetermined bits B on the low-order side indicating the vertical position are cut out and combined to generate a two-dimensional address indicating the position of the reference image data D16 on the screen.
- in this embodiment, this two-dimensional address is set as the index of the cache memory 27.
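As a concrete illustration of this cut-and-combine step, the sketch below forms a cache index from a one-dimensional address. The field positions, widths, and which field encodes which direction are free parameters here, chosen for illustration rather than taken from the patent:

```python
def make_2d_index(addr: int, a_shift: int, a_bits: int,
                  b_shift: int, b_bits: int) -> int:
    """Cut predetermined bit field A and bit field B out of the 1-D
    address data and concatenate them (A on the high side) to form the
    two-dimensional cache index described in the text."""
    a = (addr >> a_shift) & ((1 << a_bits) - 1)   # bits A (e.g. horizontal)
    b = (addr >> b_shift) & ((1 << b_bits) - 1)   # bits B (e.g. vertical)
    return (a << b_bits) | b
```

For instance, with a 4-bit field A at bit 4 and a 2-bit field B at bit 0, address `0b11010110` yields index `(0b1101 << 2) | 0b10`, i.e. 54.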
- thereby, the reference image data D16 stored in the cache memory 27 can be managed with a two-dimensional address in units of the plurality of pixels that can be read out at one time. An area of a desired size can therefore be set in the horizontal and vertical directions in units of the plurality of pixels that can be read out at one time, and the reference image data of this area can be stored in, and loaded from, the cache memory 27. Accordingly, even without increasing the size of the cache memory 27, the reference image data D16 of the area corresponding to the prediction value generation unit can be stored in the cache memory 27, so the frequency of cache misses can be reduced and the memory bus access frequency lowered.
- in this embodiment, a two-dimensional address is generated in the same manner for the address data requesting the reference image data D16, and by comparing this two-dimensional address with the index of the cache memory 27, the reference image data D16 held in the cache memory 27 is output to the decoding unit core 31; when the corresponding reference image data D16 is not held in the cache memory 27, the corresponding reference image data D16 from the reference image memory 28B is output to the decoding unit core 31.
- further, the reference image data D16 read from the reference image memory 28B is stored in the cache memory 27, and the index is updated to correspond to the storage of the reference image data D16.
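The hit, miss, refill, and index-update sequence just described can be sketched as a small direct-mapped cache. The line count, placement rule, and loader interface are illustrative assumptions, not the patent's hardware:

```python
class ReferenceCache:
    """Toy direct-mapped cache: each line remembers the 2-D index of the
    area it holds; a mismatch triggers a refill from reference memory."""
    def __init__(self, num_lines: int = 4):
        self.lines = [None] * num_lines        # each entry: (index, data)

    def fetch(self, index, load_from_memory):
        slot = index % len(self.lines)         # direct-mapped placement
        entry = self.lines[slot]
        if entry is not None and entry[0] == index:
            return entry[1], True              # hit: reuse held data
        data = load_from_memory(index)         # miss: access reference memory
        self.lines[slot] = (index, data)       # store data, update index
        return data, False
```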
- Effects of Example 1. According to the above configuration, by issuing address data that identifies areas each having a plurality of readout units in the horizontal and vertical directions, and storing the reference image data in the cache memory accordingly, the memory bus access frequency can be reduced without increasing the capacity of the cache memory.
- a two-dimensional address indicating the position of the reference image data on the screen is generated in units of a plurality of pixels that can be read at one time, and this two-dimensional address is set as an index of the cache memory. This makes it possible to reduce the memory bus access frequency while reducing the size of the cache memory.
- further, this two-dimensional address is created by a simple process of cutting out, from the address data that accesses the cache memory, predetermined bits on the high-order side indicating the horizontal position of the reference image data on one screen and predetermined bits on the low-order side indicating the vertical position, and combining them.
- FIG. 12 is a schematic diagram showing the configuration of the cache memory 47 applied to the decoding device according to the second embodiment of the present invention, in comparison with FIG.
- the cache memory 47 is configured with 2 ways.
- the decoding device of this embodiment is configured in the same manner as the decoding device 21 of the first embodiment except for the configuration relating to the cache memory 47, and is therefore described with reference to the drawings of Example 1.
- the memory bus width is set to 64 bits, and the cache memory 47 is created in a 2-way configuration of 128 bytes, so that areas of 8 pixels × 8 pixels can be held in each of the ways 47A and 47B.
- the ways 47A and 47B store the reference image data referenced by the macroblocks MB1 and MB2, respectively.
- with the conventional one-dimensional management described above with reference to FIG., each of the ways 47A and 47B can only store the reference image data D16 in a vertically long area, so a cache miss occurs in the subsequent processing of a prediction value generation unit of the same size and the reference image memory 28B must be referred to.
- in this embodiment, in contrast, the reference image data D16 of the area corresponding to the subsequent prediction value generation unit of the same size is stored in the ways 47A and 47B of the cache memory 47, so the frequency of cache misses can be reduced compared with the conventional case.
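The two-way arrangement can be sketched as follows; keeping one area per way lets the reference data for both MB1 and MB2 stay resident at once. The least-recently-used replacement shown is an assumption, since the text only specifies the 2-way structure:

```python
class TwoWayCache:
    """Toy 2-way cache: up to two (index, data) areas resident at once."""
    def __init__(self):
        self.ways = []                          # oldest entry first

    def fetch(self, index, load):
        for i, (idx, data) in enumerate(self.ways):
            if idx == index:                    # hit in one of the ways
                self.ways.append(self.ways.pop(i))   # mark most recent
                return data, True
        data = load(index)                      # miss: read reference memory
        if len(self.ways) == 2:
            self.ways.pop(0)                    # evict least recently used
        self.ways.append((index, data))
        return data, False
```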
- the present invention is applied to a moving image data encoding apparatus based on the MPEG-4 AVC/ITU-T H.264 system, and a cache memory is applied to the handling of the reference image data used for generating predicted values. In the encoding apparatus of this embodiment, a cache memory is therefore provided between the reference image memory 11 and the motion compensation unit 12 in the configuration of FIG. 1.
- the encoding apparatus of this embodiment is configured in the same way as the encoding apparatus of FIG. 1 except for the configuration relating to this cache memory, and is therefore described using the configuration of FIG. 1.
- the encoding apparatus of this embodiment is further configured by an encoding unit core in which each functional block shown in FIG. 1 is implemented by an arithmetic processing circuit.
- the processing program of the encoding unit core is provided by being installed in the encoding device in advance; instead of this prior installation, however, it may be provided by being recorded on various recording media such as an optical disk, a magnetic disk, or a memory card, or by being downloaded via a network such as the Internet.
- the cache memory holds the reference image data of the predetermined area AR on the screen of the reference image data D3.
- since the sizes WSh and WSv of the area AR are set in units of 8 pixels continuous in the horizontal direction, it may be difficult to make these ratios match completely.
- therefore, the horizontal and vertical sizes WSh and WSv of the area AR are set so that the ratio of WSh to WSv is as close as possible to the ratio of the horizontal size MVSAh to the vertical size MVSAv.
- note that the horizontal and vertical sizes WSh and WSv of the area AR to be cut out are set to a size that includes the maximum size of the prediction value generation unit.
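One way to pick WSh and WSv under these constraints is a small search over candidate sizes, as sketched below. The capacity budget, the candidate grid, and the cross-multiplication score are illustrative assumptions; only the 8-pixel quantization and the minimum-coverage rule come from the text:

```python
def choose_area(mvsa_h: int, mvsa_v: int,
                max_unit: int = 16, budget_blocks: int = 8):
    """Pick area sizes (WSh, WSv) in 8-pixel steps so their ratio is as
    close as possible to MVSAh : MVSAv, while covering the largest
    prediction value generation unit (max_unit) within a block budget."""
    best = None
    lo = max_unit // 8                      # must cover the largest unit
    for wh in range(lo, budget_blocks + 1):
        for wv in range(lo, budget_blocks + 1):
            if wh * wv > budget_blocks:     # respect cache capacity
                continue
            # compare WSh/WSv with MVSAh/MVSAv without division
            mismatch = abs(wh * 8 * mvsa_v - wv * 8 * mvsa_h)
            if best is None or mismatch < best[0]:
                best = (mismatch, wh * 8, wv * 8)
    return best[1], best[2]
```

With a 32 × 16 search range this picks a 32 × 16 pixel area; with a square range it picks a square area.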
- in this configuration, the encoding unit core obtains, using the motion vector detected by the motion vector detection unit 13, the position of the reference image data D3 used for prediction value generation in the motion compensation unit 12, and requests the cache memory to output the reference image data D3 at the obtained position.
- when the requested reference image data D3 is held in the cache memory, the encoding unit core outputs it from the cache memory to the motion compensation unit 12.
- otherwise, the encoding unit core obtains the requested reference image data D3 from the reference image memory 11 and outputs it to the motion compensation unit 12.
- when acquiring this reference image data D3 from the reference image memory 11, the encoding unit core acquires the reference image data D3 of the area AR from the reference image memory 11 and holds it in the cache memory, and the cache memory index is updated to correspond to the storage of the reference image data.
- the area AR is set, for example, so that the position of the reference image data used to generate the predicted value is the center position of the area AR. Alternatively, the area AR may be set so that the position of the reference image data used to generate the predicted value is displaced from the center of the area AR toward the position of the prediction value generation unit that follows.
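The placement rule reads like a simple offset computation; a minimal sketch, with the coordinates and the optional bias amounts as assumed parameters:

```python
def area_origin(ref_x: int, ref_y: int, ws_h: int, ws_v: int,
                bias_x: int = 0, bias_y: int = 0):
    """Top-left corner of area AR placed so the referenced position sits
    at its center, optionally displaced by (bias_x, bias_y) toward the
    prediction value generation unit that follows."""
    return ref_x - ws_h // 2 + bias_x, ref_y - ws_v // 2 + bias_y
```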
- also when applied to an encoding device in this way, a two-dimensional address indicating the position of the reference image data on the screen is generated in units of the plurality of pixels that can be read out at one time.
- the present invention is applied to a moving image data encoding apparatus based on the MPEG-4 AVC/ITU-T H.264 system similar to the third embodiment, and the shape of the area stored in the cache memory is switched.
- the encoding device of this embodiment is configured in the same way as the encoding device of Embodiment 4 except that the configuration relating to this cache memory is different.
- as the area of the reference image data D3 stored in the cache memory, the encoding apparatus of this embodiment prepares a first area AR1 corresponding in the horizontal and vertical directions to the 32 pixel × 16 pixel motion search range shown in FIG. 17 (A), and a second area AR2 of 8 pixels × 16 pixels. When detecting a motion vector in the 32 pixel × 16 pixel motion search range, the reference image data can therefore be stored in the cache memory in the first area AR1 to increase the hit rate; in the other case, the reference image data is stored in the cache memory in the second area AR2, and the hit rate can likewise be increased.
- the index of the cache memory is switched in correspondence with the switching between the first and second areas AR1 and AR2. That is, also in this case, bits A indicating the horizontal position and bits B indicating the vertical position are cut out from the address data ADFM accessing the reference image memory 11 and joined to create the index of the cache memory; for the horizontally long area, however, the number of bits M of the high-order bits A is increased, and the low-order bits B are changed correspondingly.
- the encoding unit core generates indexes IN1 and IN2 corresponding to the first and second areas AR1 and AR2 in the two logical operation units 51 and 52, respectively.
- the AND section 53 selects and outputs the index IN1 corresponding to the first area AR1 according to the logical value of the selection signal SEL that instructs switching of the index.
- an inverted signal is generated by inverting the logical value of the selection signal SEL with the inverter 54, and the AND section 55 selects and outputs the index IN2 corresponding to the second area AR2 according to the logical value of this inverted signal.
- the encoding unit core obtains the logical sum of the outputs of the AND sections 53 and 55 with the OR section 56, and switches between generating the indexes corresponding to the first and second areas by switching the selection signal SEL.
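In software terms, the gating described for sections 53 to 56 is a 2:1 multiplexer over the two candidate indexes. The bit width below is an assumption:

```python
WIDTH = 8                          # assumed index width in bits
MASK = (1 << WIDTH) - 1

def select_index(in1: int, in2: int, sel: int) -> int:
    """OR together IN1 gated by SEL (AND section 53) and IN2 gated by
    the inverted SEL (inverter 54 + AND section 55), as the OR section
    56 does: a plain 2:1 multiplexer."""
    gate1 = in1 & (-sel & MASK)          # all-ones mask when SEL == 1
    gate2 = in2 & (-(sel ^ 1) & MASK)    # all-ones mask when SEL == 0
    return gate1 | gate2
```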
- the encoding unit core encodes the moving image data while switching the motion vector search range according to various feature quantities of the moving image data to be encoded, and, in conjunction with the switching of the motion vector search range, switches the shape of the area stored in the cache memory between the first and second areas AR1 and AR2.
- the cache memory hit rate in the previous frame and the distribution of the motion vectors detected in the previous frame can be used as these feature quantities. Specifically, when the cache memory hit rate in the previous frame falls below a certain value, the current motion vector search range is switched. For example, when motion vectors are being detected in the horizontally long motion vector search range but most of the motion vectors detected in the previous frame have a small horizontal component and a large vertical component, the motion vector search range is switched to a vertically long one. In short, these methods set the shape of the area of the reference image data stored in the cache memory in the current frame from the tendency observed in the previous frame.
- note that the search range may also be switched according to a user instruction, with the shape of the area of the reference image data stored in the cache memory switched in conjunction with it.
- thereby, the hit rate can be further improved and the moving image data can be encoded more efficiently.
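A frame-level sketch of this switching logic, with the hit-rate threshold and the majority test as illustrative assumptions:

```python
def next_area(prev_hit_rate: float, prev_vectors, current_area: str,
              hit_threshold: float = 0.8) -> str:
    """Keep the current area shape while the previous frame's hit rate
    is good; otherwise pick the shape matching the dominant direction of
    the previous frame's motion vectors."""
    if prev_hit_rate >= hit_threshold:
        return current_area
    vertical = sum(1 for dx, dy in prev_vectors if abs(dy) > abs(dx))
    if vertical > len(prev_vectors) // 2:
        return "AR2"                     # mostly vertical motion
    return "AR1"                         # mostly horizontal motion
```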
- the present invention is applied to a moving picture data encoding apparatus that switches between the MPEG-4 AVC/ITU-T H.264 and MPEG-2 systems, and the shape of the area stored in the cache memory is switched in conjunction with this switching of the encoding method.
- the encoding apparatus of this embodiment is configured in the same way as the encoding apparatus of Embodiment 4 except that the configuration relating to the shape switching of the area stored in the cache memory is different.
- generally, the hit rate increases when the shape of the area stored in the cache memory is set to a shape close to a square; in the MPEG-2 system, however, the hit rate is higher when this area is formed in a horizontally long shape than in the case of the MPEG-4 AVC/ITU-T H.264 system.
- therefore, in this embodiment, in the MPEG-4 AVC/ITU-T H.264 system, reference image data of a square shape of 16 pixels × 16 pixels is stored in the cache memory, while in the MPEG-2 system the reference image data is stored in the cache memory in a horizontally long shape of 16 pixels × 8 pixels.
- thereby, in this embodiment as well, the hit rate can be further improved and the moving image data can be encoded more efficiently.
- the present invention is applied to a moving image data decoding apparatus based on the MPEG-4 AVC/ITU-T H.264 system similar to that of the second embodiment, and the shape of the area stored in the cache memory is switched as appropriate.
- the decoding apparatus of this embodiment is configured in practically the same manner as the decoding device of the second embodiment except for the configuration relating to the cache memory.
- in this embodiment, a first area AR1 of 16 pixels × 8 pixels and a second area AR2 of 8 pixels × 16 pixels are prepared as the area of the reference image data D16 to be stored in the cache memory.
- the decoding unit core 31 executes the processing procedure of FIG. 20 and FIG. 21 in units of frames, and sets the area of the reference image data D16 stored in the cache memory to the first or second area AR1 or AR2 on a frame-by-frame basis.
- the decoding unit core 31 moves from step SP1 to step SP2 and determines whether or not the current frame is the second or later frame of the moving image data. If a negative result is obtained here, the decoding unit core 31 sets the area of the reference image data D16 to be stored in the cache memory to whichever of the first and second areas AR1 and AR2 is set in advance, and moves on to step SP3.
- if a positive result is obtained in step SP2, the process proceeds from step SP2 to step SP4, where the decoding unit core 31 detects the motion compensation block size detected most frequently in the previous frame and selects, from the areas AR1 and AR2, the area satisfying that block size. More specifically, when the most frequently detected motion compensation block size is horizontally long, the first area AR1 of 16 pixels × 8 pixels is selected; when it is vertically long, the second area AR2 of 8 pixels × 16 pixels is selected.
- the decoding unit core 31 sets the selected area as the area of the reference image data D16 stored in the cache memory, and proceeds to step SP3.
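The frame-start selection in steps SP2 and SP4 can be sketched as below; the default used when no statistics exist yet and the area labels are assumptions:

```python
from collections import Counter

def select_area(prev_block_sizes) -> str:
    """Choose the cached-area shape from the motion compensation block
    size detected most frequently in the previous frame."""
    if not prev_block_sizes:             # first frame: preset default
        return "AR1"
    (w, h), _ = Counter(prev_block_sizes).most_common(1)[0]
    return "AR1" if w >= h else "AR2"    # 16x8 if horizontal, else 8x16
```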
- in step SP3, the decoding unit core 31 analyzes the header of the input bitstream, acquires information such as the picture type, and notifies the reproduction control unit 25 of it.
- in step SP7, the motion compensation block sizes are counted for each type.
- the decoding unit core 31 determines the area of the reference image data D16 to be stored in the cache memory for the subsequent frame by evaluating this count result in the processing of step SP4 of the subsequent frame.
- the decoding unit core 31 then moves to step SP8 to detect a motion vector, and in step SP9 calculates, based on the motion vector detected in step SP8, the position of the reference image data used to generate the predicted value.
- the reference image data at the position obtained by this calculation is reference image data included in an area of a size corresponding to the processing target macroblock, and is hence referred to below as a reference macroblock.
- the decoding unit core 31 moves to step SP10 and determines whether or not the reference macroblock is present in the cache memory. If the reference macroblock is present in the cache memory, it generates the predicted value from the reference macroblock in the cache memory and goes to step SP11. On the other hand, if the reference macroblock is not in the cache memory, the process moves to step SP12, where the reference macroblock is transferred from the reference image memory to the cache memory, the predicted value is generated from the transferred reference macroblock, and the process moves to step SP11.
- in step SP11, the decoding unit core 31 generates a differential error value by sequentially performing variable-length decoding, inverse quantization, and inverse discrete cosine transform on the processing target macroblock of the input bitstream, and in step SP13 adds the predicted value to the differential error value to decode the moving picture data of the macroblock.
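The reconstruction in steps SP11 and SP13 amounts to adding the predicted values to the decoded differential error values and clipping to the sample range; a per-pixel sketch, assuming 8-bit samples:

```python
def reconstruct(predicted, residual):
    """Add each differential error value to its predicted value and clip
    the result to the 8-bit sample range [0, 255]."""
    return [min(255, max(0, p + r)) for p, r in zip(predicted, residual)]
```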
- this processing procedure up to step SP13 applies to inter-frame coding; in the case of intra-frame coding, predicted values are instead generated by in-plane prediction and the moving image data is decoded accordingly.
- after completing the processing of step SP13, the decoding unit core 31 moves to step SP14 and determines whether or not the processing of all macroblocks of the current frame has been completed; if not, it returns to step SP8. On the other hand, if an affirmative result is obtained in step SP14, the process proceeds from step SP14 to step SP15 to output the moving image data of the decoded current frame, and then moves to step SP16 to end this processing procedure.
- in this way, the shape of the area of the reference image data D16 stored in the cache memory is switched to the shape estimated to give a high cache hit rate; more specifically, by switching to the shape corresponding to the generation unit predicted from the previous frame to be most frequent in the current frame, the hit rate can be further improved and decoding can be performed efficiently.
- the present invention is applied to a moving picture data decoding apparatus that switches between the MPEG-4 AVC/ITU-T H.264 and MPEG-2 systems, and the shape of the area stored in the cache memory is switched in conjunction with this switching of the encoding method.
- the decoding device of this embodiment is configured in the same manner as the decoding device of Embodiment 2 except for the configuration relating to the shape switching of the area stored in the cache memory.
- MPEG-2 provides two types of prediction value generation units: 16 pixels × 16 pixels and 16 pixels × 8 pixels.
- MPEG-4 AVC/ITU-T H.264 provides seven types of prediction value generation units: 16 pixels × 16 pixels, 16 pixels × 8 pixels, 8 pixels × 16 pixels, 8 pixels × 8 pixels, 4 pixels × 8 pixels, 8 pixels × 4 pixels, and 4 pixels × 4 pixels.
- when the input bit stream to be processed is the MPEG-2 system, the cache memory is configured with 2 ways, and as shown in FIG. 24 (A), the 16 pixel × 8 pixel area AR1 is selected and the reference image data is stored in the cache memory.
- when the input bit stream to be processed is the MPEG-4 AVC/ITU-T H.264 system, the cache memory is configured with 1 way, and as shown in FIG. 24 (B), the 16 pixel × 8 pixel area AR2 is selected and the reference image data is stored in the cache memory.
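Putting the two cases side by side, the codec-dependent setup in steps SP22, SP23, and SP26 might look like the sketch below; the returned dictionary shape is an assumption:

```python
def cache_config(codec: str) -> dict:
    """MPEG-2 input: 2-way cache with area AR1; otherwise (H.264 input):
    1-way cache with area AR2, as FIG. 24 (A)/(B) describe."""
    if codec == "MPEG-2":
        return {"ways": 2, "area": "AR1"}
    return {"ways": 1, "area": "AR2"}
```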
- FIG. 25 is a flowchart showing a processing procedure of the decoding unit core 31 relating to the switching of the cache memory.
- the decoding unit core 31 executes this processing procedure when starting the decoding process. That is, when starting this processing procedure, the decoding unit core 31 moves from step SP21 to step SP22, where it analyzes the input bitstream, detects the type of codec, and determines whether or not the detected codec type is MPEG-2. If a positive result is obtained here, the decoding unit core 31 moves from step SP22 to step SP23 and sets the area of the reference image data D16 stored in the cache memory to the first area AR1. The process then proceeds to step SP24, where each part is instructed to start the codec processing, and then to step SP25, where the processing procedure ends.
- on the other hand, if a negative result is obtained in step SP22, the decoding unit core 31 moves from step SP22 to step SP26 and sets the area of the reference image data D16 stored in the cache memory to the second area AR2, then proceeds to step SP24.
- thereby, the hit rate can be further improved and the moving image data can be decoded efficiently.
- the present invention is applied to an encoding / decoding device.
- the encoding/decoding apparatus of this embodiment is configured so that the program of the arithmetic processing means constituting the decoding unit core 31 described above with reference to FIG. can be switched, whereby the configuration is switched between the encoding device and the decoding device.
- when encoding, moving image data is sequentially encoded with the configuration relating to the cache memory of the encoding device of the above-described embodiment.
- when decoding, moving image data is sequentially decoded with the configuration relating to the cache memory of the decoding device of the above-described embodiment. The encoding/decoding apparatus of this embodiment therefore switches the shape of the area of the reference image data stored in the cache memory between the case of encoding and the case of decoding.
- thereby, the hit rate is further improved and the moving image data can be encoded and decoded efficiently.
- in this embodiment, the cache memory is composed of 128 ways. Each way is set to a capacity of 16 or 32 readout units of the reference image data read from the reference image memory (2 words or 4 words, where 8 pixels × 8 pixels is 1 word).
- by omitting the setting of an index for the reference image data of these 2 or 4 words, this cache memory is configured so that the reference image data stored in each way can be identified by the tag of each way.
- the address data in the reference image memory of the first 8 pixels stored in each way is set in each tag. In this embodiment, therefore, whether or not the corresponding reference image data is stored in the cache memory is determined by comparing the address data accessing the cache memory with the tags set in the cache memory. If the corresponding reference image data is not stored in the cache memory, it is loaded from the reference image memory to generate the predicted value, stored in the cache memory, and the address of its first 8 pixels is set in the tag. Even when, as in this embodiment, the number of ways is increased and the reference image data is identified only by tags instead of an index, the same effects as in the above-described embodiments can be obtained.
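A sketch of this tag-only identification: each way's tag holds the reference-memory address of the first readout unit it covers, and a request matches a way when its address falls in that span. The way count, span, and round-robin refill shown are illustrative assumptions:

```python
class TaggedWayCache:
    """Toy many-way cache identified by tags alone (no index)."""
    def __init__(self, num_ways: int = 128, units_per_way: int = 16):
        self.units_per_way = units_per_way
        self.ways = [None] * num_ways    # each entry: (tag_address, data)
        self.victim = 0                  # round-robin replacement pointer

    def lookup(self, addr: int):
        for way in self.ways:
            if way and way[0] <= addr < way[0] + self.units_per_way:
                return way[1]            # tag comparison succeeded
        return None                      # miss: caller refills and sets tag

    def refill(self, base_addr: int, data):
        self.ways[self.victim] = (base_addr, data)
        self.victim = (self.victim + 1) % len(self.ways)
```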
- in the above-described embodiments, the unit for reading the reference image data from the reference image memory is set to 8 pixels continuous in the horizontal direction, but the present invention is not limited to this; various numbers of pixels can be set, and the invention can also be widely applied when a plurality of pixels continuous in the vertical direction, instead of the horizontal direction, is used as the readout unit.
- the present invention is not limited to this; the shape of the area of the reference image data stored in the cache memory may also be switched according to the moving image data, for example between the interlaced and progressive methods, or according to the frame rate or the like.
- in the above-described embodiments, the case has been described in which each functional block is configured by arithmetic processing means to process the moving image data; however, the present invention is not limited to this, and can also be widely applied to cases where the moving image data is processed by a hardware configuration.
- the present invention can be applied to, for example, a moving image data encoding apparatus and decoding apparatus based on the MPEG-4 AVC/ITU-T H.264 format.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP07767639A EP2061256A4 (en) | 2006-09-06 | 2007-06-20 | IMAGE DATA PROCESSING METHOD, PROGRAM FOR THE SAME METHOD, RECORDING MEDIUM WITH THE RECORDED PROGRAM FOR PROCESSING IMAGE DATA, AND IMAGE DATA PROCESSING DEVICE |
US12/374,114 US8400460B2 (en) | 2006-09-06 | 2007-06-20 | Image data processing method, program for image data processing method, recording medium with recorded program for image data processing method and image date processing device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-240963 | 2006-09-06 | ||
JP2006240963A JP4535047B2 (ja) | 2006-09-06 | 2006-09-06 | 画像データ処理方法、画像データ処理方法のプログラム、画像データ処理方法のプログラムを記録した記録媒体及び画像データ処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008029550A1 true WO2008029550A1 (en) | 2008-03-13 |
Family
ID=39156995
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/062833 WO2008029550A1 (en) | 2006-09-06 | 2007-06-20 | Image data processing method, program for image data processing method, recording medium with recorded program for image data processing method and image data processing device |
Country Status (7)
Country | Link |
---|---|
US (1) | US8400460B2 (ja) |
EP (1) | EP2061256A4 (ja) |
JP (1) | JP4535047B2 (ja) |
KR (1) | KR20090064370A (ja) |
CN (1) | CN101502125A (ja) |
TW (1) | TW200814790A (ja) |
WO (1) | WO2008029550A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012227608A (ja) * | 2011-04-15 | 2012-11-15 | Toshiba Corp | 画像符号化装置及び画像復号装置 |
JP2013126083A (ja) * | 2011-12-14 | 2013-06-24 | Fujitsu Ltd | 画像処理装置 |
JP2014513883A (ja) * | 2011-03-07 | 2014-06-05 | 日本テキサス・インスツルメンツ株式会社 | ビデオ符号化のためのキャッシュ方法およびシステム |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011097197A (ja) * | 2009-10-27 | 2011-05-12 | Yamaha Corp | メモリアクセス制御装置 |
JP2011097198A (ja) * | 2009-10-27 | 2011-05-12 | Yamaha Corp | メモリアクセス制御装置 |
JP5734700B2 (ja) * | 2011-02-24 | 2015-06-17 | 京セラ株式会社 | 携帯情報機器および仮想情報表示プログラム |
JP5736863B2 (ja) * | 2011-03-15 | 2015-06-17 | 富士通株式会社 | トランスコード装置及びトランスコード方法 |
KR20130043322A (ko) * | 2011-10-20 | 2013-04-30 | 삼성전자주식회사 | 디스플레이 컨트롤러 및 이를 포함하는 디스플레이 장치 |
CN102769753B (zh) * | 2012-08-02 | 2015-12-09 | 豪威科技(上海)有限公司 | H264编码器及编码方法 |
US20140184630A1 (en) * | 2012-12-27 | 2014-07-03 | Scott A. Krig | Optimizing image memory access |
CN106331720B (zh) * | 2015-06-17 | 2020-03-27 | 福州瑞芯微电子股份有限公司 | 一种视频解码相关信息存储方法和装置 |
KR20180028796A (ko) * | 2016-09-09 | 2018-03-19 | 삼성전자주식회사 | 이미지 표시 방법, 저장 매체 및 전자 장치 |
CN106776374B (zh) * | 2017-01-23 | 2021-04-13 | 中核控制系统工程有限公司 | 一种基于fpga的高效数据缓冲方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06231035A (ja) * | 1993-02-03 | 1994-08-19 | Oki Electric Ind Co Ltd | メモリアクセス装置 |
JPH11215509A (ja) * | 1998-01-28 | 1999-08-06 | Nec Corp | 動き補償処理方法及びシステム並びにその処理プログラムを記録した記録媒体 |
WO2005109205A1 (ja) * | 2004-04-15 | 2005-11-17 | Matsushita Electric Industrial Co., Ltd. | 矩形領域に対するバーストメモリアクセス方法 |
JP2006031480A (ja) | 2004-07-16 | 2006-02-02 | Sony Corp | 情報処理システム及び情報処理方法、並びにコンピュータプログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4315312A (en) * | 1979-12-19 | 1982-02-09 | Ncr Corporation | Cache memory having a variable data block size |
US6125429A (en) * | 1998-03-12 | 2000-09-26 | Compaq Computer Corporation | Cache memory exchange optimized memory organization for a computer system |
US7234040B2 (en) * | 2002-01-24 | 2007-06-19 | University Of Washington | Program-directed cache prefetching for media processors |
JP4744510B2 (ja) | 2004-04-22 | 2011-08-10 | シリコン ハイブ ビー・ヴィー | データ値の多次元アレイへのパラレルなアクセスを提供するデータ処理装置 |
JP4180547B2 (ja) | 2004-07-27 | 2008-11-12 | 富士通株式会社 | 動画像データ復号装置、および復号プログラム |
US20060050976A1 (en) * | 2004-09-09 | 2006-03-09 | Stephen Molloy | Caching method and apparatus for video motion compensation |
US7275143B1 (en) * | 2004-12-13 | 2007-09-25 | Nvidia Corporation | System, apparatus and method for avoiding page conflicts by characterizing addresses in parallel with translations of memory addresses |
JP4275085B2 (ja) * | 2005-02-17 | 2009-06-10 | Sony Computer Entertainment Inc. | Information processing device, information processing method, and data stream generation method |
US7965773B1 (en) * | 2005-06-30 | 2011-06-21 | Advanced Micro Devices, Inc. | Macroblock cache |
2006
- 2006-09-06 JP JP2006240963A patent/JP4535047B2/ja not_active Expired - Fee Related

2007
- 2007-06-20 EP EP07767639A patent/EP2061256A4/en not_active Withdrawn
- 2007-06-20 CN CNA2007800299581A patent/CN101502125A/zh active Pending
- 2007-06-20 US US12/374,114 patent/US8400460B2/en not_active Expired - Fee Related
- 2007-06-20 WO PCT/JP2007/062833 patent/WO2008029550A1/ja active Application Filing
- 2007-06-20 KR KR20097004638A patent/KR20090064370A/ko not_active Application Discontinuation
- 2007-06-20 TW TW96122123A patent/TW200814790A/zh not_active IP Right Cessation
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014513883A (ja) * | 2011-03-07 | 2014-06-05 | Texas Instruments Japan Ltd | Caching method and system for video encoding |
JP2012227608A (ja) * | 2011-04-15 | 2012-11-15 | Toshiba Corp | Image encoding device and image decoding device |
JP2013126083A (ja) * | 2011-12-14 | 2013-06-24 | Fujitsu Ltd | Image processing device |
Also Published As
Publication number | Publication date |
---|---|
TWI347785B (ja) | 2011-08-21 |
CN101502125A (zh) | 2009-08-05 |
US8400460B2 (en) | 2013-03-19 |
KR20090064370A (ko) | 2009-06-18 |
EP2061256A1 (en) | 2009-05-20 |
TW200814790A (en) | 2008-03-16 |
US20090322772A1 (en) | 2009-12-31 |
JP4535047B2 (ja) | 2010-09-01 |
EP2061256A4 (en) | 2010-07-07 |
JP2008066913A (ja) | 2008-03-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008029550A1 (en) | Image data processing method, program for image data processing method, recording medium with recorded program for image data processing method and image data processing device | |
RU2720648C1 (ru) | Method and device for encoding or decoding an image with inter-layer prediction of motion information according to a motion information compression scheme | |
TWI717776B (zh) | Adaptive filtering method for intra prediction with multiple reference lines for video content coding, and video encoding device and video decoding device using the method | |
JP5340289B2 (ja) | Image decoding device, image decoding method, integrated circuit, and program | |
JP4746550B2 (ja) | Image encoding device | |
JP5152190B2 (ja) | Encoding device, encoding method, encoding program, and encoding circuit | |
JP4480156B2 (ja) | Image processing device and method | |
JP2011160470A (ja) | Image encoding and decoding | |
JP2007524309A (ja) | Video decoding method | |
WO2010106670A1 (ja) | Image encoding device, image encoding control method, and image encoding program | |
JP4346573B2 (ja) | Encoding device and method | |
JP4675383B2 (ja) | Image decoding device and method, and image encoding device | |
JP2018511237A (ja) | Content-adaptive B-picture pattern video encoding | |
TWI418219B (zh) | Data mapping method and cache memory system for a motion compensation system | |
JP2007067526A (ja) | Image processing device | |
JP2007325119A (ja) | Image processing device and image processing method | |
JP2003348594A (ja) | Image decoding device and method | |
JP5020391B2 (ja) | Decoding device and decoding method | |
KR101602871B1 (ko) | Data encoding method and device, and data decoding method and device | |
JP2015226199A (ja) | Moving image encoding device, moving image encoding method, and moving image encoding program | |
JP4515870B2 (ja) | Signal processing device and video system | |
JP5206070B2 (ja) | Decoding device and decoding method | |
KR100556341B1 (ko) | Video decoder system with reduced memory bandwidth | |
JPH10164596A (ja) | Motion detection device | |
JP2011097488A (ja) | Video compression encoding device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200780029958.1; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07767639; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 12374114; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 2007767639; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 1020097004638; Country of ref document: KR |
| NENP | Non-entry into the national phase | Ref country code: DE |