WO2009142003A1 - Image Encoding Device and Image Encoding Method - Google Patents
- Publication number
- WO2009142003A1 (PCT/JP2009/002207)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- encoding
- unit
- signal
- size
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/57—Motion estimation characterised by a search window with variable size or shape
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/37—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability with arrangements for assigning different transmission priorities to video input data or to video coded data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/56—Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- the present invention relates to an image encoding device and an image encoding method, and more particularly to an image encoding device that divides each picture into a plurality of images and encodes the divided images with a plurality of encoding units.
- FIG. 1 is a block diagram showing the configuration of a conventional image encoding device 100 described in Patent Document 1.
- the encoding unit 108 includes a first encoding unit 103A and a second encoding unit 103B.
- the image signal input terminal 101 is supplied with an input image signal (video signal) 110 having a high pixel rate.
- the high pixel rate input image signal 110 is a sequential signal such as a high-definition signal as described above.
- the signal dividing unit 102 generates the first divided image signal 111A and the second divided image signal 111B by dividing the input image signal 110 into two parts, for example, up and down. For example, when the effective image frame of the input image signal 110 is 480 lines, the first divided image signal 111A and the second divided image signal 111B are image signals for 240 lines, respectively.
- the first encoding unit 103A and the second encoding unit 103B are low pixel rate encoders.
- the first encoding unit 103A generates the first encoded signal 112A by compressing and encoding the first divided image signal 111A.
- the second encoding unit 103B generates the second encoded signal 112B by compressing and encoding the second divided image signal 111B.
- when the first encoding unit 103A and the second encoding unit 103B perform motion detection and motion compensation on an image near the boundary between the first divided image signal 111A and the second divided image signal 111B, each uses the local decoded image corresponding to the first overlapping region 115A or the second overlapping region 115B generated by the other encoding unit.
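The division into two halves that each carry an extra overlapping region can be illustrated with a minimal sketch. This is not part of the patent text; the function name and the list-of-rows picture representation are assumptions for illustration only:

```python
# Hypothetical sketch: split a picture (a list of pixel rows) into two
# divided-image signals, each extended past the boundary by `overlap_lines`
# rows taken from the other half, mirroring regions 115A/115B.
def divide_picture(picture, overlap_lines):
    half = len(picture) // 2
    first = picture[:half + overlap_lines]    # top half + overlap below boundary
    second = picture[half - overlap_lines:]   # bottom half + overlap above boundary
    return first, second

rows = [[r] * 4 for r in range(480)]          # a 480-line picture, as in the example
first, second = divide_picture(rows, 16)
print(len(first), len(second))                # 256 256
```

The rows shared by both signals (the overlapping regions) are exactly the ones each encoder must otherwise fetch from its neighbour as a local decoded image.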
- FIG. 2A is a diagram showing the first divided image signal 111A and the first overlapping region 115A.
- FIG. 2B is a diagram illustrating the second divided image signal 111B and the second overlapping region 115B.
- the first encoding unit 103A generates a local decoded image of the first divided image signal 111A when the first divided image signal 111A is encoded.
- the first encoding unit 103A outputs, from among the generated local decoded images, the local decoded image 113A corresponding to the second overlapping region 115B included in the second search range 116B of the second encoding unit 103B to the second encoding unit 103B.
- the second encoding unit 103B generates a local decoded image of the second divided image signal 111B when the second divided image signal 111B is encoded.
- the second encoding unit 103B outputs, from among the generated local decoded images, the local decoded image 113B corresponding to the first overlapping region 115A included in the first search range 116A of the first encoding unit 103A to the first encoding unit 103A.
- the first encoding unit 103A uses a local decoded image corresponding to the first search range 116A when performing motion detection and motion compensation of the first divided image signal 111A. Further, the local decoded image 113B output from the second encoding unit 103B is used as the local decoded image corresponding to the first overlapping region 115A included in the first search range 116A.
- the second encoding unit 103B uses a local decoded image corresponding to the second search range 116B when performing motion detection and motion compensation of the second divided image signal 111B. Further, the local decoded image 113A output by the first encoding unit 103A is used as the local decoded image corresponding to the second overlapping area 115B included in the second search range 116B.
- the signal synthesizer 106 synthesizes the low-pixel-rate first encoded signal 112A and second encoded signal 112B to generate an output encoded signal 114 having a high pixel rate, and outputs the generated output encoded signal 114 to the encoded signal output terminal 107.
- the image encoding device 100 described in Patent Document 1 thus realizes a high-pixel-rate image encoding device using the low-pixel-rate first encoding unit 103A and second encoding unit 103B.
- the image encoding device 100 described in Patent Document 1 needs to share the local decoded images 113A and 113B, each referenced by the other unit, between the adjacent first encoding unit 103A and second encoding unit 103B.
- the image encoding device 100 described in Patent Document 1 therefore has the problem that, as the pixel rate of the input image signal 110 increases, the data transfer amount of the local decoded images 113A and 113B increases and the required bandwidth grows.
- when the encoding units are realized as separate integrated circuits, the local decoded images 113A and 113B must be transferred via an external bus connecting the integrated circuits, and transfer over such an external bus is very slow compared to the internal encoding process.
- the conventional image encoding device 100 has a problem that an increase in the data transfer latency of the local decoded images 113A and 113B becomes a bottleneck in the speed of the encoding process.
- the present invention solves the above-described conventional problems, and an object thereof is to provide an image encoding device and an image encoding method capable of reducing the data amount of the local decoded images transferred between adjacent encoding units.
- an image encoding device according to one aspect of the present invention generates an output encoded signal by encoding an input image signal, and includes: a signal dividing unit that divides each picture included in the input image signal into a plurality of encoding target images; a plurality of encoding units, each corresponding to one of the plurality of encoding target images, that perform an encoding process including a motion compensation process on the corresponding encoding target image and generate a local decoded image by encoding and decoding the corresponding encoding target image; and a signal synthesis unit that synthesizes the plurality of encoded signals generated by the plurality of encoding units.
- each encoding unit performs the motion compensation process using a first local decoded image of its own encoding target image included in the search range and a second local decoded image of the overlapping region generated by another encoding unit, and the signal dividing unit switches the size of the overlapping region according to a predetermined condition.
- the image encoding device can reduce the amount of local decoded image data transferred between adjacent encoding units by, for example, reducing the overlapping region when the data transfer amount between the encoding units increases.
- the communication bandwidth between the encoding units can thus be reduced.
- the image encoding apparatus can process an input image signal with a higher pixel rate.
- the signal dividing unit may determine the size of the overlapping region to be a first size when a pixel rate, which is the number of pixels to be processed per unit time by the image encoding device, is smaller than a first threshold value, and determine it to be a second size smaller than the first size when the pixel rate is larger than the first threshold.
- the image encoding device can reduce the data amount of the second local decoded image transmitted between the encoding units when the pixel rate is high. The communication bandwidth between the encoding units can thereby be reduced, and by reducing this bandwidth, the image encoding apparatus according to the present invention can process an input image signal with a higher pixel rate.
- the image coding apparatus can cope with a higher pixel rate while suppressing deterioration in coding efficiency and image quality.
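The pixel-rate-dependent switching just described can be sketched as follows. This is a hypothetical illustration, not part of the disclosure; the function name and the concrete threshold are assumptions, and the 32- and 16-line sizes are borrowed from the Embodiment 1 example later in the text:

```python
# Hypothetical sketch of the signal dividing unit's rule: a wide overlap
# (wide search range) at low pixel rates, a narrow one above the first threshold.
def overlap_size(pixel_rate, first_threshold, first_size=32, second_size=16):
    return first_size if pixel_rate < first_threshold else second_size

threshold = 1920 * 1080 * 30                      # assumed first threshold (pixels/s)
print(overlap_size(1280 * 720 * 30, threshold))   # 32 (low pixel rate)
print(overlap_size(1920 * 1080 * 60, threshold))  # 16 (high pixel rate)
```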
- each of the encoding units may include a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding encoding target image, and a motion compensation unit that performs the motion compensation process using the motion vector detected by the motion detection unit. The image encoding device may further include, for each of the plurality of encoding units, an original image storage unit that stores the corresponding encoding target image and the corresponding image of the overlapping region as an original image, and the motion detection unit may detect the motion vector using the original image stored in the original image storage unit.
- the image encoding device of the present invention can acquire the second local decoded image while the motion detection unit calculates the motion vector using the original image. Therefore, the image coding apparatus according to the present invention can reduce the waiting time of the motion detection unit until the second local decoded image is acquired, and thus can reduce latency. Thereby, the image coding apparatus according to the present invention can cope with a higher pixel rate.
- the image encoding device may further include, for each of the plurality of encoding units, a local decoded image storage unit that stores the first local decoded image and the second local decoded image used by the corresponding encoding unit for the motion compensation process. The motion detection unit may detect the motion vector using the original image stored in the original image storage unit when the pixel rate is greater than a second threshold, and detect the motion vector using the first local decoded image and the second local decoded image stored in the local decoded image storage unit when the pixel rate is smaller than the second threshold.
- the image encoding device of the present invention can reduce latency by acquiring the second local decoded image while calculating a motion vector using the original image at a high pixel rate. Furthermore, the image coding apparatus according to the present invention can suppress deterioration in image quality by calculating a motion vector using a local decoded image at a low pixel rate.
- each encoding unit may request the second local decoded image from another encoding unit, and, when the pixel rate is greater than the second threshold, the motion detection unit may start detecting the motion vector using the original image stored in the original image storage unit before the encoding unit obtains the second local decoded image output by the other encoding unit in response to the request.
- the image encoding device of the present invention can acquire the second local decoded image from another encoding unit while the motion detection unit calculates the motion vector using the original image. Therefore, the image coding apparatus according to the present invention can reduce the waiting time of the motion detection unit until the second local decoded image is acquired, and thus can reduce latency.
- the image encoding device may further include a pixel rate acquisition unit that acquires the pixel rate specified by a user operation.
- the image encoding apparatus may further include a first calculation unit that calculates at least one of an image size and a frame rate of the input image signal using information included in the input image signal, and a second calculation unit that calculates the pixel rate using at least one of the image size and the frame rate calculated by the first calculation unit.
- the image encoding device can automatically determine the pixel rate of the input image signal and change the size of the overlapping region according to the determined pixel rate.
- the first calculation unit may calculate at least one of the image size and the frame rate of the input image signal using at least one of a pixel clock, a horizontal synchronization signal, and a vertical synchronization signal included in the input image signal.
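As a rough illustration of how such a calculation unit might derive these values from the sync signals, consider the following sketch. It is hypothetical (counter-based measurement and all names are assumptions), and the counts obtained this way include blanking intervals, which a real implementation would subtract to obtain the active image size:

```python
# Hypothetical sketch: with counters driven by the pixel clock, the number of
# clocks per H-sync period gives the line length, the number of H-syncs per
# V-sync gives the frame height, and their product divided into the pixel
# clock frequency gives the frame rate.
def derive_format(pixel_clock_hz, clocks_per_hsync, hsyncs_per_vsync):
    frame_rate = pixel_clock_hz / (clocks_per_hsync * hsyncs_per_vsync)
    return clocks_per_hsync, hsyncs_per_vsync, frame_rate

# A 720p-like raster: 74.25 MHz clock, 1650 clocks/line, 750 lines/frame
w, h, fps = derive_format(74_250_000, 1650, 750)
print(fps)                                    # 60.0
```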
- the image encoding device may further include a first storage unit that stores the first local decoded images generated by the plurality of encoding units, and the signal dividing unit may determine the size of the overlapping area to be the first size when the free capacity of the first storage unit is larger than a first threshold, and determine it to be the second size when the free capacity is smaller than the first threshold.
- the first storage unit may include a plurality of second storage units, each corresponding to one of the plurality of encoding units and storing the first local decoded image and the second local decoded image used by the corresponding encoding unit for the motion compensation process. The signal dividing unit may determine the size of the overlapping area to be the first size when the smallest free capacity among the plurality of second storage units is larger than the first threshold, and determine it to be the second size when the smallest free capacity is smaller than the first threshold.
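A sketch of this capacity-based variant (hypothetical; the function name, capacity units, and threshold value are assumptions):

```python
# Hypothetical sketch: the overlap is widened only when every per-unit
# second storage unit has free capacity above the first threshold.
def overlap_from_capacity(free_capacities, first_threshold,
                          first_size=32, second_size=16):
    return first_size if min(free_capacities) > first_threshold else second_size

print(overlap_from_capacity([512, 384], 256))   # 32: both stores have room
print(overlap_from_capacity([512, 128], 256))   # 16: smallest store is tight
```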
- each of the encoding units may include a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding encoding target image, and a motion compensation unit that performs the motion compensation process using the detected motion vector. The signal dividing unit may determine the size of the overlapping region to be a first size when the motion vector is greater than a first threshold, and determine it to be a second size smaller than the first size when the motion vector is smaller than the first threshold.
- the image encoding device reduces the overlapping area when the motion vector is small.
- when the motion vectors are small, the coding efficiency and the image quality are unlikely to deteriorate even if the size of the search range is reduced.
- the image encoding device can cope with encoding processing at a higher pixel rate while suppressing deterioration in encoding efficiency and image quality.
- the signal dividing unit may determine the size of the overlapping region to be the first size when the largest motion vector among the motion vectors straddling the boundaries of the plurality of encoding target images is larger than the first threshold, and determine it to be the second size when that largest motion vector is smaller than the first threshold.
- the image encoding device thus changes the size of the overlapping region according to the motion vectors of the blocks adjacent to the boundary of the encoding target image, that is, the blocks on which motion compensation processing is performed using the image of the overlapping region.
- the image coding apparatus can reduce the amount of data transferred between the coding units while suppressing deterioration in coding efficiency and image quality.
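The motion-vector-based switching might be sketched as follows (hypothetical; the function name is assumed, and scalar motion vector lengths of boundary-straddling blocks stand in for full vectors):

```python
# Hypothetical sketch: widen the overlap only when the largest motion vector
# crossing a division boundary exceeds the first threshold.
def overlap_from_motion(boundary_mv_lengths, first_threshold,
                        first_size=32, second_size=16):
    largest = max(abs(v) for v in boundary_mv_lengths)
    return first_size if largest > first_threshold else second_size

print(overlap_from_motion([3, -12, 5], 8))     # 32: fast motion at the boundary
print(overlap_from_motion([2, -3, 1], 8))      # 16: slow motion at the boundary
```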
- each of the encoding units may include a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding encoding target image, and a motion compensation unit that performs the motion compensation process using the detected motion vector. Each encoding unit may predict the motion vector of a block adjacent to both the overlapping region and the boundary of the corresponding encoding target image from the motion vectors of the blocks around that block, acquire the second local decoded image generated by another encoding unit when the predicted motion vector points toward the boundary, and not acquire the second local decoded image when the predicted motion vector does not point toward the boundary.
- the image encoding device does not acquire the second local decoded image when the second local decoded image is not necessary.
- the image coding apparatus can further reduce the data transfer amount between the coding units.
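A minimal sketch of this prediction-gated fetch (hypothetical: the patent does not specify the predictor, so a median of neighbour vectors, as used in H.264-style prediction, is an assumption here, as are all names):

```python
# Hypothetical sketch: predict the vertical MV component of a boundary-adjacent
# block from its neighbours' motion vectors, and fetch the second local decoded
# image only when the predicted vector points across the division boundary.
def need_second_local_decoded_image(neighbour_mvs, boundary_below):
    vys = sorted(mv[1] for mv in neighbour_mvs)
    pred_vy = vys[len(vys) // 2]               # median vertical component
    # boundary_below: True if the division boundary lies below this block
    return pred_vy > 0 if boundary_below else pred_vy < 0

print(need_second_local_decoded_image([(0, 2), (1, 3), (0, -1)], True))   # True
print(need_second_local_decoded_image([(0, -2), (0, -3), (0, -1)], True)) # False
```

When the predicted vector points away from the boundary, the transfer is skipped entirely, which is the source of the data reduction described above.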
- the present invention can be realized not only as such an image encoding device, but also as an image encoding method whose steps are the characteristic means included in the image encoding device, or as a program for causing a computer to execute such characteristic steps. Needless to say, such a program can be distributed via a recording medium such as a CD-ROM or a transmission medium such as the Internet.
- the present invention can be realized as a semiconductor integrated circuit (LSI) that realizes part or all of the functions of such an image encoding device, or can be realized as a camera equipped with such an image encoding device.
- LSI: semiconductor integrated circuit
- the present invention can provide an image encoding device and an image encoding method that can reduce the data amount of a local decoded image transferred between adjacent encoding units.
- FIG. 1 is a block diagram of a conventional image coding apparatus.
- FIG. 2A is a diagram illustrating an example of an image in a conventional image encoding device.
- FIG. 2B is a diagram illustrating a screen example in a conventional image encoding device.
- FIG. 3 is a block diagram of the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 4A is a diagram showing an example of image division by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 4B is a diagram showing an example of image division by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 5 is a flowchart of processing performed by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 6 is a flowchart of image segmentation processing by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 7A is a diagram showing an example of image division at a high pixel rate by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 7B is a diagram showing an example of image division at the low pixel rate by the image coding device according to Embodiment 1 of the present invention.
- FIG. 8 is a block diagram of the first encoding unit according to Embodiment 1 of the present invention.
- FIG. 9 is a flowchart of the encoding process performed by the image encoding apparatus according to Embodiment 1 of the present invention.
- FIG. 10 is a diagram showing the usage of the original image and the local decoded image in the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 11 is a flowchart of motion detection and motion compensation processing by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 12 is a timing chart showing an example of signal processing in the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 13 is a flowchart of the local decoded image necessity determination process by the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 14 is a diagram showing motion vector prediction processing in the image coding apparatus according to Embodiment 1 of the present invention.
- FIG. 15 is a block diagram of an image coding apparatus according to Embodiment 2 of the present invention.
- FIG. 16 is a flowchart of processing performed by the image coding apparatus according to Embodiment 2 of the present invention.
- FIG. 17 is a block diagram of an image coding apparatus according to Embodiment 3 of the present invention.
- FIG. 18 is a flowchart of processing performed by the image coding apparatus according to Embodiment 3 of the present invention.
- FIG. 19 is a flowchart of image division processing by the image coding apparatus according to Embodiment 3 of the present invention.
- FIG. 20 is a block diagram of an image coding apparatus according to Embodiment 4 of the present invention.
- FIG. 21 is a flowchart of processing performed by the image coding apparatus according to Embodiment 4 of the present invention.
- FIG. 22 is a flowchart of image segmentation processing by the image coding apparatus according to Embodiment 4 of the present invention.
- Embodiment 1: The image coding apparatus 300 according to Embodiment 1 of the present invention reduces the size of the overlapping region used as the search range for the motion detection process and the motion compensation process when the pixel rate is high. The data amount of the local decoded images transferred between the encoding units can thereby be reduced at high pixel rates.
- FIG. 3 is a block diagram showing a configuration of image coding apparatus 300 according to Embodiment 1 of the present invention.
- the input image signal 310 is a sequential signal including a plurality of pictures, for example, a high-definition image signal or an image signal taken at a high speed.
- high-speed shooting is shooting performed at a frame rate (for example, 300 frames per second) higher than a normal frame rate (for example, 30 or 60 frames per second).
- the image encoding device 300 includes an imaging method switching unit 301, a signal dividing unit 302, a first encoding unit 303A, a second encoding unit 303B, a first storage area connection unit 304A, a second storage area connection unit 304B, a first external connection unit 305A, a second external connection unit 305B, a first storage unit 306A, a second storage unit 306B, and a signal synthesis unit 307.
- the imaging method switching unit 301 corresponds to the pixel rate acquisition unit of the present invention and acquires a pixel rate specified by a user operation. Specifically, the imaging method switching unit 301 acquires one of i preset pixel rates according to the user's switch operation. The imaging method switching unit 301 then outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- the pixel rate is the number of pixels to be encoded by the image encoding device 300 per unit time.
- the pixel rate corresponds to the amount of encoding processing that the image encoding device 300 must perform per unit time.
- the pixel rate is the product of the image size and the frame rate.
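For example, this relation gives the following for a full-HD signal at 60 frames per second (a worked instance, not taken from the patent text):

```python
# Pixel rate = image size (pixels per frame) x frame rate (frames per second).
width, height, frame_rate = 1920, 1080, 60
pixel_rate = width * height * frame_rate
print(pixel_rate)                              # 124416000 pixels per second
```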
- the signal dividing unit 302 generates the first divided image signal 312A and the second divided image signal 312B by dividing each picture included in the input image signal 310 into, for example, upper and lower parts that partially overlap each other.
- a picture is a single image included in the input image signal 310 and is a frame or a field.
- FIG. 4A is a diagram showing a configuration of the first divided image signal 312A.
- FIG. 4B is a diagram illustrating a configuration of the second divided image signal 312B.
- the first divided image signal 312A includes a first encoding target range 315A and a first overlapping region 316A.
- the second divided image signal 312B includes a second encoding target range 315B and a second overlapping region 316B.
- the first encoding target range 315A and the second encoding target range 315B are obtained by equally dividing each picture included in the input image signal 310 vertically; in this example, each is an image area of 360 lines. The first encoding target range 315A is the image region encoded by the first encoding unit 303A, and the second encoding target range 315B is the image region encoded by the second encoding unit 303B.
- the first overlapping area 316A is an image area for n lines (n is an integer of 1 or more) included in the second encoding target range 315B and adjacent to the first encoding target range 315A.
- the second overlapping region 316B is an image region for n lines (n is an integer of 1 or more) included in the first encoding target range 315A and adjacent to the second encoding target range 315B.
- the signal dividing unit 302 divides each picture included in the input image signal 310 into the first encoding target range 315A and the second encoding target range 315B.
- the first divided image signal 312A and the second divided image signal 312B correspond to the motion vector search ranges in the motion detection process and the motion compensation process by the first encoding unit 303A and the second encoding unit 303B, respectively. That is, the signal dividing unit 302 determines the motion vector search ranges in the motion detection process and the motion compensation process by the first encoding unit 303A and the second encoding unit 303B.
- the signal dividing unit 302 changes the number n of lines of the first overlapping region 316A and the second overlapping region 316B in i stages according to the identification signal 311 output from the imaging method switching unit 301. Specifically, when the pixel rate is smaller than a predetermined threshold, the signal dividing unit 302 determines the size of the first overlapping region 316A and the second overlapping region 316B as the first size (for example, 32 lines). When the pixel rate is larger than the predetermined threshold, the sizes of the first overlapping region 316A and the second overlapping region 316B are determined to be a second size (for example, 16 lines) smaller than the first size.
- the signal dividing unit 302 outputs the generated first divided image signal 312A to the first encoding unit 303A, and outputs the generated second divided image signal 312B to the second encoding unit 303B.
- the first encoding unit 303A generates the first encoded signal 313A by encoding the first encoding target range 315A included in the first divided image signal 312A divided into two by the signal dividing unit 302. Further, the first encoding unit 303A outputs the generated first encoded signal 313A to the signal synthesis unit 307.
- the second encoding unit 303B generates the second encoded signal 313B by encoding the second encoding target range 315B included in the second divided image signal 312B divided into two by the signal dividing unit 302. Further, the second encoding unit 303B outputs the generated second encoded signal 313B to the signal synthesis unit 307.
- the first encoding unit 303A generates the first local decoded image 317A by encoding the first encoding target range 315A and then decoding it.
- the second encoding unit 303B generates the first local decoded image 317B by encoding the second encoding target range 315B and then decoding it. That is, each local decoded image is the same image as the image generated when the decoding device decodes the output encoded signal 314.
- the first storage unit 306A is used as a line memory or a frame memory when the first encoding unit 303A performs encoding.
- the first storage unit 306A stores the original image (first divided image signal 312A), the first local decoded image 317A, and the second local decoded image 318A corresponding to the first overlapping region 316A as a reference image.
- the first encoding unit 303A reads the past first divided image signal 312A, the first local decoded image 317A, and the second local decoded image 318A as reference images from the first storage unit 306A, and encodes the current picture using the read first divided image signal 312A, first local decoded image 317A, and second local decoded image 318A.
- the first encoding unit 303A stores the first divided image signal 312A and the first local decoded image 317A of the current picture in the first storage unit 306A as reference images used for encoding processing of subsequent pictures. Write.
- the second storage unit 306B is used as a line memory or a frame memory when the second encoding unit 303B performs encoding.
- the second storage unit 306B stores the original image (second divided image signal 312B), the first local decoded image 317B, and the second local decoded image 318B corresponding to the second overlapping region 316B as reference images.
- the second encoding unit 303B reads the past second divided image signal 312B, the first local decoded image 317B, and the second local decoded image 318B as reference images from the second storage unit 306B, and encodes the current picture using the read second divided image signal 312B, first local decoded image 317B, and second local decoded image 318B.
- the second encoding unit 303B writes the second divided image signal 312B and the first local decoded image 317B of the current picture in the second storage unit 306B as reference images used for encoding processing of subsequent pictures.
- the first external coupling unit 305A reads the second local decoded image 318A included in the first local decoded image 317B stored in the second storage unit 306B, and writes the read second local decoded image 318A to the first storage unit 306A.
- the second external coupling unit 305B reads the second local decoded image 318B included in the first local decoded image 317A stored in the first storage unit 306A, and writes the read second local decoded image 318B to the second storage unit 306B.
- the first storage area coupling unit 304A transfers data among the first encoding unit 303A, the first storage unit 306A, and the first external coupling unit 305A.
- the second storage area coupling unit 304B performs data transfer among the second encoding unit 303B, the second storage unit 306B, and the second external coupling unit 305B.
- the signal synthesis unit 307 combines the first encoded signal 313A generated by the first encoding unit 303A and the second encoded signal 313B generated by the second encoding unit 303B to generate an output encoded signal 314, which is a single bitstream.
- FIG. 5 is a flowchart showing an operation flow of the image coding apparatus 300 according to Embodiment 1 of the present invention.
- the imaging method switching unit 301 acquires one of the i-stage pixel rates set according to the user's switch operation or the like (S101).
- the imaging method switching unit 301 generates an identification signal 311 indicating the set pixel rate.
- the signal dividing unit 302 divides each picture included in the input image signal 310 into two to generate a first divided image signal 312A and a second divided image signal 312B (S102).
- the first encoding unit 303A generates the first encoded signal 313A by encoding the first divided image signal 312A, and the second encoding unit 303B generates the second encoded signal 313B by encoding the second divided image signal 312B (S103).
- the signal synthesis unit 307 generates the output encoded signal 314 by synthesizing the first encoded signal 313A and the second encoded signal 313B (S104).
- FIG. 6 is a flowchart showing a flow of signal division processing by the signal division unit 302.
- the signal dividing unit 302 refers to the identification signal 311 and determines which of the low pixel rate imaging method and the high pixel rate imaging method is set (S120).
- When the high pixel rate imaging method is set, the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B narrow (S121).
- the signal dividing unit 302 then divides the input image signal 310, thereby generating the first divided image signal 312A including the first overlapping region 316A having the size set in step S121 and the second divided image signal 312B including the second overlapping region 316B having the set size (S123).
- When the low pixel rate imaging method is set, the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B wide (S122).
- the signal dividing unit 302 then divides the input image signal 310, thereby generating the first divided image signal 312A including the first overlapping region 316A having the size set in step S122 and the second divided image signal 312B including the second overlapping region 316B having the set size (S123).
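A possible sketch of the division step S123, assuming a picture represented as a list of rows and a horizontal split into two halves; the function name and the exact boundary handling are illustrative assumptions, not the patent's implementation.

```python
def divide_picture(picture, n):
    """Split a picture (a list of rows) into two divided image signals.

    Each divided signal holds its own encoding target range plus an n-line
    overlapping region taken from the other half, mirroring the first and
    second divided image signals described above. Illustrative sketch only.
    """
    half = len(picture) // 2
    first = picture[:half + n]        # first target range + first overlap
    second = picture[half - n:]       # second overlap + second target range
    return first, second
```

For example, a 720-line picture with n = 16 yields two 376-line divided signals (a 360-line target range plus a 16-line overlap each).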
- FIG. 7A is a diagram illustrating an example of the first divided image signal 312A when the high pixel rate imaging method is set.
- FIG. 7B is a diagram illustrating an example of the first divided image signal 312A when the low pixel rate imaging method is set.
- When the high pixel rate imaging method is set, the number n of lines of the first overlapping region 316A is set to “16”.
- When the low pixel rate imaging method is set, the number n of lines of the first overlapping region 316A is set to “32”.
- the setting of the number n of lines of the second overlapping region 316B is the same.
- FIG. 8 is a block diagram showing the configuration of the first encoding unit 303A, the first storage area coupling unit 304A, the first external coupling unit 305A, and the first storage unit 306A.
- the first storage unit 306A includes an original image storage unit 702 and a local decoded image storage unit 712.
- the original image storage unit 702 stores an original image (first divided image signal 312A).
- the local decoded image storage unit 712 stores the first local decoded image 317A and the second local decoded image 318A.
- the first encoding unit 303A includes a motion detection unit 701, a subtraction unit 703, switches 704, 711, and 713, a conversion unit 705, a quantization unit 706, a variable length encoding unit 707, an inverse quantization unit 708, an inverse transform unit 709, an addition unit 710, and a motion compensation unit 714.
- the switch 713 outputs one of the original image (first divided image signal 312A) stored in the original image storage unit 702 and the first local decoded image 317A and second local decoded image 318A stored in the local decoded image storage unit 712 to the motion detection unit 701. Specifically, when the identification signal 311 indicates the high pixel rate imaging method, the switch 713 outputs the original image (first divided image signal 312A) to the motion detection unit 701, and when the identification signal 311 indicates the low pixel rate imaging method, the switch 713 outputs the first local decoded image 317A and the second local decoded image 318A to the motion detection unit 701.
- the first divided image signal 312A includes an I picture (intra picture) and a P picture (inter picture).
- the I picture is a picture that is subjected to intra-picture encoding using data in the I picture
- the P picture is a picture that is subjected to inter-picture encoding using data of another picture.
- When a processing target picture (hereinafter referred to as a target picture) included in the first divided image signal 312A is a P picture, the motion detection unit 701 performs motion detection processing on the target picture using another picture to generate a motion vector 725.
- the other pictures are pictures included in the original image or the local decoded image (first local decoded image 317A and second local decoded image 318A) output by the switch 713.
- When the pixel rate is larger than the predetermined threshold, the motion detection unit 701 detects the motion vector 725 of the target picture using the original image stored in the original image storage unit 702. Further, when the pixel rate is smaller than the predetermined threshold, the motion detection unit 701 detects the motion vector 725 using the first local decoded image 317A and the second local decoded image 318A stored in the local decoded image storage unit 712.
- the motion detection process searches, for each image block included in the target picture, the past picture for an image block close to that block's image, and then calculates the amount and direction of the motion (motion vector 725) by which the image included in the image block has moved from its position in the past picture.
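The motion detection process can be illustrated with a simple exhaustive block-matching search. This is a generic sketch, not the patent's implementation: the SAD criterion, the function signature, and the block and search sizes are all assumptions.

```python
def motion_detect(target_block, ref, bx, by, search, block=8):
    """Find the motion vector (dx, dy) that best matches target_block
    against the reference picture `ref` around position (bx, by),
    using the sum of absolute differences (SAD). Illustrative sketch."""
    best, best_sad = (0, 0), float("inf")
    h, w = len(ref), len(ref[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + block > w or y + block > h:
                continue  # candidate block falls outside the picture
            sad = sum(abs(target_block[j][i] - ref[y + j][x + i])
                      for j in range(block) for i in range(block))
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best
```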
- the motion compensation unit 714 generates the predicted image 726 by performing motion compensation processing using the motion vector 725 generated by the motion detection unit 701 and other pictures included in the first local decoded image 317A and the second local decoded image 318A stored in the local decoded image storage unit 712.
- the motion compensation process is a process of generating a predicted image 726 corresponding to the image of the target picture by spatially shifting the image included in the past picture by the amount of motion indicated by the motion vector 725.
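Correspondingly, motion compensation for one block amounts to copying the reference image displaced by the motion vector. Again a generic sketch with assumed names and conventions:

```python
def motion_compensate(ref, bx, by, mv, block=8):
    """Build the predicted image for one block at (bx, by) by reading the
    reference picture `ref` shifted by the motion vector mv = (dx, dy).
    Illustrative sketch; bounds checking is omitted for brevity."""
    dx, dy = mv
    return [[ref[by + dy + j][bx + dx + i] for i in range(block)]
            for j in range(block)]
```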
- the subtraction unit 703 generates a prediction error signal 721 by subtracting the predicted image 726 from the target picture included in the first divided image signal 312A.
- When the target picture included in the first divided image signal 312A is a P picture, the switch 704 outputs the prediction error signal 721 generated by the subtraction unit 703 to the conversion unit 705, and when the target picture is an I picture, the switch 704 outputs the first divided image signal 312A to the conversion unit 705.
- the transform unit 705 generates DCT coefficients 722 by performing DCT transform (orthogonal transform) on the prediction error signal 721 or the first divided image signal 312A output by the switch 704.
- the DCT conversion is a process of converting an input signal from a space plane to a frequency plane.
- the quantization unit 706 generates the quantization coefficient 723 by quantizing the DCT coefficient 722 generated by the conversion unit 705. Specifically, the quantization unit 706 generates the quantization coefficient 723 by dividing the DCT coefficient 722 by the quantization value Q.
- the variable length encoding unit 707 performs lossless variable length coding on the quantization coefficient 723 generated by the quantization unit 706, thereby generating, from the first divided image signal 312A, the first encoded signal 313A with a reduced amount of information.
- the inverse quantization unit 708 generates the DCT coefficient 724 by inversely quantizing the quantization coefficient 723 generated by the quantization unit 706. Specifically, the inverse quantization unit 708 generates the DCT coefficient 724 by multiplying the quantization coefficient 723 by the quantization value Q used by the quantization unit 706.
- When the DCT coefficient 722 is quantized, a quantization error always occurs.
- Through inverse quantization, a signal obtained by adding this quantization error to the DCT coefficient 722, that is, the DCT coefficient 724, which is the same as the DCT coefficient obtained when the decoding device performs decoding, can be generated.
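The quantization and inverse quantization steps described above (divide by Q, then multiply by the same Q) can be sketched as follows; the rounding rule is an assumption, but the round trip shows how the reconstructed DCT coefficient 724 equals the original coefficient plus the quantization error, exactly as the decoding device would reconstruct it.

```python
def quantize(dct_coeffs, q):
    """Quantize DCT coefficients by dividing by the quantization value Q.
    Rounding to the nearest integer is an illustrative assumption."""
    return [round(c / q) for c in dct_coeffs]

def dequantize(quant_coeffs, q):
    """Inverse-quantize by multiplying by the same Q; the result carries the
    quantization error and matches what the decoding device reconstructs."""
    return [c * q for c in quant_coeffs]
```

For example, with Q = 10 a coefficient of 123 quantizes to 12 and dequantizes to 120; the difference of 3 is the quantization error.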
- the inverse transform unit 709 generates a local decoded signal 727, which is the original first divided image signal 312A with quantization distortion added, by performing inverse DCT transform on the DCT coefficient 724 generated by the inverse quantization unit 708.
- When the target picture is a P picture, the switch 711 supplies the predicted image 726 generated by the motion compensation unit 714 to the addition unit 710, and when the target picture is an I picture, the switch 711 is opened.
- When the target picture is a P picture, the addition unit 710 generates the first local decoded image 317A by adding the local decoded signal 727 generated by the inverse transform unit 709 and the predicted image 726 output by the switch 711.
- When the target picture is an I picture, the addition unit 710 outputs the local decoded signal 727 generated by the inverse transform unit 709 as the first local decoded image 317A as it is.
- the first local decoded image 317A generated by the adding unit 710 is stored in the local decoded image storage unit 712.
- FIG. 9 is a flowchart showing the flow of the operation of the encoding process (S103) by the first encoding unit 303A for one target picture included in the first divided image signal 312A.
- As a reference image to be used for motion vector detection in subsequent pictures, the first encoding unit 303A writes the target picture included in the first divided image signal 312A, as it is as the original image, into the original image storage unit 702 via the first storage area coupling unit 304A (S131). Note that the timing of writing the target picture into the original image storage unit 702 is not limited to the timing shown in FIG. 9, and the writing may be performed at an arbitrary timing.
- the first encoding unit 303A determines whether the target picture included in the first divided image signal 312A is an I picture or a P picture (S132).
- When the target picture is an I picture (No in S132), the first divided image signal 312A is supplied to the motion detection unit 701. At this time, the motion detection unit 701 does not perform motion vector calculation processing on the target picture.
- the switch 704 outputs the first divided image signal 312A to the conversion unit 705.
- the conversion unit 705 converts the first encoding target range 315A of the target picture included in the first divided image signal 312A into the DCT coefficient 722 (S135).
- the quantization unit 706 generates the quantization coefficient 723 by quantizing the DCT coefficient 722 (S136).
- the variable length encoding unit 707 generates the first encoded signal 313A by performing variable length encoding on the quantization coefficient 723 (S137).
- Thus, the first encoded signal 313A in which the data amount of the first encoding target range 315A of the target picture is compressed is generated.
- the inverse quantization unit 708 generates the DCT coefficient 724 by inversely quantizing the quantization coefficient 723 generated in step S136.
- the inverse transform unit 709 generates a local decoded signal 727 by performing inverse DCT transform on the DCT coefficient 724.
- the local decoded signal 727 subjected to inverse DCT transform is not subjected to addition processing by the addition unit 710, and is written as it is, as the first local decoded image 317A, into the local decoded image storage unit 712 via the first storage area coupling unit 304A (S138).
- FIG. 10 is a diagram showing the relationship between the original image and the local decoded image.
- the first encoding unit 303A generates a first local decoded image 317A corresponding to the first encoding target range 315A included in the original image of the target picture, and the first encoding unit 303A It is stored in the local decoded image storage unit 712.
- the first local decoded image 317A includes a second local decoded image 318B corresponding to the second overlapping region 316B used in the second encoding unit 303B.
- Note that the order of the process of creating the first local decoded image 317A (S138) and the variable length encoding process (S137) may be arbitrary. Further, at least a part of the processing included in the generation processing of the first local decoded image 317A (S138) and the variable length encoding processing (S137) may be performed simultaneously.
- In step S132, when the target picture is a P picture (Yes in S132), the motion detection unit 701 performs motion detection processing and motion compensation processing for the target picture (S133).
- FIG. 11 is a flowchart showing the flow of motion detection processing and motion compensation processing (S133) by the first encoding unit 303A.
- the first encoding unit 303A determines whether a local decoded image corresponding to the first overlapping region 316A is necessary (S151). Details of step S151 will be described later.
- When the local decoded image is necessary (Yes in S151), the first encoding unit 303A outputs, via the first external coupling unit 305A and the second external coupling unit 305B, a transfer request for the second local decoded image 318A corresponding to the first overlapping region 316A to the local decoded image storage unit 712 included in the second storage unit 306B (S152).
- the switch 713 outputs the original image stored in the original image storage unit 702 to the motion detection unit 701.
- the motion detection unit 701 calculates a motion vector 725 by performing motion detection processing using the original image output from the switch 713 (S154).
- the search range used for the motion detection process includes a first encoding target range 315A (360 lines) and a first overlapping region 316A (16 lines) of the original image.
- the first encoding unit 303A acquires the second local decoded image 318A output from the local decoded image storage unit 712 included in the second storage unit 306B in response to the transfer request in step S152, and stores the acquired second local decoded image 318A in the local decoded image storage unit 712 as a reference image (S155).
- the motion compensation unit 714 generates the predicted image 726 by performing motion compensation processing using the first local decoded image 317A and the second local decoded image 318A stored in the local decoded image storage unit 712 (S156).
- On the other hand, at the low pixel rate, the first encoding unit 303A acquires the second local decoded image 318A output from the local decoded image storage unit 712 included in the second storage unit 306B in response to the transfer request in step S152, and stores the acquired second local decoded image 318A in the local decoded image storage unit 712 as a reference image (S157).
- the switch 713 outputs the original image stored in the original image storage unit 702 to the motion detection unit 701.
- the motion detection unit 701 may acquire the original image directly from the first divided image signal 312A.
- the motion detection unit 701 calculates a motion vector 725 by performing motion detection processing using the first local decoded image 317A and the second local decoded image 318A output from the switch 713 (S158).
- the search range used for the motion detection process includes the first local decoded image 317A and the second local decoded image 318A corresponding to the first encoding target range 315A (360 lines) and the first overlapping region 316A (32 lines).
- the motion compensation unit 714 generates the predicted image 726 by performing motion compensation processing using the first local decoded image 317A and the second local decoded image 318A stored in the local decoded image storage unit 712 (S156).
- the motion detection unit 701 performs motion detection processing using the first local decoded image 317A output by the switch 713 to calculate a motion vector 725 (S158).
- the motion compensation unit 714 generates a predicted image 726 by performing motion compensation processing using the first local decoded image 317A stored in the local decoded image storage unit 712 (S156).
- After step S156, as shown in FIG. 9, the subtraction unit 703 generates the prediction error signal 721 by subtracting the predicted image 726 from the original image (S134). Next, the conversion unit 705 generates the DCT coefficient 722 by performing DCT transform on the prediction error signal 721 (S135).
- the quantization unit 706 generates the quantization coefficient 723 by quantizing the DCT coefficient 722 (S136).
- the variable length encoding unit 707 generates the first encoded signal 313A by performing variable length encoding on the quantization coefficient 723 (S137).
- Thus, the first encoded signal 313A in which the data amount of the first encoding target range 315A of the target picture is compressed is generated.
- the inverse quantization unit 708 generates the DCT coefficient 724 by inversely quantizing the quantization coefficient 723 generated in step S136.
- the inverse transform unit 709 generates a local decoded signal 727 by performing inverse DCT transform on the DCT coefficient 724.
- the switch 711 outputs the predicted image 726 to the addition unit 710.
- the addition unit 710 generates the first local decoded image 317A by adding the local decoded signal 727 and the predicted image 726, and writes the generated first local decoded image 317A into the local decoded image storage unit 712 via the first storage area coupling unit 304A (S138).
- FIG. 12 is a diagram showing the processing time at the low pixel rate and at the high pixel rate.
- As shown in FIG. 12, at the high pixel rate, while the first encoding unit 303A acquires the second local decoded image 318A output by the second encoding unit 303B in response to the request in step S152, the motion detection unit 701 starts the process of detecting a motion vector using the original image stored in the original image storage unit 702 (S154).
- In this way, the image encoding device 300 performs motion detection using the original image stored in the first storage unit 306A without waiting for the acquisition of the second local decoded image 318A, so that the processing speed can be improved.
- FIG. 13 is a flowchart showing a flow of processing for determining whether or not the second local decoded image 318A corresponding to the first overlapping area 316A is necessary. The process illustrated in FIG. 13 is performed for each processing block included in the first encoding target range 315A.
- the first encoding unit 303A determines whether or not the first overlapping region 316A is included in the search range of the block to be processed (S161).
- When the first overlapping region 316A is not included in the search range of the block to be processed (No in S161), the first encoding unit 303A determines that the second local decoded image 318A of the first overlapping region 316A is unnecessary (S164).
- When the first overlapping region 316A is included in the search range of the block to be processed (Yes in S161), the first encoding unit 303A then determines whether or not the correlation among the motion vectors already calculated for a plurality of surrounding blocks is large (S162). Specifically, the first encoding unit 303A determines whether or not the correlation of the motion vectors among the plurality of surrounding blocks is equal to or greater than a predetermined threshold.
- When the correlation of the motion vectors among the plurality of surrounding blocks is large (Yes in S162), the first encoding unit 303A next predicts the motion vector of a block adjacent to the division boundary, which is the boundary between the first encoding target range 315A and the first overlapping region 316A, using the motion vectors of the blocks around that block. Next, the first encoding unit 303A determines whether or not the predicted motion vector points upward (in the direction opposite to the first overlapping region 316A) (S163).
- FIG. 14 is a diagram illustrating an example of a motion vector near the division boundary between the first encoding target range 315A and the first overlapping region 316A.
- In general, the motion vector of a certain block can be predicted using the motion vectors of its surrounding blocks.
- the motion vector 902 of the encoding block 901 to be processed can be predicted using the motion vectors 904, 906, and 908 of adjacent blocks 903, 905, and 907 adjacent to the encoding block 901.
- For example, the first encoding unit 303A predicts that the direction of the motion vector 902 is upward by calculating the average values of the horizontal and vertical components of the motion vectors 904, 906, and 908 of the adjacent blocks.
- When the predicted motion vector points upward (Yes in S163), the motion compensation process can be performed using only the first local decoded image 317A generated by the first encoding unit 303A itself. Therefore, the first encoding unit 303A determines that the second local decoded image 318A of the first overlapping region 316A is unnecessary (S164).
- When the predicted motion vector does not point upward (No in S163), the first encoding unit 303A determines that the second local decoded image 318A of the first overlapping region 316A is necessary (S165).
- the first encoding unit 303A acquires the second local decoded image 318A generated by the second encoding unit 303B when the predicted motion vector indicates the direction of the division boundary, and the predicted motion vector is When the direction of the division boundary is not indicated, the second local decoded image 318A generated by the second encoding unit 303B is not acquired.
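The S161-S165 decision flow above can be summarized in a short sketch. The correlation measure, the sign convention (upward motion as a negative vertical component), and all names are illustrative assumptions.

```python
def needs_overlap_image(search_hits_overlap, mv_correlation, corr_threshold,
                        neighbor_mvs):
    """Decide whether the counterpart's overlap-region local decoded image
    must be transferred, following the S161-S165 flow described above.
    Illustrative sketch; neighbor_mvs is a list of (dx, dy) vectors."""
    if not search_hits_overlap:                  # S161: No
        return False                             # S164: unnecessary
    if mv_correlation >= corr_threshold:         # S162: Yes
        # S163: predict the MV from the mean of the neighbours' components
        avg_dy = sum(dy for _, dy in neighbor_mvs) / len(neighbor_mvs)
        if avg_dy < 0:                           # predicted motion is upward
            return False                         # S164: unnecessary
    return True                                  # S165: necessary
```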
- As described above, when the pixel rate of the input image signal 310 is low, the image encoding device 300 according to Embodiment 1 of the present invention enlarges the first overlapping region 316A and the second overlapping region 316B, and when the pixel rate of the input image signal 310 is high, it makes the first overlapping region 316A and the second overlapping region 316B smaller.
- Thereby, at a high pixel rate, the image encoding device 300 can reduce the data amount of the second local decoded images 318A and 318B transmitted between the first encoding unit 303A and the second encoding unit 303B.
- the communication bandwidth between the first encoding unit 303A and the second encoding unit 303B can be reduced.
- the storage capacity of the local decoded image storage unit 712 that stores the acquired second local decoded images 318A and 318B can be reduced.
- the image encoding device 300 can process the input image signal 310 with a higher pixel rate.
- the image encoding device 300 can cope with the input image signal 310 having a higher pixel rate while suppressing the deterioration of the encoding efficiency and the image quality.
- Further, the image encoding device 300 can improve the encoding efficiency and the image quality by expanding the search range of the motion compensation process at a low pixel rate, where a high processing speed is not required as it is at a high pixel rate.
- the image encoding device 300 calculates a motion vector using an original image at a high pixel rate.
- Specifically, while the motion detection unit 701 of one encoding unit calculates a motion vector using the original image as the search range, the image encoding device 300 can transfer the second local decoded image 318A or 318B corresponding to the first overlapping region 316A or the second overlapping region 316B from the local decoded image storage unit 712 of the other encoding unit. Therefore, the image encoding device 300 can reduce the waiting time of the motion detection unit 701 until the counterpart's second local decoded image 318A or 318B is acquired, so that latency can be reduced.
- the image coding apparatus 300 determines whether or not the second local decoded image 318A or 318B of the first overlapping region 316A or the second overlapping region 316B is necessary by performing motion vector prediction. Thereby, since the image coding apparatus 300 can reduce the transfer amount of the second local decoded image 318A or 318B, it is possible to reduce the bandwidth between the first coding unit 303A and the second coding unit 303B.
- the pixel rate is set by the user's switch operation.
- Alternatively, the image size and the frame rate may be set by the user's switch operation, and the imaging method switching unit 301 may calculate the pixel rate using the set image size and frame rate.
- the pixel rate in the above description may be replaced with the image size or the frame rate.
- (Embodiment 2) In Embodiment 2 of the present invention, a modification of the image encoding device 300 according to Embodiment 1 described above will be described.
- the image encoding device 300A according to Embodiment 2 of the present invention determines the pixel rate of the input image signal 310 using information included in the input image signal 310, and the first overlapping region 316A according to the determined pixel rate. And the size of the second overlapping area 316B is changed.
- FIG. 15 is a block diagram showing a configuration of an image encoding device 300A according to Embodiment 2 of the present invention.
- The same reference numerals are used for the same components as in FIG. 3, and their description is omitted.
- The image encoding device 300A illustrated in FIG. 15 includes a pixel rate monitoring unit 308 in addition to the configuration of the image encoding device 300 illustrated in FIG. 3. Further, the configuration of the imaging method switching unit 301A is different.
- the pixel rate monitoring unit 308 corresponds to the first calculation unit of the present invention, and uses the pixel clock, the horizontal synchronization signal, and the vertical synchronization signal included in the input image signal 310 output from the imaging device, and the input image signal 310. The image size and frame rate are calculated. Further, the pixel rate monitoring unit 308 outputs a monitoring result 320 including the calculated image size and frame rate to the imaging method switching unit 301A.
- the imaging method switching unit 301A corresponds to the second calculation unit of the present invention; it acquires the monitoring result 320 generated by the pixel rate monitoring unit 308 and uses this monitoring result 320 to set one of the i-stage pixel rates. Specifically, the imaging method switching unit 301A calculates the estimated pixel rate by multiplying the image size included in the monitoring result 320 by the frame rate. The larger the calculated estimated pixel rate, the higher the pixel rate the imaging method switching unit 301A sets among the i-stage pixel rates. For example, when i is 2, the imaging method switching unit 301A sets the high pixel rate imaging method when the estimated pixel rate is larger than a predetermined value, and sets the low pixel rate imaging method when the estimated pixel rate is smaller than the predetermined value. In addition, the imaging method switching unit 301A outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- the imaging method switching unit 301A may set one of the i-stage pixel rates using only one of the image size and the frame rate included in the monitoring result 320. Specifically, the imaging method switching unit 301A sets a higher pixel rate among the i-stage pixel rates as the image size included in the monitoring result 320 is larger. In addition, the imaging method switching unit 301A sets a higher pixel rate among the i-stage pixel rates as the frame rate included in the monitoring result 320 is higher.
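The estimated-pixel-rate calculation for the two-stage (i = 2) case can be sketched as follows; the threshold value and the returned labels are illustrative assumptions.

```python
def select_imaging_method(width, height, frame_rate, threshold):
    """Estimate the pixel rate as image size times frame rate and choose
    between the high and low pixel rate imaging methods (i = 2 case).
    Illustrative sketch only."""
    estimated_pixel_rate = width * height * frame_rate  # pixels per second
    if estimated_pixel_rate > threshold:
        return "high_pixel_rate"
    return "low_pixel_rate"
```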
- FIG. 16 is a flowchart showing an operation flow of the image coding device 300A according to Embodiment 2 of the present invention.
- The same reference numerals are used for processes similar to those in FIG. 5, and their description is omitted.
- the pixel rate monitoring unit 308 acquires an image size and a frame rate of the input image signal 310 from information included in the input image signal 310 output from the image sensor (S201). Specifically, the pixel rate monitoring unit 308 calculates the image size and the frame rate that are the monitoring result 320 using the pixel clock, the horizontal synchronization signal, and the vertical synchronization signal included in the input image signal 310.
- The imaging method switching unit 301A acquires the monitoring result 320 generated by the pixel rate monitoring unit 308, and calculates a pixel rate corresponding to the image size and the frame rate included in the monitoring result 320 (S202).
- The imaging method switching unit 301A determines an imaging method based on the calculated pixel rate (S203).
- the imaging method switching unit 301A outputs an identification signal 311 indicating the determined imaging method to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- Since the processing from step S102 onward is the same as in Embodiment 1, its description is omitted.
- the image coding apparatus 300A according to Embodiment 2 of the present invention can obtain the same effects as those of Embodiment 1 described above.
- Since the pixel rate monitoring unit 308 directly monitors the input image signal 310 output from the image sensor, the image size and the frame rate of the input image signal 310 can be identified dynamically and adaptively without depending on a specific image sensor. The image size and the frame rate of the image sensor may be changed by a command issued from a control microcomputer or the like; because the image coding apparatus 300A includes the pixel rate monitoring unit 308, it can adaptively cope with such changes.
- As described above, the image encoding device 300A according to Embodiment 2 of the present invention can reduce the bandwidth between the encoding units and realize encoding at a higher pixel rate without depending on the configuration of the image sensor or the control microcomputer.
- Image coding apparatus 300B: In Embodiment 3 of the present invention, a modification of the image coding apparatus 300 according to Embodiment 1 described above will be described.
- Image coding apparatus 300B according to Embodiment 3 of the present invention changes the sizes of first overlapping region 316A and second overlapping region 316B according to the remaining buffer capacities of first storage unit 306A and second storage unit 306B.
- FIG. 17 is a block diagram showing a configuration of an image encoding device 300B according to Embodiment 3 of the present invention.
- The same reference numerals are used for components that are the same as in FIG. 3, and their description is omitted.
- The image encoding apparatus 300B illustrated in FIG. 17 includes a remaining buffer capacity monitoring unit 309 in addition to the configuration of the image encoding apparatus 300 illustrated in FIG. 3. The configuration of the imaging method switching unit 301B also differs.
- The remaining buffer capacity monitoring unit 309 acquires, at predetermined time intervals, the remaining buffer capacity 321A, which is the free area in the storage area of the first storage unit 306A used as a frame buffer or a line buffer by the first encoding unit 303A. In addition, the remaining buffer capacity monitoring unit 309 acquires the remaining buffer capacity 321B of the second storage unit 306B that the second encoding unit 303B uses as a frame buffer or a line buffer. The remaining buffer capacity monitoring unit 309 then generates a remaining buffer capacity 322 indicating the smaller of the remaining buffer capacities 321A and 321B, and outputs the generated remaining buffer capacity 322 to the imaging method switching unit 301B.
- the imaging method switching unit 301B acquires the remaining buffer capacity 322 output by the remaining buffer capacity monitoring unit 309. In addition, the imaging method switching unit 301B sets one of the i-stage pixel rates according to the acquired remaining buffer capacity 322.
- When the pixel rate is low, the encoding process consumes less of the buffers, so the remaining buffer capacities 321A and 321B remain larger. The imaging method switching unit 301B can therefore estimate the pixel rate using the remaining buffer capacities 321A and 321B.
- The imaging method switching unit 301B sets a higher pixel rate as the remaining buffer capacity 322 becomes larger. For example, when i is 2, the imaging method switching unit 301B sets the high pixel rate imaging method when the remaining buffer capacity 322 is larger than a predetermined value, and sets the low pixel rate imaging method when the remaining buffer capacity 322 is smaller than the predetermined value.
- the imaging method switching unit 301B outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- Note that, instead of estimating the pixel rate from the remaining buffer capacity 322, the imaging method switching unit 301B may output an identification signal 311 that simply indicates whether or not the remaining buffer capacity 322 is larger than the predetermined value.
- The identification signal 311 is used to set the width n of the first overlapping region 316A, the second overlapping region 316B, and the second local decoded images 318A and 318B. Specifically, the signal dividing unit 302 uses the identification signal 311 to set the width n of the first overlapping region 316A and the second overlapping region 316B, and the first encoding unit 303A and the second encoding unit 303B use the identification signal 311 to determine the motion search range.
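A minimal sketch of the remaining-buffer-capacity rule above: the smaller of the two capacities (remaining buffer capacity 322) is compared against a threshold to choose the overlap width n. The function name, the threshold, and the concrete widths 4 and 16 are illustrative assumptions, not values from the patent.

```python
def overlap_width_from_buffers(remaining_321a, remaining_321b, threshold,
                               narrow_n=4, wide_n=16):
    """Choose the overlap width n from the remaining buffer capacities.

    The remaining buffer capacity 322 is the smaller of the two capacities;
    a large remaining capacity permits a wide overlap region, while a small
    one forces a narrow overlap region to reduce the processing load.
    """
    remaining_322 = min(remaining_321a, remaining_321b)
    return wide_n if remaining_322 > threshold else narrow_n
```

In the device, the comparison result would be conveyed to the signal dividing unit 302 and the encoding units via the identification signal 311 rather than returned directly.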
- FIG. 18 is a flowchart showing an operation flow of the image coding apparatus 300B according to Embodiment 3 of the present invention.
- The same reference numerals are used for processes similar to those in FIG. 5, and their description is omitted.
- The remaining buffer capacity monitoring unit 309 acquires the remaining buffer capacities 321A and 321B of the first storage unit 306A and the second storage unit 306B, which are used as line buffers or frame buffers by the first encoding unit 303A and the second encoding unit 303B (S301). Then, the remaining buffer capacity monitoring unit 309 generates a remaining buffer capacity 322 indicating the smaller of the remaining buffer capacities 321A and 321B.
- the imaging method switching unit 301B acquires the remaining buffer capacity 322 output from the remaining buffer capacity monitoring unit 309.
- the imaging method switching unit 301B determines the pixel rate according to the acquired remaining buffer capacity 322.
- the imaging method switching unit 301B outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- Since the processing from step S102 onward is the same as in Embodiment 1, its description is omitted.
- FIG. 19 is a flowchart showing the flow of the signal division processing by the signal dividing unit 302.
- the signal dividing unit 302 refers to the identification signal 311 and determines whether or not the remaining buffer capacity 322 (pixel rate) is equal to or less than a predetermined value (S320).
- When the remaining buffer capacity 322 is equal to or less than the predetermined value (Yes in S320), the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B narrow (S121).
- The signal dividing unit 302 then divides the input image signal 310, thereby generating a first divided image signal 312A including the first overlapping region 316A of the size set in step S121 and a second divided image signal 312B including the second overlapping region 316B of the set size (S123).
- On the other hand, when the remaining buffer capacity 322 is larger than the predetermined value (No in S320), the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B wide (S122).
- The signal dividing unit 302 then divides the input image signal 310, thereby generating a first divided image signal 312A including the first overlapping region 316A of the size set in step S122 and a second divided image signal 312B including the second overlapping region 316B of the set size (S123).
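The division in step S123 can be illustrated with a toy sketch that splits a picture, represented as a list of rows, into two encoding target ranges, each extended by an overlap region of n rows from the other half. The function name and the exact placement of the overlap regions relative to the division boundary are assumptions for illustration only.

```python
def split_with_overlap(picture_rows, n):
    """Split rows into two divided images sharing 2*n rows around the boundary.

    The first divided image = first encoding target range + n overlap rows
    taken from the second half; the second divided image = n overlap rows
    taken from the first half + second encoding target range.
    """
    boundary = len(picture_rows) // 2
    first_divided = picture_rows[:boundary + n]
    second_divided = picture_rows[boundary - n:]
    return first_divided, second_divided

# Example: an 8-row picture split with n = 2 gives two 6-row divided images
# that share the 4 rows around the division boundary.
first, second = split_with_overlap(list(range(8)), 2)
```

A larger n widens the motion search range across the boundary at the cost of more data transferred between the two encoding units, which is exactly the trade-off the identification signal 311 controls.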
- As described above, the remaining buffer capacity monitoring unit 309 monitors the remaining buffer capacities 321A and 321B of the first storage unit 306A and the second storage unit 306B, which are used as line buffers or frame buffers by the first encoding unit 303A and the second encoding unit 303B.
- The image encoding device 300B can thereby determine the resource margin of the first encoding unit 303A and the second encoding unit 303B and the progress of the encoding process using the remaining buffer capacities 321A and 321B, and can generate the identification signal 311 according to the determination result.
- the image coding device 300B according to Embodiment 3 of the present invention can obtain the same effects as those of Embodiment 1 described above.
- In the image coding device 300B, when the remaining buffer capacities 321A and 321B decrease, the number n of lines in the first overlapping region 316A and the second overlapping region 316B can be decreased. In this way, when the resources and processing status of the first encoding unit 303A and the second encoding unit 303B leave no margin, regardless of the pixel rate, the amount of processing performed by the first encoding unit 303A and the second encoding unit 303B can be reduced.
- As described above, the image coding apparatus 300B according to Embodiment 3 of the present invention can reduce the required storage area, and hence the processing load, according to the resource status of the line buffer or the frame buffer used by the first coding unit 303A and the second coding unit 303B.
- Image coding apparatus 300C: In Embodiment 4 of the present invention, a modification of the image coding apparatus 300 according to Embodiment 1 described above will be described. Image coding apparatus 300C changes the sizes of first overlapping region 316A and second overlapping region 316B according to the motion vector.
- FIG. 20 is a block diagram showing a configuration of an image encoding device 300C according to Embodiment 4 of the present invention.
- The same reference numerals are used for components that are the same as in FIG. 3, and their description is omitted.
- The image encoding device 300C illustrated in FIG. 20 includes a motion vector monitoring unit 330 in addition to the configuration of the image encoding device 300 illustrated in FIG. 3. The configuration of the imaging method switching unit 301C also differs.
- The motion vector monitoring unit 330 acquires the motion vector 331A generated by the first encoding unit 303A and the motion vector 331B generated by the second encoding unit 303B. Then, the motion vector monitoring unit 330 outputs the largest motion vector 332 among the motion vectors 331A and 331B to the imaging method switching unit 301C. Specifically, the motion vector monitoring unit 330 outputs, to the imaging method switching unit 301C, the largest motion vector 332 among those motion vectors included in the motion vectors 331A and 331B that straddle the division boundary (the boundary between the first encoding target range 315A and the second encoding target range 315B).
- the motion vectors 331A and 331B are a plurality of motion vectors 725 for one picture generated by the motion detection unit 701.
- the motion vector monitoring unit 330 selects the largest motion vector 332 for each picture included in the input image signal 310.
- the imaging method switching unit 301C acquires the motion vector 332 output from the motion vector monitoring unit 330. In addition, the imaging method switching unit 301C sets any one of the i-stage pixel rates according to the acquired motion vector 332.
- the imaging method switching unit 301C can determine the pixel rate using the motion vector 332.
- The imaging method switching unit 301C sets a lower pixel rate as the motion vector 332 becomes larger. For example, when i is 2, the imaging method switching unit 301C sets the low pixel rate imaging method when the motion vector 332 is larger than a predetermined value, and sets the high pixel rate imaging method when the motion vector 332 is smaller than the predetermined value. In addition, the imaging method switching unit 301C outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
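The monitoring and selection steps above can be sketched as follows; the function names, the use of scalar motion vectors, and the boundary-crossing predicate are illustrative assumptions rather than the patent's implementation.

```python
def largest_boundary_mv(motion_vectors, crosses_boundary):
    """Magnitude of the largest motion vector that straddles the division
    boundary (motion vector 332), or 0 if none crosses it."""
    return max((abs(v) for v in motion_vectors if crosses_boundary(v)),
               default=0)

def imaging_method_from_mv(mv_332, threshold):
    """Larger boundary-crossing motion needs a wider search range, so the
    low pixel rate method is selected; small motion permits the high rate."""
    return "low_pixel_rate" if mv_332 > threshold else "high_pixel_rate"
```

Real motion vectors are two-dimensional per-block vectors (the motion vectors 725 produced by the motion detection unit 701); scalars are used here only to keep the selection logic visible.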
- FIG. 21 is a flowchart showing an operation flow of the image coding apparatus 300C according to Embodiment 4 of the present invention. Note that the same processes as those in FIG. 6 are denoted by the same reference numerals, and description thereof is omitted.
- the motion vector monitoring unit 330 acquires the motion vectors 331A and 331B generated by the first encoding unit 303A and the second encoding unit 303B (S401). Then, the motion vector monitoring unit 330 outputs the largest motion vector 332 across the division boundary among the plurality of motion vectors included in the acquired motion vectors 331A and 331B to the imaging method switching unit 301C.
- The imaging method switching unit 301C acquires the motion vector 332 output by the motion vector monitoring unit 330.
- The imaging method switching unit 301C determines the pixel rate according to the acquired motion vector 332.
- the imaging method switching unit 301C outputs an identification signal 311 indicating the set pixel rate to the signal dividing unit 302, the first encoding unit 303A, and the second encoding unit 303B.
- Since the processing from step S102 onward is the same as in Embodiment 1, its description is omitted.
- FIG. 22 is a flowchart showing the flow of the signal division processing by the signal dividing unit 302.
- the signal dividing unit 302 refers to the identification signal 311 and determines whether or not the motion vector 332 (pixel rate) is equal to or less than a predetermined value (S420).
- When the motion vector 332 is equal to or less than the predetermined value (Yes in S420), the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B narrow (S121).
- The signal dividing unit 302 then divides the input image signal 310, thereby generating a first divided image signal 312A including the first overlapping region 316A of the size set in step S121 and a second divided image signal 312B including the second overlapping region 316B of the set size (S123).
- On the other hand, when the motion vector 332 is larger than the predetermined value (No in S420), the signal dividing unit 302 sets the first overlapping region 316A and the second overlapping region 316B wide (S122).
- The signal dividing unit 302 then divides the input image signal 310, thereby generating a first divided image signal 312A including the first overlapping region 316A of the size set in step S122 and a second divided image signal 312B including the second overlapping region 316B of the set size (S123).
- the image encoding device 300C monitors the maximum values of the motion vectors 331A and 331B. Further, the image encoding device 300C can determine the status of the prediction signal of the encoding process according to the maximum values of the motion vectors 331A and 331B, and can generate the identification signal 311 according to the determination result.
- the image coding apparatus 300C according to Embodiment 4 of the present invention can obtain the same effects as those of Embodiment 1 described above.
- The image encoding device 300C according to Embodiment 4 of the present invention reduces the number n of lines in the first overlapping region 316A and the second overlapping region 316B when the motion vector is small. When the motion vector is small, the coding efficiency and the image quality are unlikely to deteriorate even if the size of the search range of the motion compensation process (the first overlapping region 316A and the second overlapping region 316B) is reduced.
- the image encoding device 300C according to the fourth embodiment of the present invention can cope with encoding processing at a higher pixel rate while suppressing deterioration in encoding efficiency and image quality.
- the image coding device 300C according to Embodiment 4 of the present invention narrows the first overlapping region 316A and the second overlapping region 316B when the motion vector is small regardless of the pixel rate. Thereby, the data transfer amount between the first encoding unit 303A and the second encoding unit 303B can be further reduced.
- As described above, the image coding apparatus 300C changes the sizes of the first overlapping region 316A and the second overlapping region 316B according to the motion vectors 331A and 331B generated by the first coding unit 303A and the second coding unit 303B. It is thus possible to reduce the bandwidth between the encoding units and perform encoding processing at a higher pixel rate.
- The image encoding device 300C changes the sizes of the first overlapping region 316A and the second overlapping region 316B in accordance with the largest motion vector 332 that crosses the division boundary (the boundary between the first encoding target range 315A and the second encoding target range 315B). Thereby, more appropriate sizes of the first overlapping region 316A and the second overlapping region 316B can be set.
- the image encoding device 300C can set the sizes of the first overlapping region 316A and the second overlapping region 316B small. Thereby, the image encoding device 300C can further reduce the data transfer amount between the first encoding unit 303A and the second encoding unit 303B while suppressing deterioration in encoding efficiency and image quality.
- The image encoding devices 300 to 300C described above change the sizes of the first overlapping region 316A and the second overlapping region 316B in two steps, but the sizes may be changed in three or more steps. In this case, the image encoding devices 300 to 300C may reduce the sizes of the first overlapping region 316A and the second overlapping region 316B as the pixel rate increases.
- the image encoding devices 300 to 300C may include three or more encoding units that divide the input image signal 310 into three or more images and encode the divided images, respectively.
- the sizes of the divided images may be different. Further, the size of the first overlapping region 316A and the second overlapping region 316B may be different.
- Each of the image encoding devices 300 to 300C changes the sizes of the first overlapping region 316A and the second overlapping region 316B according to any one of the user's switch operation, the information included in the input image signal 310, the remaining buffer capacity 322, and the motion vector 332; however, the sizes of the first overlapping region 316A and the second overlapping region 316B may be changed according to two or more of these four.
- For example, when the image encoding devices 300 to 300C change the sizes of the first overlapping region 316A and the second overlapping region 316B according to the remaining buffer capacity 322 and the motion vector 332, the image encoding devices 300 to 300C widen the first overlapping region 316A and the second overlapping region 316B when the remaining buffer capacity 322 is larger than a predetermined value and the motion vector 332 is larger than a predetermined value, and narrow the first overlapping region 316A and the second overlapping region 316B when at least one of (1) the remaining buffer capacity 322 being smaller than the predetermined value and (2) the motion vector 332 being smaller than the predetermined value is satisfied.
- the image encoding apparatus changes the size of the first overlap region 316A and the second overlap region 316B in three or more stages according to the combination of the size of the remaining buffer capacity 322 and the size of the motion vector 332. May be.
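One possible three-stage combination of the two criteria, consistent with the paragraph above, can be sketched as: wide only when both the remaining buffer capacity and the motion vector are large, narrow when both are small, and a middle size otherwise. The concrete sizes and the rule for the mixed case are assumptions for illustration.

```python
def overlap_size(remaining_322, mv_332, buf_thresh, mv_thresh,
                 sizes=(4, 8, 16)):
    """Three-stage overlap size from buffer capacity and motion vector."""
    large_buf = remaining_322 > buf_thresh
    large_mv = mv_332 > mv_thresh
    if large_buf and large_mv:
        return sizes[2]   # both large: widest overlap region
    if not large_buf and not large_mv:
        return sizes[0]   # both small: narrowest overlap region
    return sizes[1]       # mixed case: middle size
```

Extending `sizes` and adding thresholds yields more than three stages, matching the note above that the sizes may be changed in three or more stages.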
- In the above description, the pixel rate criterion for determining the sizes of the first overlapping region 316A and the second overlapping region 316B and the pixel rate criterion for determining which of the original image and the local decoded image is used for the motion detection process are the same, but they may be different.
- For example, the image coding apparatus 300 determines the sizes of the first overlapping region 316A and the second overlapping region 316B according to whether or not the pixel rate is greater than a first threshold, and may determine which of the original image and the local decoded image is used for the motion detection process according to whether or not the pixel rate is greater than a second threshold different from the first threshold.
- Each processing unit included in the image coding apparatuses 300 to 300C according to Embodiments 1 to 4 is typically realized as an LSI, which is an integrated circuit. These units may each be integrated into an individual chip, or some or all of them may be integrated into a single chip.
- For example, all the processing units shown in FIG. 3 except for the first storage unit 306A and the second storage unit 306B may be realized as a one-chip LSI.
- The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connection and setting of circuit cells inside the LSI can be reconfigured, may also be used.
- some or all of the functions of the image coding apparatuses 300 to 300C according to Embodiments 1 to 4 of the present invention may be realized by a processor such as a CPU executing a program.
- the present invention may be the above program or a recording medium on which the above program is recorded.
- the program can be distributed via a transmission medium such as the Internet.
- the present invention can be applied to an encoding device. Further, the present invention is useful for a digital still camera and a digital video camera that perform high-speed shooting that requires high pixel rate encoding processing or high-definition image shooting.
- Image coding apparatus
101 Image signal input terminal
102, 302 Signal dividing unit
103A, 303A First coding unit
103B, 303B Second coding unit
106, 307 Signal combining unit
107 Coded signal output terminal
108 Encoding section
110, 310 Input image signal
111A, 312A First divided image signal
111B, 312B Second divided image signal
112A, 313A First encoded signal
112B, 313B Second encoded signal
113A, 113B Local decoded image
114, 314 Output encoded signal
115A, 316A First overlap region
115B, 316B Second overlap region
116A First search range
116B Second search range
301, 301A, 301B, 301C Imaging method switching unit
304A First storage area connection unit
304B Second storage area connection unit
305A First external connection unit
305B Second external connection unit
306A First storage unit
306B Second storage unit
308 Pixel rate monitoring unit
309 Remaining buffer capacity monitoring unit
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
The disclosure of the specification, drawings, and claims of Japanese Patent Application No. 2008-131508 filed on May 20, 2008, is incorporated herein by reference in its entirety.
The image coding apparatus 300 according to Embodiment 1 of the present invention reduces the size of the overlap region used as the search range for the motion detection process and the motion compensation process when the pixel rate is high. This reduces the data amount of the local decoded images transferred between the encoding units at high pixel rates.
In Embodiment 2 of the present invention, a modification of the image coding apparatus 300 according to Embodiment 1 described above is explained. The image coding apparatus 300A according to Embodiment 2 of the present invention determines the pixel rate of the input image signal 310 using information included in the input image signal 310, and changes the sizes of the first overlap region 316A and the second overlap region 316B according to the determined pixel rate.
In Embodiment 3 of the present invention, a modification of the image coding apparatus 300 according to Embodiment 1 described above is explained. The image coding apparatus 300B according to Embodiment 3 of the present invention changes the sizes of the first overlap region 316A and the second overlap region 316B according to the remaining buffer capacities of the first storage unit 306A and the second storage unit 306B.
In Embodiment 4 of the present invention, a modification of the image coding apparatus 300 according to Embodiment 1 described above is explained. The image coding apparatus 300C according to Embodiment 4 of the present invention changes the sizes of the first overlap region 316A and the second overlap region 316B according to the motion vector.
101 Image signal input terminal
102, 302 Signal dividing unit
103A, 303A First encoding unit
103B, 303B Second encoding unit
106, 307 Signal combining unit
107 Encoded signal output terminal
108 Encoding unit
110, 310 Input image signal
111A, 312A First divided image signal
111B, 312B Second divided image signal
112A, 313A First encoded signal
112B, 313B Second encoded signal
113A, 113B Local decoded image
114, 314 Output encoded signal
115A, 316A First overlap region
115B, 316B Second overlap region
116A First search range
116B Second search range
301, 301A, 301B, 301C Imaging method switching unit
304A First storage area connection unit
304B Second storage area connection unit
305A First external connection unit
305B Second external connection unit
306A First storage unit
306B Second storage unit
308 Pixel rate monitoring unit
309 Remaining buffer capacity monitoring unit
311 Identification signal
315A First encoding target range
315B Second encoding target range
317A, 317B First local decoded image
318A, 318B Second local decoded image
320 Monitoring result
321A, 321B, 322 Remaining buffer capacity
330 Motion vector monitoring unit
331A, 331B, 332 Motion vector
701 Motion detection unit
702 Original image storage unit
703 Subtraction unit
704, 711, 713 Switch
705 Transform unit
706 Quantization unit
707 Variable-length coding unit
708 Inverse quantization unit
709 Inverse transform unit
710 Addition unit
712 Local decoded image storage unit
714 Motion compensation unit
721 Prediction error signal
722, 724 DCT coefficients
723 Quantized coefficients
725 Motion vector
726 Predicted image
727 Local decoded signal
901 Encoding block
902, 904, 906, 908 Motion vector
903, 905, 907 Adjacent block
Claims (16)
- An image coding apparatus that generates an output coded signal by coding an input image signal, the apparatus comprising: a signal dividing unit that divides each picture included in the input image signal into a plurality of coding target images; a plurality of encoding units, each corresponding to one of the plurality of coding target images, each of which generates a coded signal by performing coding processing including motion compensation processing on the corresponding coding target image, and generates a local decoded image by coding and decoding the corresponding coding target image; and a signal combining unit that generates the output coded signal by combining the plurality of coded signals generated by the plurality of encoding units, wherein the signal dividing unit determines the search range for the motion compensation processing by each encoding unit as a range including the coding target image corresponding to that encoding unit and an overlap region that is adjacent to that coding target image and is included in another coding target image adjacent to that coding target image, each encoding unit performs the motion compensation processing using a first local decoded image, generated by itself, of the coding target image and a second local decoded image, generated by another encoding unit, of the overlap region, both included in the search range, and the signal dividing unit switches the size of the overlap region according to a predetermined condition.
- The image coding apparatus according to claim 1, wherein the signal dividing unit determines the size of the overlap region to be a first size when a pixel rate, which is the number of pixels that the image coding apparatus should process per unit time, is smaller than a first threshold, and determines the size of the overlap region to be a second size smaller than the first size when the pixel rate is larger than the first threshold.
- The image coding apparatus according to claim 2, wherein each encoding unit includes: a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding coding target image; and a motion compensation unit that performs the motion compensation processing using the motion vectors detected by the motion detection unit; the image coding apparatus further comprises original image storage units, each corresponding to one of the plurality of encoding units, that store the corresponding coding target image and the image of the corresponding overlap region as an original image; and the motion detection unit detects the motion vectors using the original image stored in the original image storage unit.
- The image coding apparatus according to claim 3, further comprising local decoded image storage units, each corresponding to one of the plurality of encoding units, that store the first local decoded image and the second local decoded image used by the corresponding encoding unit for the motion compensation processing, wherein the motion detection unit detects the motion vectors using the original image stored in the original image storage unit when the pixel rate is larger than a second threshold, and detects the motion vectors using the first local decoded image and the second local decoded image stored in the local decoded image storage unit when the pixel rate is smaller than the second threshold.
- The image coding apparatus according to claim 4, wherein each encoding unit requests the second local decoded image from another encoding unit, and the motion detection unit, when the pixel rate is larger than the second threshold, starts the process of detecting the motion vectors using the original image stored in the original image storage unit before the encoding unit acquires the second local decoded image output by the other encoding unit in response to the request.
- The image coding apparatus according to claim 2, further comprising a pixel rate acquisition unit that acquires the pixel rate specified by a user operation.
- The image coding apparatus according to claim 2, further comprising: a first calculation unit that calculates at least one of an image size and a frame rate of the input image signal using information included in the input image signal; and a second calculation unit that calculates the pixel rate using at least one of the image size and the frame rate calculated by the first calculation unit.
- The image coding apparatus according to claim 7, wherein the first calculation unit calculates at least one of the image size and the frame rate of the input image signal using at least one of a pixel clock, a horizontal synchronization signal, and a vertical synchronization signal included in the input image signal.
- The image coding apparatus according to claim 1, further comprising a first storage unit that stores the first local decoded images generated by the plurality of encoding units, wherein the signal dividing unit determines the size of the overlap region to be a first size when the free capacity of the first storage unit is larger than a first threshold, and determines the size of the overlap region to be a second size smaller than the first size when the free capacity is smaller than the first threshold.
- The image coding apparatus according to claim 9, wherein the first storage unit includes a plurality of second storage units, each corresponding to one of the plurality of encoding units, that store the first local decoded image and the second local decoded image used by the corresponding encoding unit for the motion compensation processing, and the signal dividing unit determines the size of the overlap region to be the first size when the smallest free capacity among the free capacities of the plurality of second storage units is larger than the first threshold, and determines the size of the overlap region to be the second size when the smallest free capacity is smaller than the first threshold.
- The image coding apparatus according to claim 1, wherein each encoding unit includes: a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding coding target image; and a motion compensation unit that performs the motion compensation processing using the motion vectors detected by the motion detection unit, and the signal dividing unit determines the size of the overlap region to be a first size when the motion vector is larger than a first threshold, and determines the size of the overlap region to be a second size smaller than the first size when the motion vector is smaller than the first threshold.
- The image coding apparatus according to claim 11, wherein the signal dividing unit determines the size of the overlap region to be the first size when the largest motion vector among the motion vectors crossing the boundaries between the plurality of coding target images is larger than the first threshold, and determines the size of the overlap region to be the second size when the largest motion vector is smaller than the first threshold.
- The image coding apparatus according to claim 2, wherein each encoding unit includes: a motion detection unit that detects a motion vector of each of a plurality of blocks included in the corresponding coding target image; and a motion compensation unit that performs the motion compensation processing using the motion vectors detected by the motion detection unit, and each encoding unit predicts the motion vector of a block adjacent to the boundary between the corresponding coding target image and the corresponding overlap region using the motion vectors of blocks surrounding that block, acquires the second local decoded image generated by another encoding unit when the predicted motion vector points in the direction of the boundary, and does not acquire the second local decoded image generated by the other encoding unit when the predicted motion vector does not point in the direction of the boundary.
- An image coding method for generating an output coded signal by coding an input image signal, the method comprising: a signal dividing step of dividing each picture included in the input image signal into a plurality of coding target images; an encoding step in which a plurality of encoding units, each corresponding to one of the plurality of coding target images, generate coded signals by performing coding processing including motion compensation processing on the corresponding coding target images, and generate local decoded images by coding and decoding the corresponding coding target images; and a signal combining step of generating the output coded signal by combining the plurality of coded signals generated in the encoding step, wherein in the signal dividing step, the search range for the motion compensation processing by each encoding unit is determined as a range including the coding target image corresponding to that encoding unit and an overlap region that is adjacent to that coding target image and is included in another coding target image adjacent to that coding target image, in the encoding step, each encoding unit performs the motion compensation processing using a first local decoded image, generated by itself, of the coding target image and a second local decoded image, generated by another encoding unit, of the overlap region, both included in the search range, and in the signal dividing step, the size of the overlap region is switched according to a predetermined condition.
- A program causing a computer to execute the image coding method according to claim 14.
- An integrated circuit that generates an output coded signal by coding an input image signal, the integrated circuit comprising: a signal dividing unit that divides each picture included in the input image signal into a plurality of coding target images; a plurality of encoding units, each corresponding to one of the plurality of coding target images, each of which generates a coded signal by performing coding processing including motion compensation processing on the corresponding coding target image, and generates a local decoded image by coding and decoding the corresponding coding target image; and a signal combining unit that generates the output coded signal by combining the plurality of coded signals generated by the plurality of encoding units, wherein the signal dividing unit determines the search range for the motion compensation processing by each encoding unit as a range including the coding target image corresponding to that encoding unit and an overlap region that is adjacent to that coding target image and is included in another coding target image adjacent to that coding target image, each encoding unit performs the motion compensation processing using a first local decoded image, generated by itself, of the coding target image and a second local decoded image, generated by another encoding unit, of the overlap region, both included in the search range, and the signal dividing unit switches the size of the overlap region according to a predetermined condition.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/669,630 US8654850B2 (en) | 2008-05-20 | 2009-05-19 | Image coding device and image coding method |
CN200980000574A CN101755462A (zh) | 2008-05-20 | 2009-05-19 | 图像编码装置以及图像编码方法 |
JP2009546592A JP5340172B2 (ja) | 2008-05-20 | 2009-05-19 | 画像符号化装置及び画像符号化方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-131508 | 2008-05-20 | ||
JP2008131508 | 2008-05-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009142003A1 true WO2009142003A1 (ja) | 2009-11-26 |
Family
ID=41339943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/002207 WO2009142003A1 (ja) | 2008-05-20 | 2009-05-19 | 画像符号化装置及び画像符号化方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8654850B2 (ja) |
JP (1) | JP5340172B2 (ja) |
CN (1) | CN101755462A (ja) |
WO (1) | WO2009142003A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014195169A (ja) * | 2013-03-28 | 2014-10-09 | Olympus Corp | 画像処理装置 |
JP2018117308A (ja) * | 2017-01-20 | 2018-07-26 | キヤノン株式会社 | 再生装置及びその制御方法 |
JP2020113967A (ja) * | 2018-12-06 | 2020-07-27 | アクシス アーベー | 複数のイメージフレームをエンコーディングする方法及びデバイス |
JP2021061546A (ja) * | 2019-10-08 | 2021-04-15 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、及びプログラム |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010116731A1 (ja) * | 2009-04-08 | 2010-10-14 | パナソニック株式会社 | 撮像装置、再生装置、撮像方法及び再生方法 |
JP5368631B2 (ja) * | 2010-04-08 | 2013-12-18 | 株式会社東芝 | 画像符号化方法、装置、及びプログラム |
US9813730B2 (en) * | 2013-12-06 | 2017-11-07 | Mediatek Inc. | Method and apparatus for fine-grained motion boundary processing |
US9836857B2 (en) | 2013-12-17 | 2017-12-05 | Beijing Zhigu Rui Tuo Tech Co., Ltd. | System, device, and method for information exchange |
CN103729824B (zh) * | 2013-12-17 | 2018-02-02 | 北京智谷睿拓技术服务有限公司 | 信息交互方法及信息交互系统 |
US20150189333A1 (en) * | 2013-12-27 | 2015-07-02 | Industrial Technology Research Institute | Method and system for image processing, decoding method, encoder, and decoder |
US9832338B2 (en) * | 2015-03-06 | 2017-11-28 | Intel Corporation | Conveyance of hidden image data between output panel and digital camera |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05183891A (ja) * | 1991-12-27 | 1993-07-23 | Sony Corp | 動画像符号化装置 |
JPH06303590A (ja) * | 1993-04-13 | 1994-10-28 | Matsushita Electric Ind Co Ltd | 並列処理画像符号化方法及び復号化方法 |
JPH10178643A (ja) * | 1996-12-17 | 1998-06-30 | Sony Corp | 信号圧縮装置 |
JPH11275586A (ja) * | 1998-03-18 | 1999-10-08 | Nec Corp | 符号化装置および符号化方法、並びに記録媒体 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5351083A (en) * | 1991-10-17 | 1994-09-27 | Sony Corporation | Picture encoding and/or decoding system |
JPH0622297A (ja) * | 1992-06-29 | 1994-01-28 | Canon Inc | Motion compensation encoding device |
EP0577310B1 (en) * | 1992-06-29 | 2001-11-21 | Canon Kabushiki Kaisha | Image processing device |
JPH06351000A (ja) * | 1993-06-07 | 1994-12-22 | Matsushita Electric Ind Co Ltd | Image signal encoding device and image signal decoding device |
JPH0918878A (ja) | 1995-06-30 | 1997-01-17 | Sony Corp | Image encoding device and image encoding method |
JPH10327416A (ja) | 1997-05-22 | 1998-12-08 | Toshiba Corp | Moving image encoding device |
JP2000278688A (ja) | 1999-03-24 | 2000-10-06 | Sony Corp | Motion vector detection device and method, and image processing device |
CN101448162B (zh) * | 2001-12-17 | 2013-01-02 | Microsoft Corporation | Method for processing video images |
TWI257817B (en) * | 2005-03-08 | 2006-07-01 | Realtek Semiconductor Corp | Method and apparatus for loading image data |
JP2007124408A (ja) * | 2005-10-28 | 2007-05-17 | Matsushita Electric Ind Co Ltd | Motion vector detection device and motion vector detection method |
DE602007009730D1 (de) * | 2007-06-29 | 2010-11-18 | Fraunhofer Ges Forschung | Scalable video coding supporting pixel value refinement scalability |
JP2010028221A (ja) * | 2008-07-15 | 2010-02-04 | Sony Corp | Motion vector detection device, motion vector detection method, image encoding device, and program |
- 2009
- 2009-05-19 US US12/669,630 patent/US8654850B2/en not_active Expired - Fee Related
- 2009-05-19 JP JP2009546592A patent/JP5340172B2/ja not_active Expired - Fee Related
- 2009-05-19 WO PCT/JP2009/002207 patent/WO2009142003A1/ja active Application Filing
- 2009-05-19 CN CN200980000574A patent/CN101755462A/zh active Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014195169A (ja) * | 2013-03-28 | 2014-10-09 | Olympus Corp | Image processing device |
JP2018117308A (ja) * | 2017-01-20 | 2018-07-26 | Canon Inc | Playback device and control method therefor |
JP7020782B2 (ja) | 2017-01-20 | 2022-02-16 | Canon Inc | Playback device and control method therefor |
JP2020113967A (ja) * | 2018-12-06 | 2020-07-27 | Axis AB | Method and device for encoding multiple image frames |
JP2021061546A (ja) * | 2019-10-08 | 2021-04-15 | Canon Inc | Imaging device, imaging device control method, and program |
JP7401246B2 (ja) | 2019-10-08 | 2023-12-19 | Canon Inc | Imaging device, imaging device control method, and program |
Also Published As
Publication number | Publication date |
---|---|
JPWO2009142003A1 (ja) | 2011-09-29 |
CN101755462A (zh) | 2010-06-23 |
US20100296582A1 (en) | 2010-11-25 |
US8654850B2 (en) | 2014-02-18 |
JP5340172B2 (ja) | 2013-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5340172B2 (ja) | Image encoding device and image encoding method | |
JP5295045B2 (ja) | Method and apparatus for providing high-resolution images in embedded devices | |
JP6286718B2 (ja) | Content-adaptive bitrate and quality management using frame-hierarchy-responsive quantization for highly efficient next-generation video coding | |
TWI736557B (zh) | Data encoding apparatus and data encoding method | |
JP4606311B2 (ja) | Image encoding device and image encoding method | |
JPWO2006013690A1 (ja) | Image decoding device | |
KR20150039582A (ko) | Moving image encoding device and operating method thereof | |
WO2010052837A1 (ja) | Image decoding device, image decoding method, integrated circuit, and program | |
WO2009139123A1 (ja) | Image processing device and imaging device equipped with the same | |
JP2009130562A (ja) | Imaging device, imaging device control method and control program, and data processing device, data processing method, and data processing program | |
JP2010063092A (ja) | Image encoding device, image encoding method, image encoding integrated circuit, and camera | |
US8488892B2 (en) | Image encoder and camera system | |
US10666970B2 (en) | Encoding apparatus, encoding method, and storage medium | |
JPWO2007055013A1 (ja) | Image decoding device and method, and image encoding device | |
JP5580541B2 (ja) | Image decoding device and image decoding method | |
JP2008294669A (ja) | Image encoding device | |
JP2014078891A (ja) | Image processing device and image processing method | |
JP2019004439A (ja) | Encoding device, imaging device, and encoding method | |
US20110122952A1 (en) | Motion estimation device | |
JP6610115B2 (ja) | Moving image encoding device, moving image encoding method, and moving image encoding program | |
JP2007336005A (ja) | Image encoding device and image encoding method | |
JP7451131B2 (ja) | Image encoding device, image encoding method, and program | |
US11611749B2 (en) | Encoding apparatus, image capturing apparatus, control method, and storage medium | |
JP2011097488A (ja) | Video compression encoding device | |
WO2009085788A1 (en) | System, method and device for processing macroblock video data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980000574.6 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 2009546592 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 12669630 Country of ref document: US |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09750365 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 09750365 Country of ref document: EP Kind code of ref document: A1 |