JP2008211502A - Image encoding device, image decoding device, image encoding method, image decoding method, imaging apparatus, and control method and program of imaging apparatus


Info

Publication number
JP2008211502A
JP2008211502A (application number JP2007045961A)
Authority
JP
Japan
Prior art keywords
image
motion compensation
camera
step
search range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2007045961A
Other languages
Japanese (ja)
Other versions
JP4810465B2 (en)
Inventor
Satoru Okamoto
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp
Priority to JP2007045961A
Publication of JP2008211502A
Application granted
Publication of JP4810465B2
Application status: Expired - Fee Related
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To speed up encoding and decoding processing, reduce power consumption, and reduce the amount of data after encoding.

SOLUTION: A first encoding processing system 18 has a first encoding circuit 24 for encoding a first image from a first camera 12, and a first motion compensation circuit 26 for performing motion compensation prediction of the first image. A second encoding processing system 20 has a second encoding circuit 60 for encoding a second image from a second camera 14, and a second motion compensation circuit 62 for performing motion compensation prediction of the second image. Further, the second encoding processing system 20 has a storage unit 90 storing shift information (shift direction and shift amount) of the second image relative to the first image, and a search range changing circuit 92 for changing the search range of the motion vector of the second image in the second motion compensation circuit 62 on the basis of the shift information stored in the storage unit 90.

COPYRIGHT: (C)2008, JPO&INPIT

Description

  The present invention relates to an image encoding device, an image decoding device, an image encoding method, an image decoding method, an imaging device, an imaging device control method, and a program.

  In recent years, with the spread of digital cameras and digital videos, various techniques for encoding 3D moving images have been proposed.

  For example, Patent Document 1 discloses that, regarding motion compensation prediction of three-dimensional volume data, the amount of calculation is reduced by using a correlation with a depth hierarchy or performing adaptive motion compensation.

  Patent Document 2 discloses, for a stereoscopic imaging apparatus having a fixed convergence point, a problem that occurs when the convergence point is shifted by the magnification of the electronic zoom.

  Patent Document 3 discloses that, in the DVD video standard for 3D images, both 2D and 3D images can be used on a general-purpose DVD by storing depth information in an optional area.

  Patent Document 4 discloses that a stereoscopic video image is encoded and decoded using a MAC defined in MPEG-4.

  In Patent Document 5, it is disclosed that vertical processing is performed in motion compensation prediction, and that the processing is further switched to any one of the horizontal processes of MPEG, H.264, or H.261.

  Patent Document 6 discloses a stereoscopic video camera that uses three video cameras (left, right, and front), compares the left and right images against the front image, predicts the motion of the left and right images, and shifts the reference image in the direction of the prediction vector.

  Patent Document 7 discloses that a search range for determining a corresponding portion is determined in consideration of a direction based on parallax between a right eye image and a left eye image.

Patent Document 1: JP-A-6-111004
Patent Document 2: JP-A-9-37302
Patent Document 3: JP-A-2006-191357
Patent Document 4: JP-T-2006-512809
Patent Document 5: JP-A-6-113265
Patent Document 6: JP-A-11-69381
Patent Document 7: JP-A-2000-165909

  In conventional 3D image encoding processes, including those of Patent Documents 1 to 7 described above, a plurality of images are encoded independently. In this case, since conventional two-dimensional image encoding methods and systems can be used as they are, there is the advantage of easy implementation.

  However, such a method of independently encoding a plurality of images ignores the high correlation among those images, and thus has the following problems.

(1) The amount of data after encoding becomes twice that of a two-dimensional image.

(2) The processing amount and time required for motion vector search are significantly increased as compared to the case of a two-dimensional image.

(3) Power consumption also increases.

(4) The amount of processing and time required for decoding increase significantly as with encoding.

  The present invention has been made in view of such problems, and an object thereof is to provide an image encoding device, an image decoding device, an image encoding method, an image decoding method, an imaging apparatus, a control method of an imaging apparatus, and a program that reduce the amount of processing and time required for motion compensation prediction when applied to the encoding and decoding of 3D images and multi-view images.

  An image encoding apparatus according to a first aspect of the present invention includes a storage unit that stores shift information of a second image relative to a first image, first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  This makes it possible, when applied to the encoding of 3D images and multi-view images, to reduce the amount of processing and time required for motion compensation prediction, to speed up the encoding process, to reduce power consumption, and to reduce the amount of data after encoding.
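The central operation of this first aspect, shifting the motion-vector search window of the second image by the stored shift information before block matching, can be sketched as follows. This is a minimal illustration under assumed conventions; the function and parameter names are hypothetical, not from the patent.

```python
def shifted_search_range(block_x, block_y, half_range, shift):
    """Return (x0, x1, y0, y1) bounds of the motion-vector search window
    for one macroblock of the second image, centered on the macroblock
    position offset by the stored parallax shift (dx, dy)."""
    dx, dy = shift
    # The default window of +/-half_range pixels is translated by the
    # shift, so the search concentrates where the match is expected.
    return (block_x + dx - half_range, block_x + dx + half_range,
            block_y + dy - half_range, block_y + dy + half_range)
```

Because the window is merely translated rather than enlarged, the number of candidate positions examined, and hence the processing time, stays the same as for a conventional two-dimensional search.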

  In the first aspect of the present invention, there may be provided means for setting the shift information based on at least one of the convergence angle between a first camera that captures the subject to obtain the first image and a second camera that captures the subject to obtain the second image, and the interval between the optical axis of the first camera and the optical axis of the second camera, and storing it in the storage unit. In this case, since the search range of the motion vector can be changed according to the optical axis interval and the convergence angle, optimum encoding processing according to the subject and the image to be captured can be performed.

  In the first aspect of the present invention, the second motion compensation means may include comparison means that sequentially compares a plurality of macroblocks included in the search range changed by the search range changing means with a plurality of macroblocks included in the first image, and output means that, when the comparison result indicates a match or a near match, outputs, in place of the matching macroblock of the second image, code information indicating the macroblock of the first image with which it was compared. In this case, the motion vector search process can be simplified, the processing time can be reduced, and the amount of data after encoding can be reduced.

  Further, in the first aspect of the present invention, there may be provided means for setting shift information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing it in the storage unit, and the second motion compensation means may change the search range of the motion vector for each macroblock of the second image based on the shift information stored in the storage unit. In this case, since the search range of the motion vector can be changed according to the per-macroblock distance information, optimum encoding processing according to the subject and the image to be captured can be performed.
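The relationship between per-macroblock depth and per-macroblock shift can be illustrated with the standard stereo disparity relation (disparity = focal length in pixels × baseline ÷ depth). This formula is a textbook approximation used here for illustration; the patent does not specify it.

```python
def per_macroblock_shift(depths_mm, focal_px, baseline_mm):
    """Derive a horizontal shift (disparity, in pixels) for each macroblock
    from its depth: nearer blocks shift more, farther blocks less."""
    return [round(focal_px * baseline_mm / z) for z in depths_mm]
```

For example, with an assumed 800-pixel focal length and 60 mm baseline, a macroblock at 2400 mm shifts by 20 pixels while one at 1200 mm shifts by 40, which is why a single global shift would misplace the search window for one of them.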

  In the first aspect of the present invention, the second motion compensation means may include comparison means that compares the macroblock included in the search range changed by the search range changing means with the macroblock included in the first image, and output means that, when the comparison result indicates a match or a near match, outputs, in place of the compared macroblock of the second image, code information indicating the macroblock of the first image with which it was compared. In this case, the motion vector search process can be simplified, the processing time can be reduced, and the amount of data after encoding can be reduced.

  Next, an image decoding apparatus according to a second aspect of the present invention includes first restoration means that decodes encoded data of a first image to restore the first image, and second restoration means that decodes encoded data of a second image, which includes code information associated based on shift information of the second image relative to the first image, to restore the second image. The second restoration means includes association means that associates the encoded data of the second image with the restored first image based on the code information.

  As a result, when applied to the decoding of 3D images and multi-view images, the amount of processing and time required for image restoration can be reduced, the decoding process can be sped up, and power consumption can be reduced.

  In the second aspect of the present invention, the association means may use the decoded data of the macroblock of the first image as the decoded data of any macroblock of the second image for which code information indicating a macroblock of the first image is recorded. In this case, the image restoration process can be simplified and the processing time reduced.
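The association on the decoding side can be sketched as follows, assuming (hypothetically) that the encoded second image arrives as a list of tagged entries where a "ref" entry carries code information pointing at a first-image macroblock. The names and tuple format are illustrative, not from the patent.

```python
def restore_second_image(encoded_mbs, first_image_mbs):
    """Restore second-image macroblocks: entries carrying code information
    that points at a first-image macroblock reuse the already-restored
    first-image data; the remaining entries are decoded normally."""
    restored = []
    for kind, payload in encoded_mbs:
        if kind == "ref":
            # code info recorded for this MB: copy the first-image block
            restored.append(first_image_mbs[payload])
        else:
            # in a real decoder this path would inverse-quantize and
            # inverse-DCT the coefficient data
            restored.append(payload)
    return restored
```

Referenced blocks are restored by a simple copy, skipping inverse quantization and inverse DCT entirely, which is the source of the decoding speed-up.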

  Next, an imaging apparatus according to a third aspect of the present invention includes a first camera that captures a subject to obtain a first image, a second camera that captures the subject to obtain a second image, means for setting shift information of the second image relative to the first image based on at least one of the convergence angle between the first camera and the second camera and the interval between the optical axis of the first camera and the optical axis of the second camera and storing it in a storage unit, first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  Next, an imaging apparatus according to a fourth aspect of the present invention includes a first camera that captures a subject to obtain a first image, a second camera that captures the subject to obtain a second image, means for setting shift information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing it in a storage unit, first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector for each macroblock of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  In the third and fourth aspects of the present invention, when applied to the encoding process of a three-dimensional image or a multi-image, the processing amount and time required for motion compensation prediction can be reduced, and the encoding process can be speeded up. Thus, it is possible to reduce power consumption and reduce the amount of data after encoding.

  Next, an image encoding method according to a fifth aspect of the present invention includes a first motion compensation step of performing motion compensation prediction of a first image, a second motion compensation step of performing motion compensation prediction of a second image, and a search range changing step, performed prior to the second motion compensation step, of changing the search range of the motion vector of the second image in the second motion compensation step based on shift information of the second image relative to the first image stored in a storage unit.

  In the fifth aspect of the present invention, there may be provided a step of setting the shift information based on at least one of the convergence angle between a first camera that captures the subject to obtain the first image and a second camera that captures the subject to obtain the second image, and the interval between the optical axis of the first camera and the optical axis of the second camera, and storing it in the storage unit.

  In this case, the second motion compensation step may include a comparison step of sequentially comparing a plurality of macroblocks included in the search range changed in the search range changing step with a plurality of macroblocks included in the first image, and an output step of, when the comparison result indicates a match or a near match, outputting, in place of the matching macroblock of the second image, code information indicating the macroblock of the first image with which it was compared.

  Further, in the fifth aspect of the present invention, the method may further include a step of setting shift information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing it in the storage unit, and the search range changing step may change the search range of the motion vector for each macroblock of the second image in the second motion compensation step based on the shift information stored in the storage unit.

  In this case, the second motion compensation step may include a comparison step of comparing the macroblock included in the search range changed in the search range changing step with the macroblock included in the first image, and an output step of, when the comparison result indicates a match or a near match, outputting, in place of the compared macroblock of the second image, code information indicating the macroblock of the first image with which it was compared.

  Next, an image decoding method according to a sixth aspect of the present invention includes a first restoration step of decoding encoded data of a first image to restore the first image, and a second restoration step of decoding encoded data of a second image, which includes code information associated based on shift information of the second image relative to the first image, to restore the second image. The second restoration step includes an association step of associating the encoded data of the second image with the restored first image based on the code information.

  In the association step, the decoded data of the macroblock of the first image may be used as the decoded data of any macroblock of the second image for which code information indicating a macroblock of the first image is recorded.

  Next, a seventh aspect of the present invention provides a control method for an imaging apparatus comprising a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image. The control method includes a step of setting shift information of the second image relative to the first image based on at least one of the convergence angle between the first camera and the second camera and the interval between the optical axis of the first camera and the optical axis of the second camera and storing it in a storage unit, a first motion compensation step of performing motion compensation prediction of the first image, a second motion compensation step of performing motion compensation prediction of the second image, and a search range changing step, performed prior to the second motion compensation step, of changing the search range of the motion vector of the second image in the second motion compensation step based on the shift information stored in the storage unit.

  Next, an eighth aspect of the present invention provides a control method for an imaging apparatus comprising a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image. The control method includes a step of setting shift information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing it in a storage unit, a first motion compensation step of performing motion compensation prediction of the first image, a second motion compensation step of performing motion compensation prediction of the second image, and a search range changing step, performed prior to the second motion compensation step, of changing the search range of the motion vector for each macroblock of the second image in the second motion compensation step based on the shift information stored in the storage unit.

  Next, a program according to a ninth aspect of the present invention causes a computer having a storage unit in which shift information of a second image relative to a first image is stored to function as first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  Next, a program according to a tenth aspect of the present invention causes a computer having a storage unit in which shift information of a second image relative to a first image is stored to function as first restoration means that decodes encoded data of the first image to restore the first image, second restoration means that decodes encoded data of the second image to restore the second image, and association means, included in the second restoration means, that associates the encoded data of the second image with the restored first image based on the shift information stored in the storage unit.

  Next, a program according to an eleventh aspect of the present invention causes an imaging apparatus including a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image to function as means for setting shift information of the second image relative to the first image based on at least one of the convergence angle between the first camera and the second camera and the interval between the optical axis of the first camera and the optical axis of the second camera and storing it in a storage unit, first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  Next, a program according to a twelfth aspect of the present invention causes an imaging apparatus including a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image to function as means for setting shift information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing it in a storage unit, first motion compensation means that performs motion compensation prediction of the first image, second motion compensation means that performs motion compensation prediction of the second image, and search range changing means that changes the search range of the motion vector for each macroblock of the second image in the second motion compensation means based on the shift information stored in the storage unit.

  As described above, according to the image encoding device, image decoding device, image encoding method, image decoding method, imaging apparatus, imaging apparatus control method, and program of the present invention, when applied to the encoding and decoding of 3D images and multi-view images, the amount of processing and time required for motion compensation prediction can be reduced, the encoding and decoding processes can be sped up, power consumption can be reduced, and the amount of data after encoding can be reduced.

  Embodiments of an image encoding device, an image decoding device, an image encoding method, an image decoding method, an imaging apparatus, an imaging apparatus control method, and a program according to the present invention will be described below with reference to the accompanying drawings.

  First, as shown in FIG. 1, an image encoding device according to the first embodiment (hereinafter referred to as the first encoding device 10A) includes cameras that capture a subject and obtain moving images of the subject (a first camera 12 and a second camera 14), a first encoding processing system 18 that encodes the moving image output from the first camera 12 and records it on, for example, a recording medium 16, and a second encoding processing system 20 that encodes the moving image output from the second camera 14 and records it on, for example, the recording medium 16.

  The first encoding processing system 18 includes a first buffer memory 22 that holds the moving image output from the first camera 12 in units of frames, a first encoding circuit 24 that encodes the image (first image) held in the first buffer memory 22, a first motion compensation circuit 26 that performs motion compensation prediction of the first image, and a second buffer memory 28 that temporarily holds the compression-encoded data of the first image.

  As shown in FIG. 2, the first encoding circuit 24 includes a first DCT unit 30 that performs a discrete cosine transform on the first image from the first buffer memory 22, a first quantizer 32 that quantizes the data from the first DCT unit 30 into predetermined bits, a first variable length encoder 34 that performs variable length coding on the quantized data and stores it in the second buffer memory 28, and a first code amount controller that detects the error between the generated code amount reported from the second buffer memory 28 and the target code amount and feeds it back to the first quantizer 32. In addition, the first encoding circuit 24 includes a switch that is turned on at the time of intra-frame predictive encoding, and a first subtractor 40 that, at the time of forward prediction and bidirectional prediction, obtains difference data between the first image from the first buffer memory 22 and the output data from the first motion compensation circuit 26.
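The feedback loop between the code amount controller and the quantizer can be sketched as follows. The step-adjustment factors and the bit-count estimate are illustrative assumptions; the patent specifies only that the error between generated and target code amounts is fed back to the quantizer.

```python
def quantize_with_rate_control(dct_blocks, q_step, target_bits):
    """One frame of the feedback loop: quantize DCT coefficients, estimate
    the generated code amount, and nudge the quantizer step toward the
    target (coarser when over budget, finer when under)."""
    out, bits = [], 0
    for coeffs in dct_blocks:
        q = [int(c / q_step) for c in coeffs]    # quantizer
        bits += 8 * sum(1 for c in q if c != 0)  # crude code-amount estimate
        out.append(q)
    # code amount controller: feed the error back to the quantizer step
    if bits > target_bits:
        q_step *= 1.25
    elif bits < target_bits:
        q_step *= 0.8
    return out, q_step
```

Calling this once per frame with the returned `q_step` carried forward mimics how the controller keeps the buffer fill level near the target transfer rate.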

  The first motion compensation circuit 26 includes a first inverse quantizer 42, a first inverse DCT unit 44, a first adder 46, a first image memory 48, a second switch 50, and a first motion compensation predictor 52. The first image or difference data that has undergone discrete cosine transform and quantization is restored via the first inverse quantizer 42 and the first inverse DCT unit 44 and stored in the first image memory 48. In particular, the restored difference data is added, by the first adder 46, to the data from the first motion compensation predictor 52 supplied via the second switch 50.

  Therefore, for example, as shown in FIG. 3, consider the first images 54a to 54c of the (n-1)th, nth, and (n+1)th frames. First, the (n-1)th-frame first image 54a is encoded, the (n-1)th-frame first image 54a is written to the first image memory 48, and the nth-frame first image 54b is stored in the first buffer memory 22.

  Then, the first motion compensation predictor 52 performs pattern matching for each macroblock between the current nth-frame first image 54b and the (n-1)th-frame first image 54a stored in the first image memory 48, and the (n-1)th-frame first image shifted by the resulting motion vector is supplied to the first subtractor 40 and the first adder 46 as the reference image 54ar.

  The first subtractor 40 outputs difference data between the nth-frame first image 54b and the (n-1)th-frame reference image 54ar (the motion-compensated (n-1)th-frame first image). For example, as shown in FIG. 3, the reference image 54ar is an image shifted to the position indicated by the broken line so that the ball image 56 is located at the same position as in the nth-frame first image 54b. The difference data output from the first subtractor 40 therefore has a significantly smaller data amount than would be obtained without motion compensation prediction. The difference data is encoded by passing through the first DCT unit 30, the first quantizer 32, and the first variable length encoder 34. At this time, the motion vector obtained by the first motion compensation predictor 52 is also sent to the first variable length encoder 34 to be variable length encoded. The encoded data is stored in the second buffer memory 28, read out at a predetermined transfer rate, output to the outside, and recorded on the recording medium 16.
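The per-macroblock pattern matching performed by the motion compensation predictor can be sketched as a full search over candidate displacements. This is a generic textbook block-matching sketch, not the patent's specific implementation; names and window sizes are illustrative.

```python
def find_motion_vector(cur, ref, bx, by, n=16, search=8):
    """Full-search pattern matching for one macroblock: try every
    displacement within +/-search pixels and keep the one minimizing
    the sum of absolute differences against the reference frame."""
    h, w = len(ref), len(ref[0])

    def sad(dx, dy):
        # sum of absolute differences over the n x n macroblock
        return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                   for y in range(n) for x in range(n))

    # only displacements that keep the block inside the reference frame
    candidates = [(dx, dy)
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= bx + dx <= w - n and 0 <= by + dy <= h - n]
    return min(candidates, key=lambda v: sad(*v))
```

The cost grows with the square of the search half-width, which is why the search range changing circuit described later, which recenters rather than enlarges this window, saves so much computation for the second image.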

  In the above example, forward prediction has mainly been described, but bidirectional prediction is also performed as defined in MPEG (Moving Picture Experts Group).

  On the other hand, the difference data is restored through the first inverse quantizer 42 and the first inverse DCT unit 44, and the restored difference data and the reference image 54ar (the motion-compensated (n-1)th-frame first image) supplied from the first motion compensation predictor 52 via the second switch 50 are added by the first adder 46 and stored in the first image memory 48 as the nth-frame first image 54b.

  Thereafter, similarly to the series of processes described above, the first motion compensation predictor 52 performs pattern matching for each macroblock between the current (n+1)th-frame first image 54c and the nth-frame first image 54b accumulated in the first image memory 48, and the nth-frame first image shifted by the resulting motion vector is supplied to the first subtractor 40 and the first adder 46 as the reference image 54br.

  The first subtractor 40 outputs difference data between the (n+1)th-frame first image 54c and the reference image 54br (the motion-compensated nth-frame first image). For example, as shown in FIG. 3, the reference image 54br is an image shifted to the position indicated by the broken line so that the ball image 56 is located at the same position as in the (n+1)th-frame first image 54c.

  The difference data is encoded by passing through the first DCT unit 30, the first quantizer 32, and the first variable length encoder 34, and the motion vector obtained by the first motion compensation predictor 52 is also sent to the first variable length encoder 34 for variable length coding. The encoded data is stored in the second buffer memory 28.

  The difference data is restored through the first inverse quantizer 42 and the first inverse DCT unit 44, and the restored difference data and the reference image 54br (the motion-compensated nth-frame first image) supplied from the first motion compensation predictor 52 via the second switch 50 are added by the first adder 46 and stored in the first image memory 48 as the (n+1)th-frame first image 54c.

  The above processing is repeated, whereby the moving image from the first camera 12 is encoded; the encoded data stored in the second buffer memory 28 is read out at a predetermined transfer rate and output to the outside, for example, to be recorded on the external recording medium 16.

  On the other hand, as shown in FIG. 1, the second encoding processing system 20 includes, like the first encoding processing system 18 described above, a third buffer memory 58 that holds the moving image output from the second camera 14 in units of frames, a second encoding circuit 60 that encodes the image (second image) held in the third buffer memory 58, a second motion compensation circuit 62 that performs motion compensation prediction of the second image, and a fourth buffer memory 64 that temporarily holds the compression-encoded data of the second image.

  As shown in FIG. 4, the second encoding circuit 60 includes, like the first encoding circuit 24 described above, a second DCT unit 66 that performs a discrete cosine transform on the second image from the third buffer memory 58, a second quantizer 68 that quantizes the data from the second DCT unit 66 into predetermined bits, a second variable length encoder 70 that performs variable length coding on the quantized data and stores it in the fourth buffer memory 64, and a second code amount controller 72 that detects the error between the generated code amount reported from the fourth buffer memory 64 and the target code amount and feeds it back to the second quantizer 68. The second encoding circuit 60 also includes a third switch 74 that is turned on during intra-frame predictive encoding, and a second subtractor 76 that, during forward prediction and bidirectional prediction, obtains difference data between the second image from the third buffer memory 58 and the output data from the second motion compensation circuit 62.

  Similar to the first motion compensation circuit 26 described above, the second motion compensation circuit 62 includes a second inverse quantizer 78, a second inverse DCT unit 80, a second adder 82, a second image memory 84, a fourth switch 86, and a second motion compensation predictor 88. The second image or the difference data that has undergone the discrete cosine transform and quantization is restored via the second inverse quantizer 78 and the second inverse DCT unit 80 and stored in the second image memory 84. In particular, the restored difference data is added by the second adder 82 to the data supplied from the second motion compensation predictor 88 via the fourth switch 86.

  Further, as shown in FIG. 1, the second encoding processing system 20 includes a storage unit 90 that stores shift information (shift direction and shift amount) of the second image with respect to the first image, and a search range changing circuit 92 that changes the search range of the motion vector of the second image in the second motion compensation circuit 62 based on the shift information stored in the storage unit 90. The shift direction reflects the installation direction (horizontal or vertical) of the plural cameras, and the shift amount is determined by, for example, the distance from one camera to the subject.

  Here, the function of the search range changing circuit 92 will be described with reference to FIGS.

  First, as shown in FIG. 5, there is a shift 96, caused by the distance to the subject, between the image of the subject included in the first image 54 obtained by imaging with the first camera 12 and the image of the subject included in the second image 94 obtained by imaging with the second camera 14. For example, if the first camera 12 and the second camera 14 are arranged side by side in the horizontal direction, a shift 96 occurs in the horizontal direction as shown in FIG. 5, and if they are arranged along the vertical direction, a shift 96 occurs in the vertical direction as shown in FIG. 6. When four cameras are arranged in a matrix, a horizontal shift 96 and a vertical shift 96 occur as shown in FIG. For the sake of simplicity, the case where two cameras (the first camera 12 and the second camera 14), which is the common usage form, are arranged side by side in the horizontal direction as shown in FIG. 5 will mainly be described here.

  As described above, when there is a horizontal shift 96 between the first image 54 and the second image 94, the image of the subject corresponding to the overlapping portion 98 in the first image 54 and the image of the subject corresponding to the overlapping portion 98 in the second image 94 are substantially the same. Using this characteristic, if the search range of the macroblock-level motion vector for the second image 94 in the second motion compensation predictor 88 is changed based on the shift information stored in the storage unit 90, the processing becomes equivalent to the motion compensation prediction for the first image 54.

  Normally, the macroblock-level motion vector search is performed from the left end of the second image 94. In the present embodiment, as shown in FIG. 5, the macroblock-level motion vector search is performed from the position shifted from the left end of the second image 94 by the amount of the shift 96. Moreover, if the motion compensation prediction for the first image 54 has already been completed, the processing time and the amount of encoded data required to encode the second image 94 can be significantly reduced by reusing that data.
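  The shifted search start can be sketched as follows. This is a minimal illustration only: it assumes 16x16 macroblocks (the MPEG convention), a purely horizontal shift, and invented function names, none of which appear in the embodiment itself.

```python
# Hypothetical sketch: offsetting the macroblock-level motion-vector search
# range for the second image by the stored shift information.
# MB_SIZE, the image layout, and the function names are assumptions.

MB_SIZE = 16  # macroblock width/height in pixels (MPEG convention)

def search_origin(shift_x_px, shift_y_px):
    """Return the macroblock column/row at which the search should start,
    i.e. the left/top edge of the overlapping portion of the two images."""
    return shift_x_px // MB_SIZE, shift_y_px // MB_SIZE

def search_columns(image_width_px, shift_x_px):
    """Macroblock columns of the second image that overlap the first image
    when the cameras are arranged side by side in the horizontal direction."""
    first_col, _ = search_origin(shift_x_px, 0)
    total_cols = image_width_px // MB_SIZE
    return list(range(first_col, total_cols))
```

  For a 48-pixel horizontal shift, the search would start at macroblock column 3 instead of column 0, skipping the columns that have no counterpart in the first image.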

  Therefore, if the search range changing circuit 92 described above outputs, based on the shift information stored in the storage unit 90, information indicating the order in which macroblocks are read out from the first image 54 (first reading order information) and information indicating the order in which macroblocks are read out from the second image 94 (second reading order information), the macroblock of the first image 54 read based on the first reading order information can be compared with the macroblock of the second image 94 read based on the second reading order information, and, as described above, the overlap between the first image 54 and the second image 94 can easily be detected regardless of the installation positions of the plural cameras.

  Therefore, as shown in FIG. 4, in addition to the various circuits described above, the second motion compensation circuit 62 includes a first extraction circuit 100, a second extraction circuit 102, a comparison circuit 104, a code information creation circuit 106, a code information transfer circuit 108, a compression processing table creation circuit 110, and a third extraction circuit 112.

  The first extraction circuit 100 reads data from the first image 54 of the first buffer memory 22 in units of macroblocks based on the first reading order information from the search range changing circuit 92.

  The second extraction circuit 102 reads data in units of macroblocks from the second image 94 in the third buffer memory 58 based on the second reading order information from the search range changing circuit 92.

  The comparison circuit 104 compares the macroblock of the first image 54 read by the first extraction circuit 100 with the macroblock of the second image 94 read by the second extraction circuit 102. For example, the macroblock at row 1, column 1 relative to the left end of the first image 54 is compared with the macroblock at row 1, column 1 relative to the position shifted by the shift 96 in the second image 94. Similarly, the macroblock at row 1, column 2 relative to the left end of the first image 54 is compared with the macroblock at row 1, column 2 relative to the position shifted by the shift 96 in the second image 94.

  The code information creation circuit 106 creates the code information 114 shown in FIG. 8. The code information 114 has a format in which a data code indicating its start (start code 116) is placed at the beginning, a data code indicating its end (end code 118) is placed at the end, and a plurality of address codes 120 are arranged between the start code 116 and the end code 118.

  As shown in FIG. 4, the code information creation circuit 106 generates the start code 116 based on the search start signal from the search range changing circuit 92 and places it at the head of the code information 114. Thereafter, the macroblock of the first image 54 read by the first extraction circuit 100 and the macroblock of the second image 94 read by the second extraction circuit 102 are compared by the comparison circuit 104, and when the comparison result from the comparison circuit 104 indicates that they match or almost match, an address code 120 (see FIG. 8) containing the address of the compared macroblock of the first image 54 and the address of the compared macroblock of the second image 94 is generated and placed after the start code 116.

  Similarly, the macroblocks of the first image 54 sequentially read by the first extraction circuit 100 and the macroblocks of the second image 94 sequentially read by the second extraction circuit 102 are compared by the comparison circuit 104, and whenever the comparison result indicates a match or near match, an address code 120 containing the address of the compared macroblock of the first image 54 and the address of the compared macroblock of the second image 94 is generated and arranged in sequence.

  Then, the code information creation circuit 106 generates the end code 118 based on the search end signal from the search range changing circuit 92 and places it at the end of the code information 114. The code information 114 is completed by this series of processes.
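  The creation of the code information 114 can be sketched as follows. The start/address/end structure follows the description above, but the concrete codes, the mean-absolute-difference match test, its threshold, and the helper names are all hypothetical choices made for illustration.

```python
# Illustrative sketch of code information 114: a start code, then one
# address code per macroblock pair that matches (or nearly matches), then
# an end code. The match metric and threshold are assumptions.

START_CODE, END_CODE = "START", "END"

def mean_abs_diff(mb_a, mb_b):
    """Mean absolute pixel difference between two macroblocks."""
    return sum(abs(a - b) for a, b in zip(mb_a, mb_b)) / len(mb_a)

def create_code_information(first_mbs, second_mbs, pairs, threshold=2.0):
    """first_mbs/second_mbs map macroblock address -> pixel tuple; pairs is
    the (first_addr, second_addr) sequence given by the reading orders."""
    info = [START_CODE]
    for addr1, addr2 in pairs:
        if mean_abs_diff(first_mbs[addr1], second_mbs[addr2]) <= threshold:
            info.append((addr1, addr2))   # address code 120: both addresses
    info.append(END_CODE)
    return info
```

  Only the address pairs of (nearly) identical macroblocks appear between the two delimiters, which is why the code information can remain compact text data.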

  When the code information 114 has been created, a transfer instruction signal is output from the code information creation circuit 106 and supplied to the code information transfer circuit 108.

  The code information transfer circuit 108 transfers the code information 114 to the second variable length encoder 70 when the transfer instruction signal from the code information creation circuit 106 is supplied. As a result, the code information 114 is subjected to variable-length coding and then stored in the fourth buffer memory 64.

  On the other hand, based on the addresses of the macroblocks of the second image 94 registered in the created code information 114, the compression processing table creation circuit 110 creates a compression processing table 122 in which the addresses of the macroblocks of the second image 94 that are not registered in the code information 114 are arranged. In other words, the addresses arranged in the compression processing table 122 are the addresses of macroblocks outside the overlapping portion 98 (see, for example, FIG. 5) and of macroblocks in the second image 94 that did not match (or almost match) the first image 54, that is, the addresses of macroblocks that require normal motion compensation prediction.
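  A minimal sketch of the compression processing table 122, under the assumption that the code information is a list whose address codes are (first_addr, second_addr) tuples; the plain-list representation is an illustration, not the patented format.

```python
# Sketch: the compression processing table 122 holds the addresses of
# second-image macroblocks NOT registered in the code information, i.e.
# the ones that still need ordinary motion-compensated encoding.

def build_compression_table(all_second_addresses, code_information):
    """Return the addresses absent from the code information's address codes."""
    registered = {entry[1] for entry in code_information
                  if isinstance(entry, tuple)}
    return [a for a in all_second_addresses if a not in registered]
```

  The complement relationship is the point: every second-image macroblock is handled exactly once, either by an address code or by the table.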

  The third extraction circuit 112 reads a macroblock corresponding to the address stored in the compression processing table 122 from the second image 94 stored in the third buffer memory 58 and supplies the macroblock to the second encoding circuit 60.

  Therefore, considering, for example, the second images 94 of the (n-1)-th and n-th frames: first, the macroblock group of the (n-1)-th frame second image 94 corresponding to the addresses stored in the compression processing table 122 (hereinafter referred to as a partial image) is encoded and written to the fourth buffer memory 64, and the restored (n-1)-th frame partial image is written to the second image memory 84. The second image 94 of the n-th frame is then stored.

  Then, in the second motion compensation predictor 88, pattern matching is performed macroblock by macroblock between the macroblock group of the current n-th frame second image 94 corresponding to the addresses stored in the compression processing table 122 (the n-th frame partial image) and the (n-1)-th frame partial image stored in the second image memory 84, and the resulting (n-1)-th frame partial image shifted by the motion vector is supplied as a reference image to the second subtractor 76 and the second adder 82.

  The second subtractor 76 outputs difference data between the n-th frame partial image and the motion-compensation-predicted (n-1)-th frame partial image (reference image). The difference data is compression encoded by passing through the second DCT unit 66, the second quantizer 68, and the second variable length encoder 70. At this time, the motion vector obtained by the second motion compensation predictor 88 is also sent to the second variable length encoder 70 and variable length encoded. The compression-encoded data is stored in the fourth buffer memory 64. In the above example, forward prediction has mainly been described, but bidirectional prediction is also performed as defined in MPEG.
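  The forward-prediction step above can be sketched in miniature. One-dimensional sample lists stand in for 16x16 macroblocks, a dict stands in for the previous-frame partial image, and all names are illustrative assumptions; the sketch only shows that subtracting and re-adding the motion-compensated reference is lossless (quantization, which is lossy, is omitted).

```python
# Sketch of forward prediction for a partial image: the reference block is
# the previous-frame block displaced by the motion vector; only the
# difference is passed on to DCT/quantization (roles of subtractor 76 and
# adder 82). Positions and motion vectors are scalars for simplicity.

def motion_compensated_difference(current_block, prev_frame, position, mv):
    """current_block: list of samples; prev_frame: dict position -> block;
    mv: displacement applied to position to locate the reference block."""
    ref = prev_frame[position + mv]                  # reference image
    return [c - r for c, r in zip(current_block, ref)]

def reconstruct(difference, prev_frame, position, mv):
    """Decoder-side step: add the difference back onto the reference."""
    ref = prev_frame[position + mv]
    return [d + r for d, r in zip(difference, ref)]
```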

  The compression-encoded data (including the code information 114) stored in the fourth buffer memory 64 is read out at a predetermined transfer rate, output to the outside, and recorded on, for example, the external recording medium 16.

  As described above, the second motion compensation predictor 88 of the second motion compensation circuit 62 does not search for motion vectors over the entire second image 94, but only for the macroblocks of the second image 94 corresponding to the addresses arranged in the compression processing table 122. Therefore, the processing time for compression encoding the second image 94 can be greatly shortened. Moreover, since the code information 114 can be composed of text data, the amount of encoded data can be greatly reduced.

  Next, a basic processing operation of the first encoding device 10A will be described with reference to FIG.

  As shown in FIG. 9, first, in step S1, the search range changing circuit 92 of the second encoding processing system 20 reads the deviation information from the storage unit 90.

  Thereafter, in step S2, the search range changing circuit 92 sets the search range of the motion vector of the second image 94 based on the read deviation information.

  In step S3, the first encoding processing system 18 compression encodes the first image 54 and stores the compression-encoded first image 54 on the recording medium 16 via the second buffer memory 28.

  On the other hand, in step S4, the code information creation circuit 106 creates the code information 114. That is, the portions that match or substantially match the first image 54 within the changed search range are extracted, and the code information 114 is created based on the address information of the extracted portions.

  Thereafter, in step S5, the second encoding processing system 20 encodes the created code information 114 and stores the encoded code information 114 on the recording medium 16 via the fourth buffer memory 64.

  Thereafter, in step S6, the second encoding processing system 20 compression encodes the remaining partial image of the second image 94 and stores the compression-encoded partial image on the recording medium 16 via the fourth buffer memory 64.

  Then, the processes in steps S1 to S6 are repeated until shooting is completed (step S7).

  In the example of FIG. 9, the compression encoding in the first encoding processing system 18 is shown as step S3 and the compression encoding in the second encoding processing system 20 as steps S4 to S6, but the compression encoding in the first encoding processing system 18 and that in the second encoding processing system 20 are performed in a multitasking manner.

  Next, an example of a specific processing operation of the basic operation shown in FIG. 9 will be described with reference to FIGS.

  First, the processing operation of the first encoding processing system 18 will be described with reference to FIG.

  In step S101 in FIG. 10, the first encoding processing system 18 stores an initial value “1” in a counter m that counts frames.

  Thereafter, in step S102, the first encoding processing system 18 stores the first image 54 of the m-th frame in the first buffer memory 22.

  Next, in step S103, the first encoding circuit 24 of the first encoding processing system 18 performs intra-frame predictive encoding on the first image 54 of the m-th frame and stores the result in the second buffer memory 28. At this time, the first motion compensation circuit 26 decodes the encoded first image 54 of the m-th frame and records it in the first image memory 48 as the m-th frame first image 54.

  Thereafter, in step S104, the first encoding processing system 18 stores the first image 54 of the (m + 1) th frame in the first buffer memory 22.

  Thereafter, in step S105, the first motion compensation circuit 26 performs motion compensation prediction of the first image 54 in the m-th frame. As a result, the difference data based on the forward prediction and the difference data based on the bidirectional prediction are compressed and encoded by the first encoding circuit 24 and stored in the second buffer memory 28.

  Thereafter, in step S106, the compression-encoded data (including the motion compensation prediction data) of the first image 54 of the m-th frame is read from the second buffer memory 28 at a predetermined transfer rate and recorded on, for example, the external recording medium 16.

  Thereafter, in step S107, the first encoding processing system 18 updates the value of the counter m by +1.

  Next, in step S108, the first encoding processing system 18 determines whether or not the photographing is finished. This determination is made, for example, based on whether or not an interrupt signal indicating the end of shooting has been input.

  If there is no end request, the process returns to step S103, and the processes after step S103 are repeated.

  If it is determined in step S108 that the request is an end request, the processing in the first encoding processing system 18 ends.

  Next, the processing operation of the second encoding processing system 20 will be described with reference to FIGS. 11 and 12.

  In step S201 of FIG. 11, the second encoding processing system 20 stores an initial value “1” in a counter n that counts frames.

  Thereafter, in step S202, the second encoding processing system 20 stores the second image 94 of the nth frame in the third buffer memory 58.

  Thereafter, in step S203, the search range changing circuit 92 reads the deviation information from the storage unit 90.

  Thereafter, in step S204, the search range changing circuit 92 sets a search range for the motion vector of the second image 94 based on the read deviation information. Specifically, a first reading order for each macroblock from the first image 54 and a second reading order for each macroblock from the second image 94 are set.

  Thereafter, in step S205, the code information creation circuit 106 generates a start code 116 of the code information 114 and stores it in the work memory.

  Thereafter, in step S206, the macroblock is read from the first image 54 of the nth frame based on the first reading order.

  Next, in step S207, the macroblock is read from the second image 94 in the nth frame based on the second reading order.

  In step S208, the macroblock read in step S206 is compared with the macroblock read in step S207. If the comparison result indicates a match or near match, an address code 120 recording the addresses of these macroblocks is generated and stored in the work memory in step S209.

  In the next step S210, the code information creation circuit 106 determines whether or not to end the creation of the code information 114. This determination is made based on whether or not the search end signal, output by the search range changing circuit 92 when the first reading order and the second reading order are exhausted, has been input.

  If it is determined that the creation of the code information 114 is not to be ended, the processes from step S206 onward are repeated to generate and store further address codes 120.

  When the generation process of the code information 114 is completed, the process proceeds to the next step S211 and the end code 118 is stored in the work memory. At this stage, the creation of the code information 114 of the nth frame is completed.

  Thereafter, in step S212 of FIG. 12, the code information transfer circuit 108 transfers the code information 114 of the n-th frame recorded in the work memory to the second variable length encoder 70. As a result, the code information 114 is variable length encoded and then stored in the fourth buffer memory 64.

  Thereafter, in step S213, the compression encoded data of the code information 114 of the nth frame is read from the fourth buffer memory 64 at a predetermined transfer rate and recorded on the external recording medium 16, for example.

  Thereafter, in step S214 of FIG. 12, the compression processing table creation circuit 110 creates, based on the macroblock addresses of the second image 94 registered in the created code information 114, the compression processing table 122 in which the addresses of the macroblocks of the second image 94 not registered in the code information 114 are arranged.

  Thereafter, in step S215, the third extraction circuit 112 reads the macroblocks corresponding to the addresses stored in the compression processing table 122 (the n-th frame partial image) from the second image 94 of the n-th frame stored in the third buffer memory 58 and supplies them to the second encoding circuit 60. The second encoding circuit 60 performs intra-frame predictive encoding on the n-th frame partial image and stores the result in the fourth buffer memory 64. At this time, the second motion compensation circuit 62 decodes the encoded partial image and records it in the second image memory 84 as the n-th frame partial image.

  Thereafter, in step S216, the second encoding processing system 20 stores the second image 94 of the (n + 1) th frame in the third buffer memory 58.

  Thereafter, in step S217, the second motion compensation circuit 62 performs motion compensation prediction for the n-th frame partial image. As a result, the difference data based on forward prediction and the difference data based on bidirectional prediction are compression encoded by the second encoding circuit 60 and stored in the fourth buffer memory 64.

  In step S218, the compression-encoded data (including the motion compensation prediction data) of the n-th frame partial image is read from the fourth buffer memory 64 at a predetermined transfer rate and recorded on, for example, the external recording medium 16.

  Thereafter, in step S219, the second encoding processing system 20 updates the value of the counter n by +1.

  Next, in step S220, the second encoding processing system 20 determines whether or not the shooting is finished. This determination is made, for example, based on whether or not an interrupt signal indicating the end of shooting has been input.

  If there is no end request, the process returns to step S205 in FIG. 11 to repeat the processes after step S205.

  If it is determined in step S220 that the request is an end request, the processing in the second encoding processing system 20 ends.

  The processing of the first encoding processing system 18 shown in FIG. 10 and the processing of the second encoding processing system 20 shown in FIGS. 11 and 12 are performed in a multitasking manner, and the value of the counter m and the value of the counter n are updated almost synchronously.

  Next, the decoding device 124 according to the present embodiment will be described with reference to FIG. 13.

  As shown in FIG. 13, the decoding device 124 includes a first decoding processing system 126 that decodes the moving image encoded data for the first camera 12 read from, for example, the recording medium 16, and a second decoding processing system 128 that decodes the moving image encoded data for the second camera 14 read from, for example, the recording medium 16.

  The first decoding processing system 126 includes a fifth buffer memory 130 that holds the moving image encoded data of the first camera 12 supplied from the recording medium 16 in units of frames, a first decoding circuit 132 that decodes the data held in the fifth buffer memory 130 (the encoded data of the first image 54), a third motion compensation circuit 134 that performs motion compensation prediction on the encoded data of the first image 54 (in particular, the difference data for forward prediction and the difference data for bidirectional prediction), and a sixth buffer memory 136 that temporarily holds the decompression-decoded first image 54.

  As shown in FIG. 14, the first decoding circuit 132 includes a first variable length decoder 138 that performs variable length decoding on the encoded data of the first image 54 from the fifth buffer memory 130, an inverse quantizer 140 that performs inverse quantization on the variable-length-decoded data, and an inverse DCT unit 142 that performs an inverse discrete cosine transform on the inversely quantized data. The first decoding circuit 132 further includes a third adder 144 that, when the inverse-discrete-cosine-transformed data is forward prediction difference data or bidirectional prediction difference data, adds it to the output data from the third motion compensation circuit 134. In addition, if the variable-length-decoded data includes a motion vector, the first variable length decoder 138 transmits the motion vector to the third motion compensation circuit 134.

  The third motion compensation circuit 134 includes a third image memory 146 and a third motion compensation predictor 148. The third image memory 146 stores the inverse-discrete-cosine-transformed first image 54 (for example, the restored first image 54 of the n-th frame). In particular, when difference data for forward prediction or bidirectional prediction is restored, a part of the first image 54 stored in the third image memory 146 is shifted based on the motion vector, and the first image 54 of the (n+1)-th frame is restored by the third adder 144.

  By sequentially performing the processing in the various circuits of the first decoding circuit 132, the encoded data for the first camera 12 is restored and temporarily stored in the sixth buffer memory 136; the sixth buffer memory 136 is read out at a predetermined transfer rate, and its contents are reproduced and output as the moving image shot by the first camera 12.

  On the other hand, as shown in FIG. 13, the second decoding processing system 128 includes a seventh buffer memory 150 that holds, in units of frames, the moving image encoded data for the second camera 14 supplied from the recording medium 16 (the code information 114 of the second image 94 and the encoded data of the partial image), a second decoding circuit 152 that decodes the data held in the seventh buffer memory 150 (one frame's code information 114 and encoded partial image data), a fourth motion compensation circuit 154 that performs motion compensation prediction on the encoded data of the partial image (in particular, the difference data for forward prediction and the difference data for bidirectional prediction), a code information processing circuit 156, a synthesis circuit 158 that synthesizes the restored data from the second decoding circuit 152 and the restored data from the code information processing circuit 156 to restore the second image 94, and an eighth buffer memory 160 that temporarily holds the restored second image 94.

  As shown in FIG. 15, the second decoding circuit 152 includes a second variable length decoder 162 that performs variable length decoding on the code information 114 and the encoded partial image data from the seventh buffer memory 150, a second inverse quantizer 164 that performs inverse quantization on the variable-length-decoded partial image data, and a second inverse DCT unit 166 that performs an inverse discrete cosine transform on the inversely quantized data. The second decoding circuit 152 further includes a fourth adder 168 that, when the inverse-discrete-cosine-transformed data is difference data for forward prediction or bidirectional prediction, adds it to the output data from the fourth motion compensation circuit 154.

  In particular, the second variable length decoder 162 transmits the motion vector to the fourth motion compensation circuit 154 if the motion vector is included in the variable length decoded data. If the code information 114 is included in the variable length decoded data, the code information 114 is transmitted to the code information processing circuit 156.

  The fourth motion compensation circuit 154 includes a fourth image memory 170 and a fourth motion compensation predictor 172. The fourth image memory 170 stores the inverse-discrete-cosine-transformed partial image (for example, the restored partial image of the n-th frame). In particular, when difference data for forward prediction or bidirectional prediction is restored, a part of the partial image stored in the fourth image memory 170 is shifted based on the motion vector, and the partial image of the (n+1)-th frame is restored by the fourth adder 168.

  Then, the partial image restoration data from the second decoding circuit 152 and the restoration data from the code information processing circuit 156 are synthesized by the synthesis circuit 158 and output as the restored data of the second image 94. By sequentially performing the processing in the second decoding circuit 152 and the code information processing circuit 156, the encoded data for the second camera 14 is restored and temporarily stored in the eighth buffer memory 160; by reading from the eighth buffer memory 160 at a predetermined transfer rate, the data is reproduced and output as the moving image shot by the second camera 14.

  The code information processing circuit 156 includes a ninth buffer memory 174 that temporarily holds the code information 114 transmitted from the second variable length decoder 162, a code reading circuit 176 that sequentially reads the address codes from the code information 114, a macroblock readout circuit 178 that reads, from the first image 54 stored in the sixth buffer memory 136 of the first decoding processing system 126, the macroblock corresponding to the macroblock address of the first image 54 registered in the read address code, a macroblock writing circuit 182 that writes the read macroblock to the address in the tenth buffer memory 180 (which has a capacity for recording the second image 94) corresponding to the macroblock address of the second image 94 registered in the address code, and a data transfer circuit 184 that transfers the data stored in the tenth buffer memory 180 to the synthesis circuit 158 based on the read end signal from the code reading circuit 176.

  The combining circuit 158 combines the restored data of the partial image from the second decoding circuit 152 and the data transferred from the code information processing circuit 156 to restore the second image 94.
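  The decoder-side use of the code information can be sketched as follows: each address code causes a macroblock of the decoded first image to be copied to the corresponding address of the second image, and the separately decoded partial image fills the remaining addresses (the role of the synthesis circuit 158). The dict-based data layout and the (first_addr, second_addr) tuple form of the address codes are assumptions for illustration.

```python
# Sketch: restoring the second image from (a) macroblocks copied out of the
# decoded first image per the address codes, and (b) the decoded partial
# image for every address not covered by an address code.

def restore_second_image(code_information, first_image, partial_image,
                         total_addresses):
    """first_image/partial_image map macroblock address -> block data."""
    second = {}
    for entry in code_information:
        if isinstance(entry, tuple):           # an address code
            addr1, addr2 = entry
            second[addr2] = first_image[addr1]  # macroblock copy
    for addr in range(total_addresses):
        if addr not in second:
            second[addr] = partial_image[addr]  # decoded partial image
    return second
```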

  As described above, the fourth motion compensation predictor 172 of the fourth motion compensation circuit 154 does not restore the entire second image 94 by applying motion vectors; only the macroblock group (partial image) corresponding to the addresses arranged in the compression processing table 122 is restored in this way. Therefore, the processing time for the decompression decoding that restores the second image 94 can be greatly reduced, and the amount of data to be decoded can also be greatly reduced.

  Next, a basic processing operation of the decoding device 124 will be described with reference to FIG.

  As shown in FIG. 16, first, in step S301, the first decoding processing system 126 performs decompression decoding of the first image 54, and reproduces and outputs the first image 54 subjected to decompression decoding.

  Thereafter, in step S302, the second decoding processing system 128 reads out the code information 114 included in the encoded data of the second image 94 and transmits it to the code information processing circuit 156.

  Thereafter, in step S303, the code information processing circuit 156 restores a part of the second image 94 based on the address codes included in the code information 114, using the first image 54 restored by the first decoding circuit 132.

  Thereafter, in step S304, the second decoding circuit 152 and the fourth motion compensation circuit 154 perform decompression decoding of the partial image of the second image 94.

  Thereafter, in step S305, the synthesis circuit 158 synthesizes the part of the second image 94 restored based on the code information 114 with the partial image restored by the second decoding circuit 152 and the fourth motion compensation circuit 154, thereby restoring the second image 94, which is then reproduced and output.

  Then, the processes in steps S301 to S305 are repeated until the image data is exhausted (step S306).

  Next, an encoding apparatus according to the second embodiment (hereinafter referred to as second encoding apparatus 10B) will be described with reference to FIGS.

  As shown in FIG. 17, the second encoding device 10B has substantially the same configuration as the first encoding device 10A described above, but differs in that it has an optical system information detection circuit 186 that detects the convergence angle between the first camera 12 and the second camera 14 and the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14, and a first deviation information setting circuit 188 that sets deviation information based on at least one of the interval and the convergence angle detected by the optical system information detection circuit 186 and stores it in the storage unit 90.

  The optical system information detection circuit 186 may be configured by various sensors that detect the convergence angle between the first camera 12 and the second camera 14 and the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14, or by a calculation circuit that obtains these values by computation from attributes (current value, voltage value, etc.) of the drive control signals sent to the drive mechanisms of the first camera 12 and the second camera 14.

  The first deviation information setting circuit 188 sets the deviation amount larger as the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 becomes larger, and larger as the convergence angle between the first camera 12 and the second camera 14 becomes smaller.

  Further, when the first deviation information setting circuit 188 sets the deviation amount from both the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 and the convergence angle between the first camera 12 and the second camera 14, the correspondence between the interval, the convergence angle, and the deviation amount may be obtained in advance and stored as an information table in a memory (ROM, flash memory, etc.), and the deviation amount corresponding to the detected interval and convergence angle may be set by reading it from the information table.
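  The table lookup described above can be sketched as follows. The table contents, units, and the nearest-neighbour selection are assumptions for illustration; the text only fixes the direction of the relationship (larger interval and smaller convergence angle imply a larger deviation):

```python
def deviation_from_table(interval_mm, convergence_deg, table):
    """Read the deviation amount (in pixels) for the detected optical-axis
    interval and convergence angle from a pre-computed information table,
    picking the nearest tabulated pair (a real device might interpolate)."""
    key = min(table, key=lambda k: (abs(k[0] - interval_mm),
                                    abs(k[1] - convergence_deg)))
    return table[key]

# Hypothetical information table keyed by (interval [mm], convergence [deg]):
# the deviation grows with the interval and shrinks as the convergence angle
# grows, matching the rule stated in the text.
DEVIATION_TABLE = {
    (30, 1.0): 16, (30, 2.0): 8,
    (60, 1.0): 32, (60, 2.0): 16,
}
```

In hardware this corresponds to the ROM/flash table read performed by the first deviation information setting circuit 188.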

  Here, a basic processing operation of the second encoding device 10B will be described with reference to FIG.

  As shown in FIG. 18, first, in step S401, the optical system information detection circuit 186 detects the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14, and the convergence angle between the first camera 12 and the second camera 14.

  Thereafter, in step S402, the first deviation information setting circuit 188 sets deviation information based on at least one of the interval and the convergence angle detected by the optical system information detection circuit 186 and stores the deviation information in the storage unit 90.

  Thereafter, in step S403, the search range changing circuit 92 of the second encoding processing system 20 reads the deviation information from the storage unit 90.

  Thereafter, in step S404, the search range changing circuit 92 sets the search range of the motion vector of the second image 94 based on the read deviation information.

  On the other hand, in step S405, the first encoding processing system 18 performs compression encoding of the first image 54 and stores the compression-encoded first image 54 in the recording medium 16 via the second buffer memory 28.

  On the other hand, in step S406, the code information creation circuit 106 creates the code information 114. That is, a portion that matches or substantially matches the first image 54 in the changed search range is extracted, and code information 114 is created based on the address information and the like of the extracted portion.
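  The matching in step S406 can be pictured as a block search restricted to the shifted window: instead of scanning the whole first image 54, only horizontal offsets near the expected deviation are tried. A toy sum-of-absolute-differences search under that assumption (plain pixel lists stand in for the real macroblock data; names are illustrative):

```python
def find_matching_block(first_image, block, shift, search_range):
    """Search the first image for a block of the second image, but only
    within the search window centred on the expected horizontal shift.
    Returns the best-matching address and its SAD cost; the address is
    what would go into the code information 114."""
    bh, bw = len(block), len(block[0])
    h, w = len(first_image), len(first_image[0])
    best_addr, best_sad = None, float("inf")
    for y in range(h - bh + 1):
        # Restrict x to [shift - search_range, shift + search_range].
        for x in range(max(0, shift - search_range),
                       min(w - bw, shift + search_range) + 1):
            sad = sum(abs(first_image[y + j][x + i] - block[j][i])
                      for j in range(bh) for i in range(bw))
            if sad < best_sad:
                best_addr, best_sad = (x, y), sad
    return best_addr, best_sad
```

Shrinking `search_range` via the deviation information is exactly what cuts the encoding time in the second encoding processing system 20.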

  Thereafter, in step S407, the second encoding processing system 20 encodes the generated code information 114 and stores the encoded code information 114 in the recording medium 16 via the fourth buffer memory 64.

  Thereafter, in step S408, the second encoding processing system 20 performs compression encoding of the remaining partial image in the second image 94, and stores the compression-encoded partial image in the recording medium 16 via the fourth buffer memory 64.

  Then, the processing from step S405 to step S408 is repeated until shooting is completed (step S409).

  In the second encoding device 10B, since the deviation information is set based on the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 and on the convergence angle, complication of the hardware configuration and of the computer program can be avoided.

  In the example of FIG. 18, the search range is set based on the detection result of the optical system information at the start of shooting (steps S401 to S404), and thereafter the compression encoding process in the first encoding processing system 18 (step S405) and the compression encoding process in the second encoding processing system 20 (steps S406 to S408) are performed until the end of shooting. However, the setting of the search range, the compression encoding process in the first encoding processing system 18, and the compression encoding process in the second encoding processing system 20 may instead be performed by a multitask method. In that case, the deviation information can be set in accordance with changes of the subject, so the time required for encoding in the second encoding processing system 20 can be shortened, and the time required for decoding in the decoding apparatus can be shortened accordingly.

  Next, an encoding apparatus according to a third embodiment (hereinafter referred to as a third encoding apparatus 10C) will be described with reference to FIGS.

  As shown in FIG. 19, the third encoding device 10C has substantially the same configuration as the first encoding device 10A described above, but differs in that it includes a distance measuring circuit 190 that measures the distance (depth) to the subject, and a second deviation information setting circuit 192 that sets deviation information based on the distance data from the distance measuring circuit 190 and stores the deviation information in the storage unit 90.

  The distance measuring circuit 190 measures the distance (depth) to the subject by irradiating the subject with light such as infrared rays and measuring the time taken for the light to be reflected and return.

  The second deviation information setting circuit 192 sets the deviation amount between the first image 54 and the second image 94 smaller as the depth becomes larger. Further, the second deviation information setting circuit 192 may set the deviation amount based on the depth and on the arrangement and interval (interval between optical axes) of the first camera 12 and the second camera 14. In this case, an information table may be created in advance and the deviation amount set on its basis, or the deviation amount may be obtained by calculation.

  Detection of the distance to the subject by the distance measuring circuit 190 may be performed only once at the start of shooting, every frame period, or every several frame periods. In accordance with the distance detection timing in the distance measuring circuit 190, the second deviation information setting circuit 192 may set the deviation information only once at the start of shooting, or may reset it every frame period or every several frame periods. The distance data from the distance measuring circuit 190 may be distance data for each macroblock (a distance image in macroblock units) or for each pixel (a distance image in pixel units). In that case, the deviation amount may be set based on the average value of the distance data in the central portion of the distance image, or on the nearest distance data.
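  The rule that the deviation shrinks as the depth grows matches the standard stereo relation disparity = f·B/Z. A sketch under that assumption; the focal length in pixels and the baseline are hypothetical parameters, not values given in the text:

```python
def deviation_from_depth(depth_mm, baseline_mm, focal_px):
    """Estimate the inter-image deviation (disparity, in pixels) from the
    subject depth using disparity = f * B / Z: the deviation becomes
    smaller as the depth becomes larger, as the text requires."""
    if depth_mm <= 0:
        raise ValueError("depth must be positive")
    return focal_px * baseline_mm / depth_mm

def deviation_from_depth_map(depth_map_mm, baseline_mm, focal_px):
    """Variant for per-macroblock distance data: use the nearest (smallest)
    depth so the search window covers the largest expected deviation."""
    return deviation_from_depth(min(depth_map_mm), baseline_mm, focal_px)
```

Either function plays the role of the calculation alternative to the information table mentioned above.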

  Here, a basic processing operation of the third encoding device 10C will be described with reference to FIG.

  As shown in FIG. 20, first, in step S501, the distance measuring circuit 190 measures the distance (depth) to the subject in units of one frame or several frames.

  Thereafter, in step S502, the second deviation information setting circuit 192 sets deviation information based on the distance data from the distance measuring circuit 190 and stores it in the storage unit 90.

  Thereafter, in step S503, the search range changing circuit 92 of the second encoding processing system 20 reads the deviation information from the storage unit 90.

  Thereafter, in step S504, the search range changing circuit 92 sets the search range of the motion vector of the second image 94 based on the read deviation information.

  In step S505, the first encoding processing system 18 performs compression encoding of the first image 54 and stores the compression-encoded first image 54 in the recording medium 16 via the second buffer memory 28.

  On the other hand, in step S506, the code information creation circuit 106 creates code information 114. That is, a portion that matches or substantially matches the first image 54 in the changed search range is extracted, and code information 114 is created based on the address information and the like of the extracted portion.

  Thereafter, in step S507, the second encoding processing system 20 encodes the generated code information 114, and stores the encoded code information 114 in the recording medium 16 via the fourth buffer memory 64.

  Thereafter, in step S508, the second encoding processing system 20 performs compression encoding of the remaining partial image in the second image 94, and stores the compression-encoded partial image in the recording medium 16 via the fourth buffer memory 64.

  Then, the processing from step S501 to step S508 is repeated until shooting is completed.

  In the third encoding device 10C, since the deviation amount is set for each frame or every several frames based on the distance data from the distance measuring circuit 190, the deviation information can be set in accordance with changes of the subject. This makes it possible to shorten the time required for encoding in the second encoding processing system 20, and also the time required for decoding in the decoding apparatus.

  In the example of FIG. 20, the setting of the search range based on the distance to the subject (steps S501 to S504), the compression encoding process in the first encoding processing system 18 (step S505), and the compression encoding process in the second encoding processing system 20 (steps S506 to S508) are performed in sequence. However, the setting of the search range based on the distance to the subject, the compression encoding process in the first encoding processing system 18, and the compression encoding process in the second encoding processing system 20 may instead be performed by a multitask method.

  Next, the digital camera 200 provided with the function of the second encoding device 10B or the third encoding device 10C and the function of the decoding device 124 will be described with reference to FIG.

  As shown in FIG. 21, the digital camera 200 according to the present embodiment includes the first camera 12, the second camera 14, a first photometry / ranging CPU 202, a second photometry / ranging CPU 204, a first aperture control circuit 206, a second aperture control circuit 208, a first strobe / light device 210, a second strobe / light device 212, a first charging / light emission control circuit 214, a second charging / light emission control circuit 216, a first YC processing circuit 218, a second YC processing circuit 220, an EEPROM 222, an operation panel 224, a two-dimensional / three-dimensional mode flag 226, and a power supply circuit 228.

  Further, the digital camera 200 includes a camera interval / convergence angle processing circuit 230, a first storage circuit 232, a distance measurement processing circuit 234, a second storage circuit 236, an image recording processing circuit 238, an image display processing circuit 240, a first compression / decompression circuit 242, a first work memory 244, a second compression / decompression circuit 246, and a second work memory 248.

  These various circuits are controlled by the main CPU 250 through the system bus 252 and various control buses.

  The first camera 12 includes a first optical system 254 having a lens, a focus control mechanism, an aperture control mechanism, and the like; a first CCD image sensor 256 that captures an image of a subject incident through the first optical system 254; a first timing generator 258 that supplies timing pulses for imaging to the first CCD image sensor 256; a first A/D converter 260 that digitizes the image signal from the first CCD image sensor 256; a first correction circuit 262 that applies white balance and γ correction to the digital data from the first A/D converter 260; and an eleventh buffer memory 264 that holds the image data corrected by the first correction circuit 262 in units of, for example, one frame.

  Similarly, the second camera 14 includes a second optical system 266 having a lens, a focus control mechanism, an aperture control mechanism, and the like; a second CCD image sensor 268 that captures an image of a subject incident through the second optical system 266; a second timing generator 270 that supplies timing pulses for imaging to the second CCD image sensor 268; a second A/D converter 272 that digitizes the image signal from the second CCD image sensor 268; a second correction circuit 274 that applies white balance and γ correction to the digital data from the second A/D converter 272; and a twelfth buffer memory 276 that holds the image data corrected by the second correction circuit 274 in units of, for example, one frame.

  The first photometry / ranging CPU 202 controls the first aperture control circuit 206, the focusing of the first optical system 254, and the first charging / light emission control circuit 214 based on the control signal from the main CPU 250; the second photometry / ranging CPU 204 controls the second aperture control circuit 208, the focusing of the second optical system 266, and the second charging / light emission control circuit 216 based on the control signal from the main CPU 250.

  The first aperture control circuit 206 controls the lens aperture of the first optical system 254 based on the control signal from the first photometry / ranging CPU 202, and the second aperture control circuit 208 controls the lens aperture of the second optical system 266 based on the control signal from the second photometry / ranging CPU 204.

  The first charging / light emission control circuit 214 controls charging and light emission of the first strobe / light device 210 based on the control signal from the first photometry / ranging CPU 202, and the second charging / light emission control circuit 216 Based on a control signal from the second photometry / ranging CPU 204, charging and light emission of the second strobe / light device 212 are controlled.

  The first YC processing circuit 218 reads the imaging data from the eleventh buffer memory 264, converts it into YC image data (Y: luminance signal, C: color difference signal), and stores it again in the eleventh buffer memory 264; the second YC processing circuit 220 reads the imaging data from the twelfth buffer memory 276, converts it into YC image data in the same manner, and stores it again in the twelfth buffer memory 276.

  The EEPROM 222 stores programs that operate at startup and parameters necessary for the operation of various programs.

  The operation panel 224 includes at least a moving image release 278, a still image release 280, a liquid crystal display unit 282 for operation guidance, an operation unit 284 for controlling the optical system, various switches 286 such as a mode dial, a cross key, and a power switch, and a two-dimensional / three-dimensional switch 288.

  The two-dimensional / three-dimensional mode flag 226 is a flag indicating whether a two-dimensional image can be captured or a three-dimensional image can be captured, and is set based on the two-dimensional / three-dimensional switch 288.

  The camera interval / convergence angle processing circuit 230 includes a first camera detection circuit 290, a second camera detection circuit 292, a camera control circuit 294, a first camera drive circuit 296, and a second camera drive circuit 298.

  The first camera detection circuit 290 detects the installation position and installation angle of the first camera 12, and the second camera detection circuit 292 detects the installation position and installation angle of the second camera 14.

  The first camera drive circuit 296 drives the first camera 12 based on the control signal from the camera control circuit 294, and the second camera drive circuit 298 drives the second camera 14 based on the control signal from the camera control circuit 294.

  The camera control circuit 294 drive-controls the first camera drive circuit 296 based on the detection information from the first camera detection circuit 290 and the position information from the main CPU 250 (the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14, and the convergence angle between the first camera 12 and the second camera 14), thereby feedback-controlling the installation position and installation angle of the first camera 12. Similarly, it drive-controls the second camera drive circuit 298 based on the detection information from the second camera detection circuit 292 and the position information from the main CPU 250, thereby feedback-controlling the installation position and installation angle of the second camera 14. The position information from the main CPU 250 is stored in the first storage circuit 232.

  The ranging processing circuit 234 includes a first ranging light emitting element 300, a second ranging light emitting element 302, a first ranging imaging element 304, a second ranging imaging element 306, a ranging control circuit 308 that controls each of these elements, a third A/D converter 310 that digitizes the imaging signal from the first ranging imaging element 304, a fourth A/D converter 312 that digitizes the imaging signal from the second ranging imaging element 306, and a distance calculation circuit 314 that calculates distance information to the subject based on the image data from the third A/D converter 310 and the image data from the fourth A/D converter 312. The distance information obtained by the distance calculation circuit 314 is stored in the second storage circuit 236.

  Infrared rays are emitted from the first ranging light emitting element 300 and the second ranging light emitting element 302, and the infrared rays reflected by the subject are received by the first ranging imaging element 304 and the second ranging imaging element 306. By measuring the time from when the infrared rays are emitted until the reflected infrared rays are received, the distance to the subject can be obtained.
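  The time-of-flight measurement described here reduces to halving the round trip at the speed of light; a minimal sketch:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(elapsed_s):
    """Subject distance (in metres) from the time between emitting the
    infrared pulse and receiving its reflection: the pulse covers the
    camera-to-subject distance twice, so the round trip is halved."""
    return SPEED_OF_LIGHT_M_S * elapsed_s / 2.0
```

For example, a round trip of 20 ns corresponds to a subject roughly 3 m away, which indicates the timing resolution such a circuit must achieve.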

  The image recording processing circuit 238 includes a thirteenth buffer memory 316 for temporarily storing encoded data, a first controller 318, an interface 320, and a memory card 322.

  The first controller 318 writes the encoded data from the first compression / decompression circuit 242 and the encoded data from the second compression / decompression circuit 246 to the thirteenth buffer memory 316 at a predetermined transfer rate, and reads the stored encoded data at a predetermined transfer rate, transferring it to the system bus 252 or to the memory card 322 via the interface 320.

  The image display processing circuit 240 includes a fourteenth buffer memory 324 for temporarily storing decoded data, a second controller 326, an image data conversion circuit 328, a display drive circuit 330, and a liquid crystal display device 332.

  The second controller 326 writes the decoded data from the first compression / decompression circuit 242 and the second compression / decompression circuit 246 to the fourteenth buffer memory 324 at a predetermined transfer rate, or reads the decoded data stored in the fourteenth buffer memory 324 at a predetermined transfer rate and transfers it to the image data conversion circuit 328.

  The image data conversion circuit 328 converts the decoded data (YC image data) transferred via the fourteenth buffer memory 324 into RGB image data. The RGB image data is sent to the liquid crystal display device 332 via the display drive circuit 330, whereby a two-dimensional image or a three-dimensional image of the subject is displayed on the liquid crystal display device 332.
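  The patent does not specify which YC-to-RGB mapping the image data conversion circuit 328 uses; a common choice for 8-bit full-range data is the ITU-R BT.601 conversion, sketched here for a single pixel (the coefficients are from that standard, not from the text):

```python
def yc_to_rgb(y, cb, cr):
    """Convert one YC pixel (Y: luminance, Cb/Cr: colour difference,
    8-bit values with chroma centred on 128) to RGB using the ITU-R
    BT.601 full-range coefficients."""
    def clamp(v):
        # Keep each channel inside the 8-bit display range.
        return max(0, min(255, round(v)))

    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return clamp(r), clamp(g), clamp(b)
```

A neutral pixel (Cb = Cr = 128) maps straight to grey, which is a quick sanity check for any implementation of this stage.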

  In addition, although the example described above uses the two-dimensional / three-dimensional switch 288 for switching between the two-dimensional image and the three-dimensional image, the switching may instead be performed by an input to a menu screen displayed on the liquid crystal display device 332 or on the liquid crystal display unit 282 for operation guidance.

  The adjustment of the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 and of the convergence angle may be selected from the menu screen, with the camera control circuit 294 driving the first camera drive circuit 296 and the second camera drive circuit 298 for adjustment in response to input operations of the cross key or a dedicated key on the operation panel 224. Of course, the adjustment may also be performed manually.

  The interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 and the convergence angle may be detected by a dedicated sensor, by counting pulses with an optical sensor from a reference position, or by the number of pulses sent to a stepping motor.

  The first ranging imaging element 304 and the second ranging imaging element 306 may be of a type in which photodiodes are arranged two-dimensionally, a type in which photodiodes are arranged in a line, or a type provided with a single photodiode.

  In the above example, two systems for distance detection are provided, but one system may be used.

  In the above example, the main CPU 250, the first photometry / ranging CPU 202, and the second photometry / ranging CPU 204 are provided in order to reduce the load on each CPU, but a single CPU may perform the control instead.

  Switching between moving images and still images is realized by selectively operating the moving image release 278 and the still image release 280; however, a single release may be used instead, with the moving image mode and still image mode selected by an input operation on the menu screen.

  The digital camera 200 is capable of capturing a two-dimensional image of a subject in addition to capturing a three-dimensional image, and supports recording and playback of moving images, still images, and audio.

  Further, instead of the liquid crystal display device 332, an organic EL (electroluminescence) display or the like may be used.

  Next, the processing operation of the digital camera 200 according to the present embodiment, in particular, the first processing operation including the compression encoding processing based on the second encoding device 10B will be described with reference to FIG. The compression encoding process based on the second encoding device 10B will be described with reference to the components in FIG.

  First, in step S601 in FIG. 22, the main CPU 250 determines whether or not it is in the three-dimensional mode. This determination is made based on the contents of the two-dimensional / three-dimensional mode flag 226. If it is the three-dimensional mode, the process proceeds to the next step S602, and if it is the two-dimensional mode, the control is transferred to the processing of the two-dimensional mode.

  In step S602, the main CPU 250 determines whether or not the moving image mode is set. This determination is made based on whether or not the movie release 278 has been operated. If it is the moving image mode, the process proceeds to the next step S603, and if it is the still image mode, the control is transferred to the processing of the still image mode.

  In step S603, the main CPU 250 sets shooting conditions for the optical system. In this setting, for example, the aperture and the convergence angle of the first camera 12 and the second camera 14 are set based on various parameters relating to the shooting conditions registered in the EEPROM 222.

  In step S604, the main CPU 250 waits for an operation input for starting shooting. When there is an operation input, the main CPU 250 performs processing according to the compression encoding processing of the second encoding device 10B.

  That is, in step S605, the main CPU 250 reads, from the first storage circuit 232, the interval between the optical axis of the first camera 12 and the optical axis of the second camera 14 and the convergence angle between the first camera 12 and the second camera 14.

  Thereafter, in step S606, the main CPU 250 sets deviation information based on at least one of the read interval and the convergence angle and records it in the first work memory 244.

  Thereafter, in step S607, the search range changing circuit 92 of the second encoding processing system 20 in the second compression / decompression circuit 246 reads the deviation information from the first work memory 244.

  Thereafter, in step S608, the search range changing circuit 92 sets the search range of the motion vector of the second image 94 based on the read deviation information.

  On the other hand, in step S609, the first encoding processing system 18 in the first compression / decompression circuit 242 performs compression encoding of the first image 54 and transfers the compression-encoded first image 54 to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression-encoded data of the first image 54 transferred from the first compression / decompression circuit 242 in the memory card 322 via the thirteenth buffer memory 316.

  On the other hand, in step S610, the code information creation circuit 106 in the second compression / decompression circuit 246 creates the code information 114. That is, a portion that matches or substantially matches the first image 54 in the changed search range is extracted, and code information 114 is created based on the address information and the like of the extracted portion.

  Thereafter, in step S611, the second encoding processing system 20 in the second compression / decompression circuit 246 encodes the generated code information 114 and transfers the encoded code information 114 to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression encoded data of the code information 114 transferred from the second compression / decompression circuit 246 in the memory card 322 via the thirteenth buffer memory 316.

  Thereafter, in step S612, the second encoding processing system 20 in the second compression / decompression circuit 246 performs compression encoding of the remaining partial image in the second image 94 and transfers the compression-encoded partial image to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression-encoded data of the partial image transferred from the second compression / decompression circuit 246 in the memory card 322 via the thirteenth buffer memory 316.

  In the next step S613, the main CPU 250 determines whether or not the photographing is finished. This determination is made based on whether or not there is an operation input indicating the end of shooting.

  If the photographing is not finished, the process returns to step S609, and the processes after step S609 are repeated.

  If the shooting is finished, the process proceeds to step S614, and the main CPU 250 stores the management data of the current moving image in the memory card 322.

  When the process in step S614 ends, the first processing operation ends.

  Next, the processing operation of the digital camera 200 according to the present embodiment, particularly the second processing operation including the compression encoding processing based on the third encoding device 10C will be described with reference to FIG. The compression encoding process based on the third encoding apparatus 10C will be described with reference to the components in FIG.

  First, in step S701 in FIG. 23, the main CPU 250 determines whether or not it is in the three-dimensional mode. This determination is made based on the contents of the two-dimensional / three-dimensional mode flag 226. If it is the three-dimensional mode, the process proceeds to the next step S702, and if it is the two-dimensional mode, the control is transferred to the processing of the two-dimensional mode.

  In step S702, the main CPU 250 determines whether or not the moving image mode is set. This determination is made based on whether or not the movie release 278 has been operated. If it is the moving image mode, the process proceeds to the next step S703, and if it is the still image mode, the control is transferred to the processing of the still image mode.

  In step S703, the main CPU 250 sets shooting conditions for the optical system. In this setting, for example, the aperture and the convergence angle of the first camera 12 and the second camera 14 are set based on various parameters relating to the shooting conditions registered in the EEPROM 222.

  In step S704, the main CPU 250 waits for an operation input to start shooting. When there is an operation input, the main CPU 250 performs a process according to the compression encoding process of the third encoding apparatus 10C.

  That is, in step S705, the main CPU 250 reads the distance data to the subject from the second storage circuit 236.

  Thereafter, in step S706, the main CPU 250 sets deviation information based on the read distance data to the subject and records it in the first work memory 244.

  Thereafter, in step S707, the search range changing circuit 92 of the second encoding processing system 20 in the second compression / decompression circuit 246 reads the deviation information from the first work memory 244.

  Thereafter, in step S708, the search range changing circuit 92 sets the search range of the motion vector of the second image 94 based on the read deviation information.

  On the other hand, in step S709, the first encoding processing system 18 in the first compression / decompression circuit 242 performs compression encoding of the first image 54 and transfers the compression-encoded first image 54 to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression-encoded data of the first image 54 transferred from the first compression / decompression circuit 242 in the memory card 322 via the thirteenth buffer memory 316.

  On the other hand, in step S710, the code information creation circuit 106 in the second compression / decompression circuit 246 creates the code information 114. That is, a portion of the second image 94 that matches or substantially matches the first image 54 within the changed search range is extracted, and the code information 114 is created based on the address information and the like of the extracted portion.
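The matching pass of step S710 can be sketched as a block search that, when a (near-)match is found within the changed search range, emits only the address of the matched block of the first image instead of pixel data. The sum-of-absolute-differences criterion, the threshold, and all names below are illustrative assumptions; the patent does not specify the matching metric.

```python
def create_code_info(block2, first_image_blocks, threshold=8):
    """For one macroblock of the second image, search candidate blocks
    of the first image (pairs of (address, pixel_list)); return code
    information holding the best-matching block's address, or None if
    no candidate is close enough and the block must be encoded as a
    partial image instead."""
    def sad(a, b):
        # Sum of absolute differences between two equal-length blocks.
        return sum(abs(x - y) for x, y in zip(a, b))

    best_addr, best_cost = None, None
    for addr, block1 in first_image_blocks:
        cost = sad(block2, block1)
        if best_cost is None or cost < best_cost:
            best_addr, best_cost = addr, cost
    # "Substantially matches": average per-pixel error within threshold.
    if best_cost is not None and best_cost <= threshold * len(block2):
        return {"type": "code_info", "address": best_addr}
    return None
```

Blocks for which this returns None are the "remaining partial images" that step S712 compression-encodes in the ordinary way.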

  Thereafter, in step S711, the second encoding processing system 20 in the second compression / decompression circuit 246 encodes the generated code information 114 and transfers the encoded code information 114 to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression encoded data of the code information 114 transferred from the second compression / decompression circuit 246 in the memory card 322 via the thirteenth buffer memory 316.

  Thereafter, in step S712, the second encoding processing system 20 in the second compression / decompression circuit 246 compression-encodes the remaining partial images in the second image 94 and transfers the compression-encoded partial images to the image recording processing circuit 238. The image recording processing circuit 238 stores the compression-encoded data of the partial images transferred from the second compression / decompression circuit 246 in the memory card 322 via the thirteenth buffer memory 316.

  In the next step S713, the main CPU 250 determines whether or not the photographing is finished. This determination is made based on whether or not there is an operation input indicating the end of shooting.

  If the photographing is not finished, the process returns to step S705, and the processes after step S705 are repeated.

  If the shooting is finished, the process proceeds to step S714, and the main CPU 250 stores the management data of the current moving image in the memory card 322.

  When the processing in step S714 is completed, the second processing operation ends.

  Next, the processing operation of the digital camera 200 according to the present embodiment, in particular the third processing operation including the decompression decoding process by the decoding device 124, will be described with reference to FIG. The decompression decoding process by the decoding device 124 will be described with reference to the components shown in FIG.

  First, in step S801 in FIG. 24, the main CPU 250 waits for completion of image selection. This image selection is performed by displaying, on the liquid crystal display device 332, thumbnail images of a plurality of moving images that have been compression-encoded and recorded on the memory card 322, from which the operator selects one. When an image is selected, the process advances to step S802, and the main CPU 250 reads management information relating to the selected image (such as an ID code and a storage address on the recording medium) from, for example, the EEPROM 222.

  Thereafter, in step S803, it is determined whether or not the three-dimensional mode is set. This determination is made based on the contents of the three-dimensional / two-dimensional mode flag 226. If the three-dimensional mode is set, the process proceeds to step S804; if the two-dimensional mode is set, control is transferred to the two-dimensional mode processing.

  In step S804, the main CPU 250 determines whether or not the moving image mode is set. This determination is made based on whether or not the movie release 278 has been operated. If the moving image mode is set, the process proceeds to step S805; if the still image mode is set, control is transferred to the still image mode processing.

  From step S805 onward, the main CPU 250 performs processing corresponding to the decompression decoding process of the decoding device 124.

  That is, in step S805, the first decoding processing system 126 in the first compression / decompression circuit 242 decompresses and decodes the first image 54 and transfers the restored first image 54 to the image display processing circuit 240. The image display processing circuit 240 outputs the restored data of the first image 54 transferred from the first compression / decompression circuit 242 to the liquid crystal display device 332 via the image data conversion circuit 328 and the display drive circuit 330, whereby the first image 54 is displayed on the screen of the liquid crystal display device 332.

  Thereafter, in step S806, the second decoding processing system 128 in the second compression / decompression circuit 246 reads out the code information 114 included in the encoded data of the second image 94 and transmits it to the code information processing circuit 156.

  Thereafter, in step S807, the code information processing circuit 156 restores a part of the second image 94, using the first image 54 restored by the first decoding circuit 132, based on the address code included in the code information 114.

  Thereafter, in step S808, the second decoding circuit 152 and the fourth motion compensation circuit 154 in the second compression / decompression circuit 246 perform decompression decoding of the partial image of the second image 94.

  Thereafter, in step S809, the synthesizing circuit 158 in the second compression / decompression circuit 246 restores the second image 94 by synthesizing the part of the second image 94 restored based on the code information 114 with the partial images restored by the second decoding circuit 152 and the fourth motion compensation circuit 154, and transfers the second image 94 to the image display processing circuit 240. The image display processing circuit 240 outputs the restored data of the second image 94 transferred from the second compression / decompression circuit 246 to the liquid crystal display device 332 via the image data conversion circuit 328 and the display drive circuit 330, whereby the second image 94 is displayed on the screen of the liquid crystal display device 332.
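The restoration of steps S807 through S809 can be sketched as follows: encoded blocks carrying code information 114 are filled in by copying the addressed block from the restored first image 54, while the remaining blocks are decoded in the ordinary way, and the two sets are merged into the complete second image 94. The dictionary representation and all names here are illustrative assumptions.

```python
def restore_second_image(encoded_blocks, first_image_blocks, decode_partial):
    """Rebuild the second image from its encoded blocks.

    encoded_blocks: {address: entry}, where an entry is either a
      code-information dict {"type": "code_info", "address": ...}
      or ordinary compressed data for that block.
    first_image_blocks: {address: decoded block} of the restored
      first image.
    decode_partial: function that decompresses an ordinary entry.
    """
    restored = {}
    for addr, entry in encoded_blocks.items():
        if isinstance(entry, dict) and entry.get("type") == "code_info":
            # Code information: copy the addressed first-image block.
            restored[addr] = first_image_blocks[entry["address"]]
        else:
            # Partial image: decode through the normal path.
            restored[addr] = decode_partial(entry)
    return restored
```

This is why the second image can be restored only after the first image: the code-information blocks have no pixel data of their own.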

  In the next step S810, the main CPU 250 determines whether or not the reproduction is finished. This determination is made based on whether or not there is an operation input indicating the end of reproduction.

  If the reproduction is not finished, the process proceeds to step S811, and the main CPU 250 determines whether or not the currently reproduced image data has reached its end (EOD: End of Data). If the image data has not ended, the process returns to step S805, and the processes after step S805 are repeated.

  When the reproduction is finished or the end of the image data is reached, the process proceeds to step S812, where reproduction end processing such as displaying thumbnail images on the liquid crystal display device 332 is performed, and this third processing operation ends.

  Since the digital camera according to the present embodiment has the same functions as the second encoding device, the third encoding device, and the decoding device, which can speed up encoding and decoding processing, reduce power consumption, and reduce the amount of data after encoding, the camera achieves faster encoding and decoding of the three-dimensional or multi-viewpoint images it processes, lower power consumption, and a reduced amount of data after encoding.

  Note that the image encoding device, the image decoding device, the image encoding method, the image decoding method, the imaging apparatus, the imaging apparatus control method, and the program according to the present invention are not limited to the above-described embodiments, and it goes without saying that various configurations can be adopted without departing from the gist of the present invention.

Brief Description of the Drawings

A block diagram showing the configuration of a first encoding device.
A block diagram showing the configuration of a first encoding processing system.
An explanatory diagram showing the operation of a first motion compensation circuit.
A block diagram showing the configuration of a second encoding processing system.
An explanatory diagram showing the horizontal shift between a first image and a second image when a first camera and a second camera are arranged side by side along the horizontal direction.
An explanatory diagram showing the vertical shift between the first image and the second image when the first camera and the second camera are arranged side by side along the vertical direction.
An explanatory diagram showing the horizontal and vertical shifts among four images when four cameras are arranged in a matrix.
An explanatory diagram showing the breakdown of code information.
A flowchart showing the basic processing operation of the first encoding device.
A flowchart showing the specific processing operation of the first encoding processing system in the first encoding device.
A flowchart (part 1) showing the specific processing operation of the second encoding processing system in the first encoding device.
A flowchart (part 2) showing the specific processing operation of the second encoding processing system in the first encoding device.
A block diagram showing the configuration of a decoding device.
A block diagram showing the configuration of a first decoding processing system.
A block diagram showing the configuration of a second decoding processing system.
A flowchart showing the basic processing operation of the decoding device.
A block diagram showing the configuration of a second encoding device.
A flowchart showing the basic processing operation of the second encoding device.
A block diagram showing the configuration of a third encoding device.
A flowchart showing the basic processing operation of the third encoding device.
A block diagram showing the configuration of the digital camera according to the present embodiment.
A flowchart showing a first processing operation of the digital camera according to the present embodiment, in particular including the compression encoding process by the second encoding device.
A flowchart showing a second processing operation of the digital camera according to the present embodiment, in particular including the compression encoding process by the third encoding device.
A flowchart showing a third processing operation of the digital camera according to the present embodiment, in particular including the decompression decoding process by the decoding device.

Explanation of symbols

10A to 10C ... first to third encoding devices
12 ... first camera
14 ... second camera
16 ... recording medium
18 ... first encoding processing system
20 ... second encoding processing system
24 ... first encoding circuit
26 ... first motion compensation circuit
52 ... first motion compensation predictor
54, 54a to 54c ... first image
60 ... second encoding circuit
62 ... second motion compensation circuit
88 ... second motion compensation predictor
90 ... storage unit
92 ... search range changing circuit
94 ... second image
96 ... deviation
100 ... first extraction circuit
102 ... second extraction circuit
104 ... comparison circuit
106 ... code information creation circuit
108 ... code information transfer circuit
110 ... compression processing table creation circuit
112 ... third extraction circuit
114 ... code information
124 ... decoding device
126 ... first decoding processing system
128 ... second decoding processing system
132 ... first decoding circuit
134 ... third motion compensation circuit
148 ... third motion compensation predictor
152 ... second decoding circuit
154 ... fourth motion compensation circuit
156 ... code information processing circuit
158 ... synthesis circuit
172 ... fourth motion compensation predictor
186 ... optical system information detection circuit
188 ... first deviation information setting circuit
190 ... ranging circuit
192 ... second deviation information setting circuit
200 ... digital camera
230 ... camera interval / convergence angle processing circuit
234 ... ranging processing circuit
238 ... image recording processing circuit
240 ... image display processing circuit
242 ... first compression / decompression circuit
246 ... second compression / decompression circuit
250 ... main CPU
322 ... memory card
332 ... liquid crystal display device

Claims (22)

  1. An image encoding apparatus comprising:
    a storage unit that stores deviation information of a second image with respect to a first image;
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  2. The image encoding apparatus according to claim 1, further comprising:
    means for setting the deviation information based on at least one of a convergence angle between a first camera that captures a subject to obtain the first image and a second camera that captures the subject to obtain the second image, and an interval between an optical axis of the first camera and an optical axis of the second camera, and storing the deviation information in the storage unit.
  3. The image encoding apparatus according to claim 1 or 2, wherein the second motion compensation means includes:
    comparison means for sequentially comparing a plurality of macroblocks included in the search range changed by the search range changing means with a plurality of macroblocks included in the first image; and
    output means for outputting, when a comparison result from the comparison means indicates a match or substantial match, code information indicating the compared macroblock of the first image in place of the macroblock, among the plurality of macroblocks of the second image, that is the subject of the comparison result.
  4. The image encoding apparatus according to claim 1, further comprising:
    means for setting the deviation information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing the deviation information in the storage unit,
    wherein the search range changing means changes a search range of a motion vector for each macroblock of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  5. The image encoding apparatus according to claim 1 or 4, wherein the second motion compensation means includes:
    comparison means for comparing a macroblock included in the search range changed by the search range changing means with a macroblock included in the first image; and
    output means for outputting, when a comparison result from the comparison means indicates a match or substantial match, code information indicating the compared macroblock of the first image in place of the macroblock, among the plurality of macroblocks of the second image, that is the subject of the comparison result.
  6. An image decoding apparatus comprising:
    first restoration means for decoding encoded data of a first image to restore the first image; and
    second restoration means for decoding encoded data of a second image, the encoded data including code information associated based on deviation information of the second image with respect to the first image, to restore the second image,
    wherein the second restoration means includes association means for associating the encoded data of the second image with the restored first image based on the code information.
  7. The image decoding apparatus according to claim 6, wherein the association means uses decoded data of a macroblock of the first image as decoded data of a macroblock, among the encoded data of the second image, in which code information indicating the macroblock of the first image is recorded.
  8. An imaging apparatus comprising:
    a first camera that captures a subject to obtain a first image;
    a second camera that captures the subject to obtain a second image;
    means for setting deviation information of the second image with respect to the first image based on at least one of a convergence angle between the first camera and the second camera and an interval between an optical axis of the first camera and an optical axis of the second camera, and storing the deviation information in a storage unit;
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  9. An imaging apparatus comprising:
    a first camera that captures a subject to obtain a first image;
    a second camera that captures the subject to obtain a second image;
    means for setting deviation information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing the deviation information in a storage unit;
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector for each macroblock of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  10. An image encoding method comprising:
    a first motion compensation step of performing motion compensation prediction of a first image;
    a second motion compensation step of performing motion compensation prediction of a second image; and
    a search range changing step of changing, prior to the second motion compensation step, a search range of a motion vector of the second image in the second motion compensation step based on deviation information of the second image with respect to the first image stored in a storage unit.
  11. The image encoding method according to claim 10, further comprising:
    setting the deviation information based on at least one of a convergence angle between a first camera that captures a subject to obtain the first image and a second camera that captures the subject to obtain the second image, and an interval between an optical axis of the first camera and an optical axis of the second camera, and storing the deviation information in the storage unit.
  12. The image encoding method according to claim 10 or 11, wherein the second motion compensation step includes:
    a comparison step of sequentially comparing a plurality of macroblocks included in the search range changed in the search range changing step with a plurality of macroblocks included in the first image; and
    an output step of outputting, when a comparison result from the comparison step indicates a match or substantial match, code information indicating the compared macroblock of the first image in place of the macroblock, among the plurality of macroblocks of the second image, that is the subject of the comparison result.
  13. The image encoding method according to claim 10, further comprising:
    setting deviation information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing the deviation information in the storage unit,
    wherein the search range changing step changes a search range of a motion vector for each macroblock of the second image in the second motion compensation step based on the deviation information stored in the storage unit.
  14. The image encoding method according to claim 10 or 13, wherein the second motion compensation step includes:
    a comparison step of comparing a macroblock included in the search range changed in the search range changing step with a macroblock included in the first image; and
    an output step of outputting, when a comparison result from the comparison step indicates a match or substantial match, code information indicating the compared macroblock of the first image in place of the macroblock, among the plurality of macroblocks of the second image, that is the subject of the comparison result.
  15. An image decoding method comprising:
    a first restoration step of decoding encoded data of a first image to restore the first image; and
    a second restoration step of decoding encoded data of a second image, the encoded data including code information associated based on deviation information of the second image with respect to the first image, to restore the second image,
    wherein the second restoration step includes an association step of associating the encoded data of the second image with the restored first image based on the code information.
  16. The image decoding method according to claim 15, wherein the association step uses decoded data of a macroblock of the first image as decoded data of a macroblock, among the encoded data of the second image, in which code information indicating the macroblock of the first image is recorded.
  17. A control method of an imaging apparatus having a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image, the method comprising:
    setting deviation information of the second image with respect to the first image based on at least one of a convergence angle between the first camera and the second camera and an interval between an optical axis of the first camera and an optical axis of the second camera, and storing the deviation information in a storage unit;
    a first motion compensation step of performing motion compensation prediction of the first image;
    a second motion compensation step of performing motion compensation prediction of the second image; and
    a search range changing step of changing, prior to the second motion compensation step, a search range of a motion vector of the second image in the second motion compensation step based on the deviation information stored in the storage unit.
  18. A control method of an imaging apparatus having a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image, the method comprising:
    setting deviation information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing the deviation information in a storage unit;
    a first motion compensation step of performing motion compensation prediction of the first image;
    a second motion compensation step of performing motion compensation prediction of the second image; and
    a search range changing step of changing, prior to the second motion compensation step, a search range of a motion vector for each macroblock of the second image in the second motion compensation step based on the deviation information stored in the storage unit.
  19. A program for causing a computer having a storage unit that stores deviation information of a second image with respect to a first image to function as:
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  20. A program for causing a computer having a storage unit that stores deviation information of a second image with respect to a first image to function as:
    first restoration means for decoding encoded data of the first image to restore the first image; and
    second restoration means for decoding encoded data of the second image to restore the second image, the second restoration means including association means for associating the encoded data of the second image with the restored first image based on the deviation information stored in the storage unit.
  21. A program for causing an imaging apparatus having a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image to function as:
    means for setting deviation information of the second image with respect to the first image based on at least one of a convergence angle between the first camera and the second camera and an interval between an optical axis of the first camera and an optical axis of the second camera, and storing the deviation information in a storage unit;
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
  22. A program for causing an imaging apparatus having a first camera that captures a subject to obtain a first image and a second camera that captures the subject to obtain a second image to function as:
    means for setting deviation information in units of macroblocks based on depth information in units of macroblocks relating to the first image and storing the deviation information in a storage unit;
    first motion compensation means for performing motion compensation prediction of the first image;
    second motion compensation means for performing motion compensation prediction of the second image; and
    search range changing means for changing a search range of a motion vector for each macroblock of the second image in the second motion compensation means based on the deviation information stored in the storage unit.
JP2007045961A 2007-02-26 2007-02-26 Image encoding apparatus, image decoding apparatus, image encoding method, image decoding method, imaging apparatus, imaging apparatus control method, and program Expired - Fee Related JP4810465B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007045961A JP4810465B2 (en) 2007-02-26 2007-02-26 Image encoding apparatus, image decoding apparatus, image encoding method, image decoding method, imaging apparatus, imaging apparatus control method, and program

Publications (2)

Publication Number Publication Date
JP2008211502A true JP2008211502A (en) 2008-09-11
JP4810465B2 JP4810465B2 (en) 2011-11-09

Family

ID=39787435

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007045961A Expired - Fee Related JP4810465B2 (en) 2007-02-26 2007-02-26 Image encoding apparatus, image decoding apparatus, image encoding method, image decoding method, imaging apparatus, imaging apparatus control method, and program

Country Status (1)

Country Link
JP (1) JP4810465B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013183200A (en) * 2012-02-29 2013-09-12 Oki Electric Ind Co Ltd Motion compensation control apparatus, motion compensation control program, and encoder
JP2014120917A (en) * 2012-12-17 2014-06-30 Fujitsu Ltd Moving image encoder, moving image encoding method and moving image encoding program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0715748A (en) * 1993-06-24 1995-01-17 Canon Inc Picture recording and reproducing device
JPH089421A (en) * 1994-06-20 1996-01-12 Sanyo Electric Co Ltd Three-dimensional imaging device
JP2000165909A (en) * 1998-11-26 2000-06-16 Sanyo Electric Co Ltd Method and device for image compressing processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JPN6011002218, 泉岡生晃, 渡辺裕, "Coding of Stereo Moving Images Using Disparity-Compensated Prediction" (視差補償予測を用いたステレオ動画像の符号化), IEICE Technical Report, Vol. 89, No. 22 (IE89 1-4), pp. 1-7, 1989-04-28, JP *




Legal Events

Date | Code | Description
2009-09-08 | A621 | Written request for application examination
2011-01-07 | A977 | Report on retrieval
2011-01-18 | A131 | Notification of reasons for refusal
2011-03-22 | A521 | Written amendment
2011-04-12 | A131 | Notification of reasons for refusal
2011-06-13 | A521 | Written amendment
— | TRDD | Decision of grant or rejection written
2011-07-26 | A01 | Written decision to grant a patent or to grant a registration (utility model)
2011-08-22 | A61 | First payment of annual fees (during grant procedure)
— | FPAY | Renewal fee payment (payment until 2014-08-26; year of fee payment: 3)
— | R150 | Certificate of patent or registration of utility model
— | LAPS | Cancellation because of no payment of annual fees