JP2006157432A - Three-dimensional photographic apparatus and photographic method of three-dimensional image


Info

Publication number
JP2006157432A
JP2006157432A (application JP2004344565A)
Authority
JP
Japan
Prior art keywords
image
image data
feature point
predetermined
stereoscopic image
Prior art date
Legal status
Pending
Application number
JP2004344565A
Other languages
Japanese (ja)
Inventor
Takashi Ebato
尚 江波戸
Original Assignee
Fuji Photo Film Co Ltd
富士写真フイルム株式会社
Priority date
Filing date
Publication date
Application filed by Fuji Photo Film Co Ltd, 富士写真フイルム株式会社 filed Critical Fuji Photo Film Co Ltd
Priority to JP2004344565A
Publication of JP2006157432A
Application status: Pending

Abstract

A three-dimensional image is obtained by correcting, based on the position of the subject in the captured images, the positional shift caused by a difference in angle of view.
A feature point position comparison unit 64 detects corresponding feature points in a predetermined area of each of the images indicated by two image data, a CPU 40 compares the positions of the detected feature points between the images and derives correction parameters for matching the positions of the corresponding feature points, and an angle-of-view distortion correction circuit 66 performs at least one of a shift process, a reduction process, or an enlargement process on at least one of the image data based on the correction parameters.
[Selection] Figure 2

Description

  The present invention relates to a stereoscopic image capturing apparatus and a stereoscopic image capturing method for capturing an image of a subject at the same time with two image capturing means and acquiring two image data indicating a stereoscopic image.

  In recent years, interest in three-dimensional stereoscopic images has increased, and a stereoscopic image capturing apparatus (hereinafter also referred to as “stereoscopic camera”) capable of performing stereoscopic imaging has been commercialized.

  In general, this type of stereoscopic camera is provided with two imaging systems for imaging a subject; because of the parallax between the two imaging systems with respect to the subject, the two images obtained by imaging simultaneously with both systems can be reproduced as a stereoscopic image.

  However, since a stereoscopic camera shoots with two imaging systems, a slight difference in angle of view arises between the two systems, and the position of the main subject image in the two images captured by them is shifted not only by the parallax but also by this difference in angle of view.

As a technique applicable to adjusting the positional shift caused by a difference in angle of view, Patent Document 1 describes the following: a stereo camera having two imaging systems captures an adjustment pattern with both systems; angle-of-view adjustment parameters such as magnification, rotational displacement, and translational displacement are calculated in advance from the positional deviation of the pattern image between the two images; and when the subject is photographed, an affine transformation based on these adjustment parameters is applied to the captured images to correct the positional shift caused by the difference in angle of view.
JP-A-10-307352

  However, a stereoscopic camera to which the technique of Patent Document 1 is applied has the problem that the adjustment pattern must be imaged in advance, which is extremely troublesome. In particular, since the two imaging systems of a stereoscopic camera are generally attached mechanically, the angle of view shifts mechanically with secular change, so the adjustment pattern must be imaged periodically; the problem is therefore serious.

  The present invention has been made to solve the above problems, and its object is to provide a stereoscopic image capturing apparatus and a stereoscopic image capturing method capable of easily correcting the positional shift of the main subject caused by a difference in angle of view.

  In order to achieve the above object, the invention described in claim 1 is a stereoscopic image capturing apparatus that simultaneously captures images of a subject with two imaging means and acquires two image data indicating a stereoscopic image, comprising: detection means for detecting corresponding feature points from a predetermined region of each image indicated by the two image data; derivation means for comparing the positions of the feature points detected by the detection means between the images and deriving correction parameters for matching the positions of the corresponding feature points; and processing means for performing at least one of shift processing, reduction processing, or enlargement processing on at least one of the image data based on the correction parameters.

  According to the first aspect of the present invention, corresponding feature points are detected from the predetermined region of each image indicated by the two image data by the detection means, the positions of the feature points detected by the detection means are compared between the images by the derivation means, correction parameters for matching the positions of the corresponding feature points are derived, and at least one of shift processing, reduction processing, or enlargement processing is performed on at least one of the image data by the processing means based on the correction parameters.

  Thus, according to the first aspect of the present invention, corresponding feature points are detected from a predetermined area of each image indicated by two image data, the positions of the detected feature points are compared between the images, correction parameters for matching the positions of the corresponding feature points are derived, and at least one of shift, reduction, or enlargement processing is performed on at least one of the image data based on the correction parameters; the positional shift of the main subject caused by a difference in angle of view can therefore be corrected easily.

  In the present invention, as in the invention described in claim 2, the detection means may define a pixel at a predetermined position within the predetermined area of one of the images indicated by the two image data as a feature point, and detect, from the predetermined area of the other image, a pixel having matching feature information, including luminance information and luminance change amount information of the feature point, as the corresponding feature point.

  In the present invention, it is preferable that the predetermined areas are provided at the four corners of each image indicated by the two image data, as in the invention described in claim 3.

  On the other hand, in order to achieve the above object, the invention described in claim 4 is a stereoscopic image capturing method in which a subject is simultaneously imaged by two imaging means and two image data indicating a stereoscopic image are acquired, wherein corresponding feature points are detected from a predetermined area of each image indicated by the two image data, the positions of the detected feature points are compared between the images, correction parameters for matching the positions of the corresponding feature points are derived, and at least one of shift processing, reduction processing, or enlargement processing is performed on at least one of the image data based on the correction parameters.

  Therefore, the stereoscopic image capturing method according to claim 4 operates in the same manner as claim 1, so that, as with the invention of claim 1, the positional shift of the main subject caused by a difference in angle of view can be corrected easily.

  Note that, as in the invention described in claim 5, the present invention may define a pixel at a predetermined position within the predetermined area of one of the images indicated by the two image data as a feature point, and detect, from the predetermined area of the other image, a pixel having matching feature information, including luminance information and luminance change amount information of the feature point, as the corresponding feature point.

  In the present invention, it is preferable that the predetermined areas are provided at four corners of each image indicated by the two image data, as in the invention described in claim 6.

  As described above, according to the present invention, corresponding feature points are detected from a predetermined region of each image indicated by two image data, the positions of the detected feature points are compared between the images, correction parameters for matching the positions of the corresponding feature points are derived, and at least one of shift processing, reduction processing, or enlargement processing is performed on at least one of the image data based on the correction parameters; the invention therefore has the excellent effect that the positional shift of the main subject caused by a difference in angle of view can be corrected easily.

  Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Here, a case where the present invention is applied to a stereoscopic image capturing system that captures a stereoscopic image using two image capturing devices will be described.

  First, with reference to FIG. 1, the structure of the three-dimensional image photographing system 10 according to the present embodiment will be described.

  As shown in the figure, the stereoscopic image capturing system 10 includes two image capturing devices 12 and 13, a support member 15 that supports the image capturing devices 12 and 13, and an image display device 14 that displays captured images.

  The image capturing devices 12 and 13 are mechanically attached to the support member 15 at a predetermined interval so that the same main subject can be photographed from different viewpoints. The predetermined interval thus provides parallax with respect to the main subject, and a stereoscopic image can be reproduced from the two images obtained by photographing.

  Each of the image capturing devices 12 and 13 includes a lens unit 21 for forming a subject image. In addition, a release button (so-called shutter button) 56A, which is pressed to perform image capture, and a power switch 56B are provided on the upper surface of each of the image capturing devices 12 and 13.

  Note that the release button 56A of the image capturing apparatuses 12 and 13 according to the present embodiment is configured to detect a two-stage pressing operation: a state in which it is pressed down to an intermediate position (hereinafter referred to as the "half-pressed state") and a state in which it is pressed down to a final position beyond the intermediate position (hereinafter referred to as the "fully pressed state").

  In the image capturing devices 12 and 13, half-pressing the release button 56A activates the AE (Automatic Exposure) function to set the exposure state (shutter speed and aperture state), after which the AF (Auto Focus) function operates to control the focus; exposure (photographing) is then performed when the button is fully pressed.

  Note that the image capturing devices 12 and 13 according to the present embodiment are each provided with a connector 16 for inputting and outputting signals (hereinafter referred to as synchronization signals) that synchronize the operation of the AE and AF functions with the other image capturing device, and the two devices are connected to each other by a cable 19 via the connectors 16. When the release button 56A of either the image capturing device 12 or the image capturing device 13 is half-pressed, the AE and AF functions are activated almost simultaneously in both photographing apparatuses 12 and 13.

  Further, the image capturing devices 12 and 13 are provided with a video signal output connector 17 and a video signal input connector 18, respectively, and the image data acquired by capturing with the image capturing devices 12 and 13 is converted into a video signal and output from the video signal output connector 17.

  Here, the video signal output connector 17 of the image capturing device 12 is connected to the image display device 14 via a video cable 20A. Therefore, the image captured by the image capturing device 12 is displayed on the image display device 14. On the other hand, the video signal output connector 17 of the image capturing device 13 is connected to the video signal input connector 18 of the image capturing device 12 via a video cable 20B. Image data obtained by photographing by the image photographing device 13 is input to the image photographing device 12 as a video signal.

  Next, the main configuration of the electrical system of the image capturing apparatuses 12 and 13 will be described with reference to FIG. Since the image capturing devices 12 and 13 have the same configuration, only the configuration of the image capturing device 12 will be described here.

  The image capturing apparatus 12 includes an optical unit 22 configured inside the lens unit 21, a charge coupled device (hereinafter referred to as "CCD") 24 disposed behind the optical unit 22 on its optical axis, an analog signal processing unit 26 that performs various kinds of analog signal processing on the input analog signal, an analog/digital converter (hereinafter referred to as "ADC") 28 that converts the input analog signal into digital data, and a digital signal processing unit 30 that performs various kinds of digital signal processing on the input digital data.

  The digital signal processing unit 30 has a built-in line buffer having a predetermined capacity, and also performs control for directly storing the input digital data in a predetermined area of the RAM 48 described later.

  The output terminal of the CCD 24 is connected to the input terminal of the analog signal processing unit 26, the output terminal of the analog signal processing unit 26 is connected to the input terminal of the ADC 28, and the output terminal of the ADC 28 is connected to the input terminal of the digital signal processing unit 30. . Accordingly, the analog signal indicating the subject image output from the CCD 24 is subjected to predetermined analog signal processing by the analog signal processing unit 26, converted into digital image data by the ADC 28, and then input to the digital signal processing unit 30.

  On the other hand, the image photographing device 12 includes a CPU (central processing unit) 40 that controls the operation of the entire device, a RAM 48 that mainly stores digital image data obtained by photographing, and a ROM 49 in which various programs and parameters, including an image correction program described later, are stored in advance.

  In addition, the image capturing apparatus 12 includes a media interface circuit 50 that makes the portable memory card 15 accessible to the image capturing apparatus 12; a compression/expansion circuit 54 that performs compression processing and expansion processing on the captured digital image data; an external synchronization control circuit 60 that samples the video signal input from the video signal input connector 18 and converts it into digital image data; an image buffer memory 62 that temporarily stores the digital image data converted by the external synchronization control circuit 60; a feature point position comparison unit 64 that obtains feature points from a predetermined area of each image indicated by the digital image data stored in the image buffer memory 62 and in the RAM 48, respectively, and compares the pixel feature information of the feature points between the images; an angle-of-view distortion correction circuit 66 that performs shift processing and enlargement or reduction processing on digital image data; and a video encoder 68 that converts the digital image data stored in the RAM 48 into a video signal and outputs it to the video signal output connector 17.

  In the image photographing device 12 of the present embodiment, SmartMedia (registered trademark) is used as the memory card 15. The compression/expansion circuit 54 performs compression processing or expansion processing according to a predetermined still image compression method (in this embodiment, the JPEG (Joint Photographic Experts Group) method).

  The digital signal processing unit 30, CPU 40, RAM 48, ROM 49, media interface circuit 50, compression/expansion circuit 54, image buffer memory 62, feature point position comparison unit 64, angle-of-view distortion correction circuit 66, and video encoder 68 are connected to each other via the system bus BUS. The CPU 40 can therefore control the operation of the digital signal processing unit 30, the compression/expansion circuit 54, the feature point position comparison unit 64, and the angle-of-view distortion correction circuit 66, access the image buffer memory 62, the RAM 48, and the ROM 49, and access the memory card 15 via the media interface circuit 50.

  On the other hand, the image capturing apparatus 12 of the present embodiment includes a timing generator 32 that mainly generates timing signals for driving the CCD 24 and supplies them to the CCD 24; the driving of the CCD 24 is controlled by the CPU 40 through the timing generator 32.

  Further, the image photographing apparatus 12 is provided with an AE/AF control unit 34, and the driving of a focus adjustment motor and an aperture drive motor (not shown) provided in the optical unit 22 is also controlled by the CPU 40 via the AE/AF control unit 34.

  That is, the optical unit 22 according to the present embodiment includes a lens driving mechanism (not shown). The lens drive mechanism includes the focus adjustment motor and the aperture drive motor, and these motors are driven by drive signals supplied from the AE / AF control unit 34 under the control of the CPU 40, respectively.

  In addition, the release button 56A and the power switch 56B (generally referred to as “operation unit 56” in the figure) are connected to the CPU 40, and the CPU 40 can always grasp the operation state of these operation units 56.

  Further, the image capturing apparatus 12 includes an AE/AF synchronization control unit 70 for operating the AE and AF functions in synchronization with the synchronization signal input via the connector 16; the AE/AF synchronization control unit 70 is connected to the system bus BUS. When the CPU 40 detects that the release button 56A is half-pressed, it causes the AE/AF synchronization control unit 70 to output, to the connector 16, a synchronization signal for synchronizing the operation of the AE and AF functions. When the AE/AF synchronization control unit 70 receives a synchronization signal via the connector 16, the CPU 40 causes the AE/AF control unit 34 to activate the AE and AF functions.

  Next, an overall operation at the time of photographing of the image photographing device 12 according to the present embodiment will be briefly described.

  The CCD 24 performs imaging through the optical unit 22 and sequentially outputs analog signals for R (red), G (green), and B (blue) indicating the subject image to the analog signal processing unit 26. The analog signal processing unit 26 performs analog signal processing such as correlated double sampling processing on the analog signal input from the CCD 24 and sequentially outputs the analog signal to the ADC 28.

  The ADC 28 converts the R, G, and B analog signals input from the analog signal processing unit 26 into 12-bit R, G, and B signals (digital image data) and sequentially outputs them to the digital signal processing unit 30. The digital signal processing unit 30 accumulates the digital image data sequentially input from the ADC 28 in a built-in line buffer and temporarily stores the data directly in a predetermined area of the RAM 48.

  The digital image data stored in the predetermined area of the RAM 48 is read out by the digital signal processing unit 30 under the control of the CPU 40; white balance is adjusted by applying a digital gain according to a predetermined physical quantity, and gamma processing and sharpness processing are performed to generate 8-bit digital image data.

  Then, the digital signal processing unit 30 performs YC signal processing on the generated 8-bit digital image data to convert it into a luminance signal Y and chroma signals Cr and Cb (hereinafter referred to as the "YC signal"), and stores the YC signal in an area of the RAM 48 different from the predetermined area.
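
  As a point of reference, the RGB-to-YC conversion at this stage can be sketched as below; the patent does not specify the conversion coefficients, so the ITU-R BT.601 full-range coefficients commonly used for JPEG-bound data are an assumption here.

```python
# Hedged sketch of the YC conversion step; the BT.601 coefficients
# below are an assumption, not values taken from the patent.
def rgb_to_yc(r: float, g: float, b: float):
    """Convert one 8-bit RGB pixel into a luminance signal Y and
    chroma signals Cb and Cr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

print(rgb_to_yc(200, 120, 60))  # e.g. one warm-toned pixel
```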

  In the image capturing devices 12 and 13, the YC signal of a moving image (through image) obtained by continuous imaging with the CCD 24 is converted into a video signal by the video encoder 68 and output from the video signal output connector 17. Therefore, the moving image (through image) obtained by the image photographing device 12 is displayed on the image display device 14.

  In the image photographing device 12, when the release button 56A is half-pressed, the AE function is activated to set the exposure state and the AF function is then activated to perform focus control, as described above. At the same time, the image capturing device 12 outputs a synchronization signal to the image capturing device 13, so that the AE and AF functions also operate in the image capturing device 13 in synchronization with the image capturing device 12. Thereafter, the release button 56A of the image photographing device 12 is fully pressed to perform photographing.

  Here, since the image capturing devices 12 and 13 are mechanically attached to the support member 15, their angles of view rarely match exactly; in many cases the angle of view is slightly shifted owing to variations in angle-of-view adjustment and to mechanical displacement caused by secular change.

  For this reason, in the image photographing device 12 according to the present embodiment, when the release button 56A is fully pressed, the CPU 40 carries out correction processing by an image correction program on at least one of the digital image data stored in the RAM 48 (hereinafter referred to as digital image data B) and the digital image data based on the video signal input from the image capturing device 13 and stored in the image buffer memory 62 (hereinafter referred to as digital image data A).

  Next, correction processing by the executed image correction program will be described with reference to FIG. FIG. 3 is a flowchart showing the flow of processing of the image correction program.

  In step 100 in the figure, the digital image data A stored in the image buffer memory 62 and the digital image data B stored in the RAM 48 are transferred to the feature point position comparison unit 64. Hereinafter, the image indicated by the digital image data A is referred to as image A, and the image indicated by the digital image data B as image B. FIGS. 4A and 4B show an example of the image A and the image B.

  When the digital image data A and the digital image data B are transferred, the feature point position comparison unit 64 sets the four corner regions located at predetermined positions of the image A indicated by the digital image data A as feature point detection areas 80A, 80B, 80C, and 80D, takes the pixel at a predetermined position in each area (in this embodiment, the center position of each feature point detection area) as the feature points 82A, 82B, 82C, and 82D, and obtains the position (coordinates) of each feature point in the image A. Note that the feature points 82A, 82B, 82C, and 82D may be set directly at the predetermined positions without previously defining the feature point detection regions 80A, 80B, 80C, and 80D.

  In addition, the feature point position comparison unit 64 derives, as the feature information of each pixel located at the feature points 82A, 82B, 82C, and 82D, the luminance of the pixel and the amount of change in luminance relative to its surrounding pixels.

  Then, the feature point position comparison unit 64 sets the four corner regions of the image B indicated by the digital image data B, at the same positions as in the image A, as feature point detection regions 80E, 80F, 80G, and 80H. From the feature point detection region 80E, a pixel whose luminance and luminance change amount fall within a predetermined range of the luminance and luminance change amount of the feature point 82A is detected as the feature point 82E corresponding to the feature point 82A; likewise, the feature point 82F corresponding to the feature point 82B is detected from the feature point detection region 80F, the feature point 82G corresponding to the feature point 82C from the feature point detection region 80G, and the feature point 82H corresponding to the feature point 82D from the feature point detection region 80H, and the positions of the feature points 82E, 82F, 82G, and 82H in the image B are obtained. The predetermined range is preferably adjusted according to the brightness of the subject. When a plurality of pixels fall within the predetermined range, in the present embodiment the pixel with the smallest sum of the absolute difference in luminance and the absolute difference in luminance change amount from the feature point is taken as the corresponding feature point; if a plurality of such pixels still remain, the pixel closest to the position of the feature point in the image B is taken as the corresponding feature point. However, the present invention is not limited to this.
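
  The search just described can be sketched as follows. This is an illustrative sketch only: the region bounds, the tolerance value, and the 4-neighbour luminance-change measure are assumptions, since the patent leaves the exact measure and the predetermined range open.

```python
import numpy as np

def feature_info(img: np.ndarray, y: int, x: int):
    """Luminance of a pixel plus a simple measure of luminance change
    against its four neighbours (one possible reading of the patent)."""
    lum = float(img[y, x])
    change = (abs(lum - float(img[y - 1, x])) + abs(lum - float(img[y + 1, x]))
              + abs(lum - float(img[y, x - 1])) + abs(lum - float(img[y, x + 1])))
    return lum, change

def find_corresponding_point(img_b, region, ref_pos, ref_lum, ref_change, tol=8.0):
    """Scan one corner region (y0, y1, x0, x1) of image B for the pixel whose
    feature information lies within the predetermined range of the reference;
    ties are broken by the smallest summed absolute difference, then by
    distance to ref_pos (the feature point position carried over from image A)."""
    y0, y1, x0, x1 = region
    best = None
    for y in range(y0 + 1, y1 - 1):
        for x in range(x0 + 1, x1 - 1):
            lum, change = feature_info(img_b, y, x)
            if abs(lum - ref_lum) > tol or abs(change - ref_change) > tol:
                continue  # outside the predetermined range
            score = abs(lum - ref_lum) + abs(change - ref_change)
            dist = (y - ref_pos[0]) ** 2 + (x - ref_pos[1]) ** 2
            if best is None or (score, dist) < best[:2]:
                best = (score, dist, (y, x))
    return None if best is None else best[2]
```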

  Then, the CPU 40 causes the RAM 48 to store the positions of the feature points 82A to 82H obtained by the feature point position comparison unit 64.

Here, the horizontal direction of the images A and B is taken as the X direction and the vertical direction as the Y direction, and the positions of the feature points 82A to 82H stored in the RAM 48 are expressed as (X_A, Y_A) to (X_H, Y_H).

  In the next step 102, a position correction amount H for matching the positions of the straight line connecting the feature point 82A and the feature point 82C of the image A and the straight line connecting the feature point 82E and the feature point 82G of the image B is calculated. That is, as shown in FIG. 5, the position correction amount H for shifting the pixel is calculated so that the straight line B connecting the feature points 82E and 82G matches the straight line A connecting the feature points 82A and 82C.

Here, the difference in the X direction between the feature point 82A and the feature point 82E, which correspond to each other between the image A and the image B, is X_E − X_A, and the difference in the X direction between the feature point 82C and the feature point 82G is X_G − X_C. The difference in the Y direction between the feature point 82A and the feature point 82C is Y_C − Y_A.

  Therefore, the value α representing the inclination (angle) between the straight line A and the straight line B shown in FIG. 5 can be obtained as in the following equation (1).

Inclination α = ((X_G − X_C) − (X_E − X_A)) / (Y_C − Y_A)   (1)
Here, according to the position in the Y direction, the pixels of the image B are shifted in the X direction by the difference between the straight lines A and B, so that the straight lines A and B coincide. That is, since the position correction amount H at the Y-direction position of the feature point 82A (Y = Y_A) is X_E − X_A and the position correction amount H at the Y-direction position of the feature point 82C (Y = Y_C) is X_G − X_C, the relationship between the position correction amount H and the position in the Y direction can be shown as the straight line D in FIG. 8. Since the slope α is the slope of the straight line D, the relationship between the position in the Y direction and the position correction amount H indicated by the straight line D can be expressed by the following formula (2), and the position correction amount H corresponding to any position in the Y direction can be calculated from formula (2).

Position correction amount H = α (Y − Y_A) + (X_E − X_A)   (2)
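
Equations (1) and (2) reduce to a few lines of arithmetic, as in the sketch below; the feature point coordinates used in the example call are invented purely for illustration.

```python
def slope_alpha(x_a, x_c, x_e, x_g, y_a, y_c):
    """Equation (1): inclination between the straight lines A and B."""
    return ((x_g - x_c) - (x_e - x_a)) / (y_c - y_a)

def position_correction_h(y, alpha, x_a, x_e, y_a):
    """Equation (2): X-direction position correction amount at height y."""
    return alpha * (y - y_a) + (x_e - x_a)

# Example with made-up coordinates:
alpha = slope_alpha(x_a=10, x_c=12, x_e=13, x_g=18, y_a=20, y_c=460)
print(position_correction_h(240, alpha, x_a=10, x_e=13, y_a=20))  # 4.5
```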
In the next step 104, a magnification M for matching the distances in the X direction between the image A and the image B is calculated.

That is, the X-direction distance X_B − X_A between the feature point 82A and the feature point 82B of the image A shown in FIG. 4 and the X-direction distance X_D − X_C between the feature point 82C and the feature point 82D are changed, in the image B, into the X-direction distance X_F − X_E between the feature point 82E and the feature point 82F and the X-direction distance X_H − X_G between the feature point 82G and the feature point 82H.

Therefore, the magnification M in the X direction at the Y-direction position of the feature point 82A (Y = Y_A) is (X_H − X_G) / (X_D − X_C), and the magnification M in the X direction at the position of the feature point 82C (Y = Y_C) is (X_F − X_E) / (X_B − X_A), so the relationship between the magnification M and the position in the Y direction can be shown as the straight line C in FIG. 6. The slope β of the straight line C can be calculated from the following equation (3).

Slope β = ((X_F − X_E) / (X_B − X_A) − (X_H − X_G) / (X_D − X_C)) / (Y_C − Y_A)   (3)

Therefore, the straight line C shown in FIG. 6 can be expressed by the following equation (4), and the magnification M in the X direction between the image A and the image B corresponding to any position in the Y direction can be derived from equation (4).
Magnification M = β (Y − Y_A) + (X_H − X_G) / (X_D − X_C)   (4)
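
A minimal numeric sketch of equations (3) and (4) follows; the argument names mirror the feature point coordinates (X_A ... X_H, Y_A, Y_C) used in the text.

```python
def slope_beta(x_a, x_b, x_c, x_d, x_e, x_f, x_g, x_h, y_a, y_c):
    """Equation (3): slope of the magnification line C."""
    m_at_y_a = (x_h - x_g) / (x_d - x_c)  # magnification M at Y = Y_A
    m_at_y_c = (x_f - x_e) / (x_b - x_a)  # magnification M at Y = Y_C
    return (m_at_y_c - m_at_y_a) / (y_c - y_a)

def magnification_m(y, beta, x_c, x_d, x_g, x_h, y_a):
    """Equation (4): X-direction magnification at height y."""
    return beta * (y - y_a) + (x_h - x_g) / (x_d - x_c)
```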
In the next step 106, it is determined whether or not the image B needs to be reduced in the X direction relative to the image A; that is, it is determined whether or not the magnification M calculated from equation (4) falls below 1 anywhere in the Y-direction image area of the image B (from the upper-left pixel to the lower-left pixel of the image B). If the determination is affirmative, the process proceeds to step 108; if negative, the process proceeds to step 110.

  In step 108, since there is a portion of the Y-direction image area of the image B where the magnification M obtained in step 104 is less than 1, the digital image data A is output to the angle-of-view distortion correction circuit 66, and the entire image A indicated by the digital image data A is enlarged at a magnification equal to the reciprocal of the minimum magnification M obtained from equation (4) over that area (1 / minimum magnification M). That is, since the minimum magnification M is less than 1, its reciprocal is greater than 1, so the processing is an enlargement. Pixel interpolation processing is also performed on the digital image data A, the processed image A is stored in the image buffer memory 62 as new digital image data A, and the process returns to step 100. Since the image A is enlarged in this way, it is not necessary to reduce the image B, and the image B can be prevented from being distorted relative to the image A.
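
  The branch in steps 106 and 108 amounts to checking the minimum of the magnification M over the Y span of image B, as sketched below under the assumption that M has already been evaluated for each line.

```python
def correction_strategy(m_per_line):
    """m_per_line: magnification M evaluated for each Y position of image B.
    If the magnification dips below 1 anywhere, enlarge image A by the
    reciprocal of the minimum (step 108); otherwise correct image B only
    (proceed to step 110)."""
    m_min = min(m_per_line)
    if m_min < 1.0:
        return "enlarge_image_A", 1.0 / m_min
    return "correct_image_B", 1.0

print(correction_strategy([0.98, 0.99, 1.01]))  # ('enlarge_image_A', 1.0204...)
```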

  On the other hand, in the next step 110, data for one line in the horizontal direction (X direction) of the image B indicated by the digital image data B stored in the RAM 48 is read and output to the angle-of-view distortion correction circuit 66.

  In the next step 112, the position correction amount H and the magnification M corresponding to the Y-direction position in the image B of the read line are derived using equations (2) and (4).

  In the next step 114, the angle-of-view distortion correction circuit performs, on the one line of data, position shift processing in the X direction based on the position correction amount H calculated from equation (2) and enlargement processing in the X direction based on the magnification M calculated from equation (4); pixel interpolation processing is performed in accordance with the shift and enlargement processing to calculate the density value at each pixel position, and the processed line of data is stored in a predetermined area of the RAM 48 different from that of the digital image data B.
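
  Steps 112 and 114 can be sketched per line as follows. The mapping convention (each output pixel x sampling the source at (x − H) / M) and the linear interpolation of density values are assumptions; the patent names the shift, enlargement, and pixel interpolation operations without fixing their exact form.

```python
import numpy as np

def correct_line(line: np.ndarray, h: float, m: float) -> np.ndarray:
    """Shift one horizontal line of image B by h and stretch it by m in
    the X direction, interpolating density values linearly; edge pixels
    are simply clamped."""
    n = len(line)
    xs = (np.arange(n) - h) / m          # inverse map: output x -> source x
    x0 = np.floor(xs).astype(int)
    frac = xs - x0
    x0c = np.clip(x0, 0, n - 1)
    x1c = np.clip(x0 + 1, 0, n - 1)
    return (1.0 - frac) * line[x0c] + frac * line[x1c]
```

  A full correction of image B then applies this routine to every horizontal line, with h and m obtained from equations (2) and (4) at that line's Y position.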

  In the next step 116, it is determined whether or not all the horizontal lines of the digital image data B have been read. If the determination is negative, the process returns to step 110 and a new line of data is read; if the determination is affirmative, the process proceeds to step 118.

  In step 118, the digital image data corrected by the shift processing and the enlargement processing and stored in the RAM 48, together with the digital image data A stored in the image buffer memory 62, are sequentially output to the compression/expansion circuit 54, where they are compressed by a predetermined still image compression method (in this embodiment, the JPEG method) involving processing such as discrete cosine transform and encoding, and stored on the memory card 15 via the media interface circuit 50; the process then ends.

  Here, FIG. 7A shows a stereoscopic image when the shift processing and the enlargement processing are not performed, and FIG. 7B shows a stereoscopic image subjected to the correction processing according to the present embodiment.

  As shown in FIG. 7A, when the shift process and the enlargement process are not performed, a position shift due to a field angle shift occurs in the background portion between the two images.

  On the other hand, as shown in FIG. 7B, according to the correction processing of the present embodiment, the horizontal positions of the feature points 82A, 82B, 82C, and 82D and of the corresponding feature points 82E, 82F, 82G, and 82H match. That is, as shown in FIG. 7B, the positional deviation of the background portion caused by the difference in angle of view and the like is corrected based on the position of the subject in the captured images. Meanwhile, the main subject photographed in the central portion (the person image in FIG. 7) remains displaced by the parallax and can be reproduced three-dimensionally.

  As described above, according to the present embodiment, in a stereoscopic image capturing apparatus that simultaneously captures images of a subject with two imaging units (here, the CCDs 24) and acquires two image data indicating a stereoscopic image, the detection means (here, the feature point position comparison unit 64) detects corresponding feature points from a predetermined area of each image indicated by the two image data, the derivation means (here, the CPU 40) compares the positions of the feature points detected by the detection means between the images and derives correction parameters for matching the positions of the corresponding feature points, and the processing means (here, the angle-of-view distortion correction circuit 66) performs at least one of shift processing, reduction processing, or enlargement processing on at least one of the image data based on the correction parameters, so that the positional shift of the main subject caused by a difference in angle of view can be corrected easily.

  Further, since the detection means determines a pixel at a predetermined position within the predetermined region of one of the images indicated by the two image data as a feature point, and detects from the predetermined region of the other image a pixel having matching feature information, including the luminance information and luminance change amount information of the feature point, as the corresponding feature point, the corresponding feature points can be detected accurately from each image.

  Further, since the predetermined areas are provided at the four corners of each image indicated by the two image data, feature points are detected from the areas provided at the four corners of each image, and by matching the positions of the corresponding feature points, the distortion between the images can be corrected.

  In the present embodiment, the position correction amount H and the magnification M in the horizontal direction (X direction) are calculated to match the horizontal direction of the two images, but by performing the same processing in the vertical direction (Y direction), a position correction amount H and a magnification M that match the vertical direction of the two images can also be calculated.

  In the present embodiment, the center position of each feature point detection region is used as the feature points 82A, 82B, 82C, and 82D. However, the feature points may instead be determined based on the feature information of the pixels; for example, the pixel with the highest luminance in each feature point detection area may be used as the feature points 82A, 82B, 82C, and 82D.

  Further, in the present embodiment, the corresponding feature points are detected based on the luminance information and luminance change information of the pixels, but the corresponding feature points may instead be detected based on other feature information representing pixel characteristics, such as pixel density value (tone value) information and density value change information.

  In the present embodiment, the predetermined areas for detecting the feature points are areas provided at the four corners of the image, but areas other than the four corners may be used. Further, a feature point corresponding to each horizontal or vertical line of the image may be detected and correction processing may be performed for each line.

  In this embodiment, the enlargement processing is performed on the image data B in the correction processing so as to eliminate the positional deviation. However, the positional deviation may instead be eliminated by performing reduction processing. Further, the positional deviation may be eliminated by performing at least one of shift processing and reduction or enlargement processing on at least one of the image data A and the image data B.

  Further, in the present embodiment, the stereoscopic image capturing system 10 composed of the two image capturing devices 12 and 13, the support member 15, and the image display device 14 has been described; however, two imaging systems may be provided in a single apparatus to form a stereoscopic image capturing apparatus, and the correction processing may be performed on the two image data captured by the two imaging systems. Two or more imaging systems may also be provided, and the correction processing may be performed between the other image data using any one of the plurality of captured image data as reference image data.

  In addition, the configuration of the stereoscopic image capturing system 10 described in the present embodiment (see FIGS. 1 and 2) is merely an example, and it goes without saying that it can be changed as appropriate without departing from the gist of the present invention.

  The processing flow of the image correction program described in the present embodiment (see FIG. 3) is also an example, and it goes without saying that it can be changed as appropriate without departing from the gist of the present invention.

FIG. 1 is a configuration diagram showing the overall configuration of the stereoscopic image capturing system according to the embodiment. FIG. 2 is a diagram showing the main configuration of the electrical system of the image capturing apparatus according to the embodiment. FIG. 3 is a flowchart showing the flow of processing of the image correction program according to the embodiment. FIG. 4 is a diagram showing images captured by the two image capturing devices of the stereoscopic image capturing system according to the embodiment. FIG. 5 is a diagram showing the relationship between the straight line A connecting the feature points of the image A and the straight line B connecting the feature points of the image B according to the embodiment. FIG. 6 is a diagram showing the relationship between the position in the Y direction and the magnification according to the embodiment. FIG. 7 is a diagram showing images captured and combined by the stereoscopic image capturing system according to the embodiment. FIG. 8 is a diagram showing the relationship between the position in the Y direction and the position correction amount according to the embodiment.

Explanation of symbols

DESCRIPTION OF SYMBOLS
10 Stereoscopic image capturing system
12 Image capturing device
13 Image capturing device
24 CCD
40 CPU
64 Feature point position comparison unit
66 Angle-of-view distortion correction circuit

Claims (6)

  1. A stereoscopic image capturing device that simultaneously captures an image of a subject by two imaging means and acquires two image data indicating a stereoscopic image,
    Detecting means for detecting a corresponding feature point from a predetermined region of each image indicated by the two image data;
    Derivation means for comparing the positions of the feature points detected by the detection means between the images and deriving correction parameters for matching the positions of the corresponding feature points;
    Processing means for performing at least one of shift processing, reduction processing, or enlargement processing on at least one of the image data based on the correction parameter;
    A stereoscopic image photographing apparatus comprising:
  2.   The stereoscopic image capturing apparatus according to claim 1, wherein the detection means determines a pixel at a predetermined position within the predetermined area of one of the images indicated by the two image data as a feature point, and detects, from the predetermined area of the other image, a pixel having matching feature information, including luminance information of the feature point and luminance change amount information, as the corresponding feature point.
  3.   The stereoscopic image capturing apparatus according to claim 1, wherein the predetermined areas are provided at the four corners of each image indicated by the two image data.
  4. A method for capturing a stereoscopic image, in which a subject is simultaneously imaged by two imaging means and two image data indicating a stereoscopic image are acquired,
    Detecting a corresponding feature point from a predetermined region of each image indicated by the two image data;
    Comparing the positions of the detected feature points between the images and deriving correction parameters for matching the positions of the corresponding feature points;
    A stereoscopic image capturing method, wherein at least one of shift processing, reduction processing, or enlargement processing is performed on at least one of the image data based on the correction parameter.
  5.   The stereoscopic image capturing method according to claim 4, wherein a pixel at a predetermined position is determined as a feature point from the predetermined region of one of the images indicated by the two image data, and a pixel having matching feature information, including luminance information of the feature point and luminance change amount information, is detected from the predetermined region of the other image as the corresponding feature point.
  6.   The stereoscopic image capturing method according to claim 4, wherein the predetermined areas are provided at the four corners of each image indicated by the two image data.
JP2004344565A 2004-11-29 2004-11-29 Three-dimensional photographic apparatus and photographic method of three-dimensional image Pending JP2006157432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2004344565A JP2006157432A (en) 2004-11-29 2004-11-29 Three-dimensional photographic apparatus and photographic method of three-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2004344565A JP2006157432A (en) 2004-11-29 2004-11-29 Three-dimensional photographic apparatus and photographic method of three-dimensional image

Publications (1)

Publication Number Publication Date
JP2006157432A true JP2006157432A (en) 2006-06-15

Family

ID=36635185

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2004344565A Pending JP2006157432A (en) 2004-11-29 2004-11-29 Three-dimensional photographic apparatus and photographic method of three-dimensional image

Country Status (1)

Country Link
JP (1) JP2006157432A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009044496A (en) * 2007-08-09 2009-02-26 Fujifilm Corp Photographing field angle calculation apparatus
WO2010061956A1 (en) 2008-11-27 2010-06-03 Fujifilm Corporation Stereoscopic image processing device, method, recording medium and stereoscopic imaging apparatus
US8111910B2 (en) 2008-11-27 2012-02-07 Fujifilm Corporation Stereoscopic image processing device, method, recording medium and stereoscopic imaging apparatus
JP2011239379A (en) * 2010-04-30 2011-11-24 Sony Europe Ltd Image capturing system, image capturing apparatus, and image capturing method
CN102547354A (en) * 2010-12-20 2012-07-04 索尼公司 Correction value calculation apparatus, compound eye imaging apparatus, and method of controlling correction value calculation apparatus
JP2012134609A (en) * 2010-12-20 2012-07-12 Sony Corp Correction value calculation apparatus, compound-eye imaging apparatus, and method of controlling correction value calculation apparatus
JP5765418B2 (en) * 2011-05-06 2015-08-19 富士通株式会社 Stereoscopic image generation apparatus, stereoscopic image generation method, and stereoscopic image generation program
US10165249B2 (en) 2011-07-18 2018-12-25 Truality, Llc Method for smoothing transitions between scenes of a stereo film and controlling or regulating a plurality of 3D cameras
US10356329B2 (en) 2011-08-03 2019-07-16 Christian Wieland Method for correcting the zoom setting and/or the vertical offset of frames of a stereo film and control or regulating system of a camera rig having two cameras
JP2014527196A (en) * 2011-08-03 2014-10-09 トゥルアリティ, エルエルシーTruality, Llc Method for correcting zoom setting and / or vertical offset of stereo film frame, and control or adjustment system for a camera rig having two cameras
JP2013114505A (en) * 2011-11-29 2013-06-10 Fujitsu Ltd Stereo image generating apparatus, stereo image generating method, and computer program for stereo image generation
US9225958B2 (en) 2012-03-15 2015-12-29 Panasonic Intellectual Property Management Co., Ltd. Video signal processor and method of processing video signal
CN102665087A (en) * 2012-04-24 2012-09-12 浙江工业大学 Automatic shooting parameter adjusting system of three dimensional (3D) camera device

Legal Events

Date Code Title Description
A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20070221