JP5127731B2 - Video camera - Google Patents

Video camera

Info

Publication number
JP5127731B2
Authority
JP
Japan
Prior art keywords
image
means
changing
unit
zoom
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2009008455A
Other languages
Japanese (ja)
Other versions
JP2010166462A (en)
Inventor
Mitsuaki Kurokawa (黒川 光章)
Original Assignee
Sanyo Electric Co., Ltd. (三洋電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co., Ltd. (三洋電機株式会社)
Priority to JP2009008455A priority Critical patent/JP5127731B2/en
Priority claimed from US12/390,585 external-priority patent/US8253812B2/en
Publication of JP2010166462A publication Critical patent/JP2010166462A/en
Application granted
Publication of JP5127731B2 publication Critical patent/JP5127731B2/en

Description

  The present invention relates to a video camera, and more particularly to a video camera that performs focal plane distortion correction processing on image data output from an image sensor that employs a focal plane electronic shutter system.

  An example of this type of camera is disclosed in Patent Document 1. According to this background art, a scaling parameter corresponding to the electronic zoom magnification designated by the zoom key is set in the RAW data scaling circuit. The RAW data scaling circuit subjects the image data output from the A/D converter to scaling according to this parameter, and a moving image based on the RAW data output from the scaling circuit is displayed on the image display unit.

JP 2007-166551 A

  However, because the angle of view of the image represented by the scaled RAW data matches the angle of view of the image displayed on the image display unit, image quality correction processing that affects the angle of view cannot be executed. The reproduction performance of the object scene image may therefore be degraded.

  Therefore, a main object of the present invention is to provide a video camera capable of improving the reproduction performance of the object scene image.

  A video camera according to the present invention (10: reference numerals correspond to the embodiment; the same applies hereinafter) comprises: imaging means (18) that repeatedly outputs an image representing an object scene; reduction means (24z) that reduces the image output from the imaging means; extraction means (40) that extracts, from each reduced image created by the reduction means, the part belonging to an extraction area of a predetermined size; and size changing means (S79) that, when a zoom operation is accepted, changes the size of the reduced image created by the reduction means within a range exceeding the predetermined size.

  The imaging means repeatedly outputs an image representing the object scene. The image output from the imaging means is reduced by the reduction means. The extraction means extracts, from each reduced image created by the reduction means, the part belonging to an extraction area of a predetermined size. When a zoom operation is accepted, the size changing means changes the size of the reduced image created by the reduction means within a range exceeding the predetermined size.

  By changing the size of the reduced image within a range that exceeds the predetermined size, image quality correction processing that affects the angle of view can be executed over the entire zoom range. As a result, the reproduction performance of the object scene image is improved. The sketch below illustrates the idea.
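For illustration only, here is a minimal sketch in Python of the claimed relationship between the variable-size reduced image and the fixed-size extraction area; the function names, the nearest-neighbor reduction, and the example sizes are our assumptions, not the patented implementation.

```python
# Minimal sketch (not the patented implementation): a reduction step whose
# output size varies with zoom, feeding a fixed-size extraction crop.
import numpy as np

EXTRACT_W, EXTRACT_H = 1920, 1080   # predetermined extraction-area size

def reduce_image(raw: np.ndarray, raw_zoom: float) -> np.ndarray:
    """Nearest-neighbor stand-in for the reduction (RAW) zoom."""
    h, w = raw.shape[:2]
    ys = (np.arange(int(h * raw_zoom)) / raw_zoom).astype(int)
    xs = (np.arange(int(w * raw_zoom)) / raw_zoom).astype(int)
    return raw[ys][:, xs]

def extract(reduced: np.ndarray, left: int, top: int) -> np.ndarray:
    """Cut the fixed-size extraction area EX out of the reduced image.
    Because the reduced image always exceeds EXTRACT_W x EXTRACT_H,
    the crop origin can be shifted to compensate camera shake."""
    return reduced[top:top + EXTRACT_H, left:left + EXTRACT_W]

raw = np.zeros((1728, 3072), dtype=np.uint16)   # sensor-resolution frame
reduced = reduce_image(raw, 0.7)                # size changes with zoom...
frame = extract(reduced, left=10, top=5)        # ...the crop size does not
print(reduced.shape, frame.shape)               # (1209, 2150) (1080, 1920)
```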

  Preferably, a zoom lens (12) provided in front of the imaging means, and magnification changing means (S75) for changing the magnification of the zoom lens in the same direction as the changing direction of the size changing means in relation to the change processing of the size changing means, are further provided.

  Preferably, detection means (S23, S25) for detecting movement of the imaging surface in a direction orthogonal to the optical axis, and position changing means (S33) for changing the position of the extraction area so that the movement detected by the detection means is compensated, are further provided.

  In one aspect, the imaging means includes exposure means (66) for exposing the imaging surface by a focal plane electronic shutter method, and shape changing means (S27) is further provided for changing the shape of the extraction area based on the motion detected by the detection means so that focal plane distortion is suppressed.

  In one embodiment, the extraction area is a rectangular area having a left side and a right side, and the shape changing means changes the inclination amounts of the right side and the left side based on the horizontal component of the motion detected by the detection means.

  More preferably, limiting means (S31) is further provided for limiting the change amount of the position changing means with reference to the inclination amount changed by the shape changing means.

  In another aspect, stopping means (S29) is further provided for stopping the position changing means when the movement detected by the detection means corresponds to a pan/tilt operation of the imaging surface.

  Preferably, the image output from the imaging means is an image in which each pixel has color information of only one of a plurality of colors, and conversion means (44) is further provided for converting the reduced image extracted by the extraction means into an image in which each pixel has the color information of all of the plurality of colors.

  Preferably, output means (46) for outputting a moving image based on the reduced image extracted by the extraction means is further provided.

  According to the present invention, by changing the size of the reduced image within a range exceeding the predetermined size, the image quality correction processing that affects the angle of view can be executed in the entire zoom range. Thereby, the reproduction performance of the object scene image is improved.

  The above object, other objects, features and advantages of the present invention will become more apparent from the following detailed description of embodiments with reference to the drawings.

FIG. 1 is a block diagram showing the configuration of one embodiment of the present invention.
FIG. 2 shows illustrative views: (A) one example of the resolution of the image output from the image sensor, (B) one example of the resolution of the EIS/AF evaluation image, and (C) one example of the resolution of the AE/AWB evaluation image.
FIG. 3 is a block diagram showing one example of the configuration of the image sensor applied to the FIG. 1 embodiment.
FIG. 4 is a block diagram showing one example of the configuration of the preprocessing circuit applied to the FIG. 1 embodiment.
FIG. 5 is a graph showing one example of the zoom magnification characteristic.
FIG. 6 shows illustrative views: (A) one example of the image output from the image sensor, (B) one example of the image output from the preprocessing circuit, (C) one example of the resolution of the EIS/AF evaluation image, and (D) one example of the resolution of the AE/AWB evaluation image.
FIG. 7 shows illustrative views of another example of each item shown in FIG. 6.
FIG. 8 shows illustrative views of yet another example of each item shown in FIG. 6.
FIG. 9 is an illustrative view showing one example of the imaging operation in the FIG. 1 embodiment.
FIG. 10 shows illustrative views (A) through (D) of examples of the shape changing process of the extraction area.
FIG. 11 shows illustrative views (A) and (B) of examples of the operation that determines the inclination amounts of the left side and the right side of the extraction area.
FIG. 12 shows illustrative views of the shape of the extraction area corresponding to display magnification (A) “1.0”, (B) “8.0”, and (C) “16”.
FIG. 13 is a graph showing one example of the horizontal margin characteristic.
FIGS. 14 to 18 are flowcharts each showing a portion of the operation of the CPU applied to the FIG. 1 embodiment.

  Referring to FIG. 1, a digital video camera 10 of this embodiment includes a zoom lens 12, a focus lens 14, and an aperture unit 16 driven by drivers 20a, 20b, and 20c, respectively. The optical image of the object scene is projected through these members onto the imaging surface of the CMOS-type image sensor 18. The imaging surface has an effective pixel area of horizontal 3072 pixels × vertical 1728 pixels and is covered with a color filter (not shown) in a primary-color Bayer array. The electric charge generated in each pixel therefore has color information of one of R (red), G (green), and B (blue).

  When the power is turned on, the CPU 36 gives a corresponding command to the driver 20d to execute through-image processing. An SG (signal generator) 22 generates a vertical synchronization signal Vsync every 1/30 seconds, for example. In response to the vertical synchronization signal Vsync generated by the SG 22, the driver 20d exposes the imaging surface by a focal plane electronic shutter system and reads out the electric charge generated thereby from the imaging surface. The image sensor 18 has N channels CH1 to CHN (N: integer of 2 or more, for example “4”), and raw image data based on the read charges is output in parallel, distributed across the channels CH1 to CHN. The output raw image data has a resolution of horizontal 3072 pixels × vertical 1728 pixels, as shown in FIG. 2A.

  The preprocessing circuit 24 applies N systems of parallel preprocessing to the N-channel raw image data output from the image sensor 18. Each system of preprocessing consists of noise removal, reduction zoom, and edge adjustment, and raw image data that has undergone this preprocessing is written into the raw image area 42a of the SDRAM 42 through the memory control circuit 40.

  Note that the reduction zoom in the preprocessing circuit 24 is executed by the zoom circuit 24z. Hereinafter, the reduction zoom executed by the zoom circuit 24z is defined as “RAW zoom”.

  The raw image data (resolution: horizontal 3072 pixels × vertical 1728 pixels) subjected to noise removal by the preprocessing circuit 24 is also supplied to the evaluation image creation circuits 26 and 28. The evaluation image creation circuit 26 creates EIS/AF evaluation image data by applying vertical 2-pixel addition and horizontal 2-pixel addition to the given raw image data. The evaluation image creation circuit 28 creates AE/AWB evaluation image data by applying horizontal 4-pixel addition to the given raw image data.

  The EIS/AF evaluation image data has a resolution of horizontal 1536 pixels × vertical 864 pixels, as shown in FIG. 2B. The AE/AWB evaluation image data has a resolution of horizontal 768 pixels × vertical 1728 pixels, as shown in FIG. 2C. The EIS/AF evaluation image data is supplied to the motion detection circuit 30 and the AF evaluation circuit 32, and the AE/AWB evaluation image data is supplied to the AE/AWB evaluation circuit 34.
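As a rough sketch of the two pixel-addition steps (ignoring the Bayer color phase, which a real implementation would have to respect), the resolutions quoted above can be reproduced with simple block sums:

```python
# Hedged sketch: plain block addition standing in for the evaluation-image
# creation circuits 26 and 28; the Bayer color phase is ignored here.
import numpy as np

raw = np.random.randint(0, 4096, (1728, 3072), dtype=np.uint16)  # 12-bit data

# EIS/AF evaluation image: vertical 2-pixel and horizontal 2-pixel addition
eis_af = raw.reshape(864, 2, 1536, 2).sum(axis=(1, 3))
print(eis_af.shape)   # (864, 1536): horizontal 1536 x vertical 864

# AE/AWB evaluation image: horizontal 4-pixel addition only
ae_awb = raw.reshape(1728, 768, 4).sum(axis=2)
print(ae_awb.shape)   # (1728, 768): horizontal 768 x vertical 1728
```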

  Referring to FIGS. 2A and 2B, one extraction area EX and nine motion detection areas MD1 to MD9 are assigned to the imaging surface. The extraction area EX is a rectangular area of horizontal 1920 pixels × vertical 1080 pixels. The motion detection areas MD1 to MD3 are arranged horizontally in the upper stage of the imaging surface, the motion detection areas MD4 to MD6 in the middle stage, and the motion detection areas MD7 to MD9 in the lower stage.

  The motion detection circuit 30 detects a partial motion vector representing the motion of the object scene in each of the motion detection areas MD1 to MD9 based on the EIS/AF evaluation image data. The motion detection circuit 30 also combines the partial motion vectors of the motion detection areas MD1 to MD3 to generate a combined motion vector UVC, combines the partial motion vectors of the motion detection areas MD4 to MD6 to generate a combined motion vector MVC, and combines the partial motion vectors of the motion detection areas MD7 to MD9 to generate a combined motion vector LVC.

  Both the partial motion vector detection and the combined motion vector creation are executed each time the vertical synchronization signal Vsync is generated. The combined motion vector UVC represents the motion of the object scene in the upper part of the imaging surface, the combined motion vector MVC the motion in the middle part, and the combined motion vector LVC the motion in the lower part.

  The CPU 36 creates an overall motion vector based on the combined motion vectors UVC, MVC, and LVC output from the motion detection circuit 30, and determines from this vector whether movement of the imaging surface in the direction orthogonal to the optical axis is caused by camera shake or by a pan/tilt operation. When the movement of the imaging surface is caused by camera shake, the extraction area EX is moved along the overall motion vector. The position of the extraction area EX is thus changed so that the movement of the imaging surface due to camera shake is compensated (cancelled).
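A hedged sketch of this logic follows; the patent does not state the combination rule or the pan/tilt test, so the averaging and the sign convention below are assumptions for illustration.

```python
# Sketch only: combining partial motion vectors and compensating shake.
import numpy as np

def combine(partials):
    """Combine the three partial motion vectors of one row of detection
    areas (averaging is an assumption; the patent just says 'combine')."""
    return np.mean(partials, axis=0)

def overall_motion(uvc, mvc, lvc):
    """Overall motion vector from the upper/middle/lower combined vectors."""
    return np.mean([uvc, mvc, lvc], axis=0)

def move_extraction_area(ex_pos, motion, is_pan_tilt):
    """Shift the extraction area EX along the overall motion vector so that
    camera shake is cancelled; leave EX alone during a pan/tilt operation."""
    if is_pan_tilt:
        return ex_pos
    return (ex_pos[0] + motion[0], ex_pos[1] + motion[1])
```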

  The post-processing circuit 44 reads out, through the memory control circuit 40, the partial raw image data belonging to the extraction area EX from the raw image data stored in the raw image area 42a, and performs post-processing such as color separation, white balance adjustment, YUV conversion, and zoom on the read partial raw image data. The partial raw image data is read from the raw image area 42a in response to the vertical synchronization signal Vsync, and the post-processing is likewise executed in response to the vertical synchronization signal Vsync. The YUV-format image data generated in this way is output from the moving image output terminal M_OUT and written into the moving image area 42b of the SDRAM 42 through the memory control circuit 40.

  Each of the pixels forming the image data subjected to the color separation processing has all of the R, G, and B color information. Image data in this format is converted into the YUV format by the YUV conversion and then subjected to enlargement zoom. The enlargement zoom in the post-processing circuit 44 is executed by the zoom circuit 44z and is hereinafter defined as “YUV zoom”.
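The patent does not give the YUV conversion coefficients; the sketch below uses the common BT.601/JPEG-style matrix purely to illustrate converting the color separation output (all of R, G, B per pixel) into YUV.

```python
# Illustrative only: RGB (after color separation) -> YUV, assuming the
# widely used BT.601/JPEG coefficients, which the patent does not specify.
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 float RGB image to YUV (Y, Cb-like U, Cr-like V)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]])
    return rgb @ m.T

yuv = rgb_to_yuv(np.zeros((1080, 1920, 3)))
print(yuv.shape)   # (1080, 1920, 3)
```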

  The LCD driver 46 repeatedly reads the image data stored in the moving image area 42b, and drives the LCD monitor 48 based on the read image data. As a result, a real-time moving image (through image) representing the scene is displayed on the monitor screen.

  Each time the vertical synchronization signal Vsync is generated, the AE/AWB evaluation circuit 34 integrates the portion of the AE/AWB evaluation image data output from the evaluation image creation circuit 28 that belongs to the photometry/white balance area EWA shown in FIG. 2C, and outputs the integrated value, that is, the AE/AWB evaluation value. Based on this evaluation value, the CPU 36 executes AE/AWB processing to calculate an appropriate EV value and an appropriate white balance adjustment gain. The aperture amount and exposure time that define the calculated EV value are set in the drivers 20c and 20d, respectively, and the calculated white balance adjustment gain is set in the post-processing circuit 44. As a result, the brightness and white balance of the moving image output on the LCD monitor 48 are adjusted appropriately.

  The AF evaluation circuit 32 extracts the portion of the EIS/AF evaluation image data output from the evaluation image creation circuit 26 that belongs to the focus area FA shown in FIG. 2B, and integrates the high-frequency component of the extracted data in response to the vertical synchronization signal Vsync. The integrated value, that is, the AF evaluation value, is given to the CPU 36 for continuous AF processing. The CPU 36 continuously searches for the focal point by so-called hill-climbing processing with reference to the given AF evaluation value, and the focus lens 14 is placed at the found focal point.
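The hill-climbing search can be pictured with the short sketch below; the detail measure and the step handling are illustrative assumptions, not the circuit's actual definition of the AF evaluation value.

```python
# Sketch of continuous AF: integrate high-frequency detail in the focus
# area FA, then hill-climb the lens position (details are assumptions).
import numpy as np

def af_value(eval_img: np.ndarray, fa) -> float:
    """Integrate a simple horizontal-detail measure over the focus area."""
    y0, y1, x0, x1 = fa
    win = eval_img[y0:y1, x0:x1].astype(float)
    return float(np.abs(np.diff(win, axis=1)).sum())

def hill_climb_step(pos: int, prev_val: float, cur_val: float, step: int):
    """Keep moving while the AF value rises; reverse past the peak."""
    if cur_val < prev_val:
        step = -step
    return pos + step, step
```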

  When the zoom button 38z on the key input device 38 is operated, the CPU 36 sets, as the target display magnification, a magnification that differs from the current display magnification by a predetermined small amount in the desired direction, and calculates the optical zoom magnification, RAW zoom magnification, and YUV zoom magnification corresponding to the target display magnification.

  Subsequently, the CPU 36 sets the calculated optical zoom magnification, RAW zoom magnification, and YUV zoom magnification in the driver 20a, the zoom circuit 24z, and the zoom circuit 44z, respectively, in order to execute zoom processing. As a result, a through image having a target display magnification is output from the LCD monitor 48.

  Thereafter, the CPU 36 changes the settings of the motion detection areas MD1 to MD9, the focus area FA, and the photometry/white balance area EWA so as to conform to the RAW zoom magnification set in the zoom circuit 24z and the YUV zoom magnification set in the zoom circuit 44z. As a result, the accuracy of the camera shake correction, continuous AF, and AE/AWB processing is improved.

  When the movie button 38m on the key input device 38 is operated, the CPU 36 gives a recording start command to the I / F 50 in order to start the moving image recording process. The I / F 50 creates a moving image file in the recording medium 52, periodically reads out the image data stored in the moving image area 42b, and writes the read image data into the moving image file in the recording medium 52. When the movie button 38m is operated again, a recording stop command is given to the I / F 50. The I / F 50 ends reading of the image data from the moving image area 42b and closes the writing destination moving image file. Thereby, the moving image file is completed.

  When the shutter button 38s on the key input device 38 is operated during the moving image recording process, the CPU 36 gives a still image extraction command to the post-processing circuit 44 and a still image recording command to the I/F 50 in order to execute the parallel still image recording process. The post-processing circuit 44 outputs, from the still image output terminal S_OUT, one frame of image data representing the object scene at the time the shutter button 38s was operated. The output image data is written into the still image area 42c of the SDRAM 42 through the memory control circuit 40. The I/F 50 reads the image data stored in the still image area 42c through the memory control circuit 40 and creates a still image file containing the read image data in the recording medium 52.

  On the other hand, when the shutter button 38s is operated while the moving image recording process is suspended, the CPU 36 sets a RAW zoom magnification and a YUV zoom magnification both indicating “1.0” in the zoom circuits 24z and 44z in order to execute the single still image recording process, and gives a still image processing command and a still image recording command to the preprocessing circuit 24, the post-processing circuit 44, and the I/F 50.

  Thus, one frame of raw image data having a resolution of horizontal 3072 pixels × vertical 1728 pixels is output from the preprocessing circuit 24 and written into the raw image area 42a of the SDRAM 42.

  The post-processing circuit 44 reads raw image data having the same resolution from the raw image area 42a, and outputs YUV-format image data based on the read raw image data from the still image output terminal S_OUT. The output image data is written into the still image area 42c of the SDRAM 42 through the memory control circuit 40.

  The I/F 50 reads the image data stored in the still image area 42c through the memory control circuit 40, and creates a still image file containing the read image data in the recording medium 52. When the recording is completed, the above-described through-image processing is resumed.

  The image sensor 18 is configured as shown in FIG. 3. Charges representing the object scene image are generated by a plurality of light receiving elements 56, 56, ... arranged in a matrix; each light receiving element 56 corresponds to one pixel described above. The light receiving elements 56, 56, ... arranged in the vertical direction are each connected to a common CDS circuit 62 via an A/D converter 58 and a row selection switch 60. The electric charge generated by a light receiving element 56 is converted into 12-bit digital data by the A/D converter 58. The vertical scanning circuit 66 turns the row selection switches 60, 60, ... on and off pixel by pixel in a raster-scanning manner in order to expose the imaging surface by the focal plane electronic shutter method. Noise contained in the pixel data that has passed through a row selection switch 60 in the on state is removed by the CDS circuit 62.

  The column selection switch 641 is assigned to the CDS circuit 62 in the N * M + 1 column (M: 0, 1, 2, 3,...), And the column selection switch 642 is assigned to the CDS circuit 62 in the N * M + 2 column. Similarly, the column selection switch 64N is assigned to the CDS circuit 62 in the N * M + Nth column.

  The horizontal scanning circuit 68 turns on the column selection switch 641 at the timing when a row selection switch 60 in an N*M+1 column is turned on, and turns on the column selection switch 642 at the timing when a row selection switch 60 in an N*M+2 column is turned on. Similarly, the horizontal scanning circuit 68 turns on the column selection switch 64N at the timing when a row selection switch 60 in an N*M+N column is turned on.

  As a result, partial raw image data based on the charges generated by the light receiving elements 56 in the N*M+1th columns is output from the channel CH1, partial raw image data based on the charges generated by the light receiving elements 56 in the N*M+2th columns is output from the channel CH2, and likewise partial raw image data based on the charges generated by the light receiving elements 56 in the N*M+Nth columns is output from the channel CHN.
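This column interleave amounts to strided de-multiplexing; a numpy sketch follows (the array shapes are our assumptions, the N*M+k column rule is the patent's):

```python
# Sketch: reassembling a frame from N column-interleaved channels, where
# channel CH(k+1) carries sensor columns N*M + k + 1 (M = 0, 1, 2, ...).
import numpy as np

N = 4                                             # number of channels
channels = [np.zeros((1728, 3072 // N), dtype=np.uint16) for _ in range(N)]

frame = np.empty((1728, 3072), dtype=np.uint16)   # reassembled raw frame
for k in range(N):
    frame[:, k::N] = channels[k]                  # interleaved columns
```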

  The preprocessing circuit 24 is configured as shown in FIG. The partial raw image data of the channel CH1 is given to the preprocessing block PB1, and the partial raw image data of the channel CH2 is given to the preprocessing block PB2. The partial raw image data of the channel CHN is given to the preprocessing block PBN.

  The preprocessing block PB1 includes an LPF 701, a reduction zoom circuit 721, and an edge adjustment circuit 741, and the preprocessing block PB2 includes an LPF 702, a reduction zoom circuit 722, and an edge adjustment circuit 742. The preprocessing block PBN includes an LPF 70N, a reduction zoom circuit 72N, and an edge adjustment circuit 74N. The zoom circuit 24z shown in FIG. 1 is configured by the reduction zoom circuits 721 to 72N.

  Therefore, the partial raw image data of the channels are subjected to the series of noise removal, reduction zoom, and edge adjustment processes in parallel with one another. The partial raw image data subjected to noise removal is output to the evaluation image creation circuits 26 and 28, and the partial raw image data subjected to edge adjustment is written into the SRAM 78. The controller 76 issues a write request to the memory control circuit 40 each time the amount of data stored in the SRAM 78 reaches a threshold value, and outputs a predetermined amount of raw image data toward the memory control circuit 40 when an approval signal is returned from the request destination.

  The zoom magnification setting in response to operation of the zoom button 38z, and the setting of the motion detection areas MD1 to MD9, the focus area FA, and the photometry/white balance area EWA with reference to the RAW zoom magnification, are executed in the manner described below. When the target display magnification is set, the optical zoom magnification, the RAW zoom magnification, and the YUV zoom magnification are calculated with reference to the graph shown in FIG. 5. Data corresponding to the graph shown in FIG. 5 is stored in the flash memory 54 as graph data GRD1.

  According to FIG. 5, the optical zoom magnification indicates “1.0” when the zoom lens 12 is positioned at the wide end, and indicates “10.0” when the zoom lens 12 is positioned at the tele end. The optical zoom magnification also increases linearly as the zoom lens 12 moves from the wide end to the tele end, and maintains “10.0” in the range where the display magnification exceeds “16”. The YUV zoom magnification maintains “1.0” when the display magnification is “16” or less, and increases linearly to “10.0” when the display magnification exceeds “16”.

  The RAW zoom magnification indicates “0.625” corresponding to display magnification = 1.0 (zoom lens 12 = wide end) and “1.0” corresponding to display magnification = 16 (zoom lens 12 = tele end). The RAW zoom magnification increases linearly as the display magnification goes from “1.0” to “16”, and maintains “1.0” in the range where the display magnification exceeds “16”.

  When the target display magnification is set to “1.0”, “1.0” is calculated as the optical zoom magnification, “0.625” as the RAW zoom magnification, and “1.0” as the YUV zoom magnification. When the target display magnification is set to “8.0”, “5.0” is calculated as the optical zoom magnification, “0.7692” as the RAW zoom magnification, and “1.0” as the YUV zoom magnification. Further, when the target display magnification is set to “16”, “10.0” is calculated as the optical zoom magnification, “1.0” as the RAW zoom magnification, and “1.0” as the YUV zoom magnification.
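Since the FIG. 5 curves are stored as tabulated graph data GRD1, the lookup can be pictured as interpolation over the sample points quoted above. In the sketch below, the linear interpolation between those points and the YUV value of display/16 beyond the tele end are assumptions consistent with, but not stated as formulas in, the description.

```python
# Sketch of the GRD1 lookup; interpolation details are assumptions.
import numpy as np

# display magnification -> (optical, RAW, YUV), sample points quoted above
GRD1 = {1.0: (1.0, 0.625, 1.0),
        8.0: (5.0, 0.7692, 1.0),
        16.0: (10.0, 1.0, 1.0)}

def magnifications(display: float):
    if display > 16.0:
        # beyond the tele end only the YUV (enlargement) zoom changes
        return 10.0, 1.0, display / 16.0
    xs = sorted(GRD1)
    opt = float(np.interp(display, xs, [GRD1[x][0] for x in xs]))
    raw = float(np.interp(display, xs, [GRD1[x][1] for x in xs]))
    return opt, raw, 1.0

print(magnifications(8.0))    # (5.0, 0.7692, 1.0)
print(magnifications(32.0))   # (10.0, 1.0, 2.0)
```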

  The optical zoom magnification, RAW zoom magnification, and YUV zoom magnification calculated in this way are set in the driver 20a, the zoom circuit 24z, and the zoom circuit 44z, respectively. In addition, the motion detection areas MD1 to MD9, the focus area FA, and the photometry / white balance area EWA are allocated to the imaging surface in different modes depending on the set RAW zoom magnification.

  When the raw image data shown in FIG. 6A is output from the image sensor 18 corresponding to the optical zoom magnification “1.0”, the preprocessing circuit 24 outputs raw image data having the size (= horizontal 1935 pixels × vertical 1088 pixels) shown in FIG. 6B. The post-processing circuit 44 performs post-processing on the part of the raw image data shown in FIG. 6B that belongs to the extraction area EX (size: horizontal 1920 pixels × vertical 1080 pixels). Since the YUV zoom magnification is “1.0”, an image having an angle of view corresponding to the extraction area EX is displayed on the LCD monitor 48.

  As shown in FIG. 6C, the focus area FA is assigned to the entire EIS/AF evaluation image, and the motion detection areas MD1 to MD9 are assigned on the evaluation image so as to have a predetermined relationship with the focus area FA. Further, as shown in FIG. 6D, the photometry/white balance area EWA is assigned to the entire AE/AWB evaluation image.

  When the optical zoom magnification is changed to “5.0”, the raw image data shown in FIG. 7A is output from the image sensor 18. Since the RAW zoom magnification is changed to “0.7692”, the preprocessing circuit 24 outputs raw image data having the size (= horizontal 2363 pixels × vertical 1329 pixels) shown in FIG. 7B. The post-processing circuit 44 performs post-processing on the part of the raw image data shown in FIG. 7B that belongs to the extraction area EX; the YUV zoom magnification remains “1.0”. As a result, a through image having an angle of view corresponding to the extraction area EX shown in FIG. 7B is displayed on the LCD monitor 48.

  Referring to FIG. 7C, a focus area FA having a size corresponding to horizontal 1258 pixels × vertical 697 pixels is assigned to the center of the EIS / AF evaluation image. The motion detection areas MD1 to MD9 are allocated on the EIS / AF evaluation image so as to have a predetermined relationship with the focus area FA. Further, referring to FIG. 7D, the photometry / white balance area EWA has horizontal 590 pixels × vertical 1329 pixels, and is allocated on the AE / AWB evaluation image.

  When the optical zoom magnification is changed to “10.0”, the raw image data shown in FIG. 8A is output from the image sensor 18. The RAW zoom magnification is changed to “1.0”, and raw image data having the size (= horizontal 3072 pixels × vertical 1728 pixels) shown in FIG. 8B is output from the preprocessing circuit 24. The post-processing circuit 44 performs post-processing on the part of the raw image data shown in FIG. 8B that belongs to the extraction area EX; the YUV zoom magnification remains “1.0”. As a result, a through image having an angle of view corresponding to the extraction area EX shown in FIG. 8B is displayed on the LCD monitor 48.

  Referring to FIG. 8C, a focus area FA having a size corresponding to horizontal 968 pixels × vertical 540 pixels is assigned to the center of the EIS / AF evaluation image. The motion detection areas MD1 to MD9 are allocated on the EIS / AF evaluation image so as to have a predetermined relationship with the focus area FA. Further, referring to FIG. 8D, the photometric / white balance area EWA has horizontal 484 pixels × vertical 1080 pixels, and is allocated on the AE / AWB evaluation image.

  Thus, the RAW zoom magnification increases as the optical zoom magnification increases and decreases as the optical zoom magnification decreases. The angle of view of the object scene image based on the raw image data extracted by the post-processing circuit 44 therefore decreases at a rate exceeding the rate of decrease attributable to the increase in optical zoom magnification, and increases at a rate exceeding the rate of increase attributable to the decrease in optical zoom magnification. As a result, a wide angle of view is secured in the low zoom magnification range regardless of the high resolution of the imaging surface, while the zoom effect increases in the high zoom magnification range. In this way, the reproduction performance of the object scene image is improved.

  As described above, the image sensor 18 exposes the imaging surface by the focal plane electronic shutter method, so the exposure timing varies from one horizontal pixel row to the next. Consequently, horizontal focal plane distortion appears in the raw image data stored in the raw image area 42a when the imaging surface moves horizontally (see FIG. 9). The CPU 36 therefore changes the shape of the extraction area EX based on the combined motion vectors UVC, MVC, and LVC fetched from the motion detection circuit 30 so that the focal plane distortion is suppressed.

  When focal plane distortion occurs in the horizontal direction, the inclination amounts of the right side and the left side of the rectangle representing the extraction area EX are changed as shown in FIGS. 10A to 10D. That is, when the imaging surface moves to the right, the exposure timing of the imaging surface changes in the manner shown in the left column of FIG. 10A, so the right side and the left side of the extraction area EX are inclined as shown in the right column of FIG. 10A. When the imaging surface moves to the left, the exposure timing changes in the manner shown in the left column of FIG. 10B, so the right side and the left side of the extraction area EX are inclined as shown in the right column of FIG. 10B.

  Furthermore, when the moving direction of the imaging surface reverses from right to left, the exposure timing of the imaging surface changes in the manner shown in the left column of FIG. 10C, so the right side and the left side of the extraction area EX are inclined as shown in the right column of FIG. 10C. When the moving direction reverses from left to right, the exposure timing changes in the manner shown in the left column of FIG. 10D, so the right side and the left side of the extraction area EX are inclined as shown in the right column of FIG. 10D.

  The amount of inclination is determined based on the combined motion vectors UVC, MVC, and LVC output from the motion detection circuit 30, as shown in FIG. 11. First, the horizontal components of the combined motion vectors UVC, MVC, and LVC are specified as “UVCx”, “MVCx”, and “LVCx”. Next, the Y coordinate of the horizontal component UVCx is determined corresponding to the vertical position of the motion detection areas MD1 to MD3, the Y coordinate of the horizontal component MVCx corresponding to the vertical position of the motion detection areas MD4 to MD6, and the Y coordinate of the horizontal component LVCx corresponding to the vertical position of the motion detection areas MD7 to MD9.

  Further, the X coordinates of the tips of the horizontal components UVCx, MVCx, and LVCx are determined so that the X coordinate of the end of the horizontal component MVCx matches the X coordinate of the tip of the horizontal component UVCx, and the X coordinate of the end of the horizontal component LVCx matches the X coordinate of the tip of the horizontal component MVCx. Thereafter, an approximate function representing a straight line or curve connecting the three XY coordinates thus determined is calculated.

  Accordingly, when the horizontal components UVCx, MVCx, and LVCx have the magnitudes shown in the left column of FIG. 11A, three XY coordinates are determined in the manner shown in the center column of FIG. 11A, and an approximate function having the slope shown in the right column of FIG. 11A is calculated. When the horizontal components UVCx, MVCx, and LVCx have the magnitudes shown in the left column of FIG. 11B, three XY coordinates are determined in the manner shown in the center column of FIG. 11B, and an approximate function having the slope shown in the right column of FIG. 11B is calculated. The shape of the extraction area EX is changed with reference to the approximate function thus calculated.
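The chaining and fitting can be sketched as follows; the row Y positions and the least-squares straight-line fit are assumptions (the patent allows either a line or a curve):

```python
# Sketch: chain UVCx, MVCx, LVCx tip-to-tail at the upper/middle/lower
# detection-row heights, then fit a line; its slope tilts the EX sides.
import numpy as np

def side_slope(uvcx: float, mvcx: float, lvcx: float,
               ys=(216.0, 540.0, 864.0)):   # assumed row Y positions
    xs = np.cumsum([uvcx, mvcx, lvcx])      # chained tip X coordinates
    slope, _intercept = np.polyfit(ys, xs, deg=1)
    return slope    # horizontal lean, in pixels per vertical pixel

print(side_slope(2.0, 3.0, 1.0))
```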

  When camera shake occurs, the imaging surface vibrates at up to about 10 Hz. The extraction area EX then acquires a slope of up to 5 horizontal pixels at the display magnification “1.0”, up to 40 horizontal pixels at the display magnification “8.0”, and up to 80 horizontal pixels at the display magnification “16” (see FIGS. 12A to 12C).

  On the other hand, although the size of the raw image data is changed in response to operation of the zoom button 38z, it exceeds the size of the extraction area EX over the entire zoom range. In particular, when the display magnification “1.0” is set, a margin of 15 pixels is secured in the horizontal direction as shown in FIG. 12A. As a result, focal plane distortion can be corrected even at the lowest display magnification.

  However, the movable range of the extraction area EX decreases in correspondence with such deformation of the extraction area EX. That is, the movable range of the extraction area decreases by a maximum of 15 pixels corresponding to the display magnification “1.0”, by a maximum of 42 pixels corresponding to the display magnification “8.0”, and by a maximum of 64 horizontal pixels corresponding to the display magnification “16”.

  Therefore, the CPU 36 specifies the horizontal margin corresponding to the target display magnification with reference to the graph shown in FIG. 13, and determines the movement amount of the extraction area EX with reference to the specified horizontal margin. The movement amount of the extraction area corresponds to the amount obtained by subtracting the horizontal margin from the horizontal component of the overall motion vector. As a result, a situation in which part of the extraction area EX deviates from the raw image area 42a is avoided. Data corresponding to the graph shown in FIG. 13 is stored in the flash memory 54 as graph data GRD2.
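Reading the subtraction above as clamping the shift against the margin that remains after tilting gives a sketch like this (an interpretation, not the patent's wording):

```python
# Sketch: limit the horizontal shift of EX so the tilted extraction area
# never leaves the raw image; treating S31 as a clamp is our reading.
def limited_move(motion_x: float, margin: float, tilt_pixels: float) -> float:
    usable = max(margin - abs(tilt_pixels), 0.0)   # tilt eats the margin
    return max(-usable, min(motion_x, usable))

print(limited_move(12.0, 15.0, 5.0))   # 10.0 at display magnification "1.0"
```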

  The CPU 36 executes in parallel a plurality of tasks including an imaging task shown in FIG. 14, a camera shake correction task shown in FIGS. 15 to 16, and a zoom control task shown in FIGS. 17 to 18. Note that control programs corresponding to these tasks are stored in the flash memory 54.

  Referring to FIG. 14, through-image processing is started in step S1, and continuous AF processing is started in step S3. As a result of the processing in step S1, raw image data having a resolution of horizontal 3072 pixels × vertical 1728 pixels is output from the image sensor 18 every 1/30 seconds, and a through image based on the raw image data is output on the LCD monitor 48. As a result of the processing in step S3, the position of the focus lens 14 is adjusted continuously.

  In step S5, AE / AWB processing is executed. As a result, the brightness and white balance of the through image are appropriately adjusted. In step S7, it is determined whether or not the movie button 38m has been operated. In step S9, it is determined whether or not the shutter button 38s has been operated.

  When the movie button 38m is operated, the process proceeds from step S7 to step S11, and it is determined whether or not the moving image recording process is being executed. If “NO” here, the moving image recording process is started in a step S13, while if “YES”, the moving image recording process is stopped in a step S15. When the process of step S13 or S15 is completed, the process returns to step S5. When the shutter button 38s is operated, the single still image recording process or the parallel still image recording process is executed in step S17, and then the process returns to step S5.

  Referring to FIG. 15, in step S21 it is determined whether or not the vertical synchronization signal Vsync has been generated. If YES, the combined motion vectors UVC, MVC, and LVC are fetched from the motion detection circuit 30 in step S23. In step S25, an overall motion vector is created based on the fetched combined motion vectors UVC, MVC, and LVC. In step S27, the area shape changing process is executed, and in the subsequent step S29 it is determined based on the overall motion vector whether or not the current movement of the imaging surface is caused by a pan/tilt operation. If YES, the process returns directly to step S21. If NO, the process proceeds to step S31, and the movement amount of the extraction area EX is calculated with reference to the overall motion vector created in step S25 and the horizontal margin specified in step S63 described later. The movement amount corresponds to the amount obtained by subtracting the horizontal margin from the horizontal component of the overall motion vector. In step S33, the extraction area EX is moved according to the calculated movement amount, and the process then returns to step S21.

  The area shape changing process in step S27 is executed according to a subroutine shown in FIG. First, in step S41, the horizontal components of the combined motion vectors UVC, MVC, and LVC are specified as “UVCx”, “MVCx”, and “LVCx”. In step S43, the Y coordinate of the horizontal component UVCx is determined corresponding to the vertical position of the motion detection areas MD1 to MD3, the Y coordinate of the horizontal component MVCx is determined corresponding to the vertical position of the motion detection areas MD4 to MD6, Then, the Y coordinate of the horizontal component LVCx is determined corresponding to the vertical position of the motion detection areas MD7 to MD9.

  In step S45, the X coordinates of the tips of the horizontal components UVCx, MVCx, and LVCx are determined so that the X coordinate of the end of the horizontal component MVCx matches the X coordinate of the tip of the horizontal component UVCx, and the X coordinate of the end of the horizontal component LVCx matches the X coordinate of the tip of the horizontal component MVCx. In step S47, an approximate function representing a straight line or curve connecting the three XY coordinates thus determined is calculated, and in step S49 the inclination amounts of the right side and the left side of the extraction area EX are determined according to the calculated approximate function. When the process of step S49 is completed, the process returns to the upper-layer routine.

  Referring to FIG. 17, in step S51, the zoom setting is initialized, and in step S53, it is determined whether or not the zoom button 38z has been operated. When the determination result is updated from NO to YES, the process proceeds to step S55, and a different display magnification is set as the target display magnification according to the operation mode of the zoom button 38z. In step S57, the optical zoom magnification, RAW zoom magnification, and YUV zoom magnification corresponding to the target display magnification are calculated with reference to the graph shown in FIG.

  In step S59, the calculated optical zoom magnification, RAW zoom magnification, and YUV zoom magnification are set in the driver 20a, zoom circuit 24z, and zoom circuit 44z, respectively, in order to execute zoom processing. As a result, a through image having a target display magnification is output from the LCD monitor 48.

  In step S61, the settings of the motion detection areas MD1 to MD9, the focus area FA, and the photometry / white balance area EWA are changed so as to conform to the RAW zoom magnification set in step S59. As a result, camera shake correction processing, continuous AF processing, and AE / AWB processing are executed with high accuracy. In step S63, the horizontal margin corresponding to the target display magnification is specified with reference to the graph shown in FIG. When the process of step S63 is completed, the process returns to step S53.

  The zoom process in step S59 is executed according to the subroutine shown in FIG. 18. First, it is determined in step S71 whether both the current display magnification and the target display magnification are in the range of 1.0 to 16 times, and it is determined in step S73 whether both are in the range exceeding 16 times.

  If YES in step S71, the optical zoom magnification is changed in step S75. When the operation for changing the optical zoom magnification is completed, YES is determined in step S77, and the RAW zoom magnification is changed in step S79. If YES in step S73, the YUV zoom magnification is changed in step S81. If NO in step S73, the current display magnification and the target display magnification are considered to cross 16 times, and a corresponding magnification changing process is executed in step S83. When the process of step S79, S81, or S83 is completed, the process returns to the upper-hierarchy routine.
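The S71-S83 dispatch can be summarized in a few lines; splitting a request that crosses the 16x boundary into two sub-steps is our reading of step S83:

```python
# Sketch of the FIG. 18 dispatch (boundary handling is an assumption).
def zoom_process(current: float, target: float) -> list:
    steps = []
    if max(current, target) <= 16.0:        # S71 -> YES
        steps += ["optical zoom (S75)", "RAW zoom (S79)"]
    elif min(current, target) >= 16.0:      # S73 -> YES
        steps += ["YUV zoom (S81)"]
    else:                                   # crossing 16x: S83
        steps += zoom_process(current, 16.0)
        steps += zoom_process(16.0, target)
    return steps

print(zoom_process(8.0, 32.0))
# ['optical zoom (S75)', 'RAW zoom (S79)', 'YUV zoom (S81)']
```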

  As can be seen from the above description, the image sensor 18 exposes the imaging surface by the focal plane electronic shutter method and repeatedly outputs an image representing the object scene. The image output from the image sensor 18 is reduced by the zoom circuit 24z provided in the preprocessing circuit 24. The memory control circuit 40 extracts, from each reduced image created by the zoom circuit 24z, the part belonging to the extraction area EX of a predetermined size. A moving image based on the extracted reduced images is displayed on the LCD monitor 48.

  The CPU 36 changes the shape of the extraction area EX so that focal plane distortion is suppressed (S27), and changes the position of the extraction area EX so that the movement of the imaging surface in the direction orthogonal to the optical axis is compensated. (S33). Further, when receiving a zoom operation, the CPU 36 changes the size of the reduced image created by the zoom circuit 24z within a range exceeding the predetermined size (S79).

  By changing the size of the reduced image within a range exceeding the predetermined size, image quality correction processing that affects the angle of view, such as movement and/or deformation of the extraction area EX, can be performed over the entire zoom range. The reproduction performance of the object scene image is thereby improved. Furthermore, the time required for the processing is shortened because the camera shake correction and the focal plane distortion correction are executed simultaneously.

  In this embodiment, a CMOS type image sensor is used, but a CCD type image sensor may be used instead.

DESCRIPTION OF SYMBOLS
10 ... Digital video camera
18 ... Image sensor
24 ... Preprocessing circuit
30 ... Motion detection circuit
32 ... AF evaluation circuit
34 ... AE/AWB evaluation circuit
36 ... CPU

Claims (9)

  1. A video camera comprising:
    imaging means for repeatedly outputting an image representing an object scene;
    reduction means for reducing the image output from the imaging means;
    extraction means for extracting a part of a reduced image belonging to an extraction area of a predetermined size from the reduced images created by the reduction means; and
    size changing means for changing, when a zoom operation is accepted, the size of the reduced image created by the reduction means within a range exceeding the predetermined size.
  2. The video camera according to claim 1, further comprising: a zoom lens provided in front of the imaging means; and magnification changing means for changing the magnification of the zoom lens in the same direction as the changing direction of the size changing means in relation to the change processing of the size changing means.
  3. The video camera according to claim 1 or 2, further comprising: detection means for detecting movement of the imaging surface in a direction orthogonal to the optical axis; and position changing means for changing the position of the extraction area so that the movement detected by the detection means is compensated.
  4. The video camera according to claim 3, wherein the imaging means includes exposure means for exposing the imaging surface by a focal plane electronic shutter system, the video camera further comprising shape changing means for changing the shape of the extraction area based on the motion detected by the detection means so that focal plane distortion is suppressed.
  5. The video camera according to claim 4, wherein the extraction area is a rectangular area having a left side and a right side, and the shape changing means changes the inclination amounts of the right side and the left side based on a horizontal component of the motion detected by the detection means.
  6. The video camera according to claim 5, further comprising limiting means for limiting the change amount of the position changing means with reference to the inclination amount changed by the shape changing means.
  7. The video camera according to claim 3, further comprising stopping means for stopping the position changing means when the movement detected by the detection means corresponds to a pan/tilt operation of the imaging surface.
  8. The video camera according to claim 1, wherein the image output from the imaging means is an image in which each pixel has color information of only one of a plurality of colors, the video camera further comprising conversion means for converting the reduced image extracted by the extraction means into an image in which each pixel has the color information of all of the plurality of colors.
  9. The video camera according to claim 1, further comprising output means for outputting a moving image based on the reduced image extracted by the extraction means.
JP2009008455A 2009-01-19 2009-01-19 Video camera Active JP5127731B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009008455A JP5127731B2 (en) 2009-01-19 2009-01-19 Video camera

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2009008455A JP5127731B2 (en) 2009-01-19 2009-01-19 Video camera
US12/390,585 US8253812B2 (en) 2008-02-23 2009-02-23 Video camera which adopts a focal-plane electronic shutter system
US13/555,373 US20120287314A1 (en) 2008-02-23 2012-07-23 Video camera which adopts a focal-plane electronic shutter system

Publications (2)

Publication Number Publication Date
JP2010166462A JP2010166462A (en) 2010-07-29
JP5127731B2 true JP5127731B2 (en) 2013-01-23

Family

ID=42582257

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009008455A Active JP5127731B2 (en) 2009-01-19 2009-01-19 Video camera

Country Status (1)

Country Link
JP (1) JP5127731B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5708097B2 (en) * 2011-03-18 2015-04-30 株式会社リコー Imaging apparatus, imaging method, and imaging program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3925415B2 (en) * 2003-01-22 2007-06-06 ソニー株式会社 Image processing apparatus and method, recording medium, and program
JP4390068B2 (en) * 2004-12-28 2009-12-24 ソニー株式会社 Method for correcting distortion of captured image signal and distortion correction apparatus for captured image signal
JP2007104452A (en) * 2005-10-06 2007-04-19 Sony Corp Imaging apparatus
JP4769567B2 (en) * 2005-12-16 2011-09-07 キヤノン株式会社 Imaging device, imaging device control method, computer program, and storage medium

Also Published As

Publication number Publication date
JP2010166462A (en) 2010-07-29


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20111222

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120927

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121002

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121030

R151 Written notification of patent or utility model registration

Ref document number: 5127731

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151109

Year of fee payment: 3


S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113


R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350