JP5518150B2 - Image processing method, imaging apparatus, control method thereof, and program - Google Patents

Info

Publication number
JP5518150B2
Authority
JP
Japan
Prior art keywords
image data
brightness
subject
correction
step
Prior art date
Legal status
Active
Application number
JP2012190498A
Other languages
Japanese (ja)
Other versions
JP2013013131A (en)
JP2013013131A5 (en)
Inventor
宣人 松田
Original Assignee
キヤノン株式会社
Priority date
Filing date
Publication date
Application filed by キヤノン株式会社
Priority to JP2012190498A
Publication of JP2013013131A
Publication of JP2013013131A5
Application granted
Publication of JP5518150B2
Application status: Active
Anticipated expiration

Description

  The present invention relates to an imaging apparatus, and more particularly to an imaging apparatus having an image processing function for color correction of a video to be shot, a control method thereof, and a program.

  For example, some imaging apparatuses such as electronic cameras have a function that renders colors more vividly by increasing the gain in the saturation direction during shooting. Using this function, an image can be corrected to be more vivid than usual so that it approaches the perceptual impression a person has when viewing a vivid subject. In this case, however, if a dark subject is corrected with the same saturation criterion as a bright subject, the result is an image that differs from the impression seen by the human eye.

  For this reason, an imaging apparatus that performs correction so that the image becomes vivid when the object scene is bright has conventionally been known, as in Patent Document 1.

JP 2007-267170 A

However, in Patent Document 1, every subject is corrected to be vivid when shooting a bright scene. As a result, the gain in the saturation direction is increased even for a subject that has no color in the first place, with the drawback that color noise in the saturation direction becomes conspicuous without any correction effect being obtained.
An object of the present invention is to provide an imaging apparatus capable of solving the above-described problems of the prior art. In particular, an object of the present invention is to provide an imaging apparatus that can change the gain in the saturation direction according to the brightness of the subject even in a bright scene.

According to one configuration of the present invention, the image processing method of the present invention includes an acquisition step of acquiring image data; a determination step of determining, based on saturation information of the acquired image data, whether or not the subject of the image data is a vivid subject; and a correction step of, when it is determined that the subject of the image data is a vivid subject, correcting the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when it is a second brightness weaker than the first brightness, and of, when it is determined that the subject of the image data is not a vivid subject, either not correcting the vividness of the image data or correcting it with a correction width smaller than when the subject is determined to be a vivid subject.
According to another configuration of the present invention, the imaging apparatus of the present invention includes imaging means for capturing an image of a subject and outputting image data; determination means for determining, based on saturation information of the output image data, whether or not the subject of the image data is a vivid subject; and correction means that, when it is determined that the subject of the image data is a vivid subject, corrects the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when it is a second brightness weaker than the first brightness, and that, when it is determined that the subject of the image data is not a vivid subject, either does not correct the vividness of the image data or corrects it with a correction width smaller than when the subject is determined to be a vivid subject.

  According to the present invention described above, it is determined whether or not the subject is vivid, and if the subject is determined to be vivid, a more vivid image can be obtained by increasing the gain in the saturation direction. In addition, by taking the brightness of the subject into account and reducing the correction width as the subject becomes darker, the correction can be made to match the human perception that a dark subject is less likely to appear vivid even if it has the same saturation as a bright subject.

FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus according to the first embodiment of the present invention. FIG. 2 is a diagram showing a flowchart of the operation procedure of the imaging apparatus according to the first embodiment of the present invention. FIG. 3 is a block diagram showing the configuration of an imaging apparatus according to the second embodiment of the present invention. FIG. 4 is a diagram showing a flowchart of the operation procedure of the imaging apparatus according to the second embodiment of the present invention.

<First Embodiment>
First, a first embodiment of the present invention will be described by taking a digital video camera having the configuration shown in FIG. 1 as an example. However, the imaging apparatus of the present invention is not limited to this, and may be another apparatus having a configuration in which an image of a subject is captured by an imaging unit to obtain an image signal.

  In FIG. 1, light passing through the lens 101 is limited in quantity by the diaphragm 102, then enters the CCD 104 and is converted into an electrical signal corresponding to the incident light quantity. The analog signal output from the CCD 104 is amplified and converted into digital data by an AFE (Analog Front End) 106, and the image generation circuit 107 performs the processing necessary to form an image. The image generation circuit 107 can be configured to perform, for example, interpolation processing and white balance processing. However, the present invention is not limited to this, and other image processing may be performed as necessary.

  Digital image data output from the image generation circuit 107 is input to the color correction circuit 108. The color correction circuit 108 performs correction that increases the saturation-direction gain of colors determined by a microcomputer 111, making the colors vivid. The image data color-corrected by the color correction circuit 108 is input to the color gamut detection circuit 109. Here, the image is divided into mesh-like frames (for example, 8 × 8 blocks), and the integrated values of the luminance and the saturation of the pixels existing in each frame (hereinafter referred to as luminance information and saturation information, respectively) are obtained. The integrated saturation values obtained by the color gamut detection circuit 109 are sent to the microcomputer 111 and used for analysis of the input image. In this embodiment, since the color correction circuit 108 is located before the color gamut detection circuit 109, hunting, in which the correction is repeatedly applied and removed, can occur between the analysis of the input image and the determination of the color correction, so hunting prevention must be taken into account. This will be described later.
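
As an illustration of what the color gamut detection circuit 109 computes, the following sketch integrates luminance and saturation over an 8 × 8 grid of blocks. It assumes the image is available as Y/Cb/Cr planes and that per-pixel saturation is taken as the magnitude of the color difference signals; the function name and array layout are illustrative and are not taken from the patent.

```python
import numpy as np

def block_statistics(y, cb, cr, blocks=8):
    """Integrate luminance and saturation over a blocks x blocks grid.

    y, cb, cr: 2-D arrays holding the luminance and color difference planes.
    Returns (luminance_info, saturation_info), each a blocks x blocks array,
    mirroring the per-frame luminance/saturation information produced by the
    color gamut detection circuit.
    """
    h, w = y.shape
    # Per-pixel saturation approximated as the magnitude of (Cb, Cr).
    sat = np.sqrt(cb.astype(np.float64) ** 2 + cr.astype(np.float64) ** 2)
    lum_info = np.zeros((blocks, blocks))
    sat_info = np.zeros((blocks, blocks))
    for by in range(blocks):
        for bx in range(blocks):
            ys, ye = by * h // blocks, (by + 1) * h // blocks
            xs, xe = bx * w // blocks, (bx + 1) * w // blocks
            lum_info[by, bx] = y[ys:ye, xs:xe].sum()
            sat_info[by, bx] = sat[ys:ye, xs:xe].sum()
    return lum_info, sat_info
```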

  An aperture control circuit 103 that controls the aperture 102 is controlled by the microcomputer 111 to limit the incident light from the lens to a desired amount. The aperture value indicating the light amount limitation of the aperture 102 at this time is used by the microcomputer 111 for analysis of the input image. The shutter control circuit 105 controls the electronic shutter of the CCD 104. The shutter control circuit 105 is also controlled by the microcomputer 111, and the control amount is used for analysis of the input image. The gain applied to the image signal by the AFE 106 is also controlled by the microcomputer 111, and the gain value is used for analysis of the input image. The input image analysis performed by the microcomputer 111 will be described later.

  The image data output from the color gamut detection circuit 109 is converted by the image output circuit 110 according to the output format and output to the outside as a video signal.

That is, the imaging apparatus according to the present embodiment includes an acquisition unit that captures an image of a subject and acquires image data, and a correction unit that performs color correction on the image data.
Next, how a subject is shot with the imaging apparatus of the above-described embodiment and how the image data obtained thereby is analyzed and corrected will be described with reference to the flowchart of FIG. 2A. The operation of the flowchart is performed by the microcomputer 111 loading and executing a program created and stored for that purpose.
In step S201, the color gamut detection circuit 109 acquires saturation information for each frame. Next, in step S202, the average saturation of the entire image is calculated from the per-frame saturation information obtained in step S201, and it is determined whether the value is equal to or greater than the average saturation threshold Sall (the third threshold in the claims). If so, the process proceeds to step S203; if the value is less than the average saturation threshold Sall, the process proceeds to step S205. In step S203, the number of frames whose saturation information obtained in step S201 is equal to or greater than the saturation threshold Sth (the fourth threshold in the claims) is counted, the result is set as Cs, and the process proceeds to step S204. In step S204, Cs obtained in step S203 is compared with the high saturation area threshold Rs (the fifth threshold in the claims); if Cs is equal to or greater than Rs, the process proceeds to step S206, and if Cs is less than Rs, the process proceeds to step S205.

That is, in the image processing according to the present invention, evaluation values related to the saturation of the image are generated from the saturation information, and a determination means determines whether the generated evaluation values satisfy a predetermined condition. The determination means generates, as evaluation values, a first evaluation value representing the saturation of the entire image and a second evaluation value representing the saturation of the subject, and compares the first evaluation value with the third threshold. When the first evaluation value is determined to be greater than or equal to the third threshold, the second evaluation value is compared with the fifth threshold. The second evaluation value is generated by detecting pixel blocks having a saturation equal to or higher than the fourth threshold.
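
A minimal sketch of the determination in steps S202 to S204, assuming the per-block saturation information is a NumPy array such as the one produced by the earlier sketch; s_all, s_th and r_s stand for the third, fourth and fifth thresholds, and the names are illustrative.

```python
def is_vivid(sat_info, s_all, s_th, r_s):
    """Decide whether the subject is vivid (steps S202 to S204).

    sat_info: per-block saturation information (NumPy array).
    s_all: average saturation threshold (third threshold).
    s_th:  per-block saturation threshold (fourth threshold).
    r_s:   high saturation area threshold (fifth threshold).
    """
    first_eval = sat_info.mean()                 # S202: saturation of the whole image
    if first_eval < s_all:
        return False                             # proceed to S205 (not vivid)
    second_eval = int((sat_info >= s_th).sum())  # S203: count of high-saturation blocks, Cs
    return second_eval >= r_s                    # S204: compare Cs with Rs
```
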
In step S205, it can be determined from the result of step S202 or step S204 that the subject is not a vivid subject, so the vivid correction value (the correction width in the claims) C is set to 1; as a result, vivid correction is substantially not performed. However, time constant processing for ending the vivid correction is performed so that the saturation does not change abruptly. Specifically, over a plurality of frames, the value is brought gradually from the current correction value toward the correction value that should originally be set (the color correction value used when vivid correction is not performed). After the time constant processing is performed, the determination flow ends. In this embodiment, the vivid correction value C is set to 1 when the subject is determined not to be a vivid subject, but the present invention is not limited to this; the value only has to be smaller than when the subject is determined to be a vivid subject.
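
The text only states that the correction value is moved gradually toward its target over a plurality of frames; the sketch below assumes one simple per-frame linear step as a possible form of that time constant processing.

```python
def approach_target(current_c, target_c, frames_remaining):
    """One step of the time constant processing.

    Moves the correction value gradually toward its target so that the
    saturation does not change abruptly; called once per frame.
    """
    if frames_remaining <= 0:
        return target_c
    return current_c + (target_c - current_c) / frames_remaining
```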

  A correction value for vivid correction is calculated in the processing from step S206 onward. First, the brightness of the subject (the brightness of the image data in the claims) is calculated as Bv from the aperture diameter information of the aperture control circuit 103, the shutter speed information of the shutter control circuit 105, the gain information of the AFE 106, and the per-frame luminance information from the color gamut detection circuit 109 (step S206). Specifically, Bv can be calculated from the per-frame luminance information by taking into account the gain applied by the AFE 106, the attenuation due to the shutter speed of the shutter control circuit 105, and the attenuation due to the aperture diameter of the aperture control circuit 103. Next, in step S207, it is determined whether Bv calculated in step S206 is greater than or equal to the brightness threshold Bv1 (the first threshold in the claims). If it is greater than or equal to the threshold Bv1, the process proceeds to step S208, and the vivid correction value C1 is obtained by calculation. If Bv is less than the brightness threshold Bv1, the process proceeds to step S209.
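
The exact formula for Bv is not given in the text, so the sketch below assumes an APEX-style log2 normalisation in which the measured luminance is compensated for the aperture, the shutter speed and the AFE gain; the function and parameter names are illustrative.

```python
import math

def subject_brightness(mean_luminance, f_number, shutter_time, gain):
    """Rough estimate of the subject brightness Bv (step S206), assumed form.

    mean_luminance: average of the per-frame luminance information.
    f_number:       aperture value set by the aperture control circuit.
    shutter_time:   exposure time in seconds set by the shutter control circuit.
    gain:           linear gain applied by the AFE.
    """
    av = 2.0 * math.log2(f_number)   # aperture value
    tv = -math.log2(shutter_time)    # time (shutter) value
    gv = math.log2(gain)             # gain value
    # A wider aperture, longer exposure or higher gain raises the measured
    # luminance for the same scene, so those contributions are removed.
    return math.log2(max(mean_luminance, 1e-6)) + av + tv - gv
```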

That is, in the image processing of the present invention, when it is determined that the second evaluation value satisfies a predetermined condition, the brightness of the subject is calculated based on the imaging control value of the image data and the luminance information. The predetermined condition is that the second evaluation value is determined to be greater than or equal to the fifth threshold value.
In step S208, the constant value Cmax is set as the vivid correction value C (C1) according to the graph of FIG. 2B, and the process proceeds to step S212. The constant value Cmax is used because it is preferable to have an upper limit (a maximum correction value) so that the image does not look unnatural even when vivid correction is applied to a bright subject.

  In step S209, it is determined whether Bv calculated in step S206 is greater than or equal to the brightness threshold Bv2 (the second threshold in the claims). If it is greater than or equal to the threshold Bv2, the process proceeds to step S210, where the vivid correction value C2 is calculated. If Bv is less than the brightness threshold Bv2, the process proceeds to step S211, where the vivid correction value C3 is determined by calculation. In step S210, the correction value C2 is determined by interpolation that increases monotonically between Cmin and Cmax according to the graph of FIG. 2B. In this embodiment, the calculation uses the simplest linear interpolation, but any interpolation method may be used as long as it is monotonically increasing. The interpolation formula is as shown in FIG. 2C; the correction value C2 is calculated by the equation of FIG. 2C, and the process proceeds to step S212.

  In step S211, the constant value Cmin is set as the vivid correction value C3 according to the graph of FIG. 2B, and the process proceeds to step S212. The constant value Cmin is used because making the image less vivid than the original would run counter to the intent of the correction; that is, it is preferable that C3 ≥ 1.
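
Putting steps S207 to S211 together, the correction value C can be viewed as a clamped, monotonically increasing function of Bv. The sketch below assumes the simple linear interpolation mentioned for step S210, between the points (Bv2, Cmin) and (Bv1, Cmax); the actual equation of FIG. 2C is not reproduced in the text.

```python
def vivid_correction_value(bv, bv1, bv2, c_min, c_max):
    """Correction value C as a function of brightness Bv (steps S207 to S211).

    bv1 > bv2; C is clamped to c_max at or above bv1, to c_min below bv2,
    and interpolated monotonically in between.
    """
    if bv >= bv1:
        return c_max   # S208: bright subject, use the upper limit C1 = Cmax
    if bv < bv2:
        return c_min   # S211: dark subject, C3 = Cmin (>= 1)
    # S210: simplest linear interpolation between (bv2, c_min) and (bv1, c_max)
    return c_min + (c_max - c_min) * (bv - bv2) / (bv1 - bv2)
```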

  In step S212, since the vivid correction value C has been determined, vivid correction start time constant processing is performed, and the color correction circuit 108 performs the correction. The correction is performed by multiplying the gains applied to the color difference signals before correction, that is, the R-Y gain and the B-Y gain, by the correction value C determined in step S208, S210, or S211.
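
In other words, the correction of step S212 amounts to scaling the color difference gains by C, roughly as follows (the argument names are illustrative):

```python
def apply_vivid_correction(ry_gain, by_gain, c):
    """Step S212: multiply the R-Y and B-Y gains by the correction value C.

    C > 1 raises the saturation; C = 1 leaves the image unchanged.
    """
    return ry_gain * c, by_gain * c
```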

That is, in the image processing of the present invention, the brightness of the subject is determined by comparing the calculated brightness of the subject with the first and second thresholds, different saturation correction widths are determined according to the result of the brightness determination, and the image data is corrected based on the determined saturation correction width. In particular, when the subject of the image data is determined to be a vivid subject, the vividness of the image data is corrected with a larger correction width when the brightness of the image data is the first brightness than when it is the second brightness, which is weaker than the first brightness. For example, when the brightness of the subject is determined to be between the first threshold and the second threshold, a saturation correction value that varies within a predetermined range according to the brightness of the subject is generated. On the other hand, when the brightness is determined to be outside the region between the first threshold and the second threshold, the correction value is set to the maximum or minimum value of the predetermined range regardless of the brightness of the subject.
On the other hand, if the subject of the image data is determined not to be a vivid subject, the vividness of the image data is either not corrected or is corrected with a correction width smaller than when the subject of the image data is determined to be a vivid subject.

  Here, the hunting prevention mentioned above will be described. Hunting prevention in this embodiment is performed as follows: after the flowchart of FIG. 2A is executed and a correction value is determined, the thresholds used for the vividness determination the next time the flowchart is executed (hereinafter, "next time") are changed as follows.

  The saturation threshold Sth' to be used next time is obtained by multiplying the saturation threshold Sth before correction by the correction value C determined in step S208, S210, or S211 and by a hunting prevention coefficient that is smaller than 1 and effective in preventing hunting.

  The average saturation threshold Sall' to be used next time remains the same as the average saturation threshold Sall before correction. This is because, when vivid correction is performed, the overall saturation at the next determination is raised by the correction value, so the pre-correction value already has a sufficient anti-hunting effect.
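
The two update rules above can be summarised in a small sketch; d stands for the hunting prevention coefficient smaller than 1 mentioned in the text, and the function name is illustrative.

```python
def next_thresholds(s_th, s_all, c, d):
    """Thresholds for the next execution of the flow of FIG. 2A.

    Sth is scaled by the applied correction value C and by the hunting
    prevention coefficient d (< 1); Sall is left unchanged because the
    overall saturation itself rises by roughly C after correction.
    """
    return s_th * c * d, s_all
```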

  As described above, when the input image data is analyzed and the subject is determined to be vivid, the gain (correction value) in the saturation direction is increased and correction is performed so that the subject becomes vivid. The correction value is then varied according to the brightness calculated from the control parameters of the imaging apparatus, such as the lens aperture value, the sensor shutter speed, and the processing system gain. As a result, when the subject is dark, the correction amount becomes small and the image is corrected only slightly toward vividness, whereas when the subject is bright, the correction amount becomes large and the image is corrected more vividly. By changing the correction amount according to the brightness of the subject in this way, an image can be obtained that matches the human perception that a dark subject is less likely to appear vivid even if it has the same saturation as a bright subject. Furthermore, since the gain in the saturation direction is not changed unless the input image is vivid, color noise in the saturation direction does not increase.

In the present embodiment, the hue and saturation are calculated from the color difference signals, but the method of calculating the hue and saturation is not limited to this. For example, the image may first be converted into another color space such as the L*a*b* space, and the hue and saturation may then be calculated in that space.
In this embodiment, the example in which the color gamut detection circuit 109 divides the image signal into 8 × 8 blocks has been described, but any number of divisions may be used, down to individual pixels.
In the present embodiment, the case where the gain applied to the color difference signals is controlled based on the result of determining whether or not the scene is vivid has been described, but any control that corrects the color signal or the luminance signal based on the scene determination result may be performed.
In the above embodiment, whether or not the scene is vivid is determined based on two pieces of information, the average saturation value and the number of high saturation blocks; however, only one of them may be used as an evaluation value, and the determination method is not limited to these. For example, a vivid scene may be determined based on the saturation information of the region with the highest saturation in the image.
In the above-described embodiment, the case where the gain applied to the color difference signals is controlled based on the result of determining whether or not the scene is vivid has been described; however, any processing that corrects a color signal or a luminance signal based on whether or not the scene is vivid and thereby emphasizes vividness may be performed. That is, vividness may be emphasized by correcting something other than the saturation. Such processing includes, for example, correction that increases the luminance and contour enhancement processing. In this case, it is conceivable to perform, for each parameter's threshold or correction value, the same processing as that performed for the saturation threshold.

<Second Embodiment>
A second embodiment of the present invention will be described by taking an image pickup apparatus (digital camcorder) shown in FIG. 3 as an example. The same components as those in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted. In this embodiment, unlike the first embodiment, the color gamut detection circuit and the color correction circuit are arranged in this order.

  The digital image data output from the image generation circuit 107 is divided into mesh-like frames by the color gamut detection circuit 308, and the integrated values of the luminance and saturation of the pixels existing in each frame (luminance information and saturation information) are obtained. The integrated saturation values obtained by the color gamut detection circuit 308 are sent to the microcomputer 312 and used for analysis of the input image. The aperture control circuit 103 that controls the aperture 102 is controlled by the microcomputer 312 to limit the incident light from the lens to a desired amount. The aperture value indicating the light amount limitation of the aperture 102 at this time is used by the microcomputer 312 for analysis of the input image. The electronic shutter of the CCD 104 is controlled by the shutter control circuit 105. The shutter control circuit 105 is also controlled by the microcomputer 312, and the control amount is used for analysis of the input image. The gain applied by the AFE 106 is also controlled by the microcomputer 312, and the gain value is used for analysis of the input image. The input image analysis performed by the microcomputer 312 will be described later.

  Image data output from the color gamut detection circuit 308 is input to the color correction circuit 309. The color correction circuit 309 performs correction that increases the saturation-direction gain of colors determined by the microcomputer 312, making the colors vivid. The image data color-corrected by the color correction circuit 309 is converted into a format suitable for output by the image output circuit 310 and recorded in the recording medium 311.

Next, how a subject is shot with the imaging apparatus of the present embodiment and how the image data obtained thereby is analyzed and corrected will be described with reference to the flowchart of FIG. 4.
In step S401, the color gamut detection circuit 308 acquires saturation information for each frame. Next, in step S402, the saturation average value of the entire image is calculated from the saturation information for each frame obtained in step S401, and it is determined whether the value is equal to or greater than the average saturation threshold Sall. If it is greater than or equal to the threshold value Sall, the process proceeds to step S403, and if it is less than the threshold value Sall, the process proceeds to step S408.

  In step S403, the number of frames whose saturation information obtained in step S401 is equal to or greater than the saturation threshold Sth is counted, the result is set as Cs, and the process proceeds to step S404. In step S404, the color gamut detection circuit 308 acquires luminance information for each frame. Next, in step S405, the brightness of the subject is calculated using the data from step S404, the aperture value indicating the light amount limitation of the aperture 102, the control amount of the shutter control circuit 105, and the gain applied by the AFE 106, and is set as Bv. The calculation method for Bv is the same as in the first embodiment.

  In step S406, the high saturation area threshold Rs is determined for Bv calculated in step S405. Rs is determined according to the relationship between Bv and Rs shown in FIG. When Bv is large, the subject is bright, so the high saturation area threshold Rs is set low; when Bv is small, the subject is dark, so the high saturation area threshold Rs is set high, and the subject is not determined to be vivid unless the area of the high-saturation region is larger than when Bv is large.
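
The relationship between Bv and Rs is described only qualitatively (Rs low when Bv is large, high when Bv is small), so the sketch below assumes a clamped, linearly decreasing mapping between two brightness breakpoints; the breakpoints and the linear form are illustrative assumptions.

```python
def high_saturation_area_threshold(bv, bv_high, bv_low, r_min, r_max):
    """Step S406: choose the high saturation area threshold Rs from Bv.

    bv_high > bv_low; Rs falls from r_max (dark subjects) to r_min (bright
    subjects) as Bv increases, so a dark subject needs a larger high
    saturation area before it is judged vivid.
    """
    if bv >= bv_high:
        return r_min
    if bv < bv_low:
        return r_max
    return r_max - (r_max - r_min) * (bv - bv_low) / (bv_high - bv_low)
```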

  In step S407, Cs calculated in step S403 is compared with the high saturation area threshold Rs determined in step S406. If Cs is equal to or greater than Rs, the process proceeds to step S409 to perform vivid correction on the bright subject. On the other hand, if Cs is less than Rs, the process proceeds to step S408.

That is, in the image processing of the present embodiment, the fifth threshold is changed based on the calculated subject brightness and the first and second thresholds, and a comparison determination is performed between the changed fifth threshold and the second evaluation value.
In step S408, since it is determined that the subject is not a vivid subject, vivid correction is not performed. However, time constant processing for ending the vivid correction is performed so that the saturation does not change abruptly. Specifically, the value is brought gradually from the current correction value toward the correction value that should be set. After the time constant processing is performed, the determination flow ends.

In step S409, a vivid correction value C for correcting the input image more vividly is determined according to the brightness Bv. The relationship between Bv and correction value C in this case is shown in FIG. Determination of the correction value C in step S409 is the same as that in step S207 to step S211 in the first embodiment.
In step S410, the number of frames whose luminance information obtained in step S404 is equal to or lower than the luminance threshold Bth (the sixth threshold in the claims) is counted, the result is set as Cd, and the process proceeds to step S411. Next, in step S411, it is determined whether Cd counted in step S410 is greater than or equal to the low-luminance area threshold D (the seventh threshold in the claims). If so, it is determined that many areas on the screen are dark, and the process proceeds to step S412. Otherwise, the process proceeds to step S413.

  In step S412, for a subject with many low-luminance areas, the vivid correction value C is set to C3, that is, the minimum value Cmin, regardless of the Bv value. This handles the case where, when a dark but vivid subject is photographed, the Bv value changes merely because a slightly brighter object enters the frame, causing the subject's saturation to change frequently and producing an unnatural impression. After the vivid correction value is set to C3 (Cmin), the process proceeds to step S413.

  That is, in the image processing of the present embodiment, a third evaluation value representing the area of the low-luminance region in the image is generated based on the result of a comparison determination (equal to or below the sixth threshold) between the luminance information and the sixth threshold. Then, a comparison determination (greater than or equal to the seventh threshold) is performed between the third evaluation value and the seventh threshold, and the saturation correction value that has been determined is changed based on the result of this determination.
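
A minimal sketch of steps S410 to S412, assuming the per-block luminance information is a NumPy array; b_th and d_threshold stand for the sixth and seventh thresholds, and the names are illustrative.

```python
def apply_low_luminance_override(lum_info, c, b_th, d_threshold, c_min):
    """Force the correction value to Cmin for mostly dark scenes (S410 to S412).

    lum_info:    per-block luminance information (NumPy array).
    b_th:        per-block luminance threshold (sixth threshold).
    d_threshold: low-luminance block count threshold (seventh threshold).
    """
    cd = int((lum_info <= b_th).sum())  # S410: third evaluation value Cd
    if cd >= d_threshold:               # S411: many dark areas on the screen
        return c_min                    # S412: fix C to the minimum value
    return c                            # otherwise keep the value from S409
```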

  In step S413, since the vivid correction value C has been determined, vivid correction start time constant processing is performed, and the color correction circuit 309 performs the correction. The correction is performed by multiplying the gains applied to the color difference signals before correction, that is, the R-Y gain and the B-Y gain, by the correction value C determined in step S409 or step S412.

  At this time, hunting, in which the correction is repeatedly applied and removed, may occur. Therefore, the thresholds used for the vividness determination and the correction value determination the next time the operation of this embodiment is performed are changed as follows. In this embodiment, the color gamut detection circuit and the color correction circuit are arranged in this order, but hunting can still occur between the imaging control performed via the image data and the threshold determination.

  The saturation threshold Sth' used next time is a value obtained by multiplying the saturation threshold Sth before correction by a hunting prevention coefficient d that is smaller than 1 and effective in preventing hunting.

  The average saturation threshold Sall' used next time is also a value obtained by multiplying the average saturation threshold Sall before correction by the hunting prevention coefficient d.

  That is, in the present embodiment, the hunting prevention unit corrects the third threshold value and the fourth threshold value based on a predetermined hunting prevention coefficient.

  According to the above-described embodiment, the threshold for determining whether or not the subject is vivid is changed according to the brightness calculated from the control parameters of the imaging apparatus, such as the aperture value, shutter speed, and processing system gain. Specifically, the threshold for a dark subject is set larger than that for a bright subject. As a result, when the subject is dark, the correction toward greater vividness is less likely to be applied, and an image can be obtained that matches the human perception that a dark subject with the same saturation as a bright subject is less likely to appear vivid. Furthermore, since the vivid correction value is fixed at the minimum value when the low-luminance area in the screen is at or above a certain amount, the saturation of the corrected image can be prevented from changing frequently. In addition, when the input video is determined not to be vivid, the gain in the saturation direction is not changed (C = 1), so color noise in the saturation direction does not increase.

  In the present embodiment, an imaging apparatus has been described as an example of an image processing apparatus to which the present invention can be applied, but the present invention is not limited to this. It can be applied to imaging devices that shoot still images as well as moving images, to image forming devices such as printers that acquire image data from the outside for processing, and to information processing devices such as personal computers.

  The present invention also includes the case where a software program that realizes the functions of the above-described embodiments is supplied, directly from a recording medium or by using wired/wireless communication, to a system or apparatus having a computer that can execute the program, and the program is then executed.

  Accordingly, the program code itself supplied and installed in the computer in order to implement the functional processing of the present invention by the computer also realizes the present invention. That is, the computer program itself for realizing the functional processing of the present invention is also included in the present invention.

  In this case, the program may be in any form as long as it has a program function, such as an object code, a program executed by an interpreter, or script data supplied to the OS.

As a recording medium for supplying the program, for example, a magnetic recording medium such as a hard disk or a magnetic tape, an optical / magneto-optical storage medium, or a nonvolatile semiconductor memory may be used.
As a program supply method, a computer program forming the present invention may be stored in a server on a computer network, and a connected client computer may download and execute the computer program.

101 ... Lens
102 ... Aperture
103 ... Aperture control circuit
104 ... CCD
105 ... Shutter control means
106 ... AFE (Analog Front End)
107 ... Image generation circuit
108 ... Color correction circuit
109 ... Color gamut detection circuit
110 ... Image output circuit
111 ... Microcomputer
308 ... Color gamut detection circuit
309 ... Color correction circuit
310 ... Image output circuit
311 ... Recording medium
312 ... Microcomputer

Claims (11)

  1. An image processing method comprising:
    an acquisition step of acquiring image data;
    a determination step of determining whether the subject of the image data is a vivid subject depending on whether the saturation information of the image data satisfies a predetermined condition; and
    a correction step of, when it is determined in the determination step that the subject of the image data is a vivid subject, correcting the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when the brightness of the image data is a second brightness weaker than the first brightness.
  2.   The image processing method according to claim 1, wherein the determination step includes a step of generating a first evaluation value representing the saturation of the entire image, and it is determined that the predetermined condition is satisfied when the first evaluation value is determined to be equal to or greater than a third threshold.
  3.   The image processing method according to claim 1, wherein the determination step includes a step of dividing the entire image of the image data into a plurality of blocks and generating, from the plurality of blocks, a second evaluation value representing the number of blocks having a saturation equal to or higher than a fourth threshold, and it is determined that the predetermined condition is satisfied when the second evaluation value is equal to or greater than a fifth threshold.
  4.   The image processing method according to any one of claims 1 to 3, wherein the brightness of the image data is calculated from an imaging control value of the image data and luminance information of the image data, the imaging control value including aperture diameter information, shutter speed information, and gain information of the image data; the correction step determines the brightness of the image data by comparing the brightness of the image data with first and second thresholds; when the brightness of the image data is determined to be between the first threshold and the second threshold, a saturation correction value that varies within a predetermined range according to the brightness of the image data is generated as the correction width; and when the brightness of the image data is determined to be outside the region between the first threshold and the second threshold, the correction value is determined to be the maximum value or the minimum value of the predetermined range regardless of the brightness of the image data.
  5.   The image processing method according to claim 3, further comprising a hunting prevention step of controlling a threshold used for the comparison determination in the determination step in order to prevent hunting that occurs between the determination in the determination step and the determination of the correction width in the correction step.
  6.   The image processing method according to claim 5, wherein the hunting prevention step includes a step of correcting the fourth threshold based on the saturation correction width determined in the correction step and a predetermined hunting prevention coefficient.
  7.   The image processing method according to claim 6, wherein the correction step includes a step of dividing the entire image of the image data into a plurality of blocks and generating, from the plurality of blocks, a third evaluation value representing the number of blocks having a luminance equal to or lower than a sixth threshold, and the third evaluation value is compared with a seventh threshold and the determined saturation correction value is changed based on a result of the comparison determination.
  8. An image processing apparatus comprising:
    acquisition means for acquiring image data;
    determination means for determining whether or not the subject of the image data is a vivid subject depending on whether or not the saturation information of the image data satisfies a predetermined condition; and
    correction means for, when the determination means determines that the subject of the image data is a vivid subject, correcting the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when the brightness of the image data is a second brightness weaker than the first brightness.
  9. An image pickup apparatus comprising:
    imaging means for imaging a subject and outputting image data;
    determination means for determining whether or not the subject of the image data is a vivid subject depending on whether or not the saturation information of the image data satisfies a predetermined condition; and
    correction means for, when the determination means determines that the subject of the image data is a vivid subject, correcting the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when the brightness of the image data is a second brightness weaker than the first brightness.
  10. A program for causing a computer to function as an image processing apparatus comprising:
    acquisition means for acquiring image data;
    determination means for determining whether or not the subject of the image data is a vivid subject depending on whether or not the saturation information of the image data satisfies a predetermined condition; and
    correction means for, when the determination means determines that the subject of the image data is a vivid subject, correcting the vividness of the image data with a larger correction width when the brightness of the image data is a first brightness than when the brightness of the image data is a second brightness weaker than the first brightness.
  11.   A computer-readable recording medium on which the program according to claim 10 is recorded.
JP2012190498A 2012-08-30 2012-08-30 Image processing method, imaging apparatus, control method thereof, and program Active JP5518150B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012190498A JP5518150B2 (en) 2012-08-30 2012-08-30 Image processing method, imaging apparatus, control method thereof, and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2009295401 Division 2009-12-25

Publications (3)

Publication Number Publication Date
JP2013013131A JP2013013131A (en) 2013-01-17
JP2013013131A5 JP2013013131A5 (en) 2013-02-28
JP5518150B2 true JP5518150B2 (en) 2014-06-11

Family

ID=47686531

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012190498A Active JP5518150B2 (en) 2012-08-30 2012-08-30 Image processing method, imaging apparatus, control method thereof, and program

Country Status (1)

Country Link
JP (1) JP5518150B2 (en)

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3929210B2 (en) * 1998-08-20 2007-06-13 富士フイルム株式会社 Image processing method and apparatus
JP2000224607A (en) * 1999-01-28 2000-08-11 Matsushita Electric Ind Co Ltd Image processor
JP3956567B2 (en) * 2000-02-18 2007-08-08 富士ゼロックス株式会社 Image processing device
JP3540758B2 (en) * 2000-09-08 2004-07-07 三洋電機株式会社 Horizontal contour signal generation circuit in a single-chip color camera
JP2003134354A (en) * 2001-10-29 2003-05-09 Noritsu Koki Co Ltd Image processing apparatus and method therefor
JP2007094742A (en) * 2005-09-28 2007-04-12 Olympus Corp Image signal processor and image signal processing program
JP4622899B2 (en) * 2006-03-17 2011-02-02 パナソニック株式会社 Image processing apparatus, image processing method, program, and recording medium
JP4857856B2 (en) * 2006-03-29 2012-01-18 株式会社ニコン Electronic camera having saturation adjustment function and image processing program
JP2007281952A (en) * 2006-04-07 2007-10-25 Olympus Imaging Corp Digital camera
JP4956140B2 (en) * 2006-10-30 2012-06-20 東芝デジタルメディアエンジニアリング株式会社 Auto Color control circuit
JP4993275B2 (en) * 2006-12-15 2012-08-08 コニカミノルタアドバンストレイヤー株式会社 Image processing device
JP4766692B2 (en) * 2006-12-20 2011-09-07 キヤノン株式会社 Imaging device, its control method, program, and storage medium
JP4279318B2 (en) * 2007-02-02 2009-06-17 三菱電機株式会社 Video display device

Also Published As

Publication number Publication date
JP2013013131A (en) 2013-01-17


Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20121225

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20121225

RD05 Notification of revocation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7425

Effective date: 20130701

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20131031

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20131119

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140110

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20140304

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20140401