WO2006033236A1 - Image processing method, image processing apparatus, imaging apparatus, and image processing program
- Publication number
- WO2006033236A1 (PCT/JP2005/016384)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- index
- image data
- captured image
- shooting scene
- gradation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- Image processing method, image processing apparatus, imaging apparatus, and image processing program
- The present invention relates to an image processing method, an image processing apparatus, an imaging apparatus, and an image processing program.
- Patent Document 1 discloses a method for calculating an additional correction value in place of the discriminant regression analysis method.
- The method described in Patent Document 1 deletes the high-luminance and low-luminance regions from a luminance histogram indicating the cumulative number of pixels (frequency) at each luminance, further limits the frequency, and then calculates an average luminance value; the difference between this average value and a reference luminance is obtained as the correction value.
- Patent Document 2 describes a method of determining a light source state at the time of photographing in order to compensate for the extraction accuracy of a face region.
- The method described in Patent Document 2 first extracts a face candidate area, calculates the bias of the average brightness of the extracted face candidate area with respect to the entire image, and, if the amount of deviation is large, judges the shooting scene (backlight shooting or flash close-up shooting) and adjusts the tolerance of the criterion for judging the face area.
- As methods for extracting a face candidate region, Patent Document 2 cites a method using a two-dimensional histogram of hue and saturation described in Japanese Patent Laid-Open No. 6-67320, and methods described in Japanese Patent Laid-Open No. 8-122944, Japanese Patent Laid-Open No.
- As methods for removing background regions other than the face, Patent Document 2 cites methods using the ratio of straight-line portions, the linear-object property, the contact ratio with the outer edge of the image, density contrast, and density change patterns and periodicity, described in JP-A-8-122944 and JP-A-8-184925. For discrimination of the shooting scene, a method using a one-dimensional density histogram is described. This method is based on the empirical rule that in backlit scenes the face area is dark and the background area is bright, while in flash close-up scenes the face area is bright and the background area is dark.
- Patent Document 1 JP 2002-247393 A
- Patent Document 2 JP 2000-148980 A
- The method of Patent Document 1 reduces the influence of regions with a large luminance deviation, such as in backlight or strobe scenes, but in shooting scenes in which a person is the main subject there was a problem that the brightness of the face could still be inappropriate.
- The method of Patent Document 2 can compensate for the identification of the face area in typical backlight or flash close-up photography, but there was a problem that the compensation effect could not be obtained for images that do not fit the typical composition.
- An object of the present invention is to calculate indices that quantitatively represent the shooting scene (light source condition and exposure condition) of captured image data and to determine image processing conditions based on the calculated indices, thereby improving the brightness reproducibility of the captured image data.
- In one aspect, the photographed image data is divided into regions each consisting of a combination of predetermined brightness and hue; a first occupancy ratio calculation step calculates, for each divided region, a first occupancy ratio indicating the ratio of that region to the entire photographed image data; a first index (index 1) for specifying the shooting scene is calculated by multiplying the first occupancy ratio of each region by a first coefficient set in advance according to shooting conditions; and a second index (index 2) for specifying the shooting scene is calculated by multiplying the occupancy ratio of each region by a second coefficient set in advance according to shooting conditions.
- An adjustment amount determining step for determining a gradation adjustment amount for the captured image data based on the first index, the second index, and the fourth index;
- a gradation conversion process of the gradation adjustment amount determined in the adjustment amount determination step is performed on the photographed image data.
- In the gradation conversion step, gradation conversion processing of the gradation adjustment amount determined in the adjustment amount determination step is applied to the photographed image data using the gradation adjustment method determined in the adjustment method determination step.
- The photographed image data is further divided into predetermined areas each consisting of a combination of the distance from the outer edge of the screen of the photographed image data and brightness, and a second occupancy ratio calculation step calculates, for each divided area, a second occupancy ratio indicating the ratio of that area to the entire photographed image data;
- a third index calculation step calculates a third index (index 3) for specifying the shooting scene by multiplying the second occupancy ratio by a third coefficient set in advance according to shooting conditions.
- The shooting scene of the photographed image data is then determined based on the first index, the second index, the third index, and the fourth index, and
- a gradation adjustment amount for the captured image data is determined based on the first index, the second index, the third index, and the fourth index.
- In the fourth index calculation step, the fourth index is calculated by multiplying at least the average luminance value of the skin color in the center portion of the screen of the captured image data, and the difference value between the maximum luminance value and the average luminance value of the photographed image data, by a fourth coefficient set in advance according to the photographing conditions.
- the captured image data is divided into regions each including a combination of predetermined brightness and hue, and the ratio of the divided regions to the entire captured image data is indicated.
- a first index (index 1) for specifying the shooting scene is calculated.
- a second index (index 2) for specifying the shooting scene is obtained.
- a fourth index calculating unit for calculating a fourth index (index 4) for specifying a shooting scene
- An adjustment method determining unit that determines a method of gradation adjustment for the captured image data according to the determined shooting scene
- An adjustment amount determining unit that determines a gradation adjustment amount for the captured image data based on the first index, the second index, and the fourth index;
- the captured image data is subjected to gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit.
- A gradation conversion unit applies, to the captured image data, gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit, using the gradation adjustment method determined by the adjustment method determination unit.
- In another aspect, the captured image data is divided into predetermined regions each consisting of a combination of the distance from the outer edge of the screen of the captured image data and brightness, and a second occupancy ratio calculation unit calculates, for each of the divided regions, a second occupancy ratio indicating the ratio of that region to the entire captured image data;
- a third index calculation unit is further provided that calculates a third index (index 3) for specifying the shooting scene by multiplying the second occupancy ratio by a third coefficient set in advance according to shooting conditions.
- the discriminating unit discriminates a shooting scene of the captured image data based on the first index, the second index, the third index, and the fourth index,
- the adjustment method determination unit determines a gradation adjustment amount for the captured image data based on the first index, the second index, the third index, and the fourth index.
- The fourth index calculation unit calculates the fourth index by multiplying at least the average luminance value of the skin color in the central portion of the screen of the captured image data, and the difference value between the maximum luminance value and the average luminance value of the photographed image data, by a fourth coefficient set in advance according to the photographing conditions.
- The imaging apparatus includes an imaging unit that obtains captured image data by capturing an image of a subject, and a first occupancy ratio calculation unit that divides the captured image data into regions each consisting of a combination of predetermined brightness and hue and calculates, for each region, a first occupancy ratio indicating the ratio of that region to the entire captured image data;
- a first index (index 1) for specifying the shooting scene is calculated.
- a second index (index 2) for specifying the shooting scene is obtained.
- a fourth index calculation unit that calculates a fourth index (index 4) for specifying the shooting scene by multiplying at least the average luminance value of the skin color at the center of the screen of the captured image data by a fourth coefficient set in advance according to the shooting conditions (for example, the fourth coefficient shown in Equation (9));
- An adjustment method determining unit that determines a method of gradation adjustment for the captured image data according to the determined shooting scene
- An adjustment amount determining unit that determines a gradation adjustment amount for the captured image data based on the first index, the second index, and the fourth index;
- the embodiment according to Item 8 is the imaging device according to Item 7,
- The photographed image data is divided into predetermined areas each consisting of a combination of the distance from the outer edge of the screen of the photographed image data and brightness; a second occupancy ratio calculation unit calculates, for each divided area, a second occupancy ratio indicating the ratio of that area to the entire photographed image data; and a third index calculation unit calculates a third index for specifying the shooting scene by multiplying the second occupancy ratio by a third coefficient set in advance according to shooting conditions.
- the discriminating unit discriminates a shooting scene of the captured image data based on the first index, the second index, the third index, and the fourth index,
- the gradation conversion unit determines a gradation adjustment amount for the captured image data based on the first index, the second index, the third index, and the fourth index.
- The fourth index calculation unit calculates the fourth index by multiplying at least the average luminance value of the skin color in the center portion of the screen of the captured image data, and the difference value between the maximum luminance value and the average luminance value of the photographed image data, by a fourth coefficient set in advance according to the photographing conditions.
- The embodiment described in Item 10 is an image processing program for causing a computer that executes image processing to carry out the following steps:
- a first occupancy ratio calculation step of dividing the photographed image data into areas each consisting of a combination of predetermined brightness and hue and calculating, for each divided area, a first occupancy ratio indicating the ratio of that area to the entire photographed image data;
- a first index (index 1) for specifying the shooting scene is calculated.
- a second index (index 2) for specifying the shooting scene is obtained.
- An adjustment amount determining step for determining a gradation adjustment amount for the captured image data based on the first index, the second index, and the fourth index;
- the embodiment described in Item 11 is the image processing program described in Item 10, wherein the captured image data is divided into predetermined regions composed of a combination of a distance from the outer edge of the screen of the captured image data and brightness, and A second occupancy ratio calculating step for calculating a second occupancy ratio indicating a ratio of the entire captured image data for each divided area; and the second occupancy ratio is preset according to the imaging condition.
- a third index calculating step for calculating a third index (index 3) for identifying a shooting scene by multiplying by a third coefficient; In the determining step, a shooting scene of the shot image data is determined based on the first index, the second index, the third index, and the fourth index;
- a gradation adjustment method for the captured image data is determined in accordance with the determined captured scene.
- In the fourth index calculation step, the fourth index is calculated by multiplying at least the average luminance value of the skin color in the center portion of the screen of the captured image data, and the difference value between the maximum luminance value and the average luminance value of the photographed image data, by a fourth coefficient set in advance according to the photographing conditions.
- FIG. 1 is a perspective view showing an external configuration of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing an internal configuration of the image processing apparatus according to the present embodiment.
- FIG. 3 is a block diagram showing a main part configuration of the image processing unit in FIG.
- FIG. 4 shows the internal configuration (a) of the scene determination unit, the internal configuration (b) of the ratio calculation unit, and the internal configuration (c) of the image processing condition calculation unit.
- FIG. 5 is a flowchart showing scene discrimination processing executed in the image adjustment processing unit.
- FIG. 6 is a flowchart showing a first occupancy ratio calculation process for calculating a first occupancy ratio for each brightness and hue area.
- FIG. 7 is a diagram showing an example of a program for converting RGB values into the HSV color system.
- FIG. 8 is a diagram showing the brightness (V)—hue (H) plane and the regions r1 and r2 on the V—H plane.
- FIG. 9 is a diagram showing the lightness (V) —hue (H) plane, and regions r3 and r4 on the V—H plane.
- FIG. 10 is a diagram showing a curve representing a first coefficient for multiplying the first occupancy for calculating index 1;
- FIG. 11 is a diagram showing a curve representing a second coefficient for multiplying the first occupancy for calculating index 2;
- FIG. 12 is a flowchart showing a second occupancy ratio calculation process for calculating a second occupancy ratio based on the composition of captured image data.
- FIG. 13 is a set of diagrams ((a), (b), (c) and (d)) showing regions n1 to n4 determined according to the distance from the outer edge of the screen of the captured image data.
- FIG. 14 is a diagram showing, for each region (n1 to n4), curves representing a third coefficient for multiplying the second occupancy ratio for calculating index 3;
- FIG. 15 is a flowchart showing an index 4 calculation process executed in the index calculation unit.
- FIG. 16 is a flowchart showing details of the image processing condition determination processing shown in FIG.
- FIG. 17 shows plot diagrams ((a) and (b)) illustrating the relationship between shooting scenes (direct light, strobe, backlight, under) and indices 4 to 6.
- FIG. 18 is a diagram showing the relationship among indicators for identifying (discriminating) a shooting scene, parameters A to C, and gradation adjustment methods A to C.
- FIG. 19 is a diagram showing tone conversion curves corresponding to each tone adjustment method ((a), (b) and (c)).
- FIG. 20 is a diagram showing a luminance frequency distribution (histogram) (a), a normalized histogram (b), and a block-divided histogram (c).
- FIG. 21 is a set of diagrams ((a) and (b)) for explaining deletion of a low-luminance region and a high-luminance region from the luminance histogram, and diagrams ((c) and (d)).
- FIG. 22 is a diagram showing a gradation conversion curve representing image processing conditions (gradation conversion conditions) when the shooting scene is backlit or under.
- FIG. 23 is a block diagram showing a configuration of a digital camera to which the imaging apparatus of the present invention is applied.
- FIG. 24 is a flowchart showing scene discrimination processing using a reduced image, which is executed in an image adjustment processing unit in the image processing apparatus or an image processing unit of a digital camera.
- FIG. 1 is a perspective view showing an external configuration of the image processing apparatus 1 according to the embodiment of the present invention.
- the image processing apparatus 1 is provided with a magazine loading section 3 for loading a photosensitive material on one side surface of a housing 2. Inside the housing 2 are provided an exposure processing unit 4 for exposing the photosensitive material and a print creating unit 5 for developing and drying the exposed photosensitive material to create a print. On the other side of the casing 2, a tray 6 for discharging the prints produced by the print creation unit 5 is provided.
- a CRT (Cathode Ray Tube) 8 as a display device, a film scanner unit 9 that is a device for reading a transparent document, a reflective document input device 10, and an operation unit 11 are provided on the upper part of the housing 2.
- The CRT 8 constitutes display means for displaying, on its screen, an image of the image information from which a print is to be created.
- the housing 2 is provided with an image reading unit 14 that can read image information recorded on various digital recording media, and an image writing unit 15 that can write (output) image signals to various digital recording media.
- a control unit 7 that centrally controls each of these units is provided inside the housing 2.
- the image reading unit 14 includes a PC card adapter 14a and a floppy (registered trademark) disk adapter 14b, and a PC card 13a and a floppy (registered trademark) disk 13b can be inserted therein.
- the PC card 13a has a memory in which a plurality of frame image data captured by a digital camera is recorded.
- a plurality of frame image data captured by a digital camera is recorded on the floppy (registered trademark) disk 13b.
- Recording media that record frame image data in addition to the PC card 13a and floppy disk 13b include, for example, a multimedia card (registered trademark), a memory stick (registered trademark), MD data, and a CD-ROM. Etc.
- the image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c.
- In this embodiment, the operation unit 11, the CRT 8, the film scanner unit 9, the reflective document input device 10, and the image reading unit 14 are provided integrally in the housing 2, but any one or more of them may be provided as separate bodies.
- In this embodiment, a print creation method in which a photosensitive material is exposed and developed to create a print is exemplified, but a method such as an inkjet method, an electrophotographic method, a thermal method, or a sublimation method may also be used.
- FIG. 2 shows a main part configuration of the image processing apparatus 1.
- The image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print creation unit 5, a film scanner unit 9, a reflective original input device 10, an image reading unit 14, a communication means (input) 32, an image writing unit 15, a data storage unit 71, a template storage means 72, an operation unit 11, a CRT 8, and a communication means (output) 33.
- The control unit 7 is constituted by a microcomputer and controls the operation of each part of the image processing apparatus 1 through the cooperation of a CPU (Central Processing Unit) (not shown) with programs stored in a storage unit (not shown) such as a ROM (Read Only Memory).
- the control unit 7 includes an image processing unit 70 according to the image processing apparatus of the present invention. Based on an input signal (command information) from the operation unit 11, the control unit 7 receives from the film scanner unit 9 and the reflective original input device 10. The read image signal, the image signal read from the image reading unit 14, and the image signal input from the external device via the communication means 32 are subjected to image processing to form image information for exposure, and exposure Output to processing unit 4. Further, the image processing unit 70 performs a conversion process corresponding to the output form on the image signal subjected to the image processing, and outputs it. As output destinations of the image processing unit 70, there are CRT8, image writing unit 15, communication means (output) 33, and the like.
- the exposure processing unit 4 performs image exposure on the photosensitive material and outputs the photosensitive material to the print creating unit 5.
- the print creating unit 5 develops the exposed photosensitive material and dries it to create prints Pl, P2, and P3.
- Print P1 is a service size, high-definition size, panorama size, etc.
- print P2 is an A4 size print
- print P3 is a business card size print.
- the film scanner unit 9 reads a frame image recorded on a transparent original such as a developed negative film N or a reversal film imaged by an analog camera, and obtains a digital image signal of the frame image.
- the reflective original input device 10 reads an image on the print P (photo print, document, various printed materials) by a flat bed scanner, and obtains a digital image signal.
- the image reading unit 14 reads frame image information recorded on the PC card 13a or the floppy (registered trademark) disk 13b and transfers it to the control unit 7.
- the image reading unit 14 includes, as image transfer means 30, a PC card adapter 14a, a floppy (registered trademark) disk adapter 14b, and the like.
- The image reading unit 14 reads the frame image information recorded on the PC card 13a inserted into the PC card adapter 14a or on the floppy (registered trademark) disk 13b inserted into the floppy disk adapter 14b, and transfers it to the control unit 7.
- a PC card reader or a PC card slot is used as the PC card adapter 14a.
- The communication means (input) 32 receives an image signal representing a captured image and a print command signal from another computer in the facility where the image processing apparatus 1 is installed, or from a remote computer via the Internet or the like.
- the image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c as the image conveying unit 31.
- The image writing unit 15 writes the image signal generated by the image processing method of the present invention to the floppy (registered trademark) disk 16a inserted into the floppy disk adapter 15a, the MO 16b inserted into the MO adapter 15b, or the optical disk 16c inserted into the optical disk adapter 15c.
- The data storage unit 71 stores and sequentially accumulates image information and order information corresponding to the image information (information on how many prints are to be created from which image, print size information, etc.).
- the template storage means 72 stores at least one template data for setting a synthesis area and a background image, an illustration image, etc., which are sample image data corresponding to the sample identification information Dl, D2, and D3.
- A predetermined template is selected by the operator's operation from among a plurality of templates stored in advance in the template storage means 72, the frame image information is combined with the selected template, and the sample image data selected based on the designated sample identification information D1, D2 and D3 is combined with the image data and/or character data based on the order, thereby creating a print based on the designated sample.
- The synthesis using this template is performed by the well-known chroma key method.
- The sample identification information D1, D2, and D3 for designating a print sample is configured to be input from the operation unit 11.
- Since this sample identification information is recorded on the print sample or the order sheet, it can be read by reading means such as an OCR, or it can be input by an operator's keyboard operation.
- In this way, sample image data corresponding to the sample identification information D1 designating a print sample is recorded, the sample identification information D1 designating the print sample is input, sample image data is selected based on the input sample identification information D1, and the selected sample image data is combined with the image data and/or character data based on the order to create a print based on the designated sample; users can therefore place print orders against actual samples of various sizes, and diverse requests can be met.
- Further, first sample identification information D2 designating a first sample and the image data of the first sample are stored, second sample identification information D3 designating a second sample and the image data of the second sample are stored, and the sample image data selected based on the designated first and second sample identification information D2 and D3 is combined with the image data and/or character data based on the order to create a print based on the designated samples; a wider variety of images can therefore be composited, and prints that meet a wider variety of user requirements can be created.
- the operation unit 11 has information input means 12.
- the information input means 12 is composed of, for example, a touch panel and outputs a pressing signal from the information input means 12 to the control unit 7 as an input signal.
- the operation unit 11 may be configured with a keyboard, a mouse, and the like.
- the CRT 8 displays image information and the like according to the display control signal input from the control unit 7.
- The communication means (output) 33 transmits an image signal representing a photographed image that has undergone the image processing of the present invention, together with attached order information, to another computer in the facility where the image processing apparatus 1 is installed or to a remote computer via the Internet or the like.
- As described above, the image processing apparatus 1 includes image input means for capturing image information obtained from images on various digital media and from reading image originals, image processing means, image output means for displaying processed images, printing them out, and writing them to image recording media, and transmission means for sending image data and attached order information to a remote computer via a communication line.
- FIG. 3 shows the internal configuration of the image processing unit 70.
- The image processing unit 70 includes an image adjustment processing unit 701, a film scan data processing unit 702, a reflective original scan data processing unit 703, an image data format decoding processing unit 704, a template processing unit 705, a CRT specific processing unit 706, a printer specific processing unit A 707, a printer specific processing unit B 708, and an image data format creation processing unit 709.
- the film scan data processing unit 702 performs, for image data input from the film scanner unit 9, a calibration operation unique to the film scanner unit 9, negative / positive reversal (in the case of a negative document), dust flaw removal, contrast adjustment, It performs processing such as granular noise removal and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701.
- the film size, negative / positive type, information on the main subject optically or magnetically recorded on the film, information on the shooting conditions (for example, information content described in APS), etc. are also output to the image adjustment processing unit 701. .
- the reflection document scan data processing unit 703 performs a calibration operation, negative / positive reversal (in the case of a negative document), dust flaw removal, and contrast adjustment specific to the image data input from the reflection document input device 10. Then, processing such as noise removal and sharpening enhancement is performed, and the processed image data is output to the image adjustment processing unit 701.
- The image data format decoding processing unit 704 performs, as necessary according to the data format of the image data input from the image transfer means 30 and/or the communication means (input) 32, processing such as decompression of the compression code and conversion of the color data representation method, converts the data into a data format suitable for computation in the image processing unit 70, and outputs it to the image adjustment processing unit 701. In addition, when the size of the output image is specified from any of the operation unit 11, the communication means (input) 32, and the image transfer means 30, the image data format decoding processing unit 704 detects the specified information and outputs it to the image adjustment processing unit 701. Information about the size of the output image specified by the image transfer means 30 is embedded in the header information or tag information of the image data acquired by the image transfer means 30.
- Based on commands from the operation unit 11 and the control unit 7, the image adjustment processing unit 701 applies image processing (described later with reference to FIGS. 5, 6, 12 and 16) to the image data received from the film scanner unit 9, the reflective document input device 10, the image transfer means 30, the communication means (input) 32, and the template processing unit 705, so as to create an image optimized for viewing on the output medium, and generates digital image data that is output to the CRT specific processing unit 706, the printer specific processing unit A 707, the printer specific processing unit B 708, the image data format creation processing unit 709, and the data storage unit 71.
- Processing is performed to obtain optimal color reproduction within the color gamut of the sRGB standard; if output to silver halide photographic paper is assumed, processing is performed to obtain optimal color reproduction within the color gamut of the photographic paper.
- In addition to gradation mapping from 16 bits to 8 bits, reduction of the number of output pixels, and processing to match the output characteristics (LUT) of the output device, image quality enhancement such as noise suppression, sharpening, gray balance adjustment, saturation adjustment, and dodging-type gradation compression is of course also performed.
- the image adjustment processing unit 701 includes a scene determination unit 710 and a gradation conversion unit 711.
- Figure 4 (a) shows the internal structure of the scene discriminator 710.
- the scene determination unit 710 includes a ratio calculation unit 712, an index calculation unit 713, and an image processing condition calculation unit 714.
- The ratio calculation unit 712 includes a color system conversion unit 715, a histogram creation unit 716, and an occupation rate calculation unit 717, as shown in FIG. 4 (b).
- The index calculation unit 713 includes an index calculation unit that calculates the first index and the second index, and a fourth index calculation unit that calculates the fourth index, and may further include a third index calculation unit that calculates the third index.
- the index calculation unit 713 may have functions of an index calculation unit that calculates the first index and the second index, a fourth index calculation unit, and a third index calculation unit.
- the occupancy rate calculation unit 717 includes a first occupancy rate calculation unit that calculates the first occupancy rate, but may further include a second occupancy rate calculation unit that calculates the second occupancy rate. Further, the occupation rate calculation unit 717 may have both functions of the first occupation rate calculation unit and the second occupation rate calculation unit.
- the color system conversion unit 715 converts the RGB (Red, Green, Blue) value of the captured image data into the HSV color system.
- The HSV color system represents image data with three elements, Hue, Saturation, and Value (lightness or brightness), and was devised based on the color system proposed by Munsell.
- In this specification, unless otherwise noted, "lightness" means "brightness" in the generally used sense.
- V (0 to 255) of the HSV color system is used as “brightness”, but a unit system representing the brightness of any other color system may be used. At that time, it goes without saying that numerical values such as various coefficients described in the present embodiment are recalculated.
- the photographed image data in the present embodiment is assumed to be image data having a person as a main subject.
- The histogram creation unit 716 creates a two-dimensional histogram by dividing the photographed image data into regions each consisting of a predetermined combination of hue and brightness and calculating the cumulative number of pixels for each divided region. In addition, the histogram creation unit 716 creates a two-dimensional histogram by dividing the captured image data into predetermined regions each consisting of a combination of the distance from the outer edge of the screen of the captured image data and brightness, and calculating the cumulative number of pixels for each divided region. Alternatively, the captured image data may be divided into regions each consisting of a combination of the distance from the outer edge of the screen, brightness, and hue, and a three-dimensional histogram may be created by calculating the cumulative number of pixels for each divided region. In the following, the method of creating two-dimensional histograms is adopted.
- The occupancy ratio calculation unit 717 calculates, for each region divided by the combination of brightness and hue, a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels (the entire captured image data) (see Table 1). The occupancy ratio calculation unit 717 also calculates, for each region divided by the combination of the distance from the outer edge of the screen of the captured image data and brightness, a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels (see Table 4).
- the index calculation unit 713 uses the first coefficient set in advance (for example, by discriminant analysis) in accordance with the shooting conditions in the first occupancy rate calculated for each region in the occupancy rate calculation unit 717. By multiplying (see Table 2) and taking the sum, index 1 for identifying the shooting scene is calculated.
- the shooting scene indicates the light source condition when shooting a subject, such as direct light, backlight, strobe light, and the exposure condition such as under shooting.
- Index 1 represents characteristics of strobe shooting, such as indoor shooting, close-up shooting, and high brightness of the face, and is used to separate images that should be identified as "strobe" from other shooting scenes.
- the index calculation unit 713 uses coefficients of different signs for a predetermined high-lightness skin color hue region and a hue region other than the high-lightness skin color hue region.
- the skin color hue region of a predetermined high lightness includes a region of 170 to 224 in the lightness value of the HSV color system.
- the hue area other than the predetermined high brightness skin color hue area includes at least one of the high brightness areas of the blue hue area (hue values 161 to 250) and the green hue area (hue values 40 to 160).
- the index calculation unit 713 sets the first occupancy calculated for each region in the occupancy rate calculation unit 717 to the second occupancy previously set (for example, by discriminant analysis) according to the imaging conditions. By multiplying the coefficient (see Table 3) and taking the sum, index 2 for specifying the shooting scene is calculated.
- Index 2 compositely represents characteristics of backlight shooting, such as the degree of outdoor shooting, high brightness of sky blue, and low brightness of the face color, and is used to separate images that should be identified as "backlight" from other shooting scenes.
- When calculating index 2, the index calculation unit 713 uses coefficients of different signs for the intermediate-brightness area of the skin-color hue region (hue values 0 to 39 and 330 to 359) and for the brightness areas other than the intermediate-brightness area.
- the intermediate brightness area of the flesh tone hue area includes areas with brightness values of 85 to 169.
- the brightness area other than the intermediate brightness area includes, for example, a shadow area (brightness value 26-84).
- the shooting scene can be specified by using the index 1 and the index 2 described above, in order to specify the shooting scene with higher accuracy, the third index calculation unit will further describe later. It is preferable to calculate the index 3 of this and use this together to specify the shooting scene.
- the index calculation unit 713 sets the second occupancy calculated for each region in the occupancy rate calculation unit 717 to the third occupancy set in advance (for example, by discriminant analysis) according to the imaging conditions. By multiplying the coefficient (see Table 5) and taking the sum, index 3 for specifying the shooting scene is calculated. Indicator 3 shows the difference in contrast between the center and outside of the screen of the captured image data between backlight and strobe, and quantitatively shows only the image that should be identified as backlight or strobe. When calculating the index 3, the index calculation unit 713 uses different coefficient values depending on the distance from the outer edge of the screen of the captured image data.
- The index calculation unit 713 calculates index 4 for specifying the shooting scene by multiplying at least the average luminance value of the skin color in the center portion of the screen of the captured image data by a fourth coefficient set in advance (for example, by discriminant analysis) according to the shooting conditions.
- More preferably, the average luminance value of the skin color in the center portion of the screen, the difference value between the maximum luminance value and the average luminance value of the photographed image data, the standard deviation of the luminance, the average luminance value in the center portion of the screen, the difference value between the maximum skin-color luminance value and the minimum skin-color luminance value of the image, and the comparison value of the skin-color average luminance value are each multiplied by a fourth coefficient set in advance according to the shooting conditions, and index 4 for specifying the shooting scene is calculated from their sum. It goes without saying that the fourth coefficients change depending on the variables used.
- Index 4 represents not only the difference in brightness between the center and the periphery of the screen of the captured image data but also distribution information of the luminance histogram, and quantitatively indicates only those images that should be identified as strobe shooting scenes or under shooting scenes.
- When calculating index 4, the index calculation unit 713 uses the average luminance value of the skin color at the center of the screen of the captured image data, the difference value between the maximum luminance value and the average luminance value of the image, the standard deviation of luminance, the average luminance value at the center of the screen, the difference between the maximum skin-color luminance value and the minimum skin-color luminance value of the image, and the comparison value of the skin-color average luminance value.
- The luminance value here is one index representing brightness, and another index indicating brightness (for example, the lightness value of the HSV color system) may be used instead.
- The maximum luminance value, maximum skin-color luminance value, and minimum skin-color luminance value may each be obtained as the luminance value of the pixel at which the cumulative number of pixels, counted from the maximum or minimum luminance value, reaches a predetermined ratio of all pixels.
- The index calculation unit 713 calculates index 5 by multiplying index 1 and index 3 by coefficients set in advance (for example, by discriminant analysis) according to the shooting conditions and taking the sum. More preferably, index 1, index 3, and index 4' (the skin-color average luminance value in the center of the screen) are each multiplied by a coefficient set in advance according to the shooting conditions, and index 5 is calculated from the sum (see equation (10) below). Similarly, the index calculation unit 713 calculates index 6 by multiplying index 2 and index 3 by coefficients set in advance according to the shooting conditions and taking the sum; more preferably, index 2, index 3, and index 4' are each multiplied by a coefficient set in advance according to the shooting conditions, and index 6 is calculated from the sum (see equation (11) below). A specific calculation method of indices 1 to 6 in the index calculation unit 713 will be described in detail in the operation description of the present embodiment given later.
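As a concrete illustration of these combined indices, the following is a minimal Python sketch. The fourth coefficients of equation (9) and the coefficients of equations (10) and (11) are not reproduced in this text, so the `coeffs` arguments below are hypothetical placeholders, not the patent's values.

```python
def index_4(skin_avg_y_center, max_y, avg_y, coeffs=(1.0, 1.0, 0.0)):
    """Sketch of index 4: a weighted combination of the skin-color average
    luminance at the screen center and the (maximum - average) luminance
    difference, plus a constant (stand-ins for equation (9))."""
    w_skin, w_diff, bias = coeffs
    return w_skin * skin_avg_y_center + w_diff * (max_y - avg_y) + bias

def index_5(idx1, idx3, idx4_prime, coeffs=(1.0, 1.0, 1.0, 0.0)):
    """Sketch of equation (10): weighted sum of index 1, index 3 and index 4'
    (skin-color average luminance at the screen center) plus a constant."""
    w1, w3, w4, bias = coeffs
    return w1 * idx1 + w3 * idx3 + w4 * idx4_prime + bias

def index_6(idx2, idx3, idx4_prime, coeffs=(1.0, 1.0, 1.0, 0.0)):
    """Sketch of equation (11): weighted sum of index 2, index 3 and index 4'."""
    w2, w3, w4, bias = coeffs
    return w2 * idx2 + w3 * idx3 + w4 * idx4_prime + bias
```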
- FIG. 4C shows the internal configuration of the image processing condition calculation unit 714.
- the image processing condition calculation unit 714 includes a scene determination unit 718, a gradation adjustment method determination unit 719, a gradation adjustment parameter calculation unit 720, and a gradation adjustment amount calculation unit 721.
- The scene discriminating unit 718 discriminates the shooting scene (light source condition and exposure condition) of the photographed image data based on the values of index 4, index 5, and index 6 calculated by the index calculation unit 713.
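For illustration only, a discrimination step of this kind can be sketched as threshold tests on the three indices. The actual decision boundaries come from the plots of FIG. 17 and the table of FIG. 18 and are not given numerically in this text, so the thresholds below are hypothetical placeholders.

```python
def discriminate_scene(idx4, idx5, idx6,
                       strobe_thresh=0.5, backlight_thresh=0.5, under_thresh=-0.5):
    """Hypothetical sketch of scene discrimination from indices 4-6.
    The threshold values are placeholders, not the patent's boundaries."""
    if idx5 > strobe_thresh:
        return "strobe"
    if idx6 > backlight_thresh:
        return "backlight"
    if idx4 < under_thresh:
        return "under"
    return "direct light"
```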
- The gradation adjustment method determination unit 719 determines a gradation adjustment method for the captured image data in accordance with the shooting scene determined by the scene determination unit 718. For example, when the shooting scene is direct light, as shown in FIG. 19 (a), a method (gradation adjustment method A) that corrects the pixel values of the input captured image data by parallel shift (offset) is applied. When the shooting scene is backlight, as shown in FIG. 19 (b), a method (gradation adjustment method B) that applies gamma correction to the pixel values of the input captured image data is applied.
- When the shooting scene is strobe, as shown in FIG. 19 (c), a method (gradation adjustment method C) that applies both gamma correction and parallel shift (offset) correction to the pixel values of the input captured image data is applied.
- When the shooting scene is under, a method (gradation adjustment method B) that applies gamma correction to the pixel values of the input captured image data is applied, as in the backlight case.
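A minimal sketch of these three families of conversions for 8-bit pixel values follows. The actual conversion curves of FIG. 19 and the key correction values that parameterize them are not reproduced here, so the `offset` and `gamma` arguments are left to the caller as assumptions.

```python
import numpy as np

def method_a_offset(pixels, offset):
    # Gradation adjustment method A: parallel shift (offset) of the pixel values.
    return np.clip(pixels.astype(np.float64) + offset, 0, 255)

def method_b_gamma(pixels, gamma):
    # Gradation adjustment method B: gamma correction of the pixel values.
    return 255.0 * (pixels.astype(np.float64) / 255.0) ** gamma

def method_c_gamma_offset(pixels, gamma, offset):
    # Gradation adjustment method C: gamma correction followed by an offset.
    return np.clip(method_b_gamma(pixels, gamma) + offset, 0, 255)
```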
- the tone adjustment parameter calculation unit 720 calculates parameters (key correction values, etc.) necessary for tone adjustment based on the values of the index 4, the index 5, and the index 6 calculated by the index calculation unit 713. To do.
- The gradation adjustment amount calculation unit 721 calculates (determines) the gradation adjustment amount for the captured image data based on the gradation adjustment parameters calculated by the gradation adjustment parameter calculation unit 720. Specifically, the gradation adjustment amount calculation unit 721 selects, from a plurality of gradation conversion curves set in advance for the gradation adjustment method determined by the gradation adjustment method determination unit 719, the gradation conversion curve corresponding to the gradation adjustment parameters calculated by the gradation adjustment parameter calculation unit 720. Alternatively, the gradation conversion curve (gradation adjustment amount) may be calculated directly based on the gradation adjustment parameters.
- a method of determining a shooting scene (light source condition and exposure condition) in the scene determination unit 718 and a method of calculating a gradation adjustment parameter in the gradation adjustment parameter calculation unit 720 will be described in the description of the operation of the present embodiment described later. This will be described in detail.
- a gradation conversion unit 711 performs gradation conversion on the captured image data according to the gradation conversion curve determined by the gradation adjustment amount calculation unit 721.
- the template processing unit 705 reads predetermined image data (template) from the template storage unit 72 based on a command from the image adjustment processing unit 701, and synthesizes the image data to be processed and the template. The template processing is performed, and the image data after the template processing is output to the image adjustment processing unit 701.
- The CRT specific processing unit 706 performs processing such as changing the number of pixels and color matching on the image data input from the image adjustment processing unit 701 as necessary, combines it with information that needs to be displayed, such as control information, and outputs the resulting display image data to the CRT 8.
- the printer-specific processing unit A707 performs printer-specific calibration processing, color matching, pixel number change processing, and the like as necessary, and outputs processed image data to the exposure processing unit 4.
- a printer-specific processing unit B708 is provided for each printer apparatus to be connected.
- the printer-specific processing unit B708 performs printer-specific calibration processing, color matching, pixel number change, and the like, and outputs processed image data to the external printer 51.
- the image data format creation processing unit 709 converts the image data input from the image adjustment processing unit 701 to various general-purpose image formats represented by JPEG, TIFF, Exif, and the like as necessary.
- the processed image data is output to the image transport unit 31 and the communication means (output) 33.
- The division into units up to the image data format creation processing unit 709 is provided to help understand the functions of the image processing unit 70; these units do not necessarily need to be realized as physically independent devices, and may be realized, for example, as divisions of the types of software processing performed by a single CPU.
- the captured image data is divided into predetermined image areas, and an occupation ratio indicating the ratio of each divided area to the entire captured image data (first occupation ratio, second occupation ratio). ) Is calculated (step S1). Details of the occupation rate calculation process will be described later with reference to FIGS.
- Next, indices (indices 1 to 6) for specifying the shooting scene (quantitatively representing the light source condition and the exposure condition) are calculated based on the occupancy ratios (first occupancy ratio, second occupancy ratio) calculated by the ratio calculation unit 712, at least the average luminance value of the skin color in the center of the screen of the captured image data, and coefficients set in advance according to the shooting conditions (step S2).
- the index calculation process in step S2 will be described in detail later.
- Next, a shooting scene is determined based on the indices calculated in step S2, and image processing conditions (gradation conversion processing conditions) for the shot image data are determined according to the determination result (step S3), whereupon the scene discrimination process ends.
- the image processing condition determination process in step S3 will be described in detail later with reference to FIG.
- RGB (Red, Green, Blue) values of photographed image data are converted into the HSV color system (step S10).
- Figure 7 shows an example of a conversion program (HSV conversion program) that obtains hue values, saturation values, and lightness values by converting RGB values into the HSV color system.
- the values of digital image data as input image data are defined as InR, InG, InB
- the calculated hue value is defined as OutH
- the scale is defined as 0 to 360
- the saturation value is defined as OutS,
- the lightness value as OutV,
- and their scale is defined as 0 to 255.
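The program of FIG. 7 itself is not reproduced here; as a stand-in, the following minimal Python sketch produces values on the same scales (OutH in 0 to 360, OutS and OutV in 0 to 255) from 8-bit RGB inputs using the standard colorsys module.

```python
import colorsys

def rgb_to_hsv_scaled(in_r: int, in_g: int, in_b: int):
    """Convert 8-bit RGB (InR, InG, InB) to HSV with hue on a 0-360 scale
    and saturation/lightness on a 0-255 scale, as defined above."""
    h, s, v = colorsys.rgb_to_hsv(in_r / 255.0, in_g / 255.0, in_b / 255.0)
    out_h = h * 360.0   # hue value OutH
    out_s = s * 255.0   # saturation value OutS
    out_v = v * 255.0   # lightness value OutV
    return out_h, out_s, out_v

# Example: a bright, slightly reddish pixel
print(rgb_to_hsv_scaled(220, 180, 160))
```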
- Next, the photographed image data is divided into regions each consisting of a combination of predetermined brightness and hue, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided region (step S11).
- Lightness (V) is divided into seven regions according to lightness value: 0 to 25 (v1), 26 to 50 (v2), 51 to 84 (v3), 85 to 169 (v4), 170 to 199 (v5), 200 to 224 (v6), and 225 to 255 (v7).
- Hue (H) is divided into four regions: a skin-color hue region (H1 and H2) with hue values of 0 to 39 and 330 to 359, a green hue region (H3) with hue values of 40 to 160, a blue hue region (H4) with hue values of 161 to 250, and a red hue region (H5).
- the red hue region (H5) is not used in the following calculations because of the fact that it contributes little to the determination of the shooting scene.
- the flesh-colored hue area is further divided into a flesh-colored area (HI) and other areas (H2).
- Hue'(H) = Hue(H) + 60 (when 0 ≤ Hue(H) < 300),
- Hue'(H) = Hue(H) − 300 (when 300 ≤ Hue(H) ≤ 360),
- Luminance (Y) = InR × 0.30 + InG × 0.59 + InB × 0.11   (A)
- a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S12).
- The occupation ratio calculation process then ends. Assuming that Rij is the first occupancy ratio calculated in the divided region consisting of the combination of lightness region vi and hue region Hj, the first occupancy ratio in each divided region is expressed as shown in Table 1.
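For illustration, the following hedged Python sketch computes the two-dimensional histogram over the lightness regions v1 to v7 and the hue regions and derives the first occupancy ratios Rij. The split of the skin-color hue range into H1 and H2 using Hue'(H) and luminance Y is not reproduced exactly in this text, so the skin-color range is treated as a single region here; that simplification is an assumption of the sketch.

```python
import numpy as np

V_BOUNDS = [25, 50, 84, 169, 199, 224, 255]   # upper bounds of v1..v7

def lightness_region(v):
    for i, upper in enumerate(V_BOUNDS):
        if v <= upper:
            return i                    # 0..6 -> v1..v7
    return len(V_BOUNDS) - 1

def hue_region(h):
    # Skin-color range (H1/H2): 0-39 and 330-359; green (H3): 40-160;
    # blue (H4): 161-250; red (H5) is excluded from the index calculations.
    if h <= 39 or h >= 330:
        return 0
    if 40 <= h <= 160:
        return 1
    if 161 <= h <= 250:
        return 2
    return None

def first_occupancy(hsv_pixels):
    """hsv_pixels: iterable of (OutH 0-360, OutS 0-255, OutV 0-255) tuples.
    Returns a 7 x 3 array of first occupancy ratios Rij."""
    hist = np.zeros((7, 3))
    total = 0
    for h, s, v in hsv_pixels:
        total += 1
        hj = hue_region(h)
        if hj is not None:
            hist[lightness_region(v), hj] += 1
    return hist / max(total, 1)
```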
- Table 2 shows, for each divided area, the first coefficient necessary for calculating index 1, which quantitatively indicates the degree of certainty of strobe shooting, that is, the brightness state of the face area at the time of strobe shooting.
- the coefficient of each divided area shown in Table 2 is a weighting coefficient by which the first occupancy Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the photographing conditions.
- FIG. 8 shows the lightness (V)-hue (H) plane. As shown in FIG. 8, a positive (+) coefficient is used for the first occupancy ratio calculated from the area (r1) distributed at high lightness within the flesh-color hue area, while a negative (-) coefficient is used for the first occupancy ratio calculated from the other area, the blue hue area (r2).
- FIG. 10 shows the first coefficient in the flesh-color area (H1) and the first coefficient in the other area (the green hue area (H3)) as curves (coefficient curves) that change continuously over the entire lightness range.
- Index 1 is defined by equation (3), using the sums over the H1 to H4 regions given by equations (2-1) to (2-4):
- Index 1 = (sum of H1 regions) + (sum of H2 regions) + (sum of H3 regions) + (sum of H4 regions) + 4.424   (3)
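- The following sketch shows the general form of such an index: each occupancy ratio is multiplied by its preset coefficient, the products are summed, and the constant term is added. The coefficient table passed in is a placeholder, since Table 2 and equations (2-1) to (2-4) are not reproduced here; only the constant +4.424 comes from equation (3).

```python
import numpy as np

def weighted_scene_index(occupancy, coefficients, bias):
    """Evaluate an index of the form of equation (3): the occupancy ratio of each divided
    area multiplied by its preset coefficient, summed over all areas, plus a constant."""
    occupancy = np.asarray(occupancy, dtype=float)
    coefficients = np.asarray(coefficients, dtype=float)
    return float((occupancy * coefficients).sum() + bias)

# Illustration with a placeholder 7x4 table (lightness areas v1-v7 x hue areas H1-H4).
rij = np.full((7, 4), 1.0 / 28)     # a flat occupancy, just for illustration
table2 = np.zeros((7, 4))           # stand-in for the real first coefficients of Table 2
print(weighted_scene_index(rij, table2, bias=4.424))
```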
- Table 3 shows, for each divided area, the second coefficient necessary for calculating index 2, which quantitatively indicates the degree of certainty of backlight shooting, that is, the brightness state of the face area under backlight.
- the coefficient of each divided area shown in Table 3 is a weighting coefficient by which the first occupancy ratio Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the shooting conditions.
- FIG. 9 shows the lightness (V)-hue (H) plane. As shown in FIG. 9, a negative (-) coefficient is used for the occupancy ratio calculated from the area (r4) distributed at intermediate lightness within the flesh-color hue area, while a positive (+) coefficient is used for the occupancy ratio calculated from the low-lightness (shadow) area (r3) of the flesh-color hue area.
- FIG. 11 shows the second coefficient in the flesh-color area (H1) as a curve (coefficient curve) that changes continuously over the entire lightness range. According to Table 3 and FIG. 11, the sign of the second coefficient in the intermediate-lightness area of lightness values 85 to 169 (v4) within the flesh-color hue area is negative (-), the sign of the second coefficient in the low-lightness (shadow) area of lightness values 26 to 84 (v2, v3) is positive (+), and the signs of the coefficients in the two areas thus differ.
- Index 2 is defined by equation (5), using the sums over the H1 to H4 regions given by equations (4-1) to (4-4):
- Index 2 = (sum of H1 regions) + (sum of H2 regions) + (sum of H3 regions) + (sum of H4 regions) + 1.554   (5)
- Since index 1 and index 2 are calculated from the lightness and hue distributions of the captured image data, they are effective in determining the shooting scene when the captured image data is a color image.
- an occupation ratio calculation process executed in the ratio calculation unit 712 to calculate the index 3 will be described in detail.
- First, the RGB values of the captured image data are converted into the HSV color system (step S20).
- Next, the captured image data is divided into areas determined by combinations of the distance from the outer edge of the captured image screen and the lightness, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided area (step S21).
- the area division of the captured image data will be described in detail.
- FIGS. 13 (a) to (d) show four regions nl to n4 divided according to the distance from the outer edge of the screen of the captured image data.
- The area n1 shown in FIG. 13(a) is the outermost frame, the area n2 shown in FIG. 13(b) is the area inside the outer frame, the area n3 shown in FIG. 13(c) is the area inside the area n2, and the area n4 shown in FIG. 13(d) is the area at the center of the captured image screen.
- Next, a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided area to the total number of pixels (the entire captured image) is calculated (step S22).
- The occupancy ratio calculation process then ends. Assuming that Qij is the second occupancy ratio calculated for the divided area formed by the combination of lightness area vi and screen area nj, the second occupancy ratio of each divided area is expressed as shown in Table 4.
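- A sketch of how the screen areas n1 to n4 and the second occupancy ratio Qij might be computed is shown below. The border widths used to separate the areas are illustrative assumptions, since the exact boundaries are defined by FIGS. 13(a) to 13(d).

```python
import numpy as np

def screen_region_map(height, width, border=0.1):
    """Label each pixel with a screen area index 0..3 (n1..n4) according to its
    distance from the outer edge.  The 'border' fraction is an illustrative choice."""
    y, x = np.mgrid[0:height, 0:width]
    # Normalized distance from the nearest screen edge: 0 at the edge, ~0.5 at the center.
    d = np.minimum.reduce([y / height, (height - 1 - y) / height,
                           x / width,  (width - 1 - x) / width])
    return np.digitize(d, [border, 2 * border, 3 * border])   # 0=n1 ... 3=n4

def second_occupancy(value, regions, v_edges=(26, 51, 85, 170, 200, 225)):
    """Qij: ratio of pixels falling in lightness area vi (rows) and screen area nj (columns).
    value: HxW lightness array (0-255); regions: HxW output of screen_region_map."""
    v_idx = np.digitize(value.ravel(), v_edges)               # 0..6 for v1..v7
    counts = np.zeros((7, 4))
    np.add.at(counts, (v_idx, regions.ravel()), 1)
    return counts / value.size
```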
- Table 5 shows the third coefficient necessary for calculating the index 3 for each divided region.
- the coefficient of each divided area shown in Table 5 is a weighting coefficient by which the second occupancy Qij of each divided area shown in Table 4 is multiplied, and is set in advance according to the photographing conditions.
- FIG. 14 shows the third coefficient in the screen areas n1 to n4 as curves (coefficient curves) that change continuously over the entire lightness range.
- Sum of n1 regions = Q11 × 40.1 + Q21 × 37.0 + (omitted)   (6-1)
- Sum of n4 regions = Q14 × 1.5 + Q24 × (-32.9) + (omitted)   (6-4)
- Index 3 is defined by equation (7), using the sums over the n1 to n4 regions given by equations (6-1) to (6-4):
- Index 3 = (sum of n1 regions) + (sum of n2 regions) + (sum of n3 regions) + (sum of n4 regions) 12.   (7)
- Index 3 is calculated from a compositional characteristic of the captured image data, namely the position of its lightness distribution relative to the distance from the outer edge of the screen, and it is likewise effective for discriminating the shooting scene.
- the luminance Y is calculated from the RGB (Red, Green, Blue) value of the photographed image data using Equation (A).
- the average brightness value xl of the skin color area in the center of the screen of the captured image data is calculated (step S23).
- the center of the screen is, for example, an area composed of an area n3 and an area n4 in FIGS. 13 (a) to 13 (d).
- a difference value x2 between the maximum luminance value and the average luminance value of the photographed image data is calculated (step S24).
- In step S25, the standard deviation x3 of the luminance of the captured image data is calculated, and in step S26, the average luminance value x4 at the center of the screen is calculated.
- In step S27, a comparison value x5 between the difference between the maximum luminance value Yskin_max and the minimum luminance value Yskin_min of the skin color area in the captured image data, and the average luminance value Yskin_ave of the skin color area, is calculated.
- This comparison value x5 is expressed as the following equation (8).
- x5 = (Yskin_max - Yskin_min) / 2 - Yskin_ave   (8)
- Index 4 is calculated by multiplying each of the values x1 to x5 calculated in steps S23 to S27 by a fourth coefficient set in advance according to the shooting conditions and taking the sum (step S28).
- Indicator 4 is defined as in Equation (9) below.
- Index 4 = 0.06 × x1 + 1.13 × x2 + 0.02 × x3 + (-0.01) × x4 + 0.03 × x5 - 6.50   (9)
- Index 4 carries not only the compositional characteristics of the screen of the captured image data but also luminance histogram distribution information, and it is particularly effective in distinguishing between a strobe shooting scene and an under shooting scene.
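- A sketch of the index 4 calculation is given below, assuming the skin-color area and the screen center (areas n3 and n4) are supplied as boolean masks; the coefficients are those of equations (A), (8), and (9).

```python
import numpy as np

def luminance(rgb):
    """Equation (A): Y = 0.30*R + 0.59*G + 0.11*B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.30 * r + 0.59 * g + 0.11 * b

def index_4(rgb, skin_mask, center_mask):
    """rgb: HxWx3 array; skin_mask / center_mask: boolean HxW masks for the skin color
    area and the screen center.  Coefficients follow equation (9)."""
    y = luminance(rgb.astype(float))
    x1 = y[skin_mask & center_mask].mean()     # avg luminance of skin color at screen center
    x2 = y.max() - y.mean()                    # max luminance minus average luminance
    x3 = y.std()                               # standard deviation of luminance
    x4 = y[center_mask].mean()                 # average luminance at screen center
    y_skin = y[skin_mask]
    x5 = (y_skin.max() - y_skin.min()) / 2 - y_skin.mean()    # equation (8)
    return (0.06 * x1 + 1.13 * x2 + 0.02 * x3
            - 0.01 * x4 + 0.03 * x5 - 6.50)                   # equation (9)
```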
- Next, the average luminance value of the skin color area at the center of the screen of the captured image data is defined as index 4'. Here, the central portion of the screen is, for example, the area composed of the areas n2, n3, and n4 in FIGS. 13(a) to 13(d).
- Index 5 is defined using index 1, index 3, and index 4' as in equation (10), and index 6 is defined using index 2, index 3, and index 4' as in equation (11).
- the weighting coefficient multiplied by each index in Expression (10) and Expression (11) is set in advance according to the shooting conditions.
- Next, the image processing condition determination process (step S3 in FIG. 5) executed in the image processing condition calculation unit 714 will be described with reference to the flowchart in FIG.
- First, the shooting scene (light source condition and exposure condition) of the captured image data is determined (step S30).
- a method for discriminating a shooting scene (light source condition and exposure condition) will be described.
- FIG. 17(a) plots the values of index 5 and index 6 calculated for a total of 180 digital images, 60 images each taken under the light source conditions of forward light, backlight, and strobe. According to FIG. 17(a), when the value of index 5 is greater than 0.5 there are many strobe shooting scenes, and when the value of index 5 is 0.5 or less and the value of index 6 is greater than 0.5 there are many backlight scenes. The shooting scene can thus be determined quantitatively from the values of index 5 and index 6.
- Index 4 is particularly useful for discriminating between strobe shooting scenes, where gradation adjustment darkens the entire image, and under shooting scenes, where gradation adjustment brightens the entire image.
- FIG. 17(b) plots index 4 and index 5 for the images whose index 5 is greater than 0.5, out of 60 images each taken in strobe shooting scenes and under shooting scenes.
- Table 6 shows the contents of scene discrimination based on the values of index 4, index 5 and index 6.
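- Since Table 6 itself is not reproduced here, the following sketch only illustrates the kind of quantitative discrimination described with FIG. 17: index 5 separates strobe/under scenes from the rest, index 6 separates backlight from forward light, and index 4 splits strobe from under. The index 4 threshold and its direction are placeholders, not values taken from Table 6, and the scene-to-method mapping follows the selections described below.

```python
# Illustrative boundary on index 4 separating strobe from under scenes; the actual
# boundary and its direction come from Table 6, which is not reproduced here.
INDEX4_STROBE_UNDER_THRESHOLD = 0.0

def discriminate_scene(index4, index5, index6):
    """Quantitative scene discrimination following the behavior described with FIG. 17."""
    if index5 > 0.5:
        return "strobe" if index4 > INDEX4_STROBE_UNDER_THRESHOLD else "under"
    if index6 > 0.5:
        return "backlight"
    return "forward light"

# Gradation adjustment method selected for each discriminated scene (FIG. 19).
ADJUSTMENT_METHOD = {
    "forward light": "A",   # FIG. 19(a)
    "backlight": "B",       # FIG. 19(b)
    "strobe": "C",          # FIG. 19(c)
    "under": "B",           # FIG. 19(b)
}
```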
- a gradation adjustment method for the photographic image data is determined in accordance with the determined photographic scene (step S31).
- Specifically, gradation adjustment method A (FIG. 19(a)) is selected when the shooting scene is forward light, gradation adjustment method B (FIG. 19(b)) is selected when the shooting scene is backlight, gradation adjustment method C (FIG. 19(c)) is selected when the shooting scene is strobe, and gradation adjustment method B (FIG. 19(b)) is selected when the shooting scene is under.
- Next, the parameters necessary for gradation adjustment are calculated based on the indexes calculated by the index calculation unit 713 (step S32).
- the calculation method of the gradation adjustment parameter calculated in step S32 will be described. In the following, it is assumed that the 8-bit captured image data has been converted to 16-bit in advance, and the unit of the captured image data value is 16-bit.
- In step S32, the following parameters P1 to P9 are calculated as the parameters necessary for gradation adjustment (gradation adjustment parameters).
- Reproduction target correction value = luminance reproduction target value (30360) - P4
- Offset value 2 = P5 - P8 - P1
- CDF: cumulative density function
- When the normalized data in the R plane is R, that in the G plane is G, and that in the B plane is B, the normalized data R, G, and B are expressed by equations (12) to (14), respectively.
- N = (B + G + R) / 3   (15)
- Figure 20 (a) shows the frequency distribution (histogram) of the brightness of RGB pixels before normalization.
- The horizontal axis represents luminance and the vertical axis represents the pixel frequency; this histogram is created for each of R, G, and B.
- Normalization is performed on the captured image data for each plane according to equations (12) to (14).
- FIG. 20(b) shows a histogram of the luminance calculated by equation (15). Since the captured image data is normalized to 65535, each pixel takes a value between the minimum value of 0 and the maximum value of 65535.
- When the luminance histogram shown in FIG. 20(b) is divided into blocks of a predetermined range, a frequency distribution as shown in FIG. 20(c) is obtained.
- the horizontal axis is the block number (luminance) and the vertical axis is the frequency.
- Next, any area having a frequency greater than a predetermined threshold is removed from the luminance histogram. This is because, if a part with an extremely high frequency exists, its data strongly influences the average luminance of the entire captured image and erroneous correction is likely to occur. Therefore, as shown in FIG. 21(c), the number of pixels above the threshold is limited in the luminance histogram.
- Figure 21 (d) shows the luminance histogram after the pixel number limiting process.
- The parameter P2 is the average luminance value calculated from each block number (luminance) and its frequency in the luminance histogram (FIG. 21(d)) obtained by deleting the high-luminance and low-luminance regions from the normalized luminance histogram and further limiting the cumulative number of pixels.
- The parameter P1 is the average luminance value of the entire captured image data, and the parameter P3 is the average luminance value of the skin color area (H1) in the captured image data.
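- A simplified sketch of how P1, P2, and P3 might be computed on 16-bit data is shown below; the block count, the frequency limit, and the high/low-luminance cut points are illustrative choices, not the values used in FIGS. 20 and 21.

```python
import numpy as np

def p1_p2_p3(luma, skin_mask, n_blocks=64, freq_limit_ratio=0.05,
             low_cut=0.02, high_cut=0.98):
    """Sketch of gradation-adjustment parameters P1-P3 on 16-bit luminance data.
    luma: HxW array (0..65535); skin_mask: boolean HxW mask of the skin color area (H1)."""
    p1 = float(luma.mean())                 # P1: average luminance of the whole image
    p3 = float(luma[skin_mask].mean())      # P3: average luminance of the skin color area

    # Block the luminance histogram (FIG. 20(c)) ...
    hist, edges = np.histogram(luma, bins=n_blocks, range=(0, 65536))
    centers = (edges[:-1] + edges[1:]) / 2
    # ... delete the high- and low-luminance regions ...
    keep = (centers > low_cut * 65535) & (centers < high_cut * 65535)
    hist = np.where(keep, hist, 0)
    # ... and limit frequencies above a threshold (FIG. 21(c)-(d)).
    limit = int(freq_limit_ratio * luma.size)
    hist = np.minimum(hist, limit)
    # P2: average luminance from the remaining block centers and frequencies.
    p2 = float((centers * hist).sum() / max(hist.sum(), 1))
    return p1, p2, p3
```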
- The key correction value (parameter P7), the key correction value 2 (parameter P7'), and the brightness correction value 2 (parameter P8) are defined as shown in equations (16), (17), and (18), respectively.
- P7 (key correction value) = [P3 - ((index 6 / 6) × 18000 + 22000)] / 24.78   (16)
- P7' (key correction value 2) = [P3 - ((index 4 / 6) × 10000 + 30000)] / 24.78   (17)
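- Equations (16) and (17) translate directly into code as below; equation (18) for P8 is not reproduced in the text and is therefore omitted from this sketch.

```python
def key_correction_value(p3, index6):
    """Equation (16): P7 = [P3 - ((index6 / 6) * 18000 + 22000)] / 24.78."""
    return (p3 - ((index6 / 6) * 18000 + 22000)) / 24.78

def key_correction_value_2(p3, index4):
    """Equation (17): P7' = [P3 - ((index4 / 6) * 10000 + 30000)] / 24.78."""
    return (p3 - ((index4 / 6) * 10000 + 30000)) / 24.78
```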
- In step S33, specifically, the gradation conversion curve corresponding to the gradation adjustment parameters calculated in step S32 is selected (determined) from among the plurality of gradation conversion curves set in advance for the gradation adjustment method determined in step S31. Note that the gradation conversion curve (gradation adjustment amount) may instead be calculated based on the gradation adjustment parameters calculated in step S32.
- Offset correction (a parallel shift of the 8-bit values) is performed by the following equation (19) so that the parameter P1 matches P5.
- RGB value of output image = RGB value of input image + P6   (19)
- When the shooting scene is backlight, a gradation conversion curve corresponding to the parameter P7 (key correction value) given by equation (16) is selected from the plurality of gradation conversion curves shown in FIG. 19(b).
- A specific example of the gradation conversion curves of FIG. 19(b) is shown in FIG. 22.
- the correspondence between the value of parameter P7 and the selected gradation transformation curve is shown below.
- When the shooting scene is backlight, it is preferable to perform dodging processing together with the gradation conversion processing. In this case, it is desirable to adjust the degree of the dodging processing according to index 6, which indicates the backlight intensity.
- When the shooting scene is under, the gradation conversion curve corresponding to the parameter P7' (key correction value 2) given by equation (17) is selected from the plurality of gradation conversion curves shown in FIG. 19(b). Specifically, the gradation conversion curve corresponding to the value of P7' is selected from the gradation conversion curves shown in FIG. 22 in the same manner as when the shooting scene is backlight. When the shooting scene is under, however, the dodging processing used in the backlight case is not performed.
- RGB value of output image = RGB value of input image + P9   (20)
- a gradation conversion curve corresponding to Equation (20) is selected from a plurality of gradation conversion curves shown in FIG. 19 (c).
- a gradation conversion curve may be calculated (determined) based on Equation (20).
- Note that, after the above-described image processing conditions are applied, the captured image data is converted from 16 bits back to 8 bits.
- As described above, an index that quantitatively indicates the shooting scene of the captured image data is calculated, the shooting scene is determined based on the calculated index, and the gradation adjustment method and the gradation adjustment amount (gradation conversion curve) for the captured image data are determined according to the determination result, so that the brightness of the subject can be corrected appropriately.
- In particular, using index 3, which is derived from the compositional characteristics of the captured image data, to determine the shooting scene improves the accuracy of shooting scene determination. In addition, index 4, which is calculated from the compositional elements of the captured image data and the distribution information of the histogram, can be used to distinguish between strobe shooting scenes, whose gradation is adjusted to darken the entire image, and under shooting scenes, whose gradation is adjusted to brighten the entire image, further improving the accuracy of shooting scene determination.
- FIG. 23 shows the configuration of a digital camera 200 to which the imaging apparatus of the present invention is applied.
- As shown in FIG. 23, the digital camera 200 includes a CPU 201, an optical system 202, an imaging sensor unit 203, an AF calculation unit 204, a WB calculation unit 205, an AE calculation unit 206, a lens control unit 207, an image processing unit 208, a display unit 209, a recording data creation unit 210, a recording medium 211, a scene mode setting key 212, a color space setting key 213, a release button 214, and other operation keys 215.
- the CPU 201 comprehensively controls the operation of the digital camera 200.
- the optical system 202 is a zoom lens, and forms a subject image on a charge-coupled device (CCD) image sensor in the imaging sensor unit 203.
- The imaging sensor unit 203 photoelectrically converts the optical image with the CCD image sensor, converts it into a digital signal (A/D conversion), and outputs it.
- the image data output from the imaging sensor unit 203 is input to the AF calculation unit 204, the WB calculation unit 205, the AE calculation unit 206, and the image processing unit 208.
- the AF calculation unit 204 calculates and outputs the distances of the AF areas provided at nine places in the screen. The determination of the distance is performed by determining the contrast of the image, and the CPU 201 selects a value at the closest distance among them and sets it as the subject distance.
- the WB calculation unit 205 calculates and outputs a white balance evaluation value of the image.
- The white balance evaluation value is the gain value required to match the RGB output values of a neutral subject under the light source at the time of shooting, and is calculated as the ratios R/G and B/G with reference to the G channel. The calculated evaluation values are input to the image processing unit 208, and the white balance of the image is adjusted.
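- A sketch of this white balance calculation, assuming the neutral-subject samples are supplied as an array of RGB triples, is shown below; the function names and the sampling of a gray patch are illustrative assumptions.

```python
import numpy as np

def white_balance_eval(neutral_rgb):
    """White balance evaluation values: the R/G and B/G ratios of a neutral subject,
    referenced to the G channel.  neutral_rgb: Nx3 array of samples from a gray patch."""
    r, g, b = np.asarray(neutral_rgb, dtype=float).mean(axis=0)
    return {"R/G": r / g, "B/G": b / g}

def apply_white_balance(rgb, eval_values):
    """Scale the R and B planes so that a neutral subject's R and B match its G."""
    out = rgb.astype(float).copy()
    out[..., 0] /= eval_values["R/G"]
    out[..., 2] /= eval_values["B/G"]
    return np.clip(out, 0, 255)
```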
- the AE calculation unit 206 calculates and outputs an appropriate exposure value for the image data, and the CPU 201 calculates an aperture value and a shutter speed value so that the calculated appropriate exposure value matches the current exposure value.
- the aperture value is output to the lens control unit 207, and the corresponding aperture diameter is set.
- the shutter speed value is output to the image sensor unit 203, and the corresponding CCD integration time is set.
- The image processing unit 208 performs processing such as white balance processing, CCD filter array interpolation processing, color conversion, primary gradation conversion, and sharpness correction on the captured image data, then performs the scene discrimination processing of the embodiment described above (see FIGS. 5 to 22) and the gradation conversion processing determined based on the discrimination result, converting the image into a preferable image. After that, conversion such as JPEG compression is executed.
- the JPEG-compressed image data is output to the display unit 209 and the recording data creation unit 210.
- Display unit 209 displays captured image data on a liquid crystal display and displays various types of information according to instructions from CPU 201.
- the recording data creation unit 210 formats JPEG-compressed image data and various captured image data input from the CPU 201 into an Exif (Exchangeable Image File Format) file, and records the data on the recording medium 211.
- In this file, there is a part called a maker note, a space in which each manufacturer can write arbitrary information, and the scene discrimination result and the indexes 4, 5, and 6 may be recorded there.
- The shooting scene mode can be switched by a user setting; that is, three modes can be selected as the shooting scene mode: a normal mode, a portrait mode, and a landscape mode.
- The user operates the scene mode setting key 212 to switch to the portrait mode when the subject is a person and to the landscape mode when the subject is a landscape, so that primary gradation conversion suitable for the subject is performed.
- the digital camera 200 records the selected shooting scene mode information by adding it to the maker note portion of the image data file. The digital camera 200 also records the position information of the AF area selected as the subject in the image file in the same manner.
- the user can set the output color space using the color space setting key 213.
- Either sRGB (IEC 61966-2-1) or RAW can be selected.
- When sRGB is selected, the image processing of this embodiment is executed.
- When RAW is selected, the image processing of this embodiment is not performed, and the image is output in a color space unique to the CCD.
- As described above, an index that quantitatively indicates the shooting scene of the captured image data is calculated, the shooting scene is determined based on the calculated index, and the gradation adjustment method and the gradation adjustment amount (gradation conversion curve) for the captured image data are determined according to the determination result, so that the brightness of the subject can be corrected appropriately. Because appropriate gradation conversion processing is performed inside the digital camera 200, a preferable image can be output even when the digital camera 200 and a printer are directly connected without using a personal computer.
- a face image may be detected from the photographed image data, a photographing scene may be determined based on the detected face image, and image processing conditions may be determined. Also, Exif (Exchangeable Image File Format) information may be used for discriminating the shooting scene. If Exif information is used, it is possible to further improve the accuracy of determining the shooting scene.
- a process of reducing the image size may be performed, and the scene determination process of the present embodiment may be performed on the reduced image data.
- the scene discrimination process when using a reduced image will be described.
- the captured image data is converted into a reduced image (step T1).
- As the method for reducing the image size, a known method (for example, the bilinear method, the bicubic method, or the nearest neighbor method) can be used.
- The reduction ratio is not particularly limited, but from the viewpoint of processing speed and the accuracy of the scene discrimination process, about 1/2 to 1/10 of the original image is preferable.
- Next, the reduced image data obtained in step T1 is divided into predetermined image areas, and an occupancy ratio calculation process is performed to calculate occupancy ratios indicating the ratio of each divided area to the entire reduced image data (step T2).
- Next, indexes (indexes 1 to 6) for specifying the shooting scene are calculated based on the occupancy ratios calculated in step T2, at least the average luminance value of the skin color area at the center of the screen of the captured image data, and coefficients set in advance according to the shooting conditions (step T3).
- Next, the shooting scene is determined based on the indexes calculated in step T3, the image processing conditions (gradation conversion processing conditions) for the reduced image data are determined according to the determination result (step T4), and the scene discrimination process ends.
- The occupancy ratio calculation process in step T2, the index calculation process in step T3, and the image processing condition determination process in step T4 are the same as the methods shown in steps S1, S2, and S3 of FIG. 5, respectively.
- The original image data is then subjected to gradation conversion processing according to the image processing conditions (gradation conversion processing conditions) determined in step T4.
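- The overall flow of steps T1 to T4 can be sketched as below. Image reduction is shown here as simple block averaging (any of the listed methods would serve), and the occupancy, index, and condition functions are placeholders standing in for steps S1 to S3 of FIG. 5.

```python
import numpy as np

def reduce_image(rgb, factor=4):
    """Step T1: shrink the image by an integer factor using block averaging
    (bilinear, bicubic, or nearest neighbor would do as well); a factor of
    roughly 2 to 10 balances processing speed against discrimination accuracy."""
    h = (rgb.shape[0] // factor) * factor
    w = (rgb.shape[1] // factor) * factor
    blocks = rgb[:h, :w].reshape(h // factor, factor, w // factor, factor, 3)
    return blocks.mean(axis=(1, 3))

def discriminate_on_reduced(rgb, occupancy_fn, index_fn, condition_fn, factor=4):
    """Steps T2-T4 run on the reduced data; the returned gradation conversion
    conditions are then applied to the original image data."""
    small = reduce_image(np.asarray(rgb, dtype=float), factor)
    occupancy = occupancy_fn(small)          # step T2: occupancy ratio calculation
    indices = index_fn(small, occupancy)     # step T3: indexes 1 to 6
    return condition_fn(indices)             # step T4: gradation conversion conditions
```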
- As described above, an index that quantitatively indicates the shooting scene of the captured image data is calculated, the shooting scene is determined based on the calculated index, and the gradation adjustment of the captured image data is determined according to the determination result, so that the brightness of the subject can be corrected appropriately.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Picture Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-275239 | 2004-09-22 | ||
JP2004275239A JP2006092137A (ja) | 2004-09-22 | 2004-09-22 | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006033236A1 true WO2006033236A1 (ja) | 2006-03-30 |
Family
ID=36089999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/016384 WO2006033236A1 (ja) | 2004-09-22 | 2005-09-07 | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2006092137A (ja) |
WO (1) | WO2006033236A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110197159A (zh) * | 2019-05-31 | 2019-09-03 | 维沃移动通信有限公司 | 指纹采集方法及终端 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4997846B2 (ja) | 2006-06-30 | 2012-08-08 | ブラザー工業株式会社 | 画像処理プログラムおよび画像処理装置 |
JP4853414B2 (ja) * | 2007-07-18 | 2012-01-11 | ソニー株式会社 | 撮像装置、画像処理装置およびプログラム |
KR101151435B1 (ko) | 2009-11-11 | 2012-06-01 | 한국전자통신연구원 | 얼굴 인식 장치 및 방법 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000148980A (ja) * | 1998-11-12 | 2000-05-30 | Fuji Photo Film Co Ltd | 画像処理方法、画像処理装置及び記録媒体 |
JP2002199221A (ja) * | 2000-12-27 | 2002-07-12 | Fuji Photo Film Co Ltd | 濃度補正曲線生成装置および方法 |
JP2002247393A (ja) * | 2001-02-14 | 2002-08-30 | Konica Corp | 画像処理方法 |
-
2004
- 2004-09-22 JP JP2004275239A patent/JP2006092137A/ja active Pending
-
2005
- 2005-09-07 WO PCT/JP2005/016384 patent/WO2006033236A1/ja active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000148980A (ja) * | 1998-11-12 | 2000-05-30 | Fuji Photo Film Co Ltd | 画像処理方法、画像処理装置及び記録媒体 |
JP2002199221A (ja) * | 2000-12-27 | 2002-07-12 | Fuji Photo Film Co Ltd | 濃度補正曲線生成装置および方法 |
JP2002247393A (ja) * | 2001-02-14 | 2002-08-30 | Konica Corp | 画像処理方法 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110197159A (zh) * | 2019-05-31 | 2019-09-03 | 维沃移动通信有限公司 | 指纹采集方法及终端 |
Also Published As
Publication number | Publication date |
---|---|
JP2006092137A (ja) | 2006-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7076119B2 (en) | Method, apparatus, and program for image processing | |
WO2006120839A1 (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
WO2006123492A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
US20040095478A1 (en) | Image-capturing apparatus, image-processing apparatus, image-recording apparatus, image-processing method, program of the same and recording medium of the program | |
US20050141002A1 (en) | Image-processing method, image-processing apparatus and image-recording apparatus | |
JPWO2005079056A1 (ja) | 画像処理装置、撮影装置、画像処理システム、画像処理方法及びプログラム | |
WO2005112428A1 (ja) | 画像処理方法、画像処理装置、画像記録装置及び画像処理プログラム | |
JP2003283731A (ja) | 画像入力装置及び画像出力装置並びにこれらから構成される画像記録装置 | |
JP2007184888A (ja) | 撮像装置、画像処理装置、画像処理方法、及び画像処理プログラム | |
US7324702B2 (en) | Image processing method, image processing apparatus, image recording apparatus, program, and recording medium | |
WO2006033235A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
WO2006033236A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
WO2006077702A1 (ja) | 撮像装置、画像処理装置及び画像処理方法 | |
JP2006318255A (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
JP2005192162A (ja) | 画像処理方法、画像処理装置及び画像記録装置 | |
US6801296B2 (en) | Image processing method, image processing apparatus and image recording apparatus | |
JP2005203865A (ja) | 画像処理システム | |
WO2006033234A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
JP2007312125A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
WO2006077703A1 (ja) | 撮像装置、画像処理装置及び画像記録装置 | |
JP2004096508A (ja) | 画像処理方法、画像処理装置、画像記録装置、プログラム及び記録媒体 | |
JP2006203571A (ja) | 撮像装置、画像処理装置及び画像記録装置 | |
JP2005332054A (ja) | 画像処理方法、画像処理装置、画像記録装置及び画像処理プログラム | |
JP2006094000A (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
WO2006132067A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05778568 Country of ref document: EP Kind code of ref document: A1 |