US9135726B2 - Image generation apparatus, image generation method, and recording medium - Google Patents

Image generation apparatus, image generation method, and recording medium

Info

Publication number: US9135726B2
Application number: US14/010,192
Published as: US20140064617A1
Inventor: Shigeru KAFUKU
Assignee (original and current): Casio Computer Co., Ltd.
Priority: Japanese Patent Application No. 2012-189406, filed Aug. 30, 2012
Legal status: Active (granted)
Prior art keywords: image, face, hairstyle, characteristic information, hair

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
Definitions

  • the present invention relates to an image generation apparatus, an image generation method, and a recording medium.
  • a portrait creating apparatus which creates a portrait by using characteristic points of facial parts such as eyes, nose, mouth, ears, face outline, etc. has been known (for example, see Japanese Patent Application Laid-Open Publication No. 2004-145625).
  • In addition, a game apparatus which creates a character image by combining part objects previously prepared for respective regions has been known (for example, see Japanese Patent Application Laid-Open Publication No. 2008-61896).
  • the object of the present invention is to provide an image generation apparatus, an image generation method, and a recording medium, which can generate a more proper portrait image by considering hair characteristics of an original image.
  • an image generation apparatus including: an extracting section to extract characteristic information of a hair region in a face image; an image specifying section to specify a hairstyle image on the basis of the characteristic information extracted by the extracting section; and a first generating section to generate a portrait image of a face in the face image by using the hairstyle image specified by the image specifying section.
  • a method for generating an image by using an image generation apparatus comprising the processes of: extracting characteristic information of a hair region in a face image; specifying a hairstyle image on the basis of the extracted characteristic information; and generating a portrait image of a face in the face image by using the specified hairstyle image.
  • a recording medium which records a program readable by a computer of an image generation apparatus, the program causing the computer to exert the functions of: extracting characteristic information of a hair region in a face image; specifying a hairstyle image on the basis of the extracted characteristic information; and generating a portrait image of a face in the face image by using the specified hairstyle image.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an imaging apparatus of an embodiment to which the present invention is applied;
  • FIG. 2 is a flowchart illustrating an example of an operation relevant to portrait image generating processing by the imaging apparatus illustrated in FIG. 1 ;
  • FIG. 3 is a flowchart illustrating an example of an operation relevant to characteristic extracting processing in the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 4A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 4B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 4C is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 5A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 5B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 6 is a diagram schematically illustrating examples of styles of front hairs relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 7A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 ;
  • FIG. 7B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2 .
  • FIG. 1 is a block diagram illustrating a schematic configuration of an imaging apparatus 100 of an embodiment to which the present invention is applied.
  • the imaging apparatus 100 of the embodiment specifically includes an imaging unit 1 , an imaging control unit 2 , an image data generating unit 3 , a memory 4 , an image recording unit 5 , an image processing unit 6 , a display control unit 7 , a display unit 8 , an operation input unit 9 , and a central control unit 10 .
  • The imaging unit 1 , imaging control unit 2 , image data generating unit 3 , memory 4 , image recording unit 5 , image processing unit 6 , display control unit 7 , and central control unit 10 are connected to one another via a bus line 11 .
  • the imaging unit 1 takes, as an imaging section, an image of a predetermined object to generate a frame image.
  • the imaging unit 1 is equipped with a lens unit 1 A, an electronic imaging unit 1 B, and a lens driving unit 1 C.
  • the lens unit 1 A is composed of, for example, a plurality of lenses such as a zoom lens and a focus lens.
  • the electronic imaging unit 1 B is composed of, for example, an image sensor (imaging element) such as a Charge Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) sensor.
  • the lens driving unit 1 C is equipped with, for example, a zoom driving unit which causes the zoom lens to move along an optical axis direction, a focusing driving unit which causes the focus lens to move along the optical axis direction, etc., though illustrations thereof are omitted.
  • the imaging unit 1 can include a diaphragm (not illustrated) which adjusts an amount of light passing through the lens unit 1 A, in addition to the lens unit 1 A, the electronic imaging unit 1 B, and the lens driving unit 1 C.
  • the imaging control unit 2 controls imaging of an object by the imaging unit 1 .
  • the imaging control unit 2 is equipped with a timing generator, a driver, etc., though illustrations thereof are omitted.
  • the imaging control unit 2 drives, by using the timing generator and the driver, the electronic imaging unit 1 B to perform scanning, and to convert the optical image which has passed through the lens unit 1 A into the two-dimensional image signal at predetermined intervals, and causes the frame image of each screen to be read out from an imaging region of the electronic imaging unit 1 B to be output to the image data generating unit 3 .
  • the imaging control unit 2 can cause the electronic imaging unit 1 B, instead of the focus lens of the lens unit 1 A, to move along the optical axis direction to adjust the focusing position of the lens unit 1 A.
  • the imaging control unit 2 can also perform a control to adjust a condition for imaging a specific object, such as Auto Focus (AF) processing, Auto Exposure (AE) processing, and Auto White Balance (AWB) processing.
  • the image data generating unit 3 appropriately performs gain adjustment for each RGB color component of the frame image signal, which has an analog value and is transferred from the electronic imaging unit 1 B, then causes a sample/hold circuit (not illustrated) to sample and hold the signal, causes an A/D converter (not illustrated) to convert the signal into digital data, causes a color process circuit (not illustrated) to perform colorization processing including pixel interpolation processing and gamma correction processing, and then generates a luminance signal Y and color difference signals Cb, Cr (YUV data) which have digital values.
  • the luminance signal Y and the color difference signals Cb, Cr output from the color process circuit are subjected to DMA transfer to the memory 4 , which is used as a buffer memory, via a DMA controller not illustrated.
  • the memory 4 is composed of, for example, a Dynamic Random Access Memory (DRAM) or the like, and temporarily stores data and the like to be processed by each unit such as the image processing unit 6 and the central control unit 10 of the imaging apparatus 100 .
  • the image recording unit 5 is composed of, for example, a non-volatile memory (flash memory) or the like, and records the image data to be recorded, which data has been encoded in a predetermined compression format (for example, JPEG format, etc.) by an encoding unit (not illustrated) of the image processing unit 6 .
  • the image recording unit 5 records a predetermined number of pieces of image data of hairstyle images P 1 , . . . (see FIG. 7A ).
  • Each piece of image data of hairstyle images P 1 , . . . is, for example, an image which schematically represents an outline of human hairs, and is correlated to characteristic information of an entire hair region including a front hair region (hair-tip region).
  • each piece of image data of hairstyle images P 1 , . . . can be formed, for example, by performing a process (details thereof will be described later) using an Active Appearance Model (AAM) with respect to a face region F 1 detected by later-described face detecting processing, deleting face components (for example, eyes, nose, mouth, eyebrows, etc.) existing inside of an outline of a jaw of a face, then drawing lines along the face outline and/or tip portions of hairs, and painting a skin portion inside the face outline and/or the hairs with predetermined colors. Drawing the lines along the face outline and/or the tip portions of the hairs can be manually performed on the basis of a predetermined operation in the operation input unit 9 by a user, or can be automatically performed under a control of a CPU of the central control unit 10 .
  • As the characteristic information of the hair region, for example, there can be adopted an amount of characteristics (details thereof will be described later) obtained by generating a gradient direction histogram of the luminance signal Y of the image data of the original image from which each hairstyle image P 1 is generated.
  • Each piece of image data of hairstyle images P 1 , . . . is also correlated to mask information for designating a front hair region F 2 from the hair region, in addition to the characteristic information of the entire hair region.
  • each piece of image data of hairstyle images P 1 , . . . is recorded so as to be correlated to characteristic information of the front hair region F 2 of the hair region through the mask information.
  • Each piece of image data of hairstyle images P 1 , . . . can also be correlated to a shape (details thereof will be described later) of the face outline, which is part of the characteristic information of the face in the original image from which each hairstyle image P 1 is generated.
  • each piece of image data of the hairstyle images P 1 , . . . can be image data including the face outline.
  • To the respective hairstyle images P 1 , . . . , various styles of front hairs, such as front hairs parted on the left, front hairs parted in the middle, front hairs parted on the right, no part (front hairs let down), and no part (forehead left uncovered), can be correlated.
  • the image recording unit 5 can have a configuration where a recording medium (not illustrated) is attachable/detachable to/from the image recording unit 5 so that writing/reading of data in/from the recording medium attached to the image recording unit 5 is controlled thereby.
  • the image processing unit 6 includes an image obtaining unit 6 A, a face detecting unit 6 B, a component image generating unit 6 C, an outline specifying unit 6 D, a front hair specifying unit 6 E, a characteristic information extracting unit 6 F, a hairstyle image specifying unit 6 G, and a portrait image generating unit 6 H.
  • Each unit of the image processing unit 6 is composed of, for example, a predetermined logic circuit, but such configuration is a mere example and the present invention is not limited thereto.
  • the image obtaining unit 6 A obtains the image which is to be subjected to portrait image generating processing.
  • the image obtaining unit 6 A obtains the image data of the original image (for example, a photographic image, etc.) P 2 .
  • the image obtaining unit 6 A obtains from the memory 4 a copy of the image data (RGB data and/or YUV data) of the original image P 2 , which has been generated by the image data generating unit 3 by imaging an object by the imaging unit 1 and the imaging control unit 2 , and/or obtains a copy of the image data of the original image P 2 recorded in the image recording unit 5 (see FIG. 4A ).
  • later-described processes by the image processing unit 6 can be performed with respect to the image data of the original image P 2 itself, or with respect to reduced image data of a predetermined size (for example, VGA size, etc.) obtained by reducing the image data of the original image P 2 at a predetermined ratio, as appropriate.
  • the face detecting unit 6 B detects a face region F 1 (see FIG. 4A ) from the original image P 2 which is to be processed.
  • the face detecting unit 6 B detects the face region F 1 including a face from the original image P 2 obtained by the image obtaining unit 6 A. More specifically, the face detecting unit 6 B obtains the image data of the original image P 2 , which has been obtained as the image to be subjected to the portrait image generating processing by the image obtaining unit 6 A, and performs a predetermined face detecting processing with respect to the obtained image data to detect the face region F 1 . Then, the face detecting unit 6 B cuts out a region A (see FIG. 4A ) of a predetermined size surrounding the face region F 1 , and sets the cut region to a face region image.
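The embodiment leaves the concrete face detecting processing unspecified. Purely as an illustrative sketch, the following Python code (OpenCV) detects the largest face and cuts out a surrounding region in the manner of the region A in FIG. 4A; the Haar cascade, the margin ratio, and all names here are assumptions for illustration, not part of the patent.

```python
import cv2

def detect_face_region(original_bgr, margin=0.3):
    """Detect the largest face and cut out a surrounding region (cf. region A, FIG. 4A)."""
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    # Expand by a margin so the surrounding hair is included in the cut region.
    mx, my = int(w * margin), int(h * margin)
    img_h, img_w = gray.shape
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(img_w, x + w + mx), min(img_h, y + h + my)
    return original_bgr[y0:y1, x0:x1]
```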
  • the component image generating unit 6 C generates a face component image P 4 (see FIG. 4C ) representing facial principal components.
  • the component image generating unit 6 C generates the face component image P 4 relevant to the facial principal components in the original image P 2 (see FIG. 4A ) obtained by the image obtaining unit 6 A. More specifically, the component image generating unit 6 C performs a minute-part extracting processing with respect to the face region image containing the face region F 1 of the original image P 2 therein, and generates a minute-part-of-face image P 3 (see FIG. 4B ) representing face components such as eyes, nose, mouth, eyebrows, hairs, and face outline, with lines. For example, the component image generating unit 6 C generates the minute-part-of-face image P 3 by process using the AAM, as the minute-part extracting processing. In addition, the component image generating unit 6 C performs the minute-part extracting processing with respect to the face region F 1 detected from the image data of the original image P 2 by the face detecting unit 6 B.
  • the AAM is a method for modeling visual events, namely, processing to model the image of an arbitrary face region F 1 .
  • the component image generating unit 6 C previously registers a result of statistical analysis of positions and/or pixel values (for example, luminance values) of predetermined characteristic regions (for example, corners of eyes, tip of nose, face line, etc.) in a plurality of sample face images, in a predetermined registration section. Then, the component image generating unit 6 C sets, with reference to the positions of the characteristic regions, a shape model representing a face shape and/or a texture model representing “Appearance” in an average shape, and models the image (face region image) of the face region F 1 by using these models. Thus, the component image generating unit 6 C generates the minute-part-of-face image P 3 in which a principal composition of the original image P 2 is extracted and represented with lines.
  • As the minute-part-of-face image P 3 , there can be adopted a binary image in which a black pixel is set to a first pixel value (for example, “0 (zero)”, etc.) and a white pixel is set to a second pixel value (for example, “255”, etc.) different from the first pixel value.
  • the component image generating unit 6 C specifies the face outline in the face region F 1 by the minute-part extracting processing, and generates the face component image P 4 (see FIG. 4C ) representing the face components existing inside of the face outline and the face components contacting with the face outline, with lines.
  • the component image generating unit 6 C specifies the pixels contacting with the face outline in the minute-part-of-face image P 3 , and deletes, among the pixels continuous with the above pixels, a pixel assembly existing outside of the face outline. In other words, the component image generating unit 6 C deletes a part of the minute-part-of-face image P 3 , which part exists outside of the face outline, and leaves a part of the minute-part-of-face image P 3 , which part exists inside of the face outline, to generate the face component image P 4 including, for example, part images of principal face components such as eyes, nose, mouth, eyebrows, etc.
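As a rough sketch of this deletion step: the embodiment deletes the pixel assemblies continuous with the face outline that exist outside of it, which the code below approximates with a simple polygon mask over the outline W (an assumed simplification); the variable names are illustrative.

```python
import cv2
import numpy as np

def keep_inside_outline(line_image, outline_pts):
    """Keep only the line drawing inside the face outline.

    line_image: binary minute-part-of-face image P3 (0 = line, 255 = background).
    outline_pts: N x 2 integer array of points along the face outline W.
    """
    mask = np.full(line_image.shape, 255, dtype=np.uint8)   # white everywhere
    cv2.fillPoly(mask, [outline_pts.astype(np.int32)], 0)   # 0 inside the outline
    # Outside the outline, overwrite with the background value (255);
    # inside, the original lines (eyes, nose, mouth, eyebrows) survive.
    return np.where(mask == 0, line_image, 255).astype(np.uint8)
```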
  • the component image generating unit 6 C can extract and obtain information relevant to a relative positional relationship of the part images of the face components in an XY plane space, and/or information relevant to their coordinate positions.
  • edge extracting processing and/or anisotropic diffusion processing can be performed to generate the face component image P 4 including the part images of the face components.
  • the component image generating unit 6 C can execute a differential operation with respect to the image data of the original image P 2 by using a predetermined differential filter (for example, a high-pass filter, etc.) to perform edge detecting processing to detect, as an edge, a point at which a luminance value, color, and/or density changes precipitously.
  • the component image generating unit 6 C can also perform the anisotropic diffusion processing with respect to the image data of the original image P 2 , by using a predetermined anisotropic diffusion filter, by which processing the image data is smoothed in a state where weighting in a tangential direction of a linear edge is different from weighting in a vertical direction of the edge.
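A minimal sketch of the edge detecting processing described above, with Sobel derivatives standing in for the predetermined differential filter; the threshold is an assumed value. For the anisotropic diffusion processing, OpenCV's contrib module offers cv2.ximgproc.anisotropicDiffusion, which could serve a comparable role if installed.

```python
import cv2
import numpy as np

def detect_edges(gray, threshold=60.0):
    """Mark points where the luminance changes precipitously, using Sobel derivatives."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical derivative
    magnitude = cv2.magnitude(gx, gy)
    return (magnitude > threshold).astype(np.uint8) * 255
```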
  • the outline specifying unit 6 D specifies a face outline W in the original image P 2 .
  • the outline specifying unit 6 D specifies the face outline W in the original image P 2 obtained by the image obtaining unit 6 A. More specifically, the outline specifying unit 6 D specifies a part corresponding to the face outline specified in the minute-part-of-face image P 3 by the component image generating unit 6 C, inside of the face region image (image in the region A) including the face region F 1 of the original image P 2 .
  • As the shape of the face outline W, for example, there can be adopted a substantially U-letter shape (see FIG. 5A ) which connects the right and left temples to each other with a line passing through the jaw. This is a mere example, and the present invention is not limited thereto.
  • the present invention can also adopt an elliptical shape which connects the right and left temples, jaw, and forehead to one another with a line, especially an elliptical shape which matches the outline of the jaw.
  • the front hair specifying unit 6 E specifies the front hair region F 2 in the original image P 2 .
  • the front hair specifying unit 6 E specifies the front hair region F 2 with reference to a predetermined position in the face outline W specified by the outline specifying unit 6 D. More specifically, the front hair specifying unit 6 E specifies a predetermined range on the basis of the positions corresponding to the right and left temples constituting the face outline W, which has been specified by the outline specifying unit 6 D, as the front hair region F 2 (see FIG. 5A ). Then, the front hair specifying unit 6 E generates the mask information for designating the specified front hair region F 2 .
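One possible reading of the predetermined range based on the positions corresponding to the right and left temples is sketched below; the band proportions and function name are assumptions for illustration.

```python
import numpy as np

def front_hair_mask(image_shape, left_temple, right_temple, band_ratio=0.35):
    """Build mask information for a front hair region F2 above the temples.

    left_temple / right_temple: (x, y) pixel positions on the face outline W.
    band_ratio: fraction of the temple-to-temple width used as the band height.
    """
    h, w = image_shape[:2]
    x0 = max(0, min(left_temple[0], right_temple[0]))
    x1 = min(w, max(left_temple[0], right_temple[0]))
    y1 = min(left_temple[1], right_temple[1])           # top of the temples
    y0 = max(0, y1 - int((x1 - x0) * band_ratio))       # band extending upward
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255                            # 255 = inside region F2
    return mask
```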
  • the characteristic information extracting unit 6 F extracts the characteristic information from the hair region of the original image P 2 .
  • the characteristic information extracting unit 6 F extracts, from the hair region of the original image P 2 obtained by the image obtaining unit 6 A, the characteristic information of at least the hair-tip region (for example, the front hair region F 2 ). More specifically, the characteristic information extracting unit 6 F performs characteristic extracting processing to select and extract a block region (characteristic point) having a high amount of characteristics in the hair region.
  • the characteristic information extracting unit 6 F performs the characteristic extracting processing with respect to the pixels constituting the entire hair region of the original image P 2 , as the minute-part extracting processing, and extracts the characteristic information of the entire hair region.
  • As the characteristic extracting processing, there can be adopted processing to extract an amount of characteristics (an amount of shape characteristics) obtained by generating a histogram of the luminance (luminance signal Y) gradient direction of the original image P 2 to be processed. More specifically, the characteristic information extracting unit 6 F calculates a gradient direction (for example, nine directions obtained by dividing the range of zero(0) to 179 degrees into nine sections, an angle between neighboring directions being 20 degrees, etc.) and a gradient intensity (for example, 8 bit: 0-255) of each pixel of the luminance image converted from the original image P 2 to be subjected to the characteristic extracting processing.
  • the characteristic information extracting unit 6 F divides the luminance image, for which the gradient direction and the gradient intensity of each pixel have been calculated, at a predetermined ratio (for example, vertical direction: 16 × horizontal direction: 16, etc.), then calculates an integrated value of gradient intensities for each gradient direction to generate the gradient direction histogram, and extracts it as the amount of characteristics (characteristic information).
  • the characteristic information extracting unit 6 F can use the mask information generated by the front hair specifying unit 6 E to extract only the characteristic information of the front hair region F 2 of the hair region of the original image P 2 (face region image).
  • Since the characteristic extracting processing to generate the gradient direction histogram is a known technique, the detailed description thereof is omitted here.
  • The number of gradient directions and the number of gradations of gradient intensity are mere examples. The present invention is not limited to the above, and they can be arbitrarily changed.
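For concreteness, a sketch of the extraction with the example parameters given above (nine 20-degree direction bins over 0 to 179 degrees and a 16 × 16 grid of divided regions); Sobel derivatives are an assumed stand-in for the unspecified gradient operator, and the image is assumed to be at least 16 pixels in each dimension.

```python
import cv2
import numpy as np

def gradient_direction_histograms(gray, bins=9, grid=(16, 16)):
    """Per-cell histograms of gradient direction (0-179 deg) weighted by intensity."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    intensity = cv2.magnitude(gx, gy)
    direction = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0   # 0..179 degrees
    bin_index = np.minimum((direction / (180.0 / bins)).astype(int), bins - 1)

    h, w = gray.shape
    ch, cw = h // grid[0], w // grid[1]
    hist = np.zeros((grid[0], grid[1], bins), dtype=np.float64)
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell_bins = bin_index[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            cell_mag = intensity[i*ch:(i+1)*ch, j*cw:(j+1)*cw].ravel()
            # Integrate gradient intensities for each gradient direction.
            hist[i, j] = np.bincount(cell_bins, weights=cell_mag, minlength=bins)
    return hist
```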
  • the hairstyle image specifying unit 6 G specifies the hairstyle image P 1 on the basis of the characteristic information extracted by the characteristic information extracting unit 6 F.
  • the hairstyle image specifying unit 6 G specifies the hairstyle image P 1 corresponding to the characteristic information of the hair region of the original image P 2 extracted by the characteristic information extracting unit 6 F.
  • the hairstyle image specifying unit 6 G includes a first specifying unit G 1 and a second specifying unit G 2 .
  • the first specifying unit G 1 specifies a predetermined number of candidate hairstyle images.
  • the first specifying unit G 1 specifies the predetermined number of candidate hairstyle images on the basis of the characteristic information of the front hair region F 2 extracted by the characteristic information extracting unit 6 F. More specifically, the first specifying unit G 1 normalizes the characteristic information (gradient direction histogram) of the front hair region corresponding to each of the respective hairstyle images P 1 , . . . recorded in the image recording unit 5 , normalizes the characteristic information (gradient direction histogram) of the front hair region F 2 designated by the mask information in the hair region of the original image P 2 , compares the pieces of normalized information with each other, and rearranges the hairstyle images P 1 , . . . , with reference to a matching degree, in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.). Then the first specifying unit G 1 specifies the most common style of the front hairs (see FIG. 6 ) among the images up to the predetermined ranking, and specifies the hairstyle images P 1 of the most common style of the front hairs as the candidate hairstyle images among the hairstyle images P 1 , . . . recorded in the image recording unit 5 .
  • Although FIG. 6 illustrates styles of the front hairs such as front hairs parted on the left, front hairs parted in the middle, front hairs parted on the right, no part A, and no part B, they are mere examples, and the present invention is not limited thereto; the styles can be arbitrarily changed.
  • the second specifying unit G 2 specifies the hairstyle image P 1 corresponding to the characteristic information of the hair region of the original image P 2 , from among the candidate hairstyle images of a predetermined number.
  • the second specifying unit G 2 specifies the hairstyle image P 1 on the basis of the characteristic information of the entire hair region extracted by the characteristic information extracting unit 6 F, from among the predetermined number of candidate hairstyle images specified by the first specifying unit G 1 . More specifically, the second specifying unit G 2 normalizes the characteristic information (gradient direction histogram) of the hair region corresponding to each of the predetermined number of candidate hairstyle images specified by the first specifying unit G 1 , normalizes the characteristic information (gradient direction histogram) of the hair region of the original image P 2 , compares the pieces of normalized information with each other, and rearranges the hairstyle images P 1 , . . . , with reference to a matching degree, in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.).
  • the second specifying unit G 2 can automatically specify one hairstyle image P 1 (see FIG. 7A ) having the highest matching degree, or can specify the hairstyle image P 1 desired by a user on the basis of a predetermined operation on the operation input unit 9 .
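A sketch of the two-stage specification performed by the first specifying unit G 1 and the second specifying unit G 2. Cosine similarity is an assumed stand-in for the unspecified matching degree, and the database layout is invented for illustration; the histograms could come from a routine such as the gradient_direction_histograms sketch above.

```python
import numpy as np

def matching_degree(hist_a, hist_b):
    """Compare two gradient direction histograms after normalization (cosine similarity)."""
    a, b = hist_a.ravel(), hist_b.ravel()
    a = a / (np.linalg.norm(a) + 1e-9)   # normalization step described above
    b = b / (np.linalg.norm(b) + 1e-9)
    return float(np.dot(a, b))

def specify_hairstyle(query_front, query_full, database, top_k=10):
    """Rank by front hair region, keep the most common front hair style among the
    top_k, then re-rank those candidates by the entire hair region.

    database: list of dicts with keys 'front_hist', 'full_hist', 'style', 'image'.
    """
    ranked = sorted(database,
                    key=lambda e: matching_degree(query_front, e["front_hist"]),
                    reverse=True)[:top_k]
    # Most common front hair style among the highest-ranking images.
    styles = [e["style"] for e in ranked]
    common = max(set(styles), key=styles.count)
    candidates = [e for e in database if e["style"] == common]
    # Second stage: matching degree over the entire hair region.
    best = max(candidates, key=lambda e: matching_degree(query_full, e["full_hist"]))
    return best["image"]
```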
  • the portrait image generating unit 6 H generates the portrait image P 5 on the basis of the hairstyle image P 1 and the face component image P 4 .
  • the portrait image generating unit 6 H generates the portrait image P 5 by using the image data of the hairstyle image P 1 specified by the hairstyle image specifying unit 6 G. More specifically, the portrait image generating unit 6 H specifies, inside the face outline W in the hairstyle image P 1 , a position where each face component such as eyes, nose, mouth, eyebrows, etc. is superimposed, and generates the image data of the portrait image P 5 which represents the portrait of the original image P 2 by superimposing the part images of face components on the specified position.
  • the portrait image generating unit 6 H can generate the image in which the predetermined parts (for example, face components such as eyes, mouth, eyebrows, etc.) are colored predetermined colors.
  • the display control unit 7 performs a control to read out the image data, which is temporarily stored in the memory 4 and is to be displayed, and causes the display unit 8 to display the read-out image data.
  • the display control unit 7 is equipped with a Video Random Access Memory (VRAM), a VRAM controller, a digital video encoder, etc.
  • the digital video encoder periodically reads out the luminance signal Y and the color difference signals Cb, Cr, which have been read out from the memory 4 under the control of the central control unit 10 and stored in the VRAM (not illustrated), from the VRAM via the VRAM controller, and generates a video signal based on these pieces of data to output the same to the display unit 8 .
  • the display unit 8 is, for example, a liquid crystal display panel, and displays the image imaged by the imaging unit 1 and the like on the basis of the video signal from the display control unit 7 . Specifically, the display unit 8 displays a live-view image, in a still-image imaging mode or a moving image imaging mode, while continually updating the frame images generated by imaging of an object by the imaging unit 1 and the imaging control unit 2 at a predetermined frame rate. The display unit 8 also displays the image (REC-view image) to be recorded as a still image, and/or displays the image which is currently being recorded as a moving image.
  • the operation input unit 9 is used for executing a predetermined operation of the imaging apparatus 100 .
  • the operation input unit 9 is equipped with an operation unit such as a shutter button relevant to an instruction to image an object, a selection determination button relevant to an instruction to select an imaging mode and/or functions, etc., a zoom button relevant to an instruction to adjust an amount of zoom, and so on, which are not illustrated, and outputs a predetermined operation signal to the central control unit 10 according to an operation of each button of the operation unit.
  • the central control unit 10 controls the respective units of the imaging apparatus 100 .
  • the central control unit 10 is, though illustration is omitted, equipped with a Central Processing Unit (CPU), etc., and performs various control operations according to various processing programs (not illustrated) for the imaging apparatus 100 .
  • FIG. 2 is a flowchart illustrating an example of the operation of the portrait image generating processing.
  • the portrait image generating processing is executed by each unit of the imaging apparatus 100 , especially the image processing unit 6 , under the control of the central control unit 10 , when the portrait image generating mode is selected from among a plurality of operation modes displayed on a menu screen, on the basis of a predetermined operation on the selection determination button of the operation input unit 9 by a user.
  • the image data of the original image P 2 to be subjected to the portrait image generating processing, and the image data of the hairstyle image P 1 to be used for generating the portrait image P 5 , are previously recorded in the image recording unit 5 .
  • The image data of the original image P 2 (see FIG. 4A ), which has been designated on the basis of a predetermined operation on the operation input unit 9 by a user, is first read out from among the pieces of image data recorded in the image recording unit 5 , and the image obtaining unit 6 A of the image processing unit 6 obtains the read-out image data as the processing object of the portrait image generating processing (Step S 1 ).
  • the face detecting unit 6 B performs the predetermined face detecting processing with respect to the image data of the original image P 2 , obtained as the processing object by the image obtaining unit 6 A, to detect the face region F 1 (Step S 2 ).
  • the component image generating unit 6 C performs the minute-part extracting processing (for example, a process using the AAM, etc.) with respect to the face region image including the detected face region F 1 , and generates the minute-part-of-face image P 3 (see FIG. 4B ) in which the face components such as eyes, nose, mouth, eyebrows, hairs, and face outline are represented with lines, for example (Step S 3 ).
  • the component image generating unit 6 C specifies the face outline in the face region F 1 by the minute-part extracting processing, and generates the face component image P 4 which includes the face components existing inside the face outline and the face components contacting with the face outline, namely, the part images of the facial principal components such as eyes, nose, mouth, and eyebrows (Step S 4 ; see FIG. 4C ).
  • the outline specifying unit 6 D specifies, inside the face region image of the original image P 2 , the portion corresponding to the face outline specified in the minute-part-of-face image P 3 by the component image generating unit 6 C, as the face outline W (Step S 5 ; see FIG. 5A ).
  • the front hair specifying unit 6 E specifies, inside the face region image of the original image P 2 , a predetermined range on the basis of the predetermined position (for example, positions corresponding to right and left temples) in the face outline W specified by the outline specifying unit 6 D, as the front hair region F 2 (Step S 6 ; see FIG. 5A ). After that, the front hair specifying unit 6 E generates the mask information for designating the specified front hair region F 2 .
  • the characteristic information extracting unit 6 F performs characteristic extracting processing (see FIG. 3 ) (Step S 7 ).
  • FIG. 3 is a flowchart illustrating an example of the operation relevant to the characteristic extracting processing.
  • the characteristic information extracting unit 6 F converts a copy of the image data (for example, RGB data) of the face region image of the original image P 2 into the YUV data to generate the luminance image from the luminance signal Y (Step S 11 ). Then, the characteristic information extracting unit 6 F calculates the gradient direction and the gradient intensity of each pixel of the luminance image (Step S 12 ). For example, the characteristic information extracting unit 6 F sets, as the gradient directions, the nine directions each of which is of 20 degrees and which are obtained by dividing the range of zero(0) to 179 degrees into nine sections, and calculates the gradient intensity using 256 gradations (8 bit) of zero(0) to 255 (see FIG. 5B ).
  • FIG. 5B is the image in which each pixel is represented with a representative pixel value corresponding to any one of the gradient directions each of which is of 20 degrees.
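The embodiment does not spell out the color conversion of Step S 11; the sketch below uses the standard BT.601 full-range RGB-to-YCbCr formula, which is the usual meaning of YUV data in this context.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 uint8 RGB image to YCbCr (BT.601, full range).

    The Y plane is the luminance image used for the gradient calculation.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.clip(np.stack([y, cb, cr], axis=-1), 0, 255).astype(np.uint8)
```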
  • the characteristic information extracting unit 6 F performs smoothing processing to smooth the color by using a filter (for example, Gaussian filter, etc.) of a predetermined size, with respect to a copy of the image data of the minute-part-of-face image P 3 corresponding to the face region image of the original image P 2 (Step S 13 ).
  • the characteristic information extracting unit 6 F then corrects the gradient direction and the gradient intensity of each pixel of the luminance image of the face region image by using the image data of the minute-part-of-face image P 3 after the smoothing processing (Step S 14 ).
  • the characteristic information extracting unit 6 F regards a white pixel of the minute-part-of-face image P 3 after the smoothing processing as one having no edge or an edge of low intensity, and performs correction so that each pixel of the luminance image which corresponds to such a white pixel, and from which the gradient direction and the gradient intensity have been extracted, has no gradient.
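A sketch of the correction of Steps S 13 and S 14: gradient intensities are zeroed wherever the smoothed minute-part-of-face image is nearly white; the Gaussian kernel size and the whiteness threshold are assumptions.

```python
import cv2
import numpy as np

def suppress_non_edge_gradients(intensity, line_image, ksize=5):
    """Zero gradients where the smoothed line image P3 shows no edge.

    intensity: float32 gradient intensity map of the luminance image.
    line_image: binary minute-part-of-face image (0 = line/edge, 255 = background).
    """
    smoothed = cv2.GaussianBlur(line_image, (ksize, ksize), 0)
    corrected = intensity.copy()
    corrected[smoothed > 250] = 0.0   # nearly white: treated as having no gradient
    return corrected
```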
  • the characteristic information extracting unit 6 F divides the face region image, for which the gradient direction and the gradient intensity of each pixel have been calculated, at a predetermined ratio (for example, vertical direction: 16 × horizontal direction: 16, etc.) to set a plurality of divided regions (Step S 15 ), then generates the gradient direction histogram for each divided region (processing object region), and extracts it as the amount of characteristics (characteristic information).
  • the first specifying unit G 1 of the hairstyle image specifying unit 6 G specifies, among the hairstyle images P 1 , . . . recorded in the image recording unit 5 , the candidate hairstyle images of a predetermined number on the basis of the characteristic information of the front hair region F 2 of the hair region extracted by the characteristic extracting processing (Step S 8 ).
  • the first specifying unit G 1 obtains the characteristic information (gradient direction histogram) of the front hair region corresponding to each of the hairstyle images P 1 from the image recording unit 5 , obtains the characteristic information (gradient direction histogram) of the front hair region F 2 of the original image P 2 , and then normalizes each piece of the obtained information and compares them with each other.
  • the first specifying unit G 1 then specifies, among the hairstyle images P 1 , . . . recorded in the image recording unit 5 , a predetermined number of the hairstyle images P 1 , . . . of the most common style of the front hairs as the candidate hairstyle images.
  • It is also possible to judge whether the rate of the number of the hairstyle images P 1 , . . . of the most common style of the front hairs, which have been specified among the hairstyle images P 1 , . . . up to the predetermined ranking of the matching degree, is equal to or more than a predetermined percentage (for example, 50 percent, etc.), and to cause the first specifying unit G 1 to specify the hairstyle images P 1 of the most common style of the front hairs as the candidate hairstyle images only when it is judged that the rate is equal to or more than the predetermined percentage.
  • the second specifying unit G 2 specifies, among the predetermined number of candidate hairstyle images specified by the first specifying unit G 1 , the hairstyle image P 1 on the basis of the characteristic information of the entire hair region extracted by the characteristic extracting processing (Step S 9 ).
  • the second specifying unit G 2 obtains the characteristic information (gradient direction histogram) of the entire hair region with respect to each of the candidate hairstyle images of the predetermined number, obtains the characteristic information (gradient direction histogram) of the entire hair region of the original image P 2 , normalizes each of the pieces of obtained information, and compares them with each other.
  • the second specifying unit G 2 then automatically specifies the hairstyle image P 1 (see FIG. 7A ) having the highest matching degree with respect to the characteristic information of the entire hair region of the original image P 2 .
  • Alternatively, the second specifying unit G 2 can rearrange the hairstyle images P 1 , . . . , with reference to the matching degree, in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.), and specify the hairstyle image P 1 desired by a user on the basis of a predetermined operation on the operation input unit 9 .
  • the portrait image generating unit 6 H generates the portrait image P 5 by using the hairstyle image P 1 and the face component image P 4 (Step S 10 ). Specifically, the portrait image generating unit 6 H specifies, inside the face outline W of the hairstyle image P 1 specified by the hairstyle image specifying unit 6 G, the positions at which the part images of the face components of the face component image P 4 generated by the component image generating unit 6 C, such as eyes, nose, mouth, eyebrows, etc., are to be superimposed, and superimposes the part images of the face components on the specified positions to generate the image data of the portrait image P 5 which represents the original image P 2 as a portrait (see FIG. 7B ). The image recording unit 5 then obtains and records the image data (YUV data) of the portrait image P 5 generated by the portrait image generating unit 6 H.
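A sketch of this superimposing step; it assumes the part images are binary line drawings that fit inside the hairstyle image, and the position table is illustrative.

```python
def generate_portrait(hairstyle_img, component_imgs, positions):
    """Superimpose part images of face components onto the specified hairstyle image.

    hairstyle_img: H x W x 3 image containing the hair outline and face outline W.
    component_imgs: dict mapping component name -> binary part image (0 = line).
    positions: dict mapping component name -> (x, y) top-left paste position
               inside the face outline W.
    """
    portrait = hairstyle_img.copy()
    for name, part in component_imgs.items():
        x, y = positions[name]
        h, w = part.shape[:2]
        roi = portrait[y:y+h, x:x+w]
        # Draw the black component lines over the skin-colored region.
        roi[part == 0] = (0, 0, 0)
    return portrait
```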
  • As described above, the hairstyle image P 1 corresponding to the characteristic information of the front hair region F 2 of the hair region in the original image P 2 is specified from among the hairstyle images P 1 , . . . , each of which represents the hair outline and is recorded in the image recording unit 5 so as to be correlated to the characteristic information of the front hair region (hair-tip region), and the portrait image P 5 of the face is generated by using the specified hairstyle image P 1 . The hairstyle image P 1 corresponding to the hairstyle in the original image P 2 can thus be specified in view of the characteristics of the hair tips of the original image P 2 . Accordingly, a more proper portrait image P 5 can be generated.
  • In other words, a natural hairstyle image P 1 , in which the appearance of the hairstyle does not stray from that of the original image P 2 , can be specified by considering the characteristics of the hair tips, and accordingly a more proper portrait image P 5 can be generated by using the hairstyle image P 1 .
  • the portrait image P 5 can be generated based on the specified hairstyle image P 1 , and the face component image P 4 relevant to the facial principal components in the original image P 2 .
  • Since the predetermined number of candidate hairstyle images are specified, on the basis of the characteristic information of the front hair region F 2 extracted from the original image P 2 , from among the hairstyle images P 1 , . . . , each of which is recorded in the image recording unit 5 so as to be correlated to the characteristic information of the front hair region (hair-tip region), and since the hairstyle image P 1 is specified from among the predetermined number of candidate hairstyle images on the basis of the characteristic information of the entire hair region extracted from the original image P 2 , it is possible to narrow down the number of candidate hairstyle images to the predetermined number in view of the characteristics of the hair tips of the original image P 2 , and to specify, from among the predetermined number of candidate hairstyle images, a more proper hairstyle image P 1 in view of the characteristics of the whole hairs of the original image P 2 .
  • Moreover, since the hairstyle images P 1 , . . . each corresponding to the characteristic information of the front hair region F 2 extracted from the original image P 2 are specified from among the hairstyle images P 1 , . . . recorded in the image recording unit 5 , and since a predetermined number of the hairstyle images P 1 , . . . of the most common style of the front hairs are specified among them as the candidate hairstyle images, it is possible to leave, as the candidate hairstyle images, the hairstyle images P 1 in which the appearance of the front hairs does not stray from that of the original image P 2 , while removing the hairstyle images of the styles other than the most common style of the front hairs, namely the hairstyle images of the styles of the front hairs (hair tips) whose appearance is judged as being less similar to that of the original image P 2 , and then to execute the processing to specify the hairstyle image P 1 in view of the characteristics of the whole hairs of the original image P 2 .
  • In addition, since the front hair region F 2 is specified with reference to the predetermined position in the face outline W, the characteristic information of the front hair region F 2 can be extracted more easily and more properly.
  • the hairstyle image P 1 can be specified by using overall shape information (amount of characteristics) of the front hair region F 2 , and thereby a natural hairstyle image P 1 in which the appearance of the hairstyle does not depart from that of the original image P 2 can be specified.
  • the hair-tip region can be a region including a tip of tail which exists around the side of neck and/or ears while hairs are tied together on the side of the head.
  • the hair-tip region can be specified by calculating a difference between an average color (representative color) of a background of the face in the original image P 2 and an average color of the hairs.
  • Although the hairstyle images P 1 including the most common style of the front hairs are specified as the candidate hairstyle images among the hairstyle images P 1 , . . . , each of which corresponds to the characteristic information of the front hair region F 2 , in the embodiment, it is a mere example of a method for specifying the candidate hairstyle images.
  • The present invention is not limited to the above, and the method can be arbitrarily changed.
  • Although the predetermined number of candidate hairstyle images are specified on the basis of the characteristic information of the front hair region (hair-tip region) F 2 , it is a mere example and the present invention is not limited thereto.
  • the candidate hairstyle images do not always need to be specified. It is also possible to specify the hairstyle image P 1 corresponding to the characteristic information of the hair-tip region from among the hairstyle images P 1 , . . . recorded in the image recording unit 5 .
  • The front hair region F 2 is specified with reference to the predetermined positions in the face outline W in the original image P 2 in the embodiment, but it is a mere example of a method for specifying the front hair region F 2 .
  • The present invention is not limited to the above, and the method can be arbitrarily changed.
  • Although the data in which the shape of the face outline is correlated to the image data of the hairstyle image P 1 is used when generating the portrait image P 5 in the embodiment, it is a mere example.
  • The present invention is not limited thereto, and can adopt, for example, a configuration in which a face outline image (not illustrated) is specified separately from the hairstyle image P 1 .
  • Although the embodiment generates the face component image P 4 , which is relevant to the facial principal components in the original image P 2 , and generates the portrait image P 5 by using the face component image P 4 , the face component image P 4 does not always need to be generated. Whether or not to generate the face component image P 4 can be arbitrarily changed as needed.
  • The original images, from which the hairstyle image P 1 and/or the face component image P 4 are generated, do not need to be images which represent a frontal face.
  • an image deformed so that a face is directed forward can be generated to be used as the original image.
  • The embodiment adopts the configuration which includes the image recording unit 5 to record the hairstyle images P 1 , . . . , but the present invention is not limited thereto.
  • The present invention can adopt a configuration where the hairstyle images P 1 , . . . are recorded in a predetermined server which can connect to a main body of the imaging apparatus 100 via a predetermined communication network, and the image obtaining unit 6 A obtains the hairstyle images P 1 , . . . from the predetermined server by accessing the server from a communication processing unit (not illustrated) via the communication network.
  • the configuration of the imaging apparatus 100 illustrated in the embodiment is a mere example, and the present invention is not limited thereto.
  • Although the imaging apparatus 100 is illustrated as the image generation apparatus, the image generation apparatus of the present invention is not limited thereto and can have any configuration as long as it can execute the image generating processing of the present invention.
  • The present invention is not limited thereto, and can have a configuration where a predetermined program and the like are executed by the central control unit 10 .
  • In that case, a program including an obtaining processing routine, an extracting processing routine, a specifying processing routine, and an image generating processing routine is previously stored in a program memory (not illustrated) for storing programs. It is possible to cause the CPU of the central control unit 10 to function as a section which obtains the original image P 2 by the obtaining processing routine. It is also possible to cause the CPU of the central control unit 10 to function as a section which extracts the characteristic information of the hair region in the obtained original image P 2 by the extracting processing routine. It is also possible to cause the CPU of the central control unit 10 to function as a section which specifies the hairstyle image P 1 on the basis of the extracted characteristic information by the specifying processing routine. It is also possible to cause the CPU of the central control unit 10 to function as a section which generates the portrait image P 5 of the face by using the specified hairstyle image P 1 by the image generating processing routine.
  • the first specifying section, the second specifying section, the outline specifying section, the front hair specifying section, and the second generating section are implemented by executing the predetermined program and the like by the CPU of the central control unit 10 .
  • As a computer-readable medium which stores the programs for executing the above-mentioned respective processes, a non-volatile memory such as a flash memory and a portable recording medium such as a CD-ROM can be adopted.
  • A carrier wave can also be adopted as a medium which provides the program data via a predetermined communication line.


Abstract

An image generation apparatus includes: an extracting section to extract characteristic information of a hair region in a face image; an image specifying section to specify a hairstyle image on the basis of the characteristic information extracted by the extracting section; and a first generating section to generate a portrait image of a face in the face image by using the hairstyle image specified by the image specifying section.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2012-189406 filed on Aug. 30, 2012, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image generation apparatus, an image generation method, and a recording medium.
2. Description of the Related Art
Heretofore, a portrait creating apparatus which creates a portrait by using characteristic points of facial parts such as eyes, nose, mouth, ears, face outline, etc. has been known (for example, see Japanese Patent Application Laid-Open Publication No. 2004-145625).
In addition, a game apparatus which creates a character image by combining part objects previously prepared for respective regions has been known (for example, see Japanese Patent Application Laid-Open Publication No. 2008-61896).
In the meantime, it is believed that a hairstyle has a significant impact on a portrait as compared with the eyes, nose, mouth, etc., and drastically changes the impression of the portrait.
For a system which automatically generates a portrait, a method utilizing the color of hairs has been proposed in consideration of the variety of hairstyles. However, there is a possibility that extraction of a hair region from an image cannot be performed properly when the color of the hairs is not a plain color.
BRIEF SUMMARY OF THE INVENTION
The object of the present invention is to provide an image generation apparatus, an image generation method, and a recording medium, which can generate a more proper portrait image by considering hair characteristics of an original image.
According to an embodiment of the present invention, there is provided an image generation apparatus including: an extracting section to extract characteristic information of a hair region in a face image; an image specifying section to specify a hairstyle image on the basis of the characteristic information extracted by the extracting section; and a first generating section to generate a portrait image of a face in the face image by using the hairstyle image specified by the image specifying section.
According to an embodiment of the present invention, there is provided a method for generating an image by using an image generation apparatus, the method comprising the processes of: extracting characteristic information of a hair region in a face image; specifying a hairstyle image on the basis of the extracted characteristic information; and generating a portrait image of a face in the face image by using the specified hairstyle image.
According to an embodiment of the present invention, there is provided a recording medium which records a program readable by a computer of an image generation apparatus, the program causing the computer to exert the functions of: extracting characteristic information of a hair region in a face image; specifying a hairstyle image on the basis of the extracted characteristic information; and generating a portrait image of a face in the face image by using the specified hairstyle image.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The above and further objects and novel features of the present invention will more fully appear from the following detailed description when the same is read in conjunction with the accompanying drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration only and are not intended as a definition of the limits of the invention, in which drawings:
FIG. 1 is a block diagram illustrating a schematic configuration of an imaging apparatus of an embodiment to which the present invention is applied;
FIG. 2 is a flowchart illustrating an example of an operation relevant to portrait image generating processing by the imaging apparatus illustrated in FIG. 1;
FIG. 3 is a flowchart illustrating an example of an operation relevant to characteristic extracting processing in the portrait image generating processing illustrated in FIG. 2;
FIG. 4A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 4B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 4C is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 5A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 5B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 6 is a diagram schematically illustrating examples of styles of front hairs relevant to the portrait image generating processing illustrated in FIG. 2;
FIG. 7A is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2; and
FIG. 7B is a diagram schematically illustrating an example of an image relevant to the portrait image generating processing illustrated in FIG. 2.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, specific embodiments of the present invention will be described with reference to the drawings. In this regard, however, the scope of the invention is not limited to the illustrated examples.
FIG. 1 is a block diagram illustrating a schematic configuration of an imaging apparatus 100 of an embodiment to which the present invention is applied.
As illustrated in FIG. 1, the imaging apparatus 100 of the embodiment specifically includes an imaging unit 1, an imaging control unit 2, an image data generating unit 3, a memory 4, an image recording unit 5, an image processing unit 6, a display control unit 7, a display unit 8, an operation input unit 9, and a central control unit 10.
These imaging unit 1, imaging control unit 2, image data generating unit 3, memory 4, image recording unit 5, image processing unit 6, display control unit 7, and central control unit 10 are connected to one another via a bus line 11.
The imaging unit 1 takes, as an imaging section, an image of a predetermined object to generate a frame image.
Specifically, the imaging unit 1 is equipped with a lens unit 1A, an electronic imaging unit 1B, and a lens driving unit 1C.
The lens unit 1A is composed of, for example, a plurality of lenses such as a zoom lens and a focus lens.
The electronic imaging unit 1B is composed of, for example, an image sensor (imaging element) such as a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor. The electronic imaging unit 1B converts an optical image which has passed through the various lenses of the lens unit 1A into a two-dimensional image signal.
The lens driving unit 1C is equipped with, for example, a zoom driving unit which causes the zoom lens to move along an optical axis direction, a focusing driving unit which causes the focus lens to move along the optical axis direction, etc., though illustrations thereof are omitted.
The imaging unit 1 can include a diaphragm (not illustrated) which adjusts an amount of light passing through the lens unit 1A, in addition to the lens unit 1A, the electronic imaging unit 1B, and the lens driving unit 1C.
The imaging control unit 2 controls imaging of an object by the imaging unit 1. Concretely, the imaging control unit 2 is equipped with a timing generator, a driver, etc., though illustrations thereof are omitted. The imaging control unit 2 drives, by using the timing generator and the driver, the electronic imaging unit 1B to perform scanning, and to convert the optical image which has passed through the lens unit 1A into the two-dimensional image signal at predetermined intervals, and causes the frame image of each screen to be read out from an imaging region of the electronic imaging unit 1B to be output to the image data generating unit 3.
The imaging control unit 2 can cause the electronic imaging unit 1B, instead of the focus lens of the lens unit 1A, to move along the optical axis to adjust the focusing position of the lens unit 1A.
The imaging control unit 2 can also perform a control to adjust a condition for imaging a specific object, such as Auto Focus (AF) processing, Auto Exposure (AE) processing, and Auto White Balance (AWB) processing.
The image data generating unit 3 appropriately performs gain adjustment for each RGB color component of the analog-value frame image signal transferred from the electronic imaging unit 1B, and after that, causes a sample/hold circuit (not illustrated) to sample and hold the signal, causes an A/D converter (not illustrated) to convert the signal into digital data, causes a color process circuit (not illustrated) to perform colorization processing including pixel interpolation processing and gamma correction processing, and then generates a luminance signal Y and color difference signals Cb, Cr (YUV data) which have digital values.
The luminance signal Y and the color difference signals Cb, Cr output from the color process circuit are subjected to DMA transfer to the memory 4, which is used as a buffer memory, via a DMA controller not illustrated.
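As a rough sketch of this stage, the conversion from RGB data to the luminance signal Y and the color difference signals Cb, Cr can be written as follows; the patent does not name the conversion matrix, so the standard full-range ITU-R BT.601 (JPEG) coefficients are assumed:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB frame into Y, Cb, Cr planes.

    Minimal sketch only: the apparatus' actual color process circuit is not
    specified, so full-range BT.601 coefficients are assumed here.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b                 # luminance signal Y
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b    # color difference Cb
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b    # color difference Cr
    return tuple(np.clip(c, 0, 255).astype(np.uint8) for c in (y, cb, cr))
```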
The memory 4 is composed of, for example, a Dynamic Random Access Memory (DRAM) or the like, and temporarily stores data and the like to be processed by each unit such as the image processing unit 6 and the central control unit 10 of the imaging apparatus 100.
The image recording unit 5 is composed of, for example, a non-volatile memory (flash memory) or the like, and records the image data to be recorded, which data has been encoded in a predetermined compression format (for example, JPEG format, etc.) by an encoding unit (not illustrated) of the image processing unit 6.
The image recording unit 5 records pieces of image data of a predetermined number of hairstyle images P1, . . . (see FIG. 7A).
Each piece of image data of hairstyle images P1, . . . is, for example, an image which schematically represents an outline of human hairs, and is correlated to characteristic information of an entire hair region including a front hair region (hair-tip region).
Here, each piece of image data of hairstyle images P1, . . . can be formed, for example, by performing a process (details thereof will be described later) using an Active Appearance Model (AAM) with respect to a face region F1 detected by later-described face detecting processing, deleting face components (for example, eyes, nose, mouth, eyebrows, etc.) existing inside of an outline of a jaw of a face, then drawing lines along the face outline and/or tip portions of hairs, and painting a skin portion inside the face outline and/or the hairs with predetermined colors. Drawing the lines along the face outline and/or the tip portions of the hairs can be manually performed on the basis of a predetermined operation in the operation input unit 9 by a user, or can be automatically performed under a control of a CPU of the central control unit 10.
As the characteristic information of the hair region, for example, there can be adopted an amount of characteristics (details thereof will be described later) obtained by generating a gradient direction histogram of the luminance signal Y of the image data of an original image from which each hairstyle image P1 is generated.
Each piece of image data of hairstyle images P1, . . . is also correlated to mask information for designating a front hair region F2 from the hair region, in addition to the characteristic information of the entire hair region. Concretely, each piece of image data of hairstyle images P1, . . . is recorded so as to be correlated to characteristic information of the front hair region F2 of the hair region through the mask information.
Each piece of image data of hairstyle images P1, . . . can also be correlated to the shape (details thereof will be described later) of the face outline, which is part of the characteristic information of the face in the original image from which each hairstyle image P1 is generated. In other words, each piece of image data of the hairstyle images P1, . . . can be image data including the face outline.
To the image data of hairstyle images P1, . . . , various styles of front hairs such as front hairs parted on the left, front hairs separated in the middle, front hairs parted to the right, no part (front hairs let down), and no part (a forehead let uncovered) can be correlated, respectively.
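Putting the above together, one entry of the image recording unit 5 can be pictured as the following record; this is a hedged sketch whose field names are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class HairstyleRecord:
    """Hypothetical layout of one recorded hairstyle entry (assumption)."""
    image: np.ndarray                 # hairstyle image P1 (outline drawing of the hair)
    hair_histogram: np.ndarray        # characteristic info of the entire hair region
    front_hair_mask: np.ndarray       # mask information designating the front hair region F2
    front_hair_histogram: np.ndarray  # characteristic info of F2, correlated via the mask
    front_hair_style: str             # e.g. "parted_left", "middle", "parted_right", ...
    face_outline: Optional[np.ndarray] = None  # optional shape of the source face outline
```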
The image recording unit 5 can have a configuration where a recording medium (not illustrated) is attachable/detachable to/from the image recording unit 5 so that writing/reading of data in/from the recording medium attached to the image recording unit 5 is controlled thereby.
The image processing unit 6 includes an image obtaining unit 6A, a face detecting unit 6B, a component image generating unit 6C, an outline specifying unit 6D, a front hair specifying unit 6E, a characteristic information extracting unit 6F, a hairstyle image specifying unit 6G, and a portrait image generating unit 6H.
Each unit of the image processing unit 6 is composed of, for example, a predetermined logic circuit, but such configuration is a mere example and the present invention is not limited thereto.
The image obtaining unit 6A obtains the image which is to be subjected to portrait image generating processing.
Concretely, the image obtaining unit 6A obtains the image data of the original image (for example, a photographic image, etc.) P2. Concretely, the image obtaining unit 6A obtains from the memory 4 a copy of the image data (RGB data and/or YUV data) of the original image P2, which has been generated by the image data generating unit 3 by imaging an object by the imaging unit 1 and the imaging control unit 2, and/or obtains a copy of the image data of the original image P2 recorded in the image recording unit 5 (see FIG. 4A).
Incidentally, later-described processes by the image processing unit 6 can be performed with respect to the image data of the original image P2 itself, or with respect to reduced image data of a predetermined size (for example, VGA size, etc.) obtained by reducing the image data of the original image P2 at a predetermined ratio, as appropriate.
The face detecting unit 6B detects a face region F1 (see FIG. 4A) from the original image P2 which is to be processed.
Concretely, the face detecting unit 6B detects the face region F1 including a face from the original image P2 obtained by the image obtaining unit 6A. More specifically, the face detecting unit 6B obtains the image data of the original image P2, which has been obtained as the image to be subjected to the portrait image generating processing by the image obtaining unit 6A, and performs a predetermined face detecting processing with respect to the obtained image data to detect the face region F1. Then, the face detecting unit 6B cuts out a region A (see FIG. 4A) of a predetermined size surrounding the face region F1, and sets the cut region to a face region image.
Since the face detecting processing is a known technique, the detailed description thereof is omitted here.
The component image generating unit 6C generates a face component image P4 (see FIG. 4C) representing facial principal components.
Concretely, the component image generating unit 6C generates the face component image P4 relevant to the facial principal components in the original image P2 (see FIG. 4A) obtained by the image obtaining unit 6A. More specifically, the component image generating unit 6C performs minute-part extracting processing with respect to the face region image containing the face region F1 of the original image P2, and generates a minute-part-of-face image P3 (see FIG. 4B) representing face components such as eyes, nose, mouth, eyebrows, hairs, and face outline with lines. For example, the component image generating unit 6C generates the minute-part-of-face image P3 by a process using the AAM as the minute-part extracting processing. In addition, the component image generating unit 6C performs the minute-part extracting processing with respect to the face region F1 detected from the image data of the original image P2 by the face detecting unit 6B.
Here, the AAM is a method for modeling visual events, used here to model the image of an arbitrary face region F1. For example, the component image generating unit 6C previously registers, in a predetermined registration section, a result of statistical analysis of positions and/or pixel values (for example, luminance values) of predetermined characteristic regions (for example, corners of eyes, tip of nose, face line, etc.) in a plurality of sample face images. Then, the component image generating unit 6C sets, with reference to the positions of the characteristic regions, a shape model representing a face shape and/or a texture model representing "Appearance" in an average shape, and models the image (face region image) of the face region F1 by using these models. Thus, the component image generating unit 6C generates the minute-part-of-face image P3 in which the principal composition of the original image P2 is extracted and represented with lines.
As the minute-part-of-face image P3, there can be adopted a binary image in which a black pixel is set to a first pixel value (for example, "0 (zero)", etc.) and a white pixel is set to a second pixel value (for example, "255", etc.) different from the first pixel value.
Moreover, the component image generating unit 6C specifies the face outline in the face region F1 by the minute-part extracting processing, and generates the face component image P4 (see FIG. 4C) representing the face components existing inside of the face outline and the face components contacting with the face outline, with lines.
Specifically, the component image generating unit 6C specifies the pixels contacting with the face outline in the minute-part-of-face image P3, and deletes, among the pixels continuous with the above pixels, a pixel assembly existing outside of the face outline. In other words, the component image generating unit 6C deletes a part of the minute-part-of-face image P3, which part exists outside of the face outline, and leaves a part of the minute-part-of-face image P3, which part exists inside of the face outline, to generate the face component image P4 including, for example, part images of principal face components such as eyes, nose, mouth, eyebrows, etc.
Here, the component image generating unit 6C can extract and obtain information relevant to the relative positional relationship of the part images of the face components in the XY plane, and/or information relevant to their coordinate positions.
Although the process using the AAM is illustrated as the minute-part extracting processing, it is a mere example. The present invention is not limited to the above, and the processing can be arbitrarily changed.
For example, as the minute-part extracting processing, edge extracting processing and/or anisotropic diffusion processing can be performed to generate the face component image P4 including the part images of the face components. Specifically, for example, the component image generating unit 6C can execute a differential operation with respect to the image data of the original image P2 by using a predetermined differential filter (for example, a high-pass filter, etc.) to perform edge detecting processing to detect as an edge a point at which a luminance value, color, and/or density change precipitously. The component image generating unit 6C can also perform the anisotropic diffusion processing with respect to the image data of the original image P2, by using a predetermined anisotropic diffusion filter, by which processing the image data is smoothed in a state where weighting in a tangential direction of a linear edge is different from weighting in a vertical direction of the edge.
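As an illustration of the edge extracting alternative, the sketch below applies a Sobel differential filter and thresholds the gradient magnitude; the filter choice and the threshold rule are assumptions, since the text only calls for a predetermined differential (high-pass) filter:

```python
import numpy as np
from scipy import ndimage

def detect_edges(gray):
    """Flag points where the luminance changes precipitously as edges.

    Sketch under assumptions: a Sobel operator stands in for the
    "predetermined differential filter", and the threshold is ad hoc.
    """
    g = gray.astype(np.float32)
    gx = ndimage.sobel(g, axis=1)        # horizontal derivative
    gy = ndimage.sobel(g, axis=0)        # vertical derivative
    magnitude = np.hypot(gx, gy)         # gradient magnitude per pixel
    return magnitude > (magnitude.mean() + 2.0 * magnitude.std())
```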
The outline specifying unit 6D specifies a face outline W in the original image P2.
Concretely, the outline specifying unit 6D specifies the face outline W in the original image P2 obtained by the image obtaining unit 6A. More specifically, the outline specifying unit 6D specifies a part corresponding to the face outline specified in the minute-part-of-face image P3 by the component image generating unit 6C, inside of the face region image (image in the region A) including the face region F1 of the original image P2.
Here, as the shape of the face outline W, for example, there can be adopted a substantially U-letter shape (see FIG. 5A) which connects the right and left temples to each other with a line passing through the jaw. This is a mere example, and the present invention is not limited thereto. There can also be adopted an elliptical shape which connects the right and left temples, jaw, and forehead to one another with a line, especially an elliptical shape which matches the outline of the jaw.
The front hair specifying unit 6E specifies the front hair region F2 in the original image P2.
Concretely, the front hair specifying unit 6E specifies the front hair region F2 with reference to a predetermined position in the face outline W specified by the outline specifying unit 6D. More specifically, the front hair specifying unit 6E specifies a predetermined range on the basis of the positions corresponding to the right and left temples constituting the face outline W, which has been specified by the outline specifying unit 6D, as the front hair region F2 (see FIG. 5A). Then, the front hair specifying unit 6E generates the mask information for designating the specified front hair region F2.
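A minimal sketch of such mask generation follows; the patent leaves the "predetermined range" open, so the rectangular band above the temples and its height factor are illustrative assumptions:

```python
import numpy as np

def front_hair_mask(image_shape, left_temple, right_temple, band_ratio=0.35):
    """Build a boolean mask designating the front hair region F2.

    `left_temple`/`right_temple` are (x, y) pixel positions on the face
    outline W; `band_ratio` (fraction of the image height extended upward
    from the temples) is an assumed parameter, not the patent's.
    """
    h, w = image_shape
    x0, x1 = sorted((int(left_temple[0]), int(right_temple[0])))
    y1 = min(int(left_temple[1]), int(right_temple[1]))   # higher of the two temples
    y0 = max(0, y1 - int(band_ratio * h))                 # extend upward over the forehead
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y1, x0:x1] = True
    return mask
```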
The characteristic information extracting unit 6F extracts the characteristic information from the hair region of the original image P2.
Concretely, the characteristic information extracting unit 6F extracts, from the hair region of the original image P2 obtained by the image obtaining unit 6A, the characteristic information of at least the hair-tip region (for example, the front hair region F2). More specifically, the characteristic information extracting unit 6F performs characteristic extracting processing to select and extract a block region (characteristic point) having a high amount of characteristics in the hair region.
For example, the characteristic information extracting unit 6F performs the characteristic extracting processing with respect to the pixels constituting the entire hair region of the original image P2, as the minute-part extracting processing, and extracts the characteristic information of the entire hair region.
Here, as the characteristic extracting processing, there can be adopted processing to extract an amount of characteristics (an amount of shape characteristics) obtained by generating a histogram of the luminance (luminance signal Y) gradient direction of the original image P2 to be processed. More specifically, the characteristic information extracting unit 6F calculates a gradient direction (for example, nine directions obtained by dividing the range of 0 to 179 degrees into nine sections, the angle between neighboring directions being 20 degrees, etc.) and a gradient intensity (for example, 8 bits: 0-255) of each pixel of the luminance image converted from the original image P2 to be subjected to the characteristic extracting processing. Then, the characteristic information extracting unit 6F divides the luminance image, for which the gradient direction and the gradient intensity of each pixel have been calculated, at a predetermined ratio (for example, 16 in the vertical direction×16 in the horizontal direction, etc.), calculates an integrated value of the gradient intensities for each gradient direction to generate the gradient direction histogram, and extracts it as the amount of characteristics (characteristic information).
At that time, the characteristic information extracting unit 6F can use the mask information generated by the front hair specifying unit 6E to extract only the characteristic information of the front hair region F2 of the hair region of the original image P2 (face region image).
Incidentally, since the characteristic extracting processing to generate the gradient direction histogram is a known technique, the detailed description thereof is omitted here. In addition, the number of gradient directions and the number of gradations of the gradient intensity are mere examples. The present invention is not limited to the above, and they can be arbitrarily changed.
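In code, the extraction described above might look like the following sketch; the nine 20-degree bins, the 0-255 intensity range, and the 16×16 division follow the description, while the gradient operator and the cell indexing are assumptions:

```python
import numpy as np

N_BINS, GRID = 9, 16   # nine 20-degree direction bins; 16x16 divided regions

def gradient_direction_histograms(luma):
    """Per-pixel gradient direction/intensity, then an intensity-integrated
    direction histogram for each divided region (the amount of characteristics)."""
    luma = luma.astype(np.float32)
    gy, gx = np.gradient(luma)                          # central differences (assumption)
    direction = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned direction, 0-179 degrees
    bins = np.minimum((direction // 20.0).astype(int), N_BINS - 1)
    intensity = np.clip(np.hypot(gx, gy), 0.0, 255.0)   # gradient intensity, 0-255
    h, w = luma.shape
    rows = np.arange(h) * GRID // h                     # pixel row -> divided-region row
    cols = np.arange(w) * GRID // w                     # pixel column -> divided-region column
    feats = np.zeros((GRID, GRID, N_BINS), dtype=np.float32)
    np.add.at(feats, (rows[:, None], cols[None, :], bins), intensity)
    return feats                                        # one histogram per divided region
```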
The hairstyle image specifying unit 6G specifies the hairstyle image P1 on the basis of the characteristic information extracted by the characteristic information extracting unit 6F.
Concretely, from among the hairstyle images P1, . . . each being recorded so as to be correlated to the characteristic information of the hair region in the image recording unit 5, the hairstyle image specifying unit 6G specifies the hairstyle image P1 corresponding to the characteristic information of the hair region of the original image P2 extracted by the characteristic information extracting unit 6F. The hairstyle image specifying unit 6G includes a first specifying unit G1 and a second specifying unit G2.
The first specifying unit G1 specifies a predetermined number of candidate hairstyle images.
Concretely, from among the hairstyle images P1, . . . each being recorded in the image recording unit 5 so as to be correlated to the characteristic information of the front hair region (hair-tip region) F2, the first specifying unit G1 specifies the predetermined number of candidate hairstyle images on the basis of the characteristic information of the front hair region F2 extracted by the characteristic information extracting unit 6F. More specifically, the first specifying unit G1 normalizes the characteristic information (gradient direction histograms) of the front hair region corresponding to each of the hairstyle images P1, . . . recorded in the image recording unit 5, normalizes the characteristic information (gradient direction histogram) of the front hair region F2 designated by the mask information within the hair region of the original image P2, compares the pieces of normalized information with each other, and ranks the hairstyle images P1, . . . in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.). Then, the first specifying unit G1 specifies the most common style of front hairs (see FIG. 6) among the images up to the predetermined ranking, and specifies the hairstyle images P1 of that most common style of front hairs, from among the hairstyle images P1, . . . recorded in the image recording unit 5, as the candidate hairstyle images.
Although FIG. 6 illustrates styles of front hairs such as front hairs parted on the left, front hairs separated in the middle, front hairs parted to the right, no part A, and no part B, they are mere examples, and the present invention is not limited thereto and can be arbitrarily changed.
The second specifying unit G2 specifies the hairstyle image P1 corresponding to the characteristic information of the hair region of the original image P2, from among the candidate hairstyle images of a predetermined number.
Concretely, the second specifying unit G2 specifies the hairstyle image P1 on the basis of the characteristic information of the entire hair region extracted by the characteristic information extracting unit 6F, from among the predetermined number of candidate hairstyle images specified by the first specifying unit G1. More specifically, the second specifying unit G2 normalizes the characteristic information (gradient direction histogram) of the hair region corresponding to each of the predetermined number of candidate hairstyle images specified by the first specifying unit G1, normalizes the characteristic information (gradient direction histogram) of the hair region of the original image P2, compares the pieces of normalized information with each other, and ranks the candidate hairstyle images in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.).
Here, the second specifying unit G2 can automatically specify the one hairstyle image P1 (see FIG. 7A) having the highest matching degree, or can specify a hairstyle image P1 desired by the user on the basis of a predetermined operation on the operation input unit 9.
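The two-stage specification performed by the units G1 and G2 can be summarized as below; the matching-degree measure is not named in the patent, so histogram intersection over normalized histograms is assumed (the `HairstyleRecord` fields follow the hypothetical layout sketched earlier):

```python
import numpy as np

def matching_degree(h1, h2):
    """Compare two gradient direction histograms after normalization.

    Histogram intersection is an assumed stand-in for the patent's
    unspecified matching-degree computation.
    """
    a = h1.ravel() / (h1.sum() + 1e-9)
    b = h2.ravel() / (h2.sum() + 1e-9)
    return float(np.minimum(a, b).sum())

def specify_hairstyle(records, front_hist, hair_hist, top_k=10):
    # First specifying unit G1: rank all records by the front hair region F2,
    # look at the top `top_k`, and keep the most common front hair style.
    ranked = sorted(records,
                    key=lambda r: matching_degree(r.front_hair_histogram, front_hist),
                    reverse=True)[:top_k]
    styles = [r.front_hair_style for r in ranked]
    most_common = max(set(styles), key=styles.count)
    candidates = [r for r in records if r.front_hair_style == most_common]
    # Second specifying unit G2: among the candidates, pick the record whose
    # entire-hair-region histogram matches the original image best.
    return max(candidates, key=lambda r: matching_degree(r.hair_histogram, hair_hist))
```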
The portrait image generating unit 6H generates the portrait image P5 on the basis of the hairstyle image P1 and the face component image P4.
Concretely, the portrait image generating unit 6H generates the portrait image P5 by using the image data of the hairstyle image P1 specified by the hairstyle image specifying unit 6G. More specifically, the portrait image generating unit 6H specifies, inside the face outline W in the hairstyle image P1, a position where each face component such as eyes, nose, mouth, eyebrows, etc. is superimposed, and generates the image data of the portrait image P5 which represents the portrait of the original image P2 by superimposing the part images of face components on the specified position.
Incidentally, the portrait image generating unit 6H can generate the image in which the predetermined parts (for example, face components such as eyes, mouth, eyebrows, etc.) are colored predetermined colors.
The display control unit 7 performs a control to read out the image data, which is temporarily stored in the memory 4 and is to be displayed, and causes the display unit 8 to display the read-out image data.
Concretely, the display control unit 7 is equipped with a Video Random Access Memory (VRAM), a VRAM controller, a digital video encoder, etc. The digital video encoder periodically reads out the luminance signal Y and the color difference signals Cb, Cr, which have been read out from the memory 4 under the control of the central control unit 10 and stored in the VRAM (not illustrated), from the VRAM via the VRAM controller, and generates a video signal based on these pieces of data to output the same to the display unit 8.
The display unit 8 is, for example, a liquid crystal display panel, and displays the image imaged by the imaging unit 1 and the like on the basis of the video signal from the display control unit 7. Specifically, the display unit 8 displays a live-view image, in a still-image imaging mode or a moving image imaging mode, while continually updating the frame images generated by imaging of an object by the imaging unit 1 and the imaging control unit 2 at a predetermined frame rate. The display unit 8 also displays the image (REC-view image) to be recorded as a still image, and/or displays the image which is currently being recorded as a moving image.
The operation input unit 9 is used for executing a predetermined operation of the imaging apparatus 100. Specifically, the operation input unit 9 is equipped with an operation unit such as a shutter button relevant to an instruction to image an object, a selection determination button relevant to an instruction to select an imaging mode and/or functions, etc., a zoom button relevant to an instruction to adjust an amount of zoom, and so on, which are not illustrated, and outputs a predetermined operation signal to the central control unit 10 according to an operation of each button of the operation unit.
The central control unit 10 controls the respective units of the imaging apparatus 100. Specifically, the central control unit 10 is equipped with a Central Processing Unit (CPU), etc. (illustration omitted), and performs various control operations according to various processing programs (not illustrated) for the imaging apparatus 100.
Next, the portrait image generating processing by the imaging apparatus 100 will be described with reference to FIGS. 2-7.
FIG. 2 is a flowchart illustrating an example of the operation of the portrait image generating processing.
The portrait image generating processing is processing to be executed by each unit, especially the image processing unit 6 of the imaging apparatus 100 under the control of the central control unit 10, when a portrait image generating mode is instructed to be selected among a plurality of operation modes displayed in a menu screen, on the basis of a predetermined operation on the selection determination button of the operation input unit 9 by a user.
The image data of the original image P2 to be subjected to the portrait image generating processing, and the image data of the hairstyle image P1 to be used for generating the portrait image P5, are previously recorded in the image recording unit 5.
As illustrated in FIG. 2, the image data of the original image P2 (see FIG. 4A), which has been designated on the basis of a predetermined operation in the operation input unit 9 by a user, is first read out from among the pieces of image data recorded in the image recording unit 5, and the image obtaining unit 6A of the image processing unit 6 obtains the read-out image data as the processing object of the portrait image generating processing (Step S1).
Subsequently, the face detecting unit 6B performs the predetermined face detecting processing with respect to the image data of the original image P2, obtained as the processing object by the image obtaining unit 6A, to detect the face region F1 (Step S2).
Then, the component image generating unit 6C performs the minute-part extracting processing (for example, a process using the AAM, etc.) with respect to the face region image including the detected face region F1, and generates the minute-part-of-face image P3 (see FIG. 4B) in which the face components such as eyes, nose, mouth, eyebrows, hairs, and face outline are represented with lines, for example (Step S3). The component image generating unit 6C specifies the face outline in the face region F1 by the minute-part extracting processing, and generates the face component image P4 which includes the face components existing inside the face outline and the face components contacting with the face outline, namely, the part images of the facial principal components such as eyes, nose, mouth, and eyebrows (Step S4; see FIG. 4C).
Next, the outline specifying unit 6D specifies, inside the face region image of the original image P2, the portion corresponding to the face outline specified in the minute-part-of-face image P3 by the component image generating unit 6C, as the face outline W (Step S5; see FIG. 5A). Then, the front hair specifying unit 6E specifies, inside the face region image of the original image P2, a predetermined range on the basis of the predetermined position (for example, positions corresponding to right and left temples) in the face outline W specified by the outline specifying unit 6D, as the front hair region F2 (Step S6; see FIG. 5A). After that, the front hair specifying unit 6E generates the mask information for designating the specified front hair region F2.
Subsequently, the characteristic information extracting unit 6F performs characteristic extracting processing (see FIG. 3) (Step S7).
Hereinafter, the characteristic extracting processing will be described in detail with reference to FIG. 3. FIG. 3 is a flowchart illustrating an example of the operation relevant to the characteristic extracting processing.
As illustrated in FIG. 3, the characteristic information extracting unit 6F converts a copy of the image data (for example, RGB data) of the face region image of the original image P2 into YUV data to generate the luminance image from the luminance signal Y (Step S11). Then, the characteristic information extracting unit 6F calculates the gradient direction and the gradient intensity of each pixel of the luminance image (Step S12). For example, the characteristic information extracting unit 6F sets, as the gradient directions, nine directions, each spanning 20 degrees, obtained by dividing the range of 0 to 179 degrees into nine sections, and calculates the gradient intensity using 256 gradations (8 bits) of 0 to 255 (see FIG. 5B). FIG. 5B is an image in which each pixel is represented by a representative pixel value corresponding to one of the 20-degree gradient directions.
Next, the characteristic information extracting unit 6F performs smoothing processing, by using a filter (for example, a Gaussian filter, etc.) of a predetermined size, with respect to a copy of the image data of the minute-part-of-face image P3 corresponding to the face region image of the original image P2 (Step S13). The characteristic information extracting unit 6F then corrects the gradient direction and the gradient intensity of each pixel of the luminance image of the face region image by using the image data of the minute-part-of-face image P3 after the smoothing processing (Step S14). For example, the characteristic information extracting unit 6F regards a white pixel of the minute-part-of-face image P3 after the smoothing processing as having no edge, or having only an edge of low intensity, and performs correction so that each pixel of the luminance image which corresponds to such a white pixel has no gradient.
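Steps S13 and S14 can be sketched as follows; the filter size (`sigma`) and the whiteness threshold are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def suppress_non_edge_gradients(intensity, line_image, sigma=2.0, white_level=250):
    """Zero the gradient intensity wherever the smoothed minute-part-of-face
    image P3 stays (near) white, i.e. where there is no edge or only a weak
    one; `sigma` and `white_level` are assumed values."""
    smoothed = ndimage.gaussian_filter(line_image.astype(np.float32), sigma=sigma)
    corrected = intensity.copy()
    corrected[smoothed >= white_level] = 0.0   # white pixel -> treated as having no gradient
    return corrected
```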
Next, the characteristic information extracting unit 6F divides the face region image, for which the gradient direction and the gradient intensity of each pixel have been calculated, at a predetermined ratio (for example, 16 in the vertical direction×16 in the horizontal direction, etc.) to set a plurality of divided regions (Step S15), then generates the gradient direction histogram for each divided region (processing object region), and extracts it as the amount of characteristics (characteristic information).
Then, the characteristic extracting processing is terminated.
Returning to FIG. 2, the first specifying unit G1 of the hairstyle image specifying unit 6G specifies, among the hairstyle images P1, . . . recorded in the image recording unit 5, the candidate hairstyle images of a predetermined number on the basis of the characteristic information of the front hair region F2 of the hair region extracted by the characteristic extracting processing (Step S8).
For example, the first specifying unit G1 obtains the characteristic information (gradient direction histogram) of the front hair region corresponding to each of the hairstyle images P1 from the image recording unit 5, obtains the characteristic information (gradient direction histogram) of the front hair region F2 of the original image P2, and then normalizes each piece of the obtained information to compare them with each other. The first specifying unit G1 then specifies the most common style of front hairs (for example, front hairs parted on the left, etc.; see FIG. 6) among the hairstyle images P1, . . . ranked, in the order of the matching degree with respect to the characteristic information of the front hair region F2 of the original image P2 from highest to lowest, up to a predetermined ranking (for example, tenth, etc.). After that, the first specifying unit G1 specifies, from among the hairstyle images P1, . . . recorded in the image recording unit 5, a predetermined number of hairstyle images P1, . . . of that most common style of front hairs as the candidate hairstyle images.
At that time, it is possible to judge whether or not the proportion of the hairstyle images P1, . . . of the most common front hair style, among the hairstyle images P1, . . . ranked up to the predetermined matching-degree ranking, is equal to or more than a predetermined percentage (for example, 50 percent, etc.), and to cause the first specifying unit G1 to specify the hairstyle images P1 of the most common front hair style as the candidate hairstyle images only when the proportion is judged to be equal to or more than the predetermined percentage.
It is also possible to previously determine whether the face in the original image P2 is that of a male or a female on the basis of a predetermined operation in the operation input unit 9 by a user, and to cause the first specifying unit G1 to obtain, from the image recording unit 5, the characteristic information of the front hair region F2 of the hairstyle images P1 corresponding to the determined sex.
Next, the second specifying unit G2 specifies, from among the predetermined number of candidate hairstyle images specified by the first specifying unit G1, the hairstyle image P1 on the basis of the characteristic information of the entire hair region extracted by the characteristic extracting processing (Step S9).
For example, the second specifying unit G2 obtains the characteristic information (gradient direction histogram) of the entire hair region for each of the predetermined number of candidate hairstyle images, obtains the characteristic information (gradient direction histogram) of the entire hair region of the original image P2, normalizes each piece of the obtained information, and compares them with each other. The second specifying unit G2 then automatically specifies the hairstyle image P1 (see FIG. 7A) having the highest matching degree with respect to the characteristic information of the entire hair region of the original image P2. At that time, the second specifying unit G2 can instead rank the candidate hairstyle images P1, . . . in the order of the matching degree from highest to lowest up to a predetermined ranking (for example, tenth, etc.), and then specify, from among these hairstyle images P1, . . . , the hairstyle image P1 desired by the user on the basis of a predetermined operation by the user.
Subsequently, the portrait image generating unit 6H generates the portrait image P5 by using the hairstyle image P1 and the face component image P4 (Step S10). Specifically, the portrait image generating unit 6H specifies, inside the face outline W of the hairstyle image P1 specified by the hairstyle image specifying unit 6G, the positions at which the part images of the face components such as eyes, nose, mouth, eyebrows, etc. of the face component image P4 generated by the component image generating unit 6C are to be superimposed, and superimposes the part images of the face components on the specified positions to generate the image data of the portrait image P5 which represents the original image P2 as a portrait (see FIG. 7B). The image recording unit 5 then obtains and records the image data (YUV data) of the portrait image P5 generated by the portrait image generating unit 6H.
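Step S10 amounts to compositing the part images onto the selected hairstyle image; a minimal sketch, assuming each part image carries its superimposing position and an alpha channel, is:

```python
from PIL import Image

def compose_portrait(hairstyle_image, parts):
    """Superimpose face component part images inside the face outline W.

    `parts` is assumed to be a list of (part_image, (x, y)) pairs, where each
    part image (eyes, nose, mouth, eyebrows, ...) is an RGBA PIL image whose
    alpha channel masks out everything but the drawn lines.
    """
    portrait = hairstyle_image.copy()
    for part, (x, y) in parts:
        portrait.paste(part, (x, y), mask=part)   # alpha keeps the line art clean
    return portrait
```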
Thus, the portrait image generating processing is terminated.
As described above, according to the imaging apparatus 100 of the embodiment, the hairstyle image P1 corresponding to the characteristic information of the front hair region F2 of the hair region in the original image P2 is specified from among the hairstyle images P1, . . . , each of which represents a hair outline and is recorded in the image recording unit 5 so as to be correlated to the characteristic information of the front hair region (hair-tip region), and the portrait image P5 of the face is generated by using the specified hairstyle image P1. Accordingly, the hairstyle image P1 corresponding to the hairstyle in the original image P2 can be specified in view of the characteristics of the hair tips of the original image P2, and a more proper portrait image P5 can be generated.
Although a hairstyle can make a stronger impression on a portrait than the other face parts such as eyes, nose, mouth, etc., in the embodiment a natural hairstyle image P1, in which the appearance of the hairstyle does not stray from that of the original image P2, can be specified by considering the characteristics of the hair tips, and accordingly a more proper portrait image P5 can be generated by using the hairstyle image P1. Specifically, the portrait image P5 can be generated based on the specified hairstyle image P1 and the face component image P4 relevant to the facial principal components in the original image P2.
Moreover, since a more proper hairstyle image P1 can be specified even when a large number of hairstyle images P1, . . . are recorded in the image recording unit 5, the operation can be prevented from becoming complicated, as it would be, for example, with a method of manually selecting a desired hairstyle from among a large number of prepared hairstyle templates.
Furthermore, the predetermined number of candidate hairstyle images are specified, on the basis of the characteristic information of the front hair region F2 extracted from the original image P2, from among the hairstyle images P1, . . . each recorded in the image recording unit 5 so as to be correlated to the characteristic information of the front hair region (hair-tip region), and the hairstyle image P1 is then specified from among the predetermined number of candidate hairstyle images on the basis of the characteristic information of the entire hair region extracted from the original image P2. It is therefore possible to first narrow down the hairstyle images P1, . . . recorded in the image recording unit 5 to a predetermined number of candidates in view of the characteristics of the hair tips of the original image P2, and then to specify a more proper hairstyle image P1 from among them in view of the characteristics of the whole hair of the original image P2.
Specifically, the hairstyle images P1, . . . corresponding to the characteristic information of the front hair region F2 extracted from the original image P2 are specified from among the hairstyle images P1, . . . recorded in the image recording unit 5, and the predetermined number of hairstyle images P1, . . . having the most common front hair style are specified as the candidate hairstyle images. Hairstyle images whose front hairs (hair tips) are judged to look less similar to those of the original image P2 can thereby be removed, the hairstyle images P1 in which the appearance of the front hairs does not stray from that of the original image P2 can be retained as candidates, and the processing to specify the hairstyle image P1 in view of the characteristics of the whole hair of the original image P2 can be executed thereafter.
Moreover, since the front hair region F2 in the hair region is specified with reference to the predetermined positions in the face outline W in the original image P2, the characteristic information of the front hair region F2 can be extracted more easily and more properly.
Furthermore, since the amount of characteristics is extracted as the characteristic information by generating the gradient direction histogram of the luminance of the front hair region (hair-tip region) F2, the hairstyle image P1 can be specified by using the overall shape information (amount of characteristics) of the front hair region F2, and thereby a natural hairstyle image P1, in which the appearance of the hairstyle does not depart from that of the original image P2, can be specified.
The present invention is not limited to the above embodiments, and various improvements and design changes can be added thereto without departing from the scope and spirit of the present invention.
For example, though the front hair region F2 is illustrated as the hair-tip region in the embodiment, it is a mere example. The present invention is not limited thereto, and can be arbitrarily changed. For example, the hair-tip region can be a region including the tip of a tail of hair which lies around the neck and/or ears when the hair is tied together at the side of the head. In this case, for example, the hair-tip region can be specified by calculating a difference between an average color (representative color) of the background of the face in the original image P2 and an average color of the hair.
Moreover, though the hairstyle images P1 having the most common style of front hairs are specified as the candidate hairstyle images from among the hairstyle images P1, . . . corresponding to the characteristic information of the front hair region F2 in the embodiment, this is a mere example of a method for specifying the candidate hairstyle images. The present invention is not limited to the above, and the method can be arbitrarily changed.
Furthermore, though the predetermined number of the candidate hairstyle images are specified on the basis of the characteristic information of the front hair region (hair-tip region) F2, it is a mere example and the present invention is not limited thereto. The candidate hairstyle images do not always need to be specified. It is also possible to specify the hairstyle image P1 corresponding to the characteristic information of the hair-tip region from among the hairstyle images P1, . . . recorded in the image recording unit 5.
Moreover, the front hair region F2 is specified with reference to the predetermined positions in the face outline W in the original image P2 in the embodiment, but this is a mere example of a method for specifying the front hair region F2. The present invention is not limited to the above, and the method can be arbitrarily changed.
Furthermore, though the data in which the shape of the face outline is correlated to the image data of the hairstyle image P1 is used when generating the portrait image P5 in the embodiment, it is a mere example. The present invention is not limited thereto, and can adopt a configuration in which a face outline image (not illustrated) is specified separately from the hairstyle image P1, for example.
Moreover, though the embodiment generates the face component image P4 relevant to the facial principal components in the original image P2 and generates the portrait image P5 by using the face component image P4, the face component image P4 does not always need to be generated. Whether or not to generate the face component image P4 can be changed arbitrarily as needed.
Furthermore, the original images from which the hairstyle image P1 and/or the face component image P4 are generated do not need to be images which represent a frontal face. For example, in the case of an image in which a face is inclined so as to be directed obliquely forward, an image deformed so that the face is directed forward can be generated and used as the original image.
Moreover, the embodiment adopts the configuration which includes the image recording unit 5 to record the hairstyle images P1, . . . , but the present invention is not limited thereto. For example, the present invention can adopt a configuration where the hairstyle images P1, . . . are recorded in a predetermined server which can connect to a main body of the imaging apparatus 100 via a predetermined communication network, and the image obtaining unit 6A obtains the hairstyle images P1, . . . from the predetermined server by accessing the server from a communication processing unit (not illustrated) via the communication network.
Furthermore, the configuration of the imaging apparatus 100 illustrated in the embodiment is a mere example, and the present invention is not limited thereto. Although the imaging apparatus 100 is illustrated as the image generation apparatus, the image generation apparatus of the present invention is not limited thereto and can have any configuration as long as it can execute the image generating processing of the present invention.
In addition, though the embodiment illustrates the configuration where the image obtaining unit 6A, the characteristic information extracting unit 6F, the hairstyle image specifying unit 6G, and the portrait image generating unit 6H are driven under the control of the central control unit 10, the present invention is not limited thereto, and can have a configuration where a predetermined program and the like are executed by the central control unit 10.
Concretely, a program including an obtaining processing routine, an extracting processing routine, a specifying processing routine, and an image generating processing routine is previously stored in a program memory (not illustrated) for storing programs. It is possible to cause the CPU of the central control unit 10 to function, by the obtaining processing routine, as a section which obtains the original image P2. It is also possible to cause the CPU of the central control unit 10 to function, by the extracting processing routine, as a section which extracts the characteristic information of the hair region in the obtained original image P2. It is also possible to cause the CPU of the central control unit 10 to function, by the specifying processing routine, as a section which specifies the hairstyle image P1 on the basis of the extracted characteristic information. It is also possible to cause the CPU of the central control unit 10 to function, by the image generating processing routine, as a section which generates the portrait image P5 of the face by using the specified hairstyle image P1.
Similarly to the above, there can be adopted the configuration where the first specifying section, the second specifying section, the outline specifying section, the front hair specifying section, and the second generating section are implemented by executing the predetermined program and the like by the CPU of the central control unit 10.
As a computer-readable medium which stores the programs for executing the above-mentioned respective processes, in addition to a ROM, a hard disk, etc., a non-volatile memory such as a flash memory or a portable recording medium such as a CD-ROM can be adopted. As a medium which provides program data via a predetermined communication line, a carrier wave can also be adopted.
The embodiments of the present invention are described above, but the scope of the present invention is not limited to the above-described embodiments and includes the scope of the invention described in the claims and the scope of the equivalents thereof.

Claims (8)

What is claimed is:
1. An image generation apparatus comprising:
a processor which is operable to:
extract characteristic information of a hair-tip region in a face image and characteristic information of an entire hair region in the face image;
specify a predetermined number of candidate hairstyle images from among a plurality of predetermined hairstyle images, based on the extracted characteristic information of the hair-tip region;
select a final hairstyle image from among the specified candidate hairstyle images, based on the extracted characteristic information of the entire hair region; and
generate a portrait image of a face in the face image using the selected final hairstyle image.
2. The image generation apparatus according to claim 1, wherein the processor is further operable to extract characteristic information of a front hair region including a tip of a front hair from the entire hair region, and
wherein the processor specifies hairstyle images including most common styles of front hair as the candidate hairstyle images, from among a plurality of hairstyle images each corresponding to the extracted characteristic information of the front hair region.
3. The image generation apparatus according to claim 2, wherein the processor is further operable to specify a face outline in the face image, and to specify the front hair region based on a predetermined position with respect to the specified face outline, and
wherein the processor extracts the characteristic information of the front hair region specified in the face image.
4. The image generation apparatus according to claim 1, wherein the processor is further operable to extract characteristic information of a front hair region including a tip of a front hair from the entire hair region, and
wherein the processor selects the final hairstyle image based on the extracted characteristic information of the front hair region.
5. The image generation apparatus according to claim 1, wherein the processor is further operable to extract, as the characteristic information of the hair-tip region, a plurality of characteristics obtained by generating a histogram of a luminance gradient direction of the hair-tip region.
6. The image generation apparatus according to claim 1, wherein the processor is further operable to generate a face component image of a facial principal component in the face image, and
wherein the processor generates the portrait image based on the generated face component image and the selected final hairstyle image.
7. A method for generating an image by using an image generation apparatus, the method comprising:
extracting characteristic information of a hair-tip region in a face image and characteristic information of an entire hair region in the face image;
specifying a predetermined number of candidate hairstyle images from among a plurality of predetermined hairstyle images, based on the extracted characteristic information of the hair-tip region;
selecting a final hairstyle image from the specified candidate hairstyle images, based on the extracted characteristic information of the entire hair region; and
generating a portrait image of a face in the face image by using the selected final hairstyle image.
8. A non-transitory recording medium having recorded thereon a program readable by a computer of an image generation apparatus, the program being executable to control the computer to perform functions comprising:
extracting characteristic information of a hair-tip region in a face image and characteristic information of an entire hair region in the face image;
specifying a predetermined number of candidate hairstyle images from among a plurality of predetermined hairstyle images, based on the extracted characteristic information of the hair-tip region;
selecting a final hairstyle image from among the specified candidate hairstyle images, based on the extracted characteristic information of the entire hair region; and
generating a portrait image of a face in the face image using the selected final hairstyle image.
US14/010,192 2012-08-30 2013-08-26 Image generation apparatus, image generation method, and recording medium Active US9135726B2 (en)

Applications Claiming Priority (2)

JP2012189406A (published as JP5949331B2) — Priority Date: 2012-08-30 — Filing Date: 2012-08-30 — Title: Image generating apparatus, image generating method, and program
JP2012-189406 — Priority Date: 2012-08-30




Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10232950A (en) * 1996-12-19 1998-09-02 Omron Corp Device and method for forming image, and image forming program storage medium
CN101034481A (en) * 2007-04-06 2007-09-12 湖北莲花山计算机视觉和信息科学研究院 Method for automatically generating portrait painting
JP2010020594A (en) * 2008-07-11 2010-01-28 Kddi Corp Pupil image recognition device
US8467601B2 (en) * 2010-09-15 2013-06-18 Kyran Daisy Systems, methods, and media for creating multiple layers from an image

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5542037A (en) 1992-08-24 1996-07-30 Casio Computer Co., Ltd. Image displaying apparatus wherein selected stored image data is combined and the combined image data is displayed
US5787419A (en) 1992-08-24 1998-07-28 Casio Computer Co., Ltd. Face image searching apparatus for searching for and displaying a face image
US5487140A (en) 1992-12-21 1996-01-23 Casio Computer Co., Ltd. Data processing apparatus for obtaining pattern-image data
US5588096A (en) 1992-12-21 1996-12-24 Casio Computer Co., Ltd. Object image display devices
US6219024B1 (en) 1992-12-25 2001-04-17 Casio Computer Co., Ltd. Object image displaying apparatus
US5568599A (en) 1993-03-18 1996-10-22 Casio Computer Co., Ltd. Electronic montage creation device
US5818457A (en) 1993-05-25 1998-10-06 Casio Computer Co., Ltd. Face image data processing devices
US5600767A (en) 1994-02-25 1997-02-04 Casio Computer Co., Ltd. Image creation device
US5611037A (en) 1994-03-22 1997-03-11 Casio Computer Co., Ltd. Method and apparatus for generating image
US5808624A (en) 1994-07-29 1998-09-15 Brother Kogyo Kabushiki Kaisha Picture making apparatus for creating a picture for printing by assembling and positioning component parts
US5987104A (en) * 1998-01-23 1999-11-16 Mitsubishi Denki Kabushiki Kaisha Telephone with function of creating composite portrait
US20050264658A1 (en) * 2000-02-28 2005-12-01 Ray Lawrence A Face detecting camera and method
JP2004145625A (en) 2002-10-24 2004-05-20 Mitsubishi Electric Corp Device for preparing portrait
US20120299945A1 (en) 2006-05-05 2012-11-29 Parham Aarabi Method, system and computer program product for automatic and semi-automatic modification of digital images of faces
US20080267443A1 (en) 2006-05-05 2008-10-30 Parham Aarabi Method, System and Computer Program Product for Automatic and Semi-Automatic Modification of Digital Images of Faces
JP2008061896A (en) 2006-09-08 2008-03-21 Nintendo Co Ltd Game program and game device
US20100164987A1 (en) 2006-09-08 2010-07-01 Nintendo Co., Ltd. Storage medium having game program stored thereon and game apparatus
JP4986279B2 (en) 2006-09-08 2012-07-25 任天堂株式会社 GAME PROGRAM AND GAME DEVICE
US20080062198A1 (en) 2006-09-08 2008-03-13 Nintendo Co., Ltd. Storage medium having game program stored thereon and game apparatus
US20090002479A1 (en) * 2007-06-29 2009-01-01 Sony Ericsson Mobile Communications Ab Methods and terminals that control avatars during videoconferencing and other communications
US20090087035A1 (en) 2007-10-02 2009-04-02 Microsoft Corporation Cartoon Face Generation
US20110280485A1 (en) 2009-02-05 2011-11-17 Kazuo Sairyo Portrait illustration creation system, character creation system, and created portrait illustration display system
US20120309520A1 (en) * 2011-06-06 2012-12-06 Microsoft Corporation Generation of avatar reflecting player appearance
US20130251267A1 (en) 2012-03-26 2013-09-26 Casio Computer Co., Ltd. Image creating device, image creating method and recording medium
US20140233849A1 (en) 2012-06-20 2014-08-21 Zhejiang University Method for single-view hair modeling and portrait editing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Related U.S. Appl. No. 13/849,710; First Named Inventor: Shigeru Kafuku; Title: "Image Creating Device, Image Creating Method and Recording Medium"; filed Mar. 25, 2013.

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11147357B2 (en) 2005-08-12 2021-10-19 Tcms Transparent Beauty, Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US11445802B2 (en) 2005-08-12 2022-09-20 Tcms Transparent Beauty, Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US10467779B2 (en) * 2007-02-12 2019-11-05 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US20190096097A1 (en) * 2007-02-12 2019-03-28 Tcms Transparent Beauty Llc System and Method for Applying a Reflectance Modifying Agent to Change a Person's Appearance Based on a Digital Image
US11925869B2 (en) 2012-05-08 2024-03-12 Snap Inc. System and method for generating and displaying avatars
US9760794B2 (en) * 2015-09-25 2017-09-12 Intel Corporation Method and system of low-complexity histogram of gradients generation for image processing
US11048916B2 (en) * 2016-03-31 2021-06-29 Snap Inc. Automated avatar generation
US20190266390A1 (en) * 2016-03-31 2019-08-29 Snap Inc. Automated avatar generation
US11631276B2 (en) 2016-03-31 2023-04-18 Snap Inc. Automated avatar generation
US10938758B2 (en) 2016-10-24 2021-03-02 Snap Inc. Generating and displaying customized avatars in media overlays
US11218433B2 (en) 2016-10-24 2022-01-04 Snap Inc. Generating and displaying customized avatars in electronic messages
US11843456B2 (en) 2016-10-24 2023-12-12 Snap Inc. Generating and displaying customized avatars in media overlays
US11876762B1 (en) 2016-10-24 2024-01-16 Snap Inc. Generating and displaying customized avatars in media overlays

Also Published As

Publication number Publication date
JP5949331B2 (en) 2016-07-06
CN103679767A (en) 2014-03-26
US20140064617A1 (en) 2014-03-06
JP2014048766A (en) 2014-03-17

Similar Documents

Publication Publication Date Title
US9135726B2 (en) Image generation apparatus, image generation method, and recording medium
KR102362544B1 (en) Method and apparatus for image processing, and computer readable storage medium
JP5880182B2 (en) Image generating apparatus, image generating method, and program
US9437026B2 (en) Image creating device, image creating method and recording medium
JP6111723B2 (en) Image generating apparatus, image generating method, and program
US20170154437A1 (en) Image processing apparatus for performing smoothing on human face area
US20180350046A1 (en) Image processing apparatus adjusting skin color of person, image processing method, and storage medium
US8971636B2 (en) Image creating device, image creating method and recording medium
US9323981B2 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
US9600735B2 (en) Image processing device, image processing method, program recording medium
JP6260094B2 (en) Image processing apparatus, image processing method, and program
US9135687B2 (en) Threshold setting apparatus, threshold setting method and recording medium in which program for threshold setting method is stored
JP6354118B2 (en) Image processing apparatus, image processing method, and program
JP6668646B2 (en) Image processing apparatus, image processing method, and program
JP5927972B2 (en) Image generating apparatus, image generating method, and program
JP6142604B2 (en) Image processing apparatus, image processing method, and program
JP5962268B2 (en) Image processing apparatus, image processing method, image generation method, and program
US20200118304A1 (en) Image processing apparatus, image processing method, and recording medium
JP2014182722A (en) Image processing device, image processing method, and program
JP2014186404A (en) Image processing apparatus, image processing method, and program
JP2014048767A (en) Image generating device, image generating method, and program
JP2014099077A (en) Face component extraction device, face component extraction method, and program
JP2017021393A (en) Image generator, image generation method, and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: CASIO COMPUTER CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KAFUKU, SHIGERU;REEL/FRAME:031084/0396

Effective date: 20130820

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8