JP4183536B2 - Person image processing method, apparatus and system - Google Patents


Info

Publication number
JP4183536B2
JP4183536B2 (application JP2003084508A)
Authority
JP
Japan
Prior art keywords
image
face
means
person
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2003084508A
Other languages
Japanese (ja)
Other versions
JP2004297274A (en)
Inventor
賢哉 高見堂
Original Assignee
富士フイルム株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社
Priority to JP2003084508A
Publication of JP2004297274A
Application granted
Publication of JP4183536B2
Legal status: Active
Anticipated expiration


Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a person image processing method, apparatus, and system, and more particularly to a technique for extracting a person image from an original image and combining the extracted person image with a background image prepared in advance.
[0002]
[Prior art]
Conventionally, to extract a person image from an original image, the person is photographed against a blue screen (blue background) to acquire the original image, the person image is extracted from the original image using the difference in color (chroma), and the extracted image is fitted into another image. This technique is known as chroma keying.
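The chroma-key idea can be sketched as follows. This is a simplified illustration, not the patent's implementation: the color-distance measure, the reference blue, and the threshold value are all assumptions chosen for the example.

```python
def chroma_key(image, bg_color=(0, 0, 255), threshold=120):
    """Return a mask: True where the pixel belongs to the person.

    `image` is a list of rows of (r, g, b) tuples; a pixel is treated
    as blue-screen background when its color distance to `bg_color`
    falls below `threshold` (both values here are illustrative).
    """
    def dist(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

    return [[dist(px, bg_color) >= threshold for px in row] for row in image]


def composite(fg, bg, mask):
    """Fit the extracted person into another image using the mask."""
    return [[f if m else b for f, b, m in zip(fr, br, mr)]
            for fr, br, mr in zip(fg, bg, mask)]
```

As the paragraphs below note, this works only because the blue background is uniform; a complicated background defeats a simple color test.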
[0003]
Patent Document 1 discloses a technique in which a captured image is divided into a plurality of blocks in a matrix, and each block is classified, according to the magnitude of inter-frame motion, as either a background block (small motion) or a target block containing the person image (large motion).
[0004]
[Patent Document 1]
Japanese Patent Application Laid-Open No. 10-13799
[Problems to be solved by the invention]
[0005]
However, while it is easy to extract a person image from an original image taken against a simple background such as a blue screen, a person image cannot be extracted well from an original image taken against a complicated background.
[0006]
On the other hand, since the method described in Patent Document 1 for separating person blocks from background blocks relies on the magnitude of inter-frame motion of each block, it cannot distinguish the person image (blocks containing the person) from the background for a still image or when the person does not move. Thus, conventionally, the background has been limited to a simple one that is easy to cut out, and complicated backgrounds have been excluded from processing.
[0007]
In addition, there is a service in which the face portion of a photographed person image is combined with a background image (template image) prepared in advance, and the resulting composite image is provided to the user. In such conventional services, the composite image may have an unnatural join between the face image and the background image.
[0008]
The present invention has been made in view of such circumstances, and an object thereof is to provide a person image processing method and apparatus capable of realizing natural image composition using the face image of a person contained in an original image and a prepared background image. Another object of the present invention is to provide a person image processing system suitable for realizing a service that combines an image of a person contained in an original image with a background image and provides the resulting composite image.
[0009]
[Means for Solving the Problems]
In order to achieve the above object, a person image processing method according to the present invention comprises: a step of extracting a person region from an original image including a person; a step of extracting, from the original image, face parts including the person's eyes and face contour; a step of measuring the orientation of the face based on the extracted face part information, wherein the orientation of the face is measured based on the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour; a step of deforming a background image prepared in advance based on the information obtained by the measurement; and a step of creating a composite image by combining the background image obtained by the deformation with the image of the person region.
[0010]
According to the present invention, the input original image data is analyzed to automatically extract the person region and face parts, including the eyes and face contour, from the image, and the orientation of the face is calculated from the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour. The background image for composition is deformed according to the face orientation thus obtained and is then combined with the image of the person region. Because the background image is corrected to match the characteristics of the person image before composition, the person image and the background image join without incongruity, and a composite image with a natural, harmonized overall impression can be obtained.
[0011]
To embody the above method, a person image processing apparatus according to the present invention comprises: person region extraction means for extracting a person region from an original image including a person; face part extraction means for extracting, from the original image, face parts including the person's eyes and face contour; face image measurement means for measuring the orientation of the face based on the face part information extracted by the face part extraction means, wherein the orientation of the face is measured based on the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour; background image storage means for storing data of background images to serve as the background of the person image; background image deformation means for deforming a background image read from the background image storage means based on the information obtained by the measurement; and image composition means for creating a composite image by combining the background image deformed by the background image deformation means with the image of the person region.
Further, the face image measurement means may measure the face orientation and the face size based on the face part information extracted by the face part extraction means, and the background image deformation means may deform the background image read from the background image storage means based on the measured face orientation and face size.
[0012]
As image data input means for capturing original image data into the person image processing apparatus of the present invention, for example, a communication interface typified by USB, Ethernet, or Bluetooth, or a media interface that reads information from a recording medium typified by a memory card, can be used.
[0013]
As output means for outputting the composite image created by the image composition means, various forms are possible, such as a communication interface for transmitting data externally according to a predetermined communication method, a media interface for writing to a recording medium, a display device for displaying the image content, and a printer for printing on a print medium.
[0014]
A person image processing system according to the present invention comprises a communication terminal having communication means for transmitting original image data including a person via a network, and an image processing server for processing the image data sent from the communication terminal via the network. The image processing server comprises: person region extraction means for extracting a person region from the original image acquired from the communication terminal; face part extraction means for extracting, from the original image, face parts including the person's eyes and face contour; face image measurement means for measuring the orientation of the face based on the face part information extracted by the face part extraction means, wherein the orientation of the face is measured based on the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour; background image storage means for storing data of background images to serve as the background of the person image; background image deformation means for deforming a background image read from the background image storage means based on the information obtained by the measurement; image composition means for creating a composite image by combining the background image deformed by the background image deformation means with the image of the person region; and image providing means for providing the composite image created by the image composition means via the network.
[0015]
The means for inputting the original image to the image processing server (image data input means) and the image providing means can be implemented as communication means for transmitting and receiving data via the network; such communication means includes receiving means and transmitting means. The image processing server may be configured as a single computer or as a system built from a plurality of computers.
[0016]
According to the present invention, image data including an image of a person is transmitted from a communication terminal to an image processing server, and creation of a composite image is requested. At least one (preferably a plurality) of the background images used for composition may be prepared in advance on the system side, or may be provided from the communication terminal side. When a plurality of background images are prepared on the system side, it is preferable to provide means for the user to view the background images on the communication terminal and select a desired one.
[0017]
The image processing server analyzes the image data sent from the communication terminal to extract the person region and face parts including the eyes and face contour, and calculates the orientation of the face from the distance between the center coordinates of the eyes and the coordinates of the end points of the face contour. The server then deforms the background image for composition so that its viewpoint (image angle) matches the orientation of the face, thereby correcting the background image appropriately. The corrected background image and the extracted person image are combined, and the resulting processed image (composite image) is provided. Note that the composite image created by the image processing server may be provided via the network to the requesting communication terminal (the image data transmission source), or to another communication terminal or server.
[0018]
According to the person image processing system of the present invention, it is possible to realize an attractive image processing service that extracts the person region from an original image and returns a natural-looking composite of the person's face with any of various background images.
[0019]
According to an aspect of the present invention, the communication terminal includes display means that receives the image created by the image processing server and displays the image content based on the received image data.
[0020]
The communication terminal may include imaging means that converts an optical image of a subject into an electrical signal, so that image data including a person image captured by the imaging means can be sent from the communication means. For example, a camera-equipped mobile phone is applicable.
[0021]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
[0022]
FIG. 1 is a functional block diagram showing the main part of a person image processing apparatus according to the present invention. As shown in the figure, the person image processing apparatus 10 can be configured by, for example, a personal computer, and comprises an image data input unit 11, a face part extraction unit 12, a person region extraction unit 13, a face orientation measurement unit 14, a face size measurement unit 15, a background image deformation unit 16, a background image storage unit 17, a composition processing unit 18, and an image data output unit 19.
[0023]
Original image data captured by a digital still camera (hereinafter referred to as DSC), a camera-equipped mobile phone, or the like is input to the person image processing apparatus 10 via the image data input unit 11. The original image data is person image data obtained by photographing a person against an arbitrary background, as shown in FIG.
[0024]
For the image data input unit 11, in addition to a media interface for DSC media, a USB interface, an infrared communication (IrDA) interface, an Ethernet interface, or a wireless communication interface may be used; which interface is applied is appropriately selected by the user according to the medium on which the original image data is recorded and the recording format of the original image data.
[0025]
The original image data input to the person image processing apparatus 10 is sent to the face part extraction unit 12 and the person region extraction unit 13, where the face parts and the person region are extracted (see FIG. 2B). The face part extraction unit 12 analyzes the input image data and extracts at least one (preferably a plurality) of face parts such as the eyes, nose, lips, and eyebrows. The extraction result may be the coordinates of each face part or a partial image obtained by cutting out each face part.
[0026]
The person region extraction unit 13 distinguishes the person region from the background region in the input original image and extracts the person region portion.
[0027]
FIG. 3 is a block diagram illustrating a configuration example of the face part extraction unit 12. The face part extraction unit 12 includes a feature extraction processing unit 21, a matching processing unit 22, a dictionary data storage unit 23, a coincidence calculation unit 24 that calculates the degree of coincidence with the dictionary data, a coincidence determination unit 25 that evaluates the degree of coincidence, and a face part coordinate determination unit 26.
[0028]
The feature extraction processing unit 21 applies, for example, a wavelet transform to the input image data, and extracts and quantizes the wavelet coefficients at appropriate positions and frequencies as the feature extraction process for the face parts.
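As a toy illustration of the kind of transform involved, the following computes a single-level 1-D Haar wavelet decomposition. The patent does not specify which wavelet is used or how coefficients are selected, so treat this purely as background on the mechanism.

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (low-pass approximation) and differences (high-pass detail).
    A feature extractor would then pick and quantize coefficients at
    chosen positions and frequencies."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail
```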
[0029]
The dictionary data storage unit 23 stores face part dictionary data created in advance by performing the same feature extraction process on a large number of sample images. The matching processing unit 22 performs matching between the data extracted by the feature extraction processing unit 21 and the face part dictionary data. The coincidence calculation unit 24 calculates the degree of coincidence between the feature-extracted image data and the face part dictionary data; the target face part is assumed to exist at the position in the image with the highest degree of coincidence.
[0030]
The coincidence determination unit 25 determines whether the degree of coincidence calculated from the matching result exceeds a certain threshold value. If it does not, it is determined that the target face part does not exist; in this case, the position with the highest degree of coincidence is still returned as the coordinates of the face part, together with information indicating the confidence level (for example, the value "0", indicating low confidence).
[0031]
On the other hand, when the degree of coincidence exceeds the threshold value, the coincidence determination unit 25 determines that the target face part is present. In this case, the face part coordinate determination unit 26 determines the position with the highest degree of coincidence as the coordinates of the face part, returns the coordinate information, and outputs the degree of coincidence as the confidence level information.
[0032]
The better the match, the higher the degree of coincidence. Therefore, by deriving the confidence level from the degree of coincidence, the confidence level can serve as an index of whether part extraction was performed well.
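The threshold logic of the coincidence determination unit 25 and face part coordinate determination unit 26 might look like the following sketch. The feature vectors, the coincidence measure (an inverse-distance score), and the threshold value are all illustrative assumptions, since the patent does not fix them.

```python
def coincidence(features, dictionary):
    """Degree of coincidence between a feature vector and dictionary
    data; here an inverse-distance score in (0, 1], purely illustrative."""
    d = sum((f - g) ** 2 for f, g in zip(features, dictionary)) ** 0.5
    return 1.0 / (1.0 + d)


def locate_part(candidates, dictionary, threshold=0.5):
    """Scan candidate (position, feature) pairs; return (position, confidence).

    The best-matching position is returned in either case, but when no
    candidate exceeds the threshold the confidence is 0, signalling that
    the target face part was probably not found.
    """
    best_pos, best_score = max(
        ((pos, coincidence(feat, dictionary)) for pos, feat in candidates),
        key=lambda t: t[1])
    confidence = best_score if best_score > threshold else 0.0
    return best_pos, confidence
```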
[0033]
In this way, information on the feature points (coordinates, size, etc.) of each face part can be obtained from the input face image. For example, the coordinates of the eyes, nose, lips, eyebrows, the lowest point of the chin, and the left and right end points of the face contour are detected.
[0034]
FIG. 4 is a block diagram showing a configuration example of the person region extraction unit 13 shown in FIG. 1. The person region extraction unit 13 includes a face coordinate detection unit 31, which also serves as the face part extraction unit 12 described with reference to FIG. 3, a region division unit 32, a certainty factor calculation unit 33, a person region dictionary data storage unit 34, a certainty factor determination unit 35, and a person region determination unit 36.
[0035]
As shown in FIG. 4, the face coordinate detection unit 31 extracts face parts from the input image data, and the positions where face parts are extracted are determined to be the "face".
[0036]
The region division unit 32 performs region division processing that groups together similar colors and textures, starting from the positions (coordinates) determined to be face parts by the face coordinate detection. For example, the skin-colored region containing the eye coordinates is detected as the face region, and the region slightly above the eyes that is black, brown, or the like is detected as the hair region; by detecting the components that make up a person in this way, the person region is extracted.
[0037]
On the other hand, person region dictionary data indicating the average positional relationship between the eye positions and the boundary line between person and background is created in advance from a large number of person region images and stored in the person region dictionary data storage unit 34. The certainty factor calculation unit 33 matches the person-background boundary line obtained from the image data against the average positional relationship stored in the person region dictionary data storage unit 34 and calculates the degree of coincidence. The closer the positional relationship of the eyes, hair, shoulders, and so on is to the average, the higher the degree of coincidence. The degree of coincidence thus obtained serves as the certainty factor of the person region extraction.
[0038]
When the calculated certainty factor does not exceed the set threshold value (i.e., the certainty factor is low), the certainty factor determination unit 35 determines that no person region exists in the image data, and the certainty factor is output to indicate that the person region is indeterminate.
[0039]
Further, when the certainty factor exceeds the threshold value (i.e., the certainty factor is high), the region with the highest degree of coincidence is determined to be the person region. In this case, the person region determination unit 36 separates the person region from the background region and outputs the certainty factor information together with the person region information.
[0040]
Note that various methods are conceivable for extracting the person region and background region from an image, and the method is not limited to the one described above. For example, there is a method of extracting the person region by applying a filtering process that extracts the boundary line between person and background from the high-frequency components of the original image. There is also a method that extracts the skin color in the original image, sequentially connects, starting from a point within the skin-colored area, regions considered to belong to the same region, determines the face region according to whether the shape of the extracted region is that of a face, and then extracts the person region by further extracting the hair region and the neck and chest regions below the face.
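The skin-color region-growing alternative mentioned above can be sketched as a flood fill. The similarity predicate, 4-connectivity, and seed choice are illustrative assumptions; a real implementation would seed from a detected skin-color point and use a color-space skin test.

```python
from collections import deque

def grow_region(image, seed, is_similar):
    """Flood-fill from `seed`, connecting 4-neighbours that `is_similar`
    judges to belong to the same region (e.g. skin-coloured pixels).
    Returns the set of (row, col) positions in the grown region."""
    h, w = len(image), len(image[0])
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and is_similar(image[ny][nx])):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Whether the grown region is accepted as a face would then depend on its shape, as the paragraph describes.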
[0041]
Based on the coordinate information of the face parts extracted by the face part extraction unit 12 in FIG. 1, the face orientation measurement unit 14 calculates the orientation of the face, and the face size measurement unit 15 calculates the size of the face. For example, as shown in FIG. 5, the face orientation can be calculated from the difference between the left and right distances d1 and d2 between the center coordinates of the eyes and the coordinates of the end points of the face contour. The face inclination angle θ can be detected by calculating the angle between the line L1 connecting the center coordinates of the left and right eyes and the horizontal reference line L2 of the screen. The size of the face can be obtained from the distance between the left and right eyes, the coordinates of the end points of the face contour, and so on.
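The measurements of FIG. 5 reduce to simple coordinate geometry. A sketch, with coordinate conventions assumed rather than taken from the patent:

```python
import math

def face_orientation(left_eye, right_eye, left_contour, right_contour):
    """Return (turn, tilt_deg, size) from four (x, y) landmarks.

    turn     : d1 - d2, the difference between the eye-to-contour
               distances on each side; near 0 means a frontal face,
               and the sign indicates which way the face is turned.
    tilt_deg : angle between the eye line L1 and the horizontal L2.
    size     : here simply the inter-ocular distance, one of the
               size cues the patent mentions.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    d1 = dist(left_eye, left_contour)    # left eye to left contour end
    d2 = dist(right_eye, right_contour)  # right eye to right contour end
    turn = d1 - d2
    tilt_deg = math.degrees(math.atan2(right_eye[1] - left_eye[1],
                                       right_eye[0] - left_eye[0]))
    size = dist(left_eye, right_eye)
    return turn, tilt_deg, size
```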
[0042]
The measurement results for the face orientation and face size are sent to the background image deformation unit 16 shown in FIG. 1. The background image deformation unit 16 deforms the background image data read from the background image storage unit 17 according to the face orientation and face size.
[0043]
The background image storage unit 17 stores a plurality of background images, and the user can select a desired background image via a predetermined user interface. Of course, the background image acquisition method is not limited to this; a background image to be combined with the person image may be input separately.
[0044]
The background image deformed by the background image deformation unit 16 and the person image extracted by the person region extraction unit 13 are combined by the composition processing unit 18 to generate a composite image.
For example, if a background image from which the face portion has been removed, as shown in FIG. 2C, is selected, the background image is deformed to match the orientation and size of the person's face, as shown in FIG., and is then combined with the person image (face image) extracted from the original image.
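A minimal sketch of what the background image deformation unit 16 and composition processing unit 18 do: scale the background toward the measured face size, then paste the person pixels over it. Nearest-neighbour scaling and a binary mask are crude stand-ins for the patent's viewpoint deformation; every detail here is illustrative.

```python
def scale_image(image, factor):
    """Nearest-neighbour scaling, a crude stand-in for the deformation
    the background image deformation unit performs."""
    h, w = len(image), len(image[0])
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    return [[image[min(h - 1, int(y / factor))][min(w - 1, int(x / factor))]
             for x in range(nw)] for y in range(nh)]


def compose(background, person, mask):
    """Overlay person pixels (where mask is True) onto the background."""
    return [[p if m else b for p, b, m in zip(pr, br, mr)]
            for pr, br, mr in zip(person, background, mask)]
```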
[0045]
The composite image generated by the composition processing unit 18 is output from the image data output unit 19 in FIG. 1. Output forms include outputting the image to a display device or printer, recording it on a recording medium such as a memory card or CD-R or in a storage device such as a built-in hard disk, and transmitting it via communication means (wired or wireless).
[0046]
The person image processing apparatus 10 described above can be realized by a personal computer, but is not limited thereto; it may also be realized by a service server for image processing on a network.
[0047]
FIG. 6 is a configuration diagram of a network system to which the person image processing method according to the present invention is applied.
[0048]
In FIG. 6, reference numeral 130 denotes a camera-equipped mobile phone that can connect to a network 140 such as the Internet, and 150 denotes a user's personal computer (PC) that can connect to the network 140. The PC 150 is connected to a DSC 152 via an interface such as USB and can capture images from the DSC 152.
[0049]
Also connected to the network 140 are a service server 160 that performs image processing similar to that of the person image processing apparatus 10 described above, a print server 170 that prints out composite images processed by the service server 160, and so on.
[0050]
To use the background image composition service provided by the service server 160, the user accesses the home page of the service server 160 using the camera-equipped mobile phone 130 or the PC 150 and uploads the image for which background image composition is requested to the service server 160. The service server 160 can also present a list of background images to the user and let the user select one.
[0051]
The service server 160 comprises a server computer 162, which has functions similar to those of the person image processing apparatus 10 shown in FIG. 1 as well as communication functions, and a large-capacity recording device (storage) 164 that stores images uploaded by users and manages information such as user IDs and mail addresses. When the service server 160 receives a request for background image composition for an uploaded original image, it extracts the person image from the original image, deforms the background image according to the face orientation and size as described above, and combines the person image with the background image. The composite image thus created is provided to the user's camera-equipped mobile phone 130 or PC 150 via the network 140.
[0052]
As image providing methods, there is a mode in which the image data is sent back to the requesting user terminal (here, the camera-equipped mobile phone 130 or the PC 150), or distributed as an e-mail attachment. There is also a mode in which the composite image is stored at a predetermined storage location (for example, a dedicated browsing site set up for each user) and the address information (for example, a URL) is sent by e-mail or the like. In that case, the user can access the designated site and browse or download the composite image from there.
[0053]
In this way, the user can view the composite image generated by the service server 160 on the display 131 of the camera-equipped mobile phone 130 or the display 151 of the PC 150.
[0054]
When the service server 160 receives a print order for a composite image from the user, it transfers the composite image to the print server 170. The print server 170 comprises a server computer 172 and a printing device 174, and prints out the background-composited image with the printing device 174 based on the composite image received from the service server 160. The finished photo print is delivered to a pickup location such as a convenience store or photo shop designated by the user, or directly to the user's home.
[0055]
In the image providing system described above, the confidence-level information representing how reliably the face parts were extracted can further be used: when the confidence level is below a predetermined reference value, the low-accuracy processing result (composite image) is not provided. This avoids giving the user an unpleasant experience and improves the overall quality of the service.
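The quality gate described in this paragraph is a one-line decision; a sketch, with the reference value assumed rather than taken from the patent:

```python
def deliver_result(composite_image, confidence, reference=0.6):
    """Return the composite only when part extraction was confident
    enough; otherwise withhold the low-accuracy result.
    (`reference` is an illustrative judgment value, not from the patent.)"""
    if confidence < reference:
        return None  # do not provide a low-accuracy composite
    return composite_image
```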
[0056]
[Effects of the invention]
As described above, according to the person image processing apparatus of the present invention, the person region and face parts are automatically extracted from the input original image, the face orientation and face size are measured, and the background image is deformed according to that orientation and size before being combined with the person image; a composite image with a natural impression can therefore be obtained.
[Brief description of the drawings]
FIG. 1 is a functional block diagram showing the main part of a person image processing apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of images used to explain the processing of the person image processing method according to the present invention.
FIG. 3 is a diagram showing a configuration example of the face part extraction unit shown in FIG. 1.
FIG. 4 is a diagram showing a configuration example of the person region extraction unit shown in FIG. 1.
FIG. 5 is a diagram for explaining a face measurement method.
FIG. 6 is a configuration diagram of a network system to which the person image processing method according to the present invention is applied.
DESCRIPTION OF SYMBOLS: 10 ... person image processing apparatus, 12 ... face part extraction unit, 13 ... person region extraction unit, 14 ... face orientation measurement unit, 15 ... face size measurement unit, 16 ... background image deformation unit, 17 ... background image storage unit, 18 ... composition processing unit, 130 ... camera-equipped mobile phone, 131 ... display, 140 ... network, 150 ... PC, 152 ... DSC, 160 ... service server, 170 ... print server

Claims (6)

1. A person image processing method comprising:
    a step of extracting a person region from an original image including a person;
    a step of extracting, from the original image, face parts including the eyes and face contour of the person;
    a step of measuring the orientation of the face based on the extracted face part information, wherein the orientation of the face is measured based on the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour;
    a step of deforming a background image prepared in advance based on the information obtained by the measurement; and
    a step of creating a composite image by combining the background image obtained by the deformation with the image of the person region.
2. A person image processing apparatus comprising:
    person region extraction means for extracting a person region from an original image including a person;
    face part extraction means for extracting, from the original image, face parts including the eyes and face contour of the person;
    face image measurement means for measuring the orientation of the face based on the face part information extracted by the face part extraction means, wherein the orientation of the face is measured based on the distance between the center coordinates of the extracted eyes and the coordinates of the end points of the face contour;
    background image storage means for storing data of a background image to serve as the background of the person image;
    background image deformation means for deforming a background image read from the background image storage means based on the information obtained by the measurement; and
    image composition means for creating a composite image by combining the background image obtained by the deformation processing of the background image deformation means with the image of the person region.
3. The person image processing apparatus according to claim 2, wherein the face image measurement means measures the face orientation and the face size based on the face part information extracted by the face part extraction means, and the background image deformation means deforms the background image read from the background image storage means based on the face orientation and face size obtained by the measurement.
  4. A communication terminal having communication means for transmitting original image data including a person via a network; and an image processing server for processing image data sent from the communication terminal via the network. A human image processing system,
    The image processing server includes a person area extracting unit that extracts a person area from an original image acquired from the communication terminal;
    Facial part extracting means for extracting facial parts including the eyes and facial contours of the person from the original image;
    Face image measuring means for measuring the orientation of the face based on the distance between the center coordinates of the eyes and the coordinates of the end points of the face contour extracted by the face part extracting means;
    Background image storage means for storing data of a background image as a background of a person image;
    Background image deformation means for deforming a background image read from the background image storage means based on the information obtained by the measurement;
    Image composition means for creating a composite image by combining the background image obtained by the deformation processing by the background image deformation means and the image of the person area;
    Image providing means for providing a synthesized image created by the image synthesizing means via the network;
    A human image processing system comprising:
  5. The human image processing system according to claim 4, wherein the communication terminal includes display means for receiving an image created by the image processing server and displaying the image content based on the received image data.
  6. The human image processing system according to claim 4 or 5, wherein the communication terminal includes imaging means for converting an optical image of a subject into an electrical signal, and image data including a person image photographed by the imaging means is transmitted from the communication means.
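The pipeline recited in the claims (extract the person area and face parts, measure the face orientation from the distances between the eye centers and the face-contour end points, deform the stored background accordingly, then composite) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the claims do not give a concrete formula, so the ratio-based orientation measure, the horizontal-shift deformation, and the alpha-blend composition below are all hypothetical stand-ins.

```python
import numpy as np

def face_yaw_ratio(left_eye, right_eye, left_contour_end, right_contour_end):
    """Estimate horizontal face orientation from the distance between each
    eye center and the nearest face-contour end point. The ratio formula is
    an assumption; the claims only state that orientation is measured from
    these distances."""
    d_left = np.linalg.norm(np.asarray(left_eye) - np.asarray(left_contour_end))
    d_right = np.linalg.norm(np.asarray(right_eye) - np.asarray(right_contour_end))
    return d_left / d_right  # ~1.0 for a frontal face, != 1.0 when turned

def deform_background(background, yaw_ratio, strength=20):
    """Deform the stored background based on the measured orientation.
    A simple horizontal shift stands in for whatever transformation the
    background image deformation means actually applies."""
    shift = int(round((yaw_ratio - 1.0) * strength))
    return np.roll(background, shift, axis=1)

def composite(person_rgba, background_rgb):
    """Overlay the extracted person area (RGBA, alpha = person mask) on the
    deformed background to create the composite image."""
    alpha = person_rgba[..., 3:4].astype(np.float64) / 255.0
    out = alpha * person_rgba[..., :3] + (1.0 - alpha) * background_rgb
    return out.astype(np.uint8)
```

In the client/server arrangement of claim 4, the communication terminal would upload the original image, the server would run these three steps, and the image providing means would return the composite over the network.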
JP2003084508A 2003-03-26 2003-03-26 Person image processing method, apparatus and system Active JP4183536B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003084508A JP4183536B2 (en) 2003-03-26 2003-03-26 Person image processing method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003084508A JP4183536B2 (en) 2003-03-26 2003-03-26 Person image processing method, apparatus and system

Publications (2)

Publication Number Publication Date
JP2004297274A JP2004297274A (en) 2004-10-21
JP4183536B2 true JP4183536B2 (en) 2008-11-19

Family

ID=33399662

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003084508A Active JP4183536B2 (en) 2003-03-26 2003-03-26 Person image processing method, apparatus and system

Country Status (1)

Country Link
JP (1) JP4183536B2 (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100627049B1 (en) 2004-12-03 2006-09-25 삼성테크윈 주식회사 Apparatus and method for composing object to image in digital camera
JP4904013B2 (en) * 2005-04-08 2012-03-28 株式会社川島織物セルコン Image generation program and image generation apparatus
KR101240261B1 (en) * 2006-02-07 2013-03-07 엘지전자 주식회사 The apparatus and method for image communication of mobile communication terminal
JP4839908B2 (en) * 2006-03-20 2011-12-21 カシオ計算機株式会社 Imaging apparatus, automatic focus adjustment method, and program
JP2010530998A (en) * 2007-05-08 2010-09-16 アイトゲネーシッシュ テヒニッシュ ホーホシューレ チューリッヒ Image-based information retrieval method and system
JP4861952B2 (en) 2007-09-28 2012-01-25 富士フイルム株式会社 Image processing apparatus and imaging apparatus
JP5320558B2 (en) * 2008-03-31 2013-10-23 ネット・クレイ株式会社 Imaging system and imaging method
JP5083559B2 (en) * 2008-06-02 2012-11-28 カシオ計算機株式会社 Image composition apparatus, image composition method, and program
JP5048736B2 (en) * 2009-09-15 2012-10-17 みずほ情報総研株式会社 Image processing system, image processing method, and image processing program
JP5605660B2 (en) * 2012-05-31 2014-10-15 フリュー株式会社 Image providing system and method, image providing apparatus, and image generating apparatus
JP6417511B2 (en) * 2017-02-24 2018-11-07 株式会社ミヤビカクリエイティブ Image generation system and image generation apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4871491A (en) 1984-03-15 1989-10-03 Basf Structural Materials Inc. Process for preparing composite articles from composite fiber blends
CN104517274A (en) * 2014-12-25 2015-04-15 西安电子科技大学 Face portrait synthesis method based on greedy search
CN104517274B (en) * 2014-12-25 2017-06-16 西安电子科技大学 Human face portrait synthetic method based on greedy search

Also Published As

Publication number Publication date
JP2004297274A (en) 2004-10-21

Similar Documents

Publication Publication Date Title
US9137417B2 (en) Systems and methods for processing video data
US9104915B2 (en) Methods and systems for content processing
US8345118B2 (en) Image capturing apparatus, image capturing method, album creating apparatus, album creating method, album creating system and computer readable medium
US8081844B2 (en) Detecting orientation of digital images using face detection information
US9025836B2 (en) Image recomposition from face detection and facial features
US7574054B2 (en) Using photographer identity to classify images
US8938100B2 (en) Image recomposition from face detection and facial features
US8600191B2 (en) Composite imaging method and system
US6813395B1 (en) Image searching method and image processing method
US20130169821A1 (en) Detecting Orientation of Digital Images Using Face Detection Information
US6344907B1 (en) Image modification apparatus and method
US7266251B2 (en) Method and apparatus for generating models of individuals
KR100407111B1 (en) Apparatus and method for generating a synthetic facial image based on shape information of a facial image
US7634106B2 (en) Synthesized image generation method, synthesized image generation apparatus, and synthesized image generation program
EP1276315B1 (en) A method for processing a digital image to adjust brightness
US8599299B2 (en) System and method of processing a digital image for user assessment of an output image product
US7885477B2 (en) Image processing method, apparatus, and computer readable recording medium including program therefor
US7502493B2 (en) Image processing apparatus and method and program storage medium
US9122979B2 (en) Image processing apparatus to perform photo-to-painting conversion processing
US8107764B2 (en) Image processing apparatus, image processing method, and image processing program
JP5084938B2 (en) Image extraction apparatus, image extraction method, and image extraction program
US8488847B2 (en) Electronic camera and image processing device
US7995239B2 (en) Image output apparatus, method and program
US8300064B2 (en) Apparatus and method for forming a combined image by combining images in a template
JP4574249B2 (en) Image processing apparatus and method, program, and imaging apparatus

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20050330

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20060912

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060914

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20061113

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20061212

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20070216

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070412

A911 Transfer of reconsideration by examiner before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A911

Effective date: 20070604

A912 Removal of reconsideration by examiner before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A912

Effective date: 20070622

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20080902

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110912

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4183536

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120912

Year of fee payment: 4

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130912

Year of fee payment: 5
