US20050008246A1 - Image Processing method - Google Patents
- Publication number
- US20050008246A1 (application US10/916,521)
- Authority
- US
- United States
- Prior art keywords
- image
- image processing
- person
- photographed
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Definitions
- the present invention relates to an image processing method for converting input image data to output image data by applying image processing to the input image data. More specifically, the invention relates to an image processing method for preparing an output image meeting a request from an individual customer; an image processing method for additionally displaying information corresponding to a feeling of a person on an image displaying medium or the like on which an image of the person is displayed, or performing substitution, modification or adjustment to produce an image corresponding to the feeling; and an image processing method for changing an image of a person to a favorite image of the person or an image having no unnatural feeling.
- a film such as a negative film or a reversal film
- a photosensitive material (printing paper)
- the digital photoprinter photoelectrically reads an image recorded on a film, converts the read image to a digital signal, then converts the digital signal to image data for recording by applying various kinds of image processing, and records an image (latent image) by scanning and exposing a photosensitive material by recording light that is modulated according to the image data to have a print (photograph).
- the digital photoprinter can convert an image into digital image data and determine exposure conditions at the time of printing the image by image data processing.
- various kinds of image processing can be performed with a high degree of freedom, which is difficult or impossible with the conventional direct exposure, including correction of dropouts or blocked-ups of an image due to back-light, strobe photographing or the like, correction of a color failure or a density failure, correction of under exposure or over exposure, correction of insufficient marginal luminosity, sharpness processing, and compression/expansion processing of density dynamic range (giving a dodging effect by image data processing). Therefore, an extremely high-grade print can be obtained compared with the direct exposure.
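- As an illustrative sketch only (the patent gives no code), the dodging effect by image data processing mentioned above can be modeled as subtracting a scaled low-frequency mask from the density data, so that large-area shadows and highlights are compressed toward the mean while local detail is kept. The function names and the box-blur mask below are assumptions, not part of the disclosure.

```python
import numpy as np

def box_blur(img, radius=2):
    """Simple box blur with edge padding, used as the low-frequency mask."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    return out / (2 * radius + 1) ** 2

def dodge_compress(density, strength=0.5, radius=2):
    """Compress the density dynamic range (dodging by image data
    processing): subtract a scaled low-frequency mask so large-area
    shadows and highlights move toward the mean while local detail
    (density minus mask) is left intact."""
    mask = box_blur(density, radius)
    return density - strength * (mask - mask.mean())
```

Because only the low-frequency component is attenuated, edges and texture survive while the overall density range shrinks, which is the print-dodging behaviour described above.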
- since composition and division of a plurality of images, composition of characters or the like can be performed by the image data processing, a print that is freely edited and/or processed depending on an application can be outputted.
- since the digital photoprinter not only can output an image as a print but also can supply image data to a computer or the like or can store image data in a recording medium such as a floppy disk, the image data can be utilized for various applications other than a photograph.
- an image to be reproduced as a print is an image on which a request of a customer (a person who requests preparation of a print) is reflected as much as possible.
- the applicant has proposed an image processing method of reproducing a finished image that preferably corresponds to a request of a customer in Japanese Patent Application Laid-open No. Hei 11-331570.
- the method is to obtain a reproduced image preferably corresponding to a request of a customer by getting information on the customer relating to image data supplied from an image supplying source, setting image processing conditions according to the information on the customer, and performing image processing based on the image processing conditions.
- information on a customer refers to an occupation of a customer, a sex of a customer, an age of a customer or the like.
- a method of obtaining information on a customer is exemplified by a method with which information on a customer is verbally obtained from the customer when an order of a print is received from the customer, which is communicated to an operator who inputs the information using an operating device such as a mouse, a method with which customer information is written in a customer card and an operator inputs the customer information referring to the customer card when preparing a print, or a method with which customer information is arranged as a database and an operator obtains the customer information from the database.
- image processing that preferably corresponds to a request of a customer is exemplified by the following processing.
- a film is a reversal film and an occupation is a professional photographer
- an image photographed on the film is to be reproduced faithfully
- an occupation is not a professional photographer
- a photographing failure such as over exposure, under exposure and back-light is remedied by adjusting the color and the density of the image to normal levels.
- a face region is extracted and sharpness is given rather strongly to make gradation prominent and show details
- a face region is extracted and sharpness is given rather weakly or soft focusing is applied extremely weakly to make gradation less prominent (soft) and to make liver spots, wrinkles, freckles or the like less outstanding.
- the conventional image processing method has a problem in that processing is complicated because an operator must input information on a customer.
- image processing conditions to be set are fixed according to obtained information on a customer, or selection of conditions is limited only to whether the processing is performed or not, and there is no function of setting image processing conditions corresponding to preference of a customer or, more meticulously, of each subject person, thus, image reproduction to which preference of a customer or a subject person is truly reflected cannot be realized.
- As image forming media, there are conventionally a photograph (print) that reproduces a still image and a movie (a film projector and a screen) that reproduces images as an animation. Since the development of the cathode-ray tube (CRT), television sets (TVs) have in recent years spread into almost all households. Moreover, with remarkable advances in technology, various image display devices such as a liquid crystal display, a plasma display and electronic paper have been developed as image forming media.
- There are also image forming devices, such as a video camera, a digital camera, a digital video movie camera and a cellular TV telephone, which can capture voices together with images utilizing the above-mentioned image forming media.
- although the above-mentioned conventional image forming devices can photograph images and, at the same time, record voices, the captured voice data is simply reproduced as sounds directly.
- the conventional image forming devices aim principally at reproducing an image as faithfully as possible as it is photographed, and an entertaining aspect of an image is not taken into account at all.
- Japanese Patent Application Laid-open No. 2000-151985 discloses a technique in which portions of an image of a face of a person and adjustment parameters are set to correct the image to have a made-up face.
- Ordinary people are, however, not familiar with adjustment of color tone or gradation; such processing is difficult for amateurs and in particular rather difficult for a user unfamiliar with a personal computer (hereinafter also referred to as “PC”).
- consequently, good adjustment results cannot be obtained, and there is also a problem in that images very often have an unnatural feeling after adjustment.
- Japanese Patent Application Laid-open No. 5-205030 discloses a technique in which an image of eyes in a full-faced state is prepared from a three-dimensional model of an image of a face of a person by computer graphics (hereinafter referred to as “CG”) technology.
- the present invention has been devised in view of the above drawbacks, and it is a first object of the present invention to provide an image processing method with which a reproduced image, on which preference of each subject person is reflected, can be automatically obtained.
- the present invention has been devised in view of the above drawbacks, and it is a second object of the present invention to provide an image processing method with which the amusement aspect of image forming media such as a photograph, a video, a TV telephone and the like can be enhanced by visualizing a content that is desired to be emphasized according to a type of feeling of a person in a photographed image, particularly in an image of the person, and forming an image.
- the present invention has been devised in view of the above drawbacks, and it is a third object of the present invention to provide an image processing method with which even persons inexperienced or unskilled in personal computers or image processing software can easily correct images so as to have a preferred made-up face or a favorite face and can remove an unnatural feeling due to noncoincidence of the line of sight.
- the first aspect of the present invention provides an image processing method for applying image processing to inputted image data, comprising the steps of: registering predetermined image processing conditions for each specific person in advance; extracting a person in the inputted image data; identifying the extracted person to find whether the extracted person is the specific person; and selecting image processing conditions corresponding to the identified specific person to perform the image processing based on the selected image processing conditions.
- the extracted person is identified using a face image of the specific person registered in advance or person designation information accompanying a photographed frame.
- a plurality of kinds of image processing conditions are set for each specific person as the predetermined image processing conditions to be registered for each specific person in advance.
- the image processing is performed by using at least one image processing condition selected from the plurality of kinds of image processing conditions.
- the image processing under the selected image processing conditions is applied to an image as a whole or applied only to the person or the person and a vicinity of the person.
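- The first aspect claimed above (register conditions per specific person, extract persons, identify them, apply the selected conditions) can be sketched minimally as follows; all names are hypothetical, and the extractor and identifier are trivial stand-ins for the real face extraction and template matching.

```python
# Hypothetical sketch of the first aspect: per-person processing
# conditions are registered in advance, persons found in the input
# image are identified, and the matching conditions are applied.

registered_conditions = {
    "father": ["suntan"],
    "mother": ["soft_focus", "erase_wrinkles"],
}

def extract_persons(image):
    # Stand-in for a real face/person extractor.
    return image.get("persons", [])

def identify(person, registry):
    # Stand-in for template matching against registered face images.
    return person if person in registry else None

def process_image(image, registry):
    applied = []
    for person in extract_persons(image):
        who = identify(person, registry)
        if who is not None:
            # Apply every registered request-processing condition.
            applied.extend((who, cond) for cond in registry[who])
    return applied
```

Unregistered persons simply fall through unprocessed, which matches the claim: only an identified specific person triggers the registered conditions.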
- the second aspect of the present invention provides an image processing method, comprising the steps of determining a type of feeling from types of feeling registered in advance based on at least one kind of information selected from among voice data accompanying a photographed image, an expression of a person extracted from the photographed image, and a gesture of the extracted person; and subjecting the photographed image to image processing which applies an image processing pattern corresponding to the determined type of feeling among image processing patterns set in advance.
- the image processing pattern is set in association with the type of feeling, and the image processing to which the image processing pattern is applied is at least one processing selected from among composition processing for composing a specified mark corresponding to the type of feeling, substitution processing for substituting with an animation image or a computer graphics image corresponding to the type of feeling, image modification processing performed on the photographed image in correspondence with the type of feeling, and processing for changing a density and a color of the photographed image in correspondence with the type of feeling.
- the composition processing is processing for composing the specified mark at a predetermined position in the photographed image or at a predetermined or relative position with respect to the person extracted in advance or during the composition processing from the photographed image, and in a predetermined or relative size and a predetermined or relative orientation with respect to the photographed image or the extracted person.
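- The predetermined or relative position and size of a composed mark with respect to the extracted person can be sketched as a small geometry helper; the fractional coordinates below are illustrative assumptions, not values from the disclosure.

```python
def mark_placement(face_box, rel_pos=(0.5, -0.3), rel_size=0.5):
    """Compute where to compose a mark relative to an extracted face
    bounding box (left, top, width, height): rel_pos gives the mark
    centre as a fraction of the face box (negative y = above the
    head), rel_size the mark width as a fraction of the face width."""
    left, top, w, h = face_box
    cx = left + rel_pos[0] * w
    cy = top + rel_pos[1] * h
    mark_w = rel_size * w
    return (cx, cy, mark_w)
```

Because everything is expressed as fractions of the face box, the mark scales and moves with the extracted person, which is the "relative position and relative size" behaviour described above.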
- the substitution processing is processing for substituting a specified portion of the person extracted in advance or during the substitution processing from the photographed image with the animation image or the computer graphics image.
- the photographed image is a photographed image by an image photographing device with a recording function
- the image processing pattern is registered in the image photographing device with the recording function in advance, and the image processing to which the image processing pattern is applied is performed by the image photographing device with the recording function.
- the image processing to which the image processing pattern is applied is performed on a lab side that receives image photographing information including the voice data recorded by the image photographing device with the recording function.
- the photographed image is a photographed image by a telephone call device with a photographing function, and the image processing to which the image processing pattern corresponding to the type of feeling of the person is applied is performed on the photographed image.
- the image processing pattern is registered in the telephone call device with the photographing function in advance, and the image processing is performed by the telephone call device with the photographing function to transmit a processed image to a terminal on an opposite party side.
- the image processing pattern is registered in a repeater station of the telephone call device with the photographing function in advance, and the image processing is performed in the repeater station to transmit a processed image to one terminal in a connected telephone line.
- the image processing pattern is registered in the telephone call device with the photographing function in advance, and the image processing to which the image processing pattern is applied is performed by the telephone call device with the photographing function on an image that was photographed by a terminal on an opposite party side and received by the telephone call device with the photographing function.
- when the mark corresponding to the type of feeling or a composing position of the mark is wrong in the image processing to which the image processing pattern is applied, the mark corresponding to the type of feeling, the composing position of the mark, and a size or an orientation of the mark can be corrected.
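- The determination of a type of feeling and the selection of a corresponding preset image processing pattern might be sketched as a simple dispatch; the cue analysis is a trivial stand-in for real voice/expression/gesture recognition, and the mark names in the pattern table are invented for illustration.

```python
# Hypothetical sketch of the second aspect: determine a registered
# type of feeling from simple cues, then pick the image processing
# pattern set in advance for that feeling.

feeling_patterns = {          # registered in advance (illustrative)
    "joy": {"mark": "music_notes", "position": "above_head"},
    "anger": {"mark": "steam", "position": "above_head"},
    "sadness": {"mark": "teardrop", "position": "cheek"},
}

def determine_feeling(voice_text="", expression=""):
    # Stand-in for analysis of voice data, expression and gesture.
    if "!" in voice_text or expression == "smile":
        return "joy"
    if expression == "frown":
        return "anger"
    return "sadness"

def select_pattern(voice_text="", expression=""):
    feeling = determine_feeling(voice_text, expression)
    return feeling, feeling_patterns[feeling]
```

The same table-lookup structure works whether the pattern is a mark composition, a substitution with an animation image, or a density/color change.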
- the third aspect of the present invention provides an image processing method comprising the steps of capturing a television image in a personal computer; and performing image processing to which an image processing pattern is set in advance on the captured television image in the personal computer.
- the fourth aspect of the present invention provides an image processing method comprising the steps of registering in advance an area image in a specific area of an image or an image characteristic amount; and composing on a corresponding area of a photographed image or adjusting a density and a color tone by using the area image or the image characteristic amount registered in advance.
- the corresponding area is extracted from the photographed image in accordance with the area image or the image characteristic amount registered in advance.
- the specific area is at least one of a face of a person, at least one portion constituting the face of the person, an accessory that the person wears and a background.
- the specific area is a face of a person
- the area image registered in advance is an image of a made-up face or best face of the person.
- the specific area is an area of eyes constituting a face of a person, and determination is made as to whether the person as a subject in the photographed image is in a stationary state, and when the person is in the stationary state, the area image in the specific area registered in advance is composed on the area of eyes constituting the face of the person.
- the area image registered in advance is an image of the area of eyes in which a line of sight of the person is coincident with a photographing direction.
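- The eye-area substitution described above might be sketched as follows: the subject is judged stationary by a crude frame-difference test, and the pre-registered eye-area image (with the line of sight toward the camera) is composed over the detected eye region. Both helpers are assumptions, not the disclosed algorithm.

```python
import numpy as np

def is_stationary(prev_frame, frame, threshold=1.0):
    """Crude stationarity check: mean absolute frame difference."""
    return np.abs(frame - prev_frame).mean() < threshold

def compose_eye_area(frame, eye_patch, top, left):
    """Paste the pre-registered eye-area image over the detected eye
    region of the face, leaving the rest of the frame untouched."""
    out = frame.copy()
    h, w = eye_patch.shape[:2]
    out[top:top + h, left:left + w] = eye_patch
    return out
```

In a real system the (top, left) coordinates would come from the eye-area extraction step, and the paste would typically be blended at the edges rather than copied hard.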
- FIG. 1 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus that implements an image processing method in accordance with a first aspect of the present invention
- FIG. 2 is an explanatory illustration showing an example of registration data used in the present invention
- FIG. 3 is a flow chart showing operations of a first embodiment of the first aspect of the present invention.
- FIG. 4 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus implementing an image processing method in accordance with a second aspect of the present invention
- FIG. 5 is a flow chart showing a flow of processing of an image processing method in accordance with a first embodiment of the second aspect of the present invention
- FIG. 6 is an explanatory illustration showing an example of an image processing pattern
- FIG. 7 is an explanatory illustration schematically showing a cellular TV telephone system in accordance with a fourth embodiment of the second aspect of the present invention.
- FIG. 8 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus implementing an image processing method in accordance with a fourth aspect of the present invention.
- FIG. 9 is a flow chart showing a flow of processing of image processing methods in accordance with first and second embodiments of the fourth aspect of the present invention.
- An image processing method in accordance with a first aspect of the present invention will be described first with reference to FIGS. 1 to 3.
- FIG. 1 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus that implements an image processing method of automatically obtaining a reproduced image, on which preference of an individual customer is reflected, in accordance with a first aspect of the present invention.
- a digital photoprinter (hereinafter referred to as a photoprinter) 10 shown in FIG. 1 includes a scanner (image reading apparatus) 12 for photoelectrically reading an image photographed on a film F, an image processing apparatus 14 for performing image processing such as electronic magnification processing of image data read by the scanner 12 , edge detection and sharpness emphasis of image data and smoothing processing (granularity restraining), or operation, control and the like of the entire photoprinter 10 , and an image recording apparatus 16 for applying image exposure and developing processing to a photosensitive material (printing paper) by light-beam that is modulated according to the image data outputted from the image processing apparatus 14 to output a (finished) image as a print.
- an operation system 18 including a keyboard 18 a and a mouse 18 b for inputting a selection or an instruction of input, setting and processing of various conditions, an instruction of color/density correction, or the like, and a monitor 20 for displaying the image read by the scanner 12 , various operational instructions, a setting/registration screen of various conditions, and the like are connected to the image processing apparatus 14 .
- the scanner 12 is an apparatus for photoelectrically reading an image photographed on a film F or the like one frame after another, which includes a light source 22 , a variable diaphragm 24 , a diffusion box 26 for equalizing reading light incident on the film F in the surface direction of the film F, a carrier 28 of the film F, an imaging lens unit 30 , an image sensor 32 having a three-line CCD sensor dealing with reading of each color image density of R (red), G (green) and B (blue), an amplifier 33 and an A/D (analog/digital) converter 34 .
- the carrier 28, which is for special use with and detachably attachable to the body of the scanner 12, is prepared according to a type and a size of the film F, such as a film of the Advanced Photo System or a negative (or reversal) film of the 135 size, a form of the film such as strips and a slide, or the like.
- the photoprinter 10 can cope with various kinds of films and processing by replacing the carrier 28 .
- An image (frame) that is photographed on a film and is to be used for preparing a print is conveyed to a predetermined reading position by the carrier 28.
- a magnetic recording medium is formed in a film of the Advanced Photo System, on which a cartridge ID, a film size, an ISO sensitivity and the like are recorded. Further, various data such as a date and time of photographing or development, an exposure level, and a type of a camera or a developing machine can be recorded at the time of photographing, development or the like.
- a reader of this magnetic information is disposed on the carrier 28 dealing with a film (cartridge) of the Advanced Photo System, and reads the magnetic information when the film is conveyed to the reading position. The various information thus read is sent to the image processing apparatus 14.
- a color image signal is not limited to a signal inputted by reading light that has transmitted through a film; a reflected original or an image photographed by a digital camera may also be used. That is, an image (digital image signal) can be inputted in the image processing apparatus 14 not only from the scanner 12 for reading an image of a film but also from an image supplying source R, such as image photographing devices including a digital camera and a digital video camera, an image reading apparatus for reading an image of a reflected original, communication networks such as a LAN (Local Area Network) and a computer communication network, and a medium (recording medium) such as a memory card, e.g., a smart medium, and an MO (magneto-optical recording medium).
- the illustrated carrier 28 deals with an elongated film F (strips) such as a 24-exposure film of the 135 size and a cartridge of the Advanced Photo System.
- the film F is placed in a reading position by the carrier 28 , on which reading light is irradiated while it is conveyed in a sub-scanning direction perpendicular to a main scanning direction that is an extending direction of the three-line CCD sensor of R, G and B.
- the film F is slit-scanned two-dimensionally and an image of each frame photographed on the film F is read.
- Projected light of the film F is imaged on a light-receiving surface of the image sensor 32 by the imaging lens unit 30 .
- Each output signal of R, G and B outputted from the image sensor 32 is amplified by the amplifier 33 and sent to the A/D converter 34 .
- each output signal is converted to R, G and B digital image data of, for example, 12 bits, and then outputted to the image processing apparatus 14 .
- image reading is performed twice, i.e., pre-scan for reading an image with low resolution (first image reading) and fine scan for obtaining image data of an output image (second image reading), when an image photographed on the film F is read.
- the pre-scan is performed under pre-scan reading conditions set in advance such that the scanner 12 can read all the images on the film F, which are objects of the scanner 12 , while the image sensor 32 does not saturate.
- the fine scan is performed under reading conditions of the fine scan set for each frame such that the image sensor 32 saturates at a density slightly lower than a lowest density of the image (frame).
- output image signals of the pre-scan or the fine scan are basically similar image data except that resolutions and output image signal levels are different.
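- The fine scan reading condition described above (the sensor saturates at a density slightly lower than the lowest density of the frame) can be sketched numerically; the linear gain model and the 12-bit full scale below are simplifying assumptions for illustration.

```python
def fine_scan_gain(min_density, full_scale=4095, margin=0.05):
    """Choose an amplifier gain so the sensor reaches full scale at a
    density slightly LOWER than the frame's minimum density, i.e. the
    brightest part of the frame stays just below saturation."""
    saturation_density = min_density - margin
    transmittance = 10.0 ** (-saturation_density)
    return full_scale / transmittance

def quantize(density, gain, full_scale=4095):
    """Idealized 12-bit A/D conversion of the photoelectric signal."""
    signal = gain * 10.0 ** (-density)
    return min(int(signal), full_scale)
```

With the gain chosen this way, the whole density range of the frame maps onto the usable part of the 12-bit scale, which is why the fine scan conditions are set per frame from the pre-scan result.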
- the scanner 12 used in the photoprinter 10 is not limited to a scanner that performs such slit-scan reading, but may be a scanner that performs plane reading for reading the entire surface of a film image of one frame at a time.
- the scanner 12 uses an area sensor such as an area CCD sensor, and an inserting device for each of the color filters of R, G and B is provided between the light source 22 and the film F.
- the color filter is inserted in an optical path of light irradiated from the light source 22 .
- Reading light that has transmitted through the color filters is irradiated on the entire surface of the film F to cause the transmitting light to focus on the area CCD sensor for reading the entire image of the film.
- the scanner 12 separates an image photographed on the film F into the three primary colors and reads them by replacing each color filter of R, G and B one after another and repeating this processing.
- the digital image data signal outputted from the scanner 12 is outputted to the image processing apparatus 14 that implements the image processing method in accordance with this aspect of the present invention.
- the pre-scan image data and the fine scan image data in the digital image density data are separately memorized (stored).
- the pre-scan image data is subject to a predetermined image processing and displayed on the monitor 20 .
- a density histogram is prepared and image characteristic amounts such as an average density, an LATD (large area transmission density), a highlight (minimum density) and a shadow (maximum density) are calculated from the pre-scan image data to set the reading conditions and the image processing conditions of the fine scan.
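- A sketch of deriving such characteristic amounts from pre-scan data follows; approximating the LATD by a plain mean and taking the highlight (minimum density) and shadow (maximum density) points as robust percentiles are assumptions made for illustration.

```python
import numpy as np

def characteristic_amounts(prescan):
    """Compute simple image characteristic amounts from pre-scan data:
    a density histogram, an average (large-area) density, and
    highlight/shadow points taken as robust percentiles so a few
    noisy pixels do not set the extremes."""
    hist, _ = np.histogram(prescan, bins=64, range=(0.0, 4.0))
    return {
        "histogram": hist,
        "average_density": float(prescan.mean()),
        "highlight": float(np.percentile(prescan, 1)),   # minimum density
        "shadow": float(np.percentile(prescan, 99)),     # maximum density
    }
```

These amounts are exactly the inputs a printer would use to set the fine scan reading conditions and the density/gradation conversion.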
- image processing conditions are set such that individual preference of each customer is reflected on the conditions as described later.
- Image processing is applied to the fine scan image data according to the set image processing conditions, and is outputted on color paper from the image recording apparatus 16 as an optimal, high quality image at a density, gradation and tone that are desired as a color print.
- the digital photoprinter including the image processing apparatus that implements the image processing method in accordance with the first aspect of the present invention is basically configured as described above.
- the image processing according to individual preference of each customer is executed in the image processing apparatus 14 together with the processing such as conversion of a density, a color and gradation of image data, conversion of a chroma and electronic magnification with respect to the fine scan image data that is read with the image reading conditions that are set based on the pre-scan image data.
- data such as individual identification information and image processing conditions desired by each individual (request processing information) is registered in a lab in advance in order to identify a person in an image and execute image processing according to preference of the specific person.
- An example of registration data is shown in FIG. 2.
- the example shown in FIG. 2 is in a case in which data of all members of a family is registered.
- Contents of the registered data are, for example, registered customer IDs, a name of a person representing the people to be registered and individual data of each member of the family, i.e., a relation in the family, a face image of each individual, image processing contents desired by each individual (request processing), and the like.
- Face image data may be captured by designating, for example, a face area in an already ordered photographed scene.
- a plurality of patterns for each person may be set and registered in order to increase accuracy of identification.
- As the request processing, a mode for changing skin to white, a mode for erasing wrinkles, a suntan mode and the like can be considered.
- As a special effect, there are use of a cloth filter, soft focus finish and the like.
- As image composition, there are frame decoration, composition of specific characters, special make-up and the like.
- a plurality of kinds of the above-mentioned request processing may be set for each person, and some selected kinds of request processing out of the plurality of kinds of the request processing may be combined to be applied to image processing of a specific person.
- conditions such as whether the processing is applied to the entire image, or applied only to a pertinent person or only to the person and the vicinity of the person are set.
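- The registration data of FIG. 2 might be modeled as records like the following; all field names are hypothetical, chosen only to mirror the items listed above (customer ID, representative, relation in the family, face images, request processing, and the scope each processing applies to).

```python
from dataclasses import dataclass, field

@dataclass
class RequestProcessing:
    # A registered processing condition and where it is applied:
    # "whole", "person", or "person_and_vicinity".
    mode: str
    scope: str = "person"

@dataclass
class RegisteredPerson:
    relation: str                                    # e.g. "father", "daughter A"
    face_images: list = field(default_factory=list)  # templates for identification
    requests: list = field(default_factory=list)     # list of RequestProcessing

@dataclass
class RegistrationRecord:
    customer_id: str
    representative: str
    members: dict = field(default_factory=dict)      # relation -> RegisteredPerson
```

Keeping several face images per member corresponds to registering a plurality of patterns to raise identification accuracy, and keeping a list of requests corresponds to setting a plurality of kinds of request processing per person.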
- Image processing on which individual preference of each customer is reflected will be hereinafter described with reference to a flow chart of FIG. 3 .
- a face region of a person is extracted.
- An extracting method of a face region is not specifically limited; a variety of methods exist, and it is sufficient to use a publicly known specific part extracting method (extraction algorithm).
- Such a specific part extracting method is exemplified by a method, disclosed in Japanese Patent Application Laid-open No. Hei 9-138470, in which weights are defined in advance by evaluating a plurality of different specific part (major part) extracting methods, such as a method of extracting a specific color, a method of extracting a specific shape pattern and a method of removing a region that is estimated to be equivalent to a background; a specific part is extracted by each extracting device, each extracted specific part is weighted by the defined weight, and the major part is determined and extracted according to the result of the weighting.
- another specific part extracting method, disclosed in Japanese Patent Application Laid-open No. Hei 9-138471, is as follows: densities or luminances of a plurality of points in an image are measured to find variations among the points, and a point with a variation equal to or more than a predetermined value is set as a standard point; a retrieval area and a retrieval direction pattern are then set using a variation of a density or the like within a predetermined range from the standard point, and a part with such a variation in the direction indicated by the retrieval direction pattern is retrieved within the retrieval area; the retrieval is subsequently repeated with this part as a new standard point, and a specific part is extracted by connecting the retrieved/set standard points.
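- The weighted combination of plural extracting methods described above can be sketched as a weighted vote over candidate regions; the method names, weights and threshold below are illustrative assumptions, not values from the cited application.

```python
def weighted_extraction(candidates_by_method, weights, threshold=1.0):
    """Combine candidate regions proposed by several extraction
    methods (specific color, specific shape pattern, background
    removal, ...): each candidate accumulates the weight of every
    method that proposed it, and regions whose accumulated weight
    reaches the threshold are kept as the major part."""
    scores = {}
    for method, regions in candidates_by_method.items():
        for region in regions:
            scores[region] = scores.get(region, 0.0) + weights[method]
    return [r for r, s in scores.items() if s >= threshold]
```

A region proposed by several independent methods thus survives even if no single method is reliable on its own, which is the point of defining the weights by evaluating each method in advance.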
- In step 120 , a face is extracted, identification of a person is performed, and the person in the image is specified.
- Matching is performed between the extracted face image and registered face images used as templates to find a coinciding degree, and the person is thereby identified.
- the information may be utilized for identification of a person.
- In step 130 , the request processing registered for that person is executed with respect to the specified person.
- a finished image according to individual preference of each customer can be automatically obtained.
- an image of the father is processed to be a suntanned face, soft focus and wrinkle erasing processing are applied to an image of the mother, and white skin and slim body finish is applied to an image of a daughter A.
- finished images as the members of the family wish to have are obtained, respectively.
- The second embodiment is to execute processing as described in the first embodiment in a case in which a customer performs reproduction processing of an image using a digital camera, a personal computer, a photo player or the like. That is, before the customer places an order with a lab, a photographed image is displayed on these devices to revise the image. An image can be revised by giving a digital camera or a photo player a revising function, or by connecting these devices with a personal computer.
- When a displayed image is revised, the revision can be performed automatically using image processing software that performs processing similar to that of the first embodiment, but a customer may also manually identify a person, or revise the result of person detection, while looking at the displayed image.
- An image resulting from the revision may be printed using a video printer for home use, or revision data for a frame may be recorded as information attached to the frame when ordering a print from the lab. For example, an image photographed with the APS may be displayed on a photo player and revised there; if an order is then placed with the lab to print from the negative film, it is sufficient to give the photo player a magnetic recording function and magnetically record the attached information on the negative film. In addition, if an order is placed with the lab over a communication line from a personal computer, it is sufficient to record the attached information in a file.
- This embodiment is to improve the accuracy of identification of a person by adding a hairstyle, a dress and the like as person designation information to the characteristic amounts for recognition of a person, when a face is extracted and template matching is performed between the face and the registered face images in identification of a person.
- As characteristic amounts for recognition of a person, a density, a distribution of color tint, a texture pattern or the like can be considered.
- A face region is extracted from an input image to perform matching with registered face images as templates, to find a coinciding degree and to identify a person. Then, if a person in an image is specified by the identification, a hairstyle, a dress and the like of the person are extracted and registered as person designation information, as registration data corresponding to the person.
- Identification of a person is performed with reference to the hairstyle, the dress and the like that have been registered before, in addition to template matching of the extracted face image.
- a hairstyle and a dress are registered anew from the input image to automatically accumulate them as person designation information.
- Here, Mi is a point representing the coinciding degree obtained by pattern matching of the face image, Ni is a point derived from the frequency of appearance of a dress, and i is a number indicating a registered person. The larger each coinciding degree is, the larger the corresponding point.
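The combined scoring can be sketched as below, assuming the points Mi and Ni are simply added per registered person i and the person with the largest total is chosen; the text does not fix the exact combination formula, so the additive rule is an assumption.

```python
def identify_person(face_scores, dress_scores):
    """Combine the face-matching point Mi with the point Ni derived from
    the appearance frequency of a registered dress/hairstyle, and return
    the index i of the registered person with the highest total.

    face_scores, dress_scores: lists indexed by registered person i.
    The additive combination Mi + Ni is one plausible reading of the text.
    """
    totals = [m + n for m, n in zip(face_scores, dress_scores)]
    return max(range(len(totals)), key=totals.__getitem__)
```

Even when face matching alone is ambiguous (for example, person 1 scores highest on the face), the dress point can tip the decision toward the correct registered person.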
- A customer who has found an error in identification of a person may also revise the processing himself, by downloading image processing software to his own personal computer and revising the image in person, as in the second embodiment.
- person designation information such as a hairstyle and a dress is also added to characteristic amounts for recognition of an individual, and moreover, accumulated data is revised by adding an element of learning function to the accumulation processing of the person designation information. Therefore, accuracy of the identification of a person can be improved, and image processing according to preference of each person can be properly performed.
- the above description relates to a method of obtaining a print (output image) on which individual preference of each customer is reflected by applying image processing according to the individual preference of each customer to each person in an image.
- a print is outputted on which the tendency of preference according to locality of a customer or a season is reflected in addition to matching individual preference of each customer.
- If an exposure control algorithm is set such that a photo print is finished in a skin color preferred by the Japanese in a photoprinter, and a printer of the same type is used in Europe without changing the exposure control algorithm, the resulting skin color may not be a color preferred by the Europeans.
- Similarly, since the exposure control algorithm is set based on photographing under the sunlight in Japan, if a photoprinter of the same type is used in a region at a substantially different latitude, a print of the same quality as that in Japan cannot be realized, because the sunlight in that region also differs from that in Japan.
- the applicant has already proposed an exposure control method with which local preference of a person or preference of an individual customer is properly reflected on a print in the case of the analog exposure method in Japanese Patent Application Laid-open No. Hei 6-308632 or Japanese Patent Application Laid-open No. Hei 8-137033.
- a print (output image) on which locality or the tendency of preference of a customer is reflected can also be realized in the case of the digital image processing as described below.
- An example described below is a digital image processing method that accumulates the tendency of revising the setting of image processing conditions for each customer, lab or region, and revises and optimizes a setting parameter of the image processing accordingly.
- The method also classifies each frame by scene, accumulates the tendency of revision for each scene classification, and revises and optimizes the setting parameter.
- After each frame is classified by scene, the correction tendency of an operator shows varying tendencies for a certain region, such as a preference for emphasized contrast or for moderate contrast, and the operator revises a correction parameter in accordance with these tendencies.
- Revisions of a processing parameter by an operator are accumulated over the past N frames to check their relation with a specific scene classification.
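The accumulation of operator revisions over the past N frames, grouped by scene classification, can be sketched as follows. The class names, the simple averaging rule, and the minimum-sample gate are illustrative assumptions; the text only states that revisions are accumulated and related to scene classes.

```python
from collections import defaultdict, deque

class CorrectionAccumulator:
    """Accumulate an operator's parameter revisions over the past N frames,
    grouped by scene classification, so that a persistent tendency (e.g. a
    consistently lowered density for 'person' scenes) can be detected and
    folded into the default setting."""

    def __init__(self, n_frames=1000):
        # One bounded history per scene class; old revisions fall off.
        self.history = defaultdict(lambda: deque(maxlen=n_frames))

    def record(self, scene_class, revision):
        self.history[scene_class].append(revision)

    def tendency(self, scene_class):
        revs = self.history[scene_class]
        return sum(revs) / len(revs) if revs else 0.0

    def updated_default(self, scene_class, default, min_samples=100):
        # Only shift the default once enough evidence has accumulated.
        revs = self.history[scene_class]
        if len(revs) < min_samples:
            return default
        return default + self.tendency(scene_class)
```

The bounded `deque` naturally implements the "past N frames" window, so the learned tendency tracks seasonal or gradual changes in preference.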
- A scene is estimated to be a strobe scene, a back-light scene or the like by estimating a normal scene, over-exposure, under-exposure or the like based on a distribution pattern found from a density histogram, or on the density differences between a central part and a peripheral part of the image.
- a scene is estimated to be a person, a scenery or the like by extracting a face.
- a method with which an operator manually estimates a classification of a scene such as a person, a scenery, a night scene, underexposure or high contrast.
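The automatic scene estimates described above can be sketched as follows. This is a rough illustration assuming a 256-bin histogram and mean brightness values for the central and peripheral parts; the bands and thresholds are assumptions, not values from the text.

```python
def estimate_exposure(histogram):
    """Estimate normal / over / under exposure from the distribution
    pattern of a 256-bin brightness histogram. The bands and the 0.6
    threshold are illustrative assumptions."""
    total = sum(histogram)
    dark = sum(histogram[:64]) / total     # fraction of dark pixels
    bright = sum(histogram[192:]) / total  # fraction of bright pixels
    if dark > 0.6:
        return "under-exposure"
    if bright > 0.6:
        return "over-exposure"
    return "normal"

def estimate_light_source(center_lum, periphery_lum, threshold=30):
    """Estimate a strobe or back-light scene from the mean brightness of
    the central part versus the peripheral part of the image."""
    if center_lum - periphery_lum > threshold:
        return "strobe"      # subject lit much brighter than surroundings
    if periphery_lum - center_lum > threshold:
        return "back-light"  # surroundings much brighter than the subject
    return "normal"
```

A face-extraction result (person vs. scenery) or an operator's manual label can then be combined with these estimates to obtain the full scene classification.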
- Another method changes the way contrast emphasis or resolution (sharpness) is applied, or changes the target value of the color tint or density of a person's skin color, since the preferable skin color varies depending on the region; this method thus takes regional differences into account. For example, since in Europe a face color less dense than the skin color standard of the Oriental people is preferred and an operator tends to reduce the density, the algorithm is automatically updated, judging from the accumulated data, such that the density after face extraction becomes slightly lower than the default value.
- types of scene classifications or a data accumulation period may be set for each season. For example, around a ski resort, a “snow scene” classification that is determined according to a highlight ratio in a scene may be added only in the winter season, or a gradation characteristic emphasizing whiteness may be applied to the “snow scene”.
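The seasonal "snow scene" classification by highlight ratio can be sketched as below. The highlight band and the threshold are illustrative assumptions; the text only says the class is determined from the highlight ratio and enabled in the winter season.

```python
def classify_snow_scene(histogram, highlight_ratio_threshold=0.5, winter=True):
    """'Snow scene' classification by the ratio of highlight pixels in a
    256-bin brightness histogram, enabled only in the winter season.
    The near-white band (bins 224-255) and the 0.5 threshold are
    illustrative assumptions."""
    if not winter:
        return False
    total = sum(histogram)
    highlights = sum(histogram[224:])  # near-white brightness band
    return highlights / total > highlight_ratio_threshold
```

A frame classified this way could then be given a gradation characteristic that emphasizes whiteness, as the text suggests.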
- Algorithm updating processing may be performed on a per-region basis by a lab management center. This can be attained by connecting each mini-lab shop in the region with the lab management center over a network, such that the lab management center collects information on the revisions made by the operator of each mini-lab shop, grasps the tendency of the entire region, revises the image processing software of the mini-lab shops all over the region at a certain timing, and distributes the updated algorithm to each mini-lab shop.
- an image processing parameter may be optimized for each customer by recording the data together with customer IDs and managing the data for each customer.
- the above-mentioned processing can be performed with respect to a printer for home use of each customer instead of a printer in the above-mentioned mini-lab.
- Appropriate image reproduction thus becomes possible that copes with changes in the tendency of preference for each scene due to the individual customer, locality, season or the like.
- request contents may be divided for each of print processing and monitor display processing, or registration data processing can be stopped or specific request processing can be designated only for a specific frame.
- the image processing method of the first aspect of the present invention is basically configured as described above.
- the image processing method in accordance with the second aspect of the present invention is basically to determine a type of feeling of a person in a photographed image scene based on voice data accompanying the image scene or based on an expression or gesture of the person extracted from the photographed image, and attach (compose) a mark emphasizing the feeling corresponding to the type of feeling to an image of the person in the scene, thereby enhancing amusement aspect of a photograph or an image expression.
- a case in which a type of feeling of a person in a photographed image scene is determined based on voice data accompanying the photographed image scene will be described below as a typical example, but the second aspect is not limited to this case.
- objects to which the second aspect of the present invention is applied range widely from a photograph (still image) to real time image display such as a video (animation) and a TV telephone, or the like.
- the first embodiment is for applying predetermined image processing with respect to an image having voice data as accompanying information which is photographed by a digital camera or the like having a voice recording function.
- FIG. 4 is a block diagram schematically showing a digital photoprinter including an image processing apparatus that implements the image processing method in accordance with the first embodiment of the second aspect of the present invention.
- A digital photoprinter 50 shown in FIG. 4 mainly includes a photographing information inputting device 52 , an image processing apparatus 54 and an image recording apparatus 16 . As the image recording apparatus 16 , the operation system 18 and the monitor 20 , those similar to the ones in the photoprinter 10 shown in FIG. 1 can be used.
- the photographing information inputting device 52 is to read image data and voice data from a recording medium in which the image data and the voice data are recorded by an image photographing device with a recording function such as a digital camera.
- the image processing apparatus 54 is to execute the image processing method in accordance with this aspect of the present invention and other various kinds of image processing.
- the operation system 18 including the keyboard 18 a and the mouse 18 b for inputting and setting conditions for various kinds of image processing, selecting and instructing a particular processing, and instructing color/density correction, or the like, and the monitor 20 for displaying the image inputted from the photographing information inputting device 52 , a setting/registration screen for various conditions including various operational instructions, and the like are connected to the image processing apparatus 54 .
- The image recording apparatus 16 is to apply image exposure and developing processing to a photosensitive material (printing paper) with a light beam modulated in accordance with the image data outputted from the image processing apparatus 54 , to output a (finished) image as a print.
- Image processing of the first embodiment will be hereinafter described with reference to a flow chart of FIG. 5 .
- In step 200 , image processing patterns corresponding to various types of feelings are set in advance.
- Since human feelings vary and it is impossible to deal with all of them, only typical feelings that appear relatively clearly on the outside are dealt with here.
- FIG. 6 shows an example of setting image processing patterns.
- the types of feelings to be dealt with here include “fret”, “surprise”, “anger”, “sorrow”, “worry (doubt)”, “love”, “pleasure” and the like.
- a type of feeling is determined by voice data (voice information) accompanying a photographed image scene.
- setting an image processing pattern corresponding to a type of feeling is eventually setting an image composition pattern corresponding to specific voice information.
- items that are set in advance include a mode for representing a type of feeling, a word for drawing each mode (keyword, omitted in the table in FIG. 6 ), and an image composition pattern corresponding to each mode.
- the image composition pattern includes a composition image (mark) to be composed, a position where an image should be composed (a position in an image, a specified or relative position with respect to a position of a person or his or her face in an image), a size of a composition image (a size, a size with respect to an image, a relative size with respect to a size of a person or his or her face in an image), an orientation of a composition image with respect to a person or his or her face and the like.
- a composition image is not limited to one, but may be plural.
- a composing position is preferably designated by a position relative to a position of a face in an image and is more preferably normalized with a width of the face.
- a size of a composition image is also preferably designated based on a face.
- an orientation of composing a composition image is also preferably set based on a face of a person in an image.
- Two sweat marks are composed to the left of a face, with the coordinate of the center of the sweat marks at the position beside an eye (x1, y1) (with the eye as the origin) and with a size of 0.1, taking the width of the face as 1.
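Converting such a face-relative layout into absolute image coordinates can be sketched as follows. The face bounding box as reference and the exact reference point are assumptions (the text uses an eye as the origin); the normalization by face width follows the example above.

```python
def place_mark(face_box, rel_pos, rel_size):
    """Convert a mark position/size expressed relative to a detected face
    into absolute image coordinates, as in the example of sweat marks at
    (x1, y1) beside an eye with size 0.1 when the face width is 1.

    face_box: (x, y, w, h) of the face region in the image.
    rel_pos:  offset from the face origin, in face-width units.
    rel_size: mark size, in face-width units.
    Returns (x, y, size) in absolute pixels.
    """
    fx, fy, fw, fh = face_box
    x = fx + rel_pos[0] * fw
    y = fy + rel_pos[1] * fw   # normalized by face *width*, per the text
    return (x, y, rel_size * fw)
```

Because everything is normalized by the face width, the same registered pattern scales correctly whether the face is large or small in the frame.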
- an “orientation determination result” in FIG. 6 indicates that an orientation of a person with respect to a camera is determined to be either in front, at the angle to the right or at the angle to the left by a pattern matching method or the like, and patterns with different orientations of protruding eyes depending on the determination result are composed.
- the “sorrow” mode corresponds to the feeling, and a “shading (slanted lines) mark” is placed on a face and a “falling leaves mark” is composed around the face.
- The color of the face may be made white (pale) instead of placing slanted lines on the face.
- A further touch of grief may be added by changing the background to monotone.
- other marks may be composed as follows. If there is a word such as “Mmm” or “well”, a mode is determined to be the “worry (doubt)” mode and a “question mark” is composed.
- If there is a word such as "aah" or "love", the mode is determined to be the "love" mode and a "heart" is composed. If there is a word such as "yahoo" or "made it", the mode is determined to be the "pleasure" mode and a "fireworks" mark is composed.
- A certain number of these image processing patterns may be prepared on the system side, and a customer may prepare and add data to the patterns. In this case, a word frequently used by the customer as a favorite phrase may be set as a word drawing a mode, or composition images and keywords for the various feeling modes may be registered as the customer likes.
- In step 210 , photographing information having voice data as accompanying information, which is photographed by a digital camera or the like having a recording function, is inputted from the photographing information inputting device 52 .
- Voice data and image data in the inputted photographing information are sent to the image processing apparatus 54 , respectively.
- a type of feeling is specified from the voice data in the image processing apparatus 54 .
- The voice data is recognized first, and matching is performed to find whether a keyword that draws a registered mode indicating a type of feeling is included in the voice data. If a specific keyword is detected from the voice data, a specific image processing pattern is specified by the mode corresponding to the keyword.
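The keyword-to-mode lookup can be sketched as below. The table entries are illustrative in the spirit of FIG. 6; the keywords, mode names and marks are assumptions, not the actual registered table.

```python
# Hypothetical mode table in the spirit of FIG. 6; the keywords and
# marks are illustrative, not the patent's actual registered data.
MODE_TABLE = {
    "phew": ("fret", "sweat mark"),
    "wow": ("surprise", "protruding-eyes mark"),
    "mmm": ("worry (doubt)", "question mark"),
    "yahoo": ("pleasure", "fireworks mark"),
}

def specify_pattern(recognized_text):
    """Scan recognized voice text for a registered keyword and return the
    (mode, composition mark) of the first match, or None if no keyword
    that draws a registered mode is found."""
    for word in recognized_text.lower().split():
        if word in MODE_TABLE:
            return MODE_TABLE[word]
    return None
```

When `specify_pattern` returns a mode, the corresponding composition image is then composed on the photographed image in the next step; when it returns None, the image passes through unchanged.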
- a composition image of the image processing pattern specified above is composed on the photographed image.
- If the composing position of the composition image is predetermined with respect to the photographed image at a specified position, such as one of the four corners including the upper right corner, and a specified or relative size of the composition image is predetermined with respect to the size of the photographed image, the composition image is composed on the photographed image at the predetermined composing position and in the predetermined size.
- the person or the face is preferably extracted from the photographed image by a publicly known method in advance.
- a method of extracting a face there is, for example, a method disclosed in Japanese Patent Application Laid-open No. Hei 8-122944 by the applicant of this application.
- As a device for extracting a specific region such as a face region of a person, a method disclosed in each of Japanese Patent Application Laid-open Nos. Hei 4-346333, 5-158164, 5-165120, 6-160993, 8-184925, 9-101579, 9-138470, 9-138471, 9-146194 and 9-197575 is preferably available in addition to the above-mentioned methods.
- a composition image is composed in accordance with a composing position, a composing size and an orientation designated in the image processing pattern.
- Determination of a type of feeling of a person (face) in the photographed image is not limited to the one depending on voice data as described above, but the type of feeling may be determined by using an expression or a gesture of the person, which will be described later in detail.
- If a type of feeling is determined from an expression or a gesture of a person, or from voice data together with an expression or a gesture of a person, it is necessary to extract the person from the image.
- the person is preferably extracted when composing an image processing pattern, since the image processing pattern is preferably composed with respect to the person or his or her face.
- However, if the image processing pattern is composed at a specified position in the photographed image, if the background is changed, if the density or color of the entire screen is changed, or if falling leaves are scattered without regard to position, it is unnecessary to extract a person specifically. In this way, extraction of a person from an image may be performed only when necessary. It is therefore preferable to make it possible to select and set the determining method of a type of feeling, and the like, in advance.
- the output image is outputted from the image recording apparatus 16 .
- The lab side, which has received a request from a customer who performed photographing with a digital camera or the like, performs image processing with a printer in the lab and prepares a print on which a mark emphasizing a feeling is composed.
- the above-mentioned image processing may be performed on the digital camera side. If the processing is performed on the digital camera side, image processing patterns or the like are set on the camera side in advance.
- processing may be entirely performed automatically based on voice data recognition and extraction of a face from the photographed image, or an operator may input by operation keys or the like and perform the processing in accordance with an instruction of a customer.
- image composition processing may be performed automatically by incorporating software for executing the above-mentioned image processing in the digital camera, or a customer may instruct a camera by keys/buttons or the like to perform the composition processing at the time of photographing.
- image processing by the above-mentioned image processing patterns may be applied to an index print while a main print is normally processed.
- an expression may be changed by image modification processing including morphing processing so as to correspond to a type of feeling.
- image modification processing including the morphing processing can be performed to slant eyes upwardly.
- A somewhat comical expression as in the example shown in FIG. 6 is more fun and more effective than a change to an overly realistic expression.
- The portion corresponding to a person in a photographed image may be entirely subjected to processing for substitution with an animation image or a computer graphics (CG) image, in particular an animation image or a CG image corresponding to the type of feeling.
- The animation image or CG image used may be a single image determined in accordance with the type of feeling, but preferably a plurality of patterns can be selected for each of a still image and an animation image.
- the animation image and CG image as described above may include a content prepared and registered in advance by a customer, or a content supplied by a company managing a repeater station as described later.
- Conversion patterns are preferably set based on at least one kind of information selected from the expression and gesture of the subject person in the photographed image and the voice contents. It goes without saying that registering a request from the customer is even more preferable.
- The processing as described above enables remarkable enhancement of the amusement aspect of image expression in a photograph, a video, a TV telephone and the like. Further, this method is convenient and does not offend the other party when one does not want to display his or her face in a TV telephone or the like.
- the second embodiment is an application of the image processing method of the second aspect of the present invention to an animation such as that in a cellular TV telephone or a movie editing rather than to a still image such as a photograph.
- an image processing method itself is basically the same as that in the above-mentioned first embodiment.
- image processing software is also incorporated in a microcomputer such as a cellular TV telephone terminal or the like in advance. Then, in a case of a cellular TV telephone, if a registered keyword is detected in conversation during a call, processing for composing a composition image of a mode corresponding to the keyword is performed in the terminal. Then, the composition image is transmitted to a terminal of the other party of the call and displayed.
- This embodiment also relates to an apparatus for performing image processing on a real time basis such as a TV telephone (or a cellular TV telephone). That is, this embodiment is to determine a type of feeling using expression recognition or gesture recognition instead of voice data as accompanying information for determining a type of feeling of a person.
- a table associating images of expressions of emotions such as pleasure, anger, grief and joy with modes corresponding to the expressions is registered for each individual in a cellular TV phone terminal of each person in advance.
- a face is extracted from a photographed image of a caller and a type of expression is identified by the pattern matching in the cellular TV telephone terminal. If an expression that coincides with a specific expression registered in advance is detected, image processing of an image processing pattern of a mode corresponding to the registered expression is performed, and a processed image is sent to a terminal of the other party of the call and displayed. Alternatively, similar processing is performed if it is detected that the caller gestures in a specific way by the gesture recognition.
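The per-individual expression matching can be sketched as follows. Cosine similarity over face feature vectors is used here as a stand-in for the pattern matching the text mentions; the feature representation, the similarity measure and the threshold are all assumptions.

```python
import math

def match_expression(face_vec, registered, threshold=0.9):
    """Identify the type of expression by matching a feature vector of the
    extracted face against expressions registered per individual in the
    terminal.

    registered: dict mapping mode name -> registered feature vector.
    Returns the best-matching mode, or None if no registered expression
    coincides closely enough (below the threshold).
    """
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0

    best_mode, best_vec = max(registered.items(),
                              key=lambda kv: cos(face_vec, kv[1]))
    return best_mode if cos(face_vec, best_vec) >= threshold else None
```

When a mode is returned, the image processing pattern of that mode is applied and the processed image is sent to the other party's terminal, exactly as with the keyword-driven path.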
- This embodiment is to perform image composition processing on a repeater station side of the TV telephone, although it also relates to a TV telephone or a cellular TV telephone.
- a cellular TV telephone system in this embodiment is schematically shown in FIG. 7 .
- Face image data (an expression for each mode) for determining the mode corresponding to a type of feeling, voice data (a keyword for drawing each mode), and image composition patterns for each mode are registered in a database 84 of a repeater station 80 of a communication provider through cellular TV telephone terminals 60 and 70 .
- a photographed image of A photographed by an image sensor 62 and voices of A and a photographed image of B photographed by an image sensor 72 and voices of B are transmitted to the repeater station 80 from the A's cellular TV telephone terminal 60 and the B's cellular TV telephone terminal 70 , respectively, as shown in FIG. 7 .
- Faces are always extracted from the photographed images of A and B and matched with the registered expressions, and at the same time, it is checked whether registered keywords occur in the voices during the call.
- processing for composing, for example, a sweat mark is applied to the photographed image of A, and the processed image is transmitted to the B's terminal 70 .
- a reduced-size image of the processed image of A may be transmitted to the A's terminal 60 for confirmation.
- The face image of B is displayed on a display screen 64 of the A's terminal 60 , and the processed image of A is displayed, for confirmation, on a display frame 66 provided at the corner of the display screen 64 .
- the processed image of A is similarly displayed on a display screen 74 of the B's terminal 70 and, at the same time, a reduced-size processed image of B transmitted from the repeater station 80 is displayed on a display frame 76 for confirmation at the corner of the display screen 74 .
- face image data, voice data, image composition patterns and the like of each user may be registered in a terminal of a user to perform the process from the mode detection to the image composition processing on the cellular TV telephone terminal side.
- the above-mentioned data may be registered in both the repeater station and the terminal such that the processing can be performed in either of them.
- the image composition processing may be applied not only to voice and image data to be transmitted but also to received voice and image data.
- the composition processing with respect to an image received from B may be performed in the A's terminal.
- amusement aspect is enhanced by adding a pattern that the receiver A desires.
- This embodiment is to correct a position of a composition pattern if a misregistration occurs when the image composition processing is performed on a real time basis.
- a face in the photographed image is designated by an electronic pen or the like.
- the outline of the face may be circled using the electronic pen, or the eyes may be connected by a line.
- a position of the mouth or the like may be designated.
- A position to which the composition pattern should be shifted in parallel, an adjustment amount of its size, or the like may be designated by key operation.
- By such a designation, the face position candidate area is automatically corrected to coincide with the original composition pattern, and the face extraction processing is executed again.
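The misregistration correction can be sketched as a parallel shift derived from the user's designation, for example the center of the circle drawn around the face with the electronic pen. A pure translation is an assumption; the text also allows a size adjustment by key operation.

```python
def correct_registration(pattern_pos, detected_face_center, designated_face_center):
    """Shift a composition pattern in parallel by the offset between the
    automatically detected face position and the position the user
    designated (e.g. by circling the face with an electronic pen), so the
    pattern lands where it was originally laid out.

    All arguments are (x, y) coordinates; returns the corrected (x, y)
    position of the pattern.
    """
    dx = designated_face_center[0] - detected_face_center[0]
    dy = designated_face_center[1] - detected_face_center[1]
    return (pattern_pos[0] + dx, pattern_pos[1] + dy)
```

In an animation, the same offset could be re-estimated frame by frame so that the pattern keeps following the face after the one-time manual correction.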
- The position and size of a composition pattern are corrected in accordance with the movement of a face on a real time basis even in the case of an animation, as in a cellular TV telephone. Therefore, since the composition pattern automatically follows the face and is displayed at a predetermined position in the displayed image, the amusement aspect of an image display in an image displaying medium is enhanced.
- the image processing method according to the third aspect of the present invention applies the image processing method according to the second aspect of the present invention as described above to a TV image captured in a personal computer.
- the TV image is captured in the personal computer and subjected to composition processing, substitution processing, image modification processing and color/density modification processing as described above.
- a digital TV may have a receiver including the above-mentioned composition processing function and other functions so that a customer can further set composition patterns. Amusement aspect can be thus enhanced still more.
- the image processing methods according to the second and third aspects of the present invention are basically configured as described above.
- An image processing method in accordance with a fourth aspect of the present invention will now be described with reference to FIGS. 8 and 9 .
- the image processing method in accordance with a fourth aspect of the present invention is basically to register in advance an area image in a specific area of a photographed image or an image characteristic amount and to compose on a corresponding area of the photographed image or adjust a density and a color tone by using the area image or image characteristic amount registered in advance.
- reference face images are registered in advance as face images of a subject person in a photographed image and include an image of a preferred made-up face of the person, an image of a favorite face of the person and image characteristic amounts thereof, and a face image in a full-faced state in which a photographing direction is coincident with a line of sight.
- the face of the person in the photographed image is corrected, composed or converted to the reference face images registered in advance including the image of the preferred made-up face of the person, the image of the favorite face of the person, and the face image in a full-faced state in which the photographing direction is coincident with the line of sight.
- the image characteristic amounts registered in advance are employed to correct or adjust density or color tone of the face so that the finished face image has the preferred made-up face or favorite face of the person.
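The density/color correction toward registered characteristic amounts can be sketched as below. A per-channel offset between the measured face mean and the registered reference mean, with an optional strength factor, is one plausible realization; the text leaves the exact correction rule open.

```python
def adjust_toward_reference(face_mean, reference_mean, strength=1.0):
    """Adjust the mean density/color of an extracted face region toward
    the image characteristic amounts registered for the person's preferred
    made-up or favorite face.

    face_mean, reference_mean: (R, G, B) mean values of the face region.
    strength: fraction of the gap to close (1.0 = move fully to the
    reference); an illustrative assumption.
    Returns the (R, G, B) offsets to add to every pixel of the region.
    """
    return tuple(strength * (ref - cur)
                 for cur, ref in zip(face_mean, reference_mean))
```

Applying the returned offsets only inside the extracted face region leaves the rest of the photographed image untouched while pulling the face toward the registered preference.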
- In this manner, the face image can be finished to match the reference face images, such as the image of the preferred made-up face or the image of the favorite face, and the unnatural feeling due to noncoincidence of the line of sight can be removed.
- objects to which the fourth aspect of the present invention is applied range widely from a photograph (still image) to real time image display such as a video (animation) and a TV telephone, or the like.
- the first embodiment is for applying predetermined image processing to an image photographed with a digital camera or the like by using registration patterns registered in advance.
- FIG. 8 is a block diagram schematically showing a digital photoprinter including an image processing apparatus that implements the image processing method in accordance with the first embodiment of the fourth aspect of the present invention.
- a digital photoprinter 90 shown in FIG. 8 mainly includes a photographed image acquiring device 92 , an image processing apparatus 94 , a database 96 and an image recording apparatus 16 .
- the image recording apparatus 16 , the operation system 18 and the monitor 20 similar to those in the photoprinter 50 shown in FIG. 4 can be used.
- The photographed image acquiring device 92 reads photographed image data from a recording medium on which the image data are recorded by an image photographing device such as a digital camera.
- the image processing apparatus 94 implements the image processing method in accordance with this aspect of the present invention and other various kinds of image processing. More specifically, the image processing apparatus 94 acquires, extracts or prepares reference face images to be registered in advance in the database 96 from the photographing image data captured by the photographed image acquiring device 92 ; selects any of the reference face images registered in advance in the database 96 ; and corrects, composes or converts the face image in the photographed image by using the selected reference face image.
- The database 96 is used to register in advance the reference face images including an image of a preferred made-up face of the subject person in the photographed image, an image of a favorite face of the person and image characteristic amounts thereof, and a face image in a full-faced state in which a photographing direction is coincident with a line of sight (hereinafter also referred to as “face image in which line of sight is toward camera”).
- Other functions of the image processing apparatus 94 and functions of the operation system 18 including the keyboard 18 a and the mouse 18 b , the monitor 20 , and the image recording apparatus 16 are basically the same as those in the digital photoprinter 50 shown in FIG. 4 . Therefore, the description will be omitted.
- Image processing of the first embodiment will be hereinafter described with reference to a flow chart of FIG. 9 .
- In step 300, a face is photographed in advance in the most preferable situation by a photographing device such as a digital camera, and the photographed image of the face is acquired as a photographed image for registration by the photographed image acquiring device 92.
- In step 310, a face extraction algorithm is applied in the image processing apparatus 94 to extract a face region for a reference face image from the photographed face image acquired by the photographed image acquiring device 92.
- Image characteristic amounts of the reference face image may be calculated in this process. It should be noted that the calculation of the image characteristic amounts of the reference face image, and hence the extracting process in step 310, may be omitted when the whole of the photographed face image is used as the reference face image. When the extracting process is omitted in step 310, calculation of the image characteristic amounts of the reference face image may be performed in the preceding step 300, in the subsequent step 320, or in the correction process of step 360 to be described later.
- In step 320, the whole of the photographed face image, or the face region extracted therefrom, is registered in the database 96 as the reference face image. Pre-processing is thus finished.
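The pre-processing of steps 300 to 320 can be sketched as follows. This is a minimal illustration only: the patent does not specify the face extraction algorithm, so it is stubbed out as a fixed centered crop, the image characteristic amounts are reduced to a mean color, and the database 96 is represented by a plain dictionary keyed by user ID. All function names and data layouts are assumptions.

```python
import numpy as np

reference_db = {}  # stand-in for the "database 96": user ID -> registered data

def extract_face_region(image):
    """Stand-in for the face extraction algorithm (step 310):
    return a centered crop covering the middle half of the frame."""
    h, w = image.shape[:2]
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def characteristic_amounts(region):
    """Image characteristic amounts, reduced here to the mean R, G, B."""
    return region.reshape(-1, region.shape[-1]).mean(axis=0)

def register_reference_face(user_id, photographed_image):
    """Steps 300-320: extract the face region and register it in the
    database, together with its characteristic amounts, under the ID."""
    face = extract_face_region(photographed_image)
    reference_db[user_id] = {
        "face": face,
        "amounts": characteristic_amounts(face),
    }

# Example: register a synthetic 8x8 RGB frame for user "0001".
frame = np.full((8, 8, 3), 128, dtype=np.uint8)
register_reference_face("0001", frame)
```

In a real system the crop would be replaced by an actual face detector and the characteristic amounts by whatever statistics the correction step needs.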
- As the reference face image, any type of face may be used, including a neatly made-up face, a face with hair set in a beauty salon or barbershop, a shaven face and a face with a set mustache.
- a plurality of reference face images prepared for a specific person may be changed in use for example between morning and night, for each season or for each party in the case of displaying in a TV telephone or a cellular telephone.
- Each portion constituting the face image may be registered as the reference pattern in the reference face image. Namely, in the extracting process in step 310 , an area which it is desired to use for correction or composition may be automatically extracted or manually designated to register the thus extracted or designated area as the reference pattern.
- Examples of the reference pattern include the respective portions constituting the face, such as eyes, eyebrows, cheeks, lips, nose, ears, hair and mustache, and accessories such as earrings, a hat (including a bandanna and a headband) and glasses (including sunglasses).
- When an area is designated, the face extraction algorithm can be applied to automatically recognize which portion of the face the designated area indicates.
- Alternatively, area setting may be performed while designating the respective portions one by one.
- In step 330, the photographed image is acquired by the photographed image acquiring device 92.
- The process of acquiring the photographed image in step 330 precedes, for example, transmission of the photographed image to, or display of it on, the monitor screen of a TV telephone or a cellular telephone.
- an owner or an exclusive user of the TV telephone or the cellular telephone is very often a subject person in the photographed image.
- In other words, the subject person is specified in many cases, and it is therefore preferable that information such as an ID can be automatically acquired.
- To this end, each user is given an ID so that the ID of the subject person can be acquired in step 330 together with the photographed image by the photographed image acquiring device 92.
- In step 340, the image processing apparatus 94 applies the face extraction algorithm to the photographed image acquired by the photographed image acquiring device 92 to extract the face region of the subject person from the photographed image as the face image.
- In this extraction, the contour of the face is first extracted, followed by area restriction for the respective portions, after which the areas of the respective portions can be determined finally by means of configuration pattern matching.
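The coarse-to-fine extraction described above — restricting the search to an area and then locating a portion by configuration pattern matching — might be sketched as follows. The matching is reduced here to a brute-force sum-of-squared-differences template scan over a restricted sub-area; the function names and toy data are illustrative, not the algorithm the patent assumes.

```python
import numpy as np

def match_template(area, template):
    """Return the (row, col) offset inside `area` where `template`
    fits best, by exhaustive sum-of-squared-differences search."""
    ah, aw = area.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for r in range(ah - th + 1):
        for c in range(aw - tw + 1):
            patch = area[r:r + th, c:c + tw]
            ssd = float(((patch - template) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy example: a 10x10 grayscale frame with a bright 2x2 "portion".
frame = np.zeros((10, 10))
frame[6:8, 3:5] = 1.0
template = np.ones((2, 2))

# Area restriction: search only the lower half of the frame,
# then add back the offset of the restricted area.
area_top = 5
row, col = match_template(frame[area_top:, :], template)
portion_pos = (area_top + row, col)
```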
- When photographing is started with a photographing device such as a digital camera, the subject person takes a posture so that his or her face comes to (or is located in) the position within a guide such as a face frame displayed on a viewfinder or display device. Extraction of the face image by means of the configuration pattern matching is thus facilitated, and once the face of the subject has been caught, it can be followed by means of local matching between photographing frames even in a different photographing frame.
- In step 350, a registered reference face image of the person which the image processing apparatus 94 has searched for or specified in the database 96 based on the above-mentioned ID or the like, a reference pattern and, optionally, image characteristic amounts thereof are selected and called. It is also possible to make new combinations by selecting from different reference images for the respective portions of the face image of the specific person as described above. For example, a newly combined face image including a first reference face image for the eyes, a third reference face image for the eyebrows and an eighth reference face image for earrings can be made.
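The per-portion selection and recombination just described might look like the following sketch, in which the database is again a plain dictionary, several reference patterns are registered per portion, and a combination is picked by index. The portion names, dictionary layout and pattern labels are illustrative assumptions.

```python
# Stand-in for the database 96: per user ID, several registered
# reference patterns for each facial portion.
reference_db = {
    "0001": {
        "eyes":     ["eyes_v1", "eyes_v2"],
        "eyebrows": ["brows_v1", "brows_v2", "brows_v3"],
        "earrings": ["ring_v1"],
    }
}

def select_reference_patterns(user_id, choices):
    """Step 350 sketch: look up a person's registered data by ID and
    return the chosen reference pattern per portion, e.g.
    choices={"eyebrows": 2} picks the third registered eyebrow pattern."""
    person = reference_db[user_id]
    return {portion: person[portion][index]
            for portion, index in choices.items()}

# "First reference for eyes, third for eyebrows, first for earrings."
combo = select_reference_patterns(
    "0001", {"eyes": 0, "eyebrows": 2, "earrings": 0})
```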
- Either of steps 340 and 350 may precede the other, or both may be performed simultaneously.
- In step 360, the face image extracted in step 340 and the areas of the respective portions constituting the face image are corrected, composed or substituted by using the registered reference face image of the specific person, the reference pattern and the image characteristic amounts thereof selected in step 350.
- the registered reference face image may be composed on or substituted for the entire face image, or a registered reference pattern image may be used for each portion of the face to perform correction, composition or substitution. Alternatively, image characteristic amounts of registered reference pattern images may be used to perform correction.
- For example, cheeks and lips are corrected to have the color tone of the reference pattern of the corresponding portions.
- Since lips move, it is preferable to correct only the color tone of the lips after separating the lip area from the skin area by color tone on a pixel basis.
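The pixel-basis lip correction might be sketched as follows, using a crude red-excess test to separate lip pixels from skin and blending only those pixels toward a reference color tone. The threshold and blend weight are illustrative assumptions; the patent leaves the separation criterion open.

```python
import numpy as np

def correct_lip_tone(region, reference_rgb, blend=0.5, red_excess=30):
    """Blend pixels whose red channel exceeds the green channel by more
    than `red_excess` (a crude lip mask) toward `reference_rgb`."""
    region = region.astype(np.float64)
    mask = (region[..., 0] - region[..., 1]) > red_excess
    out = region.copy()
    out[mask] = (1 - blend) * region[mask] + blend * np.asarray(reference_rgb, float)
    return out.astype(np.uint8), mask

# Toy example: skin pixels (200,170,150) with one "lip" pixel (220,120,120).
patch = np.full((2, 2, 3), (200, 170, 150), dtype=np.uint8)
patch[1, 1] = (220, 120, 120)
corrected, lip_mask = correct_lip_tone(patch, reference_rgb=(180, 60, 80))
```

Only the masked pixel moves toward the reference tone; the skin pixels are left untouched, which is the point of separating the areas before correction.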
- a portion surrounding the composition area is preferably smoothed to have a blurred boundary.
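The blurred-boundary composition can be illustrated with an alpha mask that is 1 inside the composition area, 0 far outside it, and fades linearly across a feather band; the rectangular area shape and the feather width are illustrative choices, not prescribed by the method.

```python
import numpy as np

def feathered_alpha(shape, top, bottom, left, right, feather=2):
    """Alpha mask for a rectangular composition area whose edges fade
    out over `feather` pixels, from the distance to the rectangle."""
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    # Distance outside the rectangle along each axis (0 if inside).
    dr = np.maximum(np.maximum(top - rows, rows - (bottom - 1)), 0)
    dc = np.maximum(np.maximum(left - cols, cols - (right - 1)), 0)
    dist = np.sqrt(dr ** 2 + dc ** 2)
    return np.clip(1.0 - dist / feather, 0.0, 1.0)

def compose(base, overlay, alpha):
    """Per-pixel blend: overlay where alpha=1, base where alpha=0."""
    a = alpha[..., None]
    return (a * overlay + (1 - a) * base).astype(base.dtype)

# Compose a flat overlay onto a black base through a feathered mask.
base = np.zeros((8, 8, 3))
overlay = np.full((8, 8, 3), 100.0)
alpha = feathered_alpha((8, 8), 3, 5, 3, 5, feather=2)
result = compose(base, overlay, alpha)
```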
- The photographed image in which the face image has been corrected in step 360 is outputted from the image processing apparatus 94 as an output image.
- The output image may be outputted from the printer 16 as a photographic print or displayed on the monitor 20, or outputted to a TV telephone or cellular telephone terminal (more specifically, its image processor; see FIG. 7) so as to be transmitted to a TV telephone or cellular telephone terminal on the opposite party side for displaying on a display screen.
- A clothing portion (an area set under the face) or a background (a region other than the face area) of the subject person may be overlaid with decoration patterns so that intimate apparel and nightclothes such as pajamas, or the interior of a room, can be hidden without giving an unpleasant feeling to others.
- the second embodiment is for applying predetermined image processing to an image photographed with a digital camera or the like by using registration patterns registered in advance as in the first embodiment, but is different from the first embodiment in that the registration pattern registered in advance is a face image in which a photographing direction is coincident with a line of sight of a subject person (face image in which line of sight is toward camera). Therefore, the second embodiment will be also described below with reference to FIGS. 8 and 9 .
- a photographed image is taken with a photographing device such as a digital camera in a state in which a subject looks at the camera, and then recorded.
- the recorded photographed image is acquired by the photographed image acquiring device 92 (see FIG. 8 ) as a photographed image for registration.
- In step 310, the image processing apparatus 94 applies the face extraction algorithm to the photographed image for registration acquired by the photographed image acquiring device 92 to extract an area of eyes as a reference pattern.
- In step 320, the thus extracted area of eyes is registered in the database 96 as a reference pattern. Pre-processing is thus finished.
- In step 330, the photographed image is acquired by the photographed image acquiring device 92.
- The process of acquiring the photographed image in step 330 precedes, for example, transmission of the photographed image to, or display of it on, the monitor screen of a TV telephone or a cellular telephone.
- In step 340, in the communication with such a telephone, the image processing apparatus 94 applies the face extraction algorithm to the photographed image acquired by the photographed image acquiring device 92 to extract the area of eyes of the subject person from the photographed image.
- In step 350, the image processing apparatus 94 selects and calls a registered reference pattern from the database 96. Either of steps 340 and 350 may precede the other, or both may be performed simultaneously.
- In step 360, the area of eyes extracted in step 340 is corrected, composed or substituted by using the registered reference pattern selected in step 350.
- the registered reference pattern image may be used as such for composition.
- The photographed image in which the eyes have been corrected in step 360 so as to look at the camera is outputted from the image processing apparatus 94 as an output image.
- The correction is preferably applied in a stationary state of the photographed image, for example, in a state of an image photographed on startup in which the subject is correctly postured and looking at the monitor. On the other hand, when the subject moves largely, the photographed image can be transmitted as it is in order to prevent unnatural feeling due to the large movement.
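The stationary-state decision might be sketched by simple frame differencing: the registered eye pattern is substituted only while the mean frame-to-frame difference stays below a threshold, and the photographed frame is otherwise passed through unchanged. The threshold value and the frame-differencing test itself are illustrative assumptions; the patent does not fix how the stationary state is determined.

```python
import numpy as np

def is_stationary(prev_frame, frame, threshold=5.0):
    """True when the mean absolute per-pixel difference is small."""
    diff = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    return diff.mean() < threshold

def output_frame(prev_frame, frame, substitute_eyes):
    """Apply eye substitution only in the stationary state; on large
    movement, transmit the photographed frame as it is."""
    if is_stationary(prev_frame, frame):
        return substitute_eyes(frame)
    return frame

# Toy frames: identical frames are stationary; a uniformly bright
# frame against a black one counts as large movement.
still_a = np.zeros((8, 8), dtype=np.uint8)
still_b = still_a.copy()
moved = np.full((8, 8), 80, dtype=np.uint8)

marker = lambda f: f + 1  # stand-in for the eye-substitution step
out_still = output_frame(still_a, still_b, marker)
out_moved = output_frame(still_a, moved, marker)
```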
- The output image may be outputted from the printer 16 as a photographic print or displayed on the monitor 20, or outputted to a TV telephone or cellular telephone terminal so as to be transmitted to a TV telephone or cellular telephone terminal on the opposite party side for displaying on a display screen.
- Alternatively, the photographed image may be directly transmitted to the opposite party side for display on the display screen.
- Composition may also be made as an animation. When a reference pattern is registered, an animation of a photographed image in which the subject looks at the camera, for example a photographed animation in which the subject winks at least once, is recorded.
- By composing this registered animation, composition can be made as an animation irrespective of the actual timing of winking.
- More preferably, winking detection may be performed for synchronization.
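The synchronization idea might be sketched as follows: the registered animation holds an open-eyes frame and a short wink sequence, and when the detector reports a wink in the live feed, the wink sequence is played out; otherwise the open-eyes frame is composed. The frame labels are illustrative and the wink detector itself is left abstract (here it is just a boolean per live frame).

```python
# Registered reference animation: the wink sequence to play on detection.
WINK_SEQUENCE = ["open", "half", "closed", "half"]

def composed_eye_frames(wink_detected_per_frame):
    """For each live frame, return which registered eye frame to
    compose: start the wink sequence on detection, else 'open'."""
    out, pending = [], []
    for detected in wink_detected_per_frame:
        if detected and not pending:
            pending = list(WINK_SEQUENCE)
        out.append(pending.pop(0) if pending else "open")
    return out

# A wink detected at the second live frame triggers the sequence.
frames = composed_eye_frames([False, True, False, False, False, False])
```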
- When a transmission terminal such as a TV telephone or cellular telephone terminal (see FIG. 7) has a high arithmetic capability, various types of correction processing as described in the first and second embodiments of this aspect can be executed in the transmission terminal.
- Alternatively, a reference face image and decoration pattern data may be registered in the repeater station (see FIG. 7) in combination with a customer ID, and the correction processing may be executed in the repeater station.
- the image processing method of the fourth aspect of the present invention is basically configured as described above.
- According to the first aspect of the present invention, an output image on which the individual preference of each customer is reflected can be automatically obtained, and the amusement aspect of photography can be enhanced.
- According to the second aspect of the present invention, since a content that is desired to be emphasized according to a feeling is automatically visualized and represented, in particular in the display of a face of a person in an image, the amusement aspect of image representation such as a photograph, a video, a TV telephone or the like can be significantly enhanced.
- the automatic visualization is convenient and does not offend an opposite party when one does not want to display his or her face in a TV telephone.
- The present invention overcomes the above-mentioned prior art problems so that even persons inexperienced or unskilled in personal computers or image processing software can easily correct images so as to have a preferred made-up face or a favorite face, and can remove unnatural feeling due to noncoincidence of the line of sight.
Abstract
Description
- 1. Field of the Invention
- The present invention relates to an image processing method for converting input image data to output image data by applying image processing to the input image data. More specifically, the invention relates to an image processing method for preparing an output image meeting a request from an individual customer; an image processing method for additionally displaying information corresponding to a feeling of a person on an image displaying medium or the like on which an image of the person is displayed, or performing substitution, modification or adjustment to produce an image corresponding to the feeling; and an image processing method for changing an image of a person to a favorite image of the person or an image having no unnatural feeling.
- 2. Description of the Related Art
- Conventionally, printing of an image photographed on a photographic film (hereinafter referred to as a film) such as a negative film or a reversal film to a photosensitive material (printing paper) has been performed by so-called direct exposure. With the direct exposure, the image on the film is projected on the photosensitive material and the surface of the photosensitive material is exposed to light.
- On the other hand, in recent years, a printing apparatus that utilizes digital exposure, that is, a digital photoprinter, has been put to practical use. The digital photoprinter photoelectrically reads an image recorded on a film, converts the read image to a digital signal, then converts the digital signal to image data for recording by applying various kinds of image processing, and records an image (latent image) by scanning and exposing a photosensitive material by recording light that is modulated according to the image data to have a print (photograph).
- The digital photoprinter can convert an image into digital image data and determine exposure conditions at the time of printing the image by image data processing. Thus, various kinds of image processing can be performed with a high degree of freedom, which is difficult or impossible with the conventional direct exposure, including correction of dropouts or blocked-ups of an image due to back-light, strobe photographing or the like, correction of a color failure or a density failure, correction of under exposure or over exposure, correction of insufficient marginal luminosity, sharpness processing, and compression/expansion processing of density dynamic range (giving a dodging effect by image data processing). Therefore, an extremely high-grade print can be obtained compared with the direct exposure. Moreover, since composition and division of a plurality of images, composition of characters or the like can be performed by the image data processing, a print that is freely edited and/or processed depending on an application can be outputted.
- In addition, since the digital photoprinter not only can output an image as a print but also can supply image data to a computer or the like or can store image data in a recording medium such as a floppy disk, the image data can be utilized for various applications other than a photograph.
- In this way, with the digital photoprinter, it is possible to apply image processing with a higher degree of freedom to an image by the image data processing and to output a print with higher commodity value. Incidentally, it is preferable that an image to be reproduced as a print is an image on which a request of a customer (a person who requests preparation of a print) is reflected as much as possible. In this regard, the applicant has proposed an image processing method of reproducing a finished image that preferably corresponds to a request of a customer in Japanese Patent Application Laid-open No. Hei 11-331570.
- That is, the method is to obtain a reproduced image preferably corresponding to a request of a customer by getting information on the customer relating to image data supplied from an image supplying source, setting image processing conditions according to the information on the customer, and performing image processing based on the image processing conditions.
- In the laid-open patent application, information on a customer refers to an occupation of a customer, a sex of a customer, an age of a customer or the like. In addition, a method of obtaining information on a customer is exemplified by a method with which information on a customer is verbally obtained from the customer when an order of a print is received from the customer, which is communicated to an operator who inputs the information using an operating device such as a mouse, a method with which customer information is written in a customer card and an operator inputs the customer information referring to the customer card when preparing a print, or a method with which customer information is arranged as a database and an operator obtains the customer information from the database.
- In addition, image processing that preferably corresponds to a request of a customer is exemplified by the following processing. In a case in which a film is a reversal film and an occupation is a professional photographer, an image photographed on the film is to be reproduced faithfully, and in a case in which an occupation is not a professional photographer, a photographing failure such as over exposure, under exposure and back-light is remedied by adjusting a color and a density of an image normally. In addition, in a case of a male, a face region is extracted and sharpness is given rather strongly to make gradation prominent and show details, and in a case of a female, a face region is extracted and sharpness is given rather weakly or soft focusing is applied extremely weakly to make gradation less prominent (soft) and to make liver spots, wrinkles, freckles or the like less outstanding.
- However, the conventional image processing method has a problem in that processing is complicated because an operator must input information on a customer. In addition, there is also a problem in that image processing conditions to be set are fixed according to obtained information on a customer, or selection of conditions is limited only to whether the processing is performed or not, and there is no function of setting image processing conditions corresponding to preference of a customer or, more meticulously, of each subject person, thus, image reproduction to which preference of a customer or a subject person is truly reflected cannot be realized.
- As image forming media, there are conventionally a photograph (print) that reproduces a still image and a movie (a film projector and a screen) that reproduces images as an animation. Since the development of the cathode-ray tube (CRT), television sets (TVs) have in recent years spread to virtually all households. Moreover, with remarkable advances of technologies, various image display devices such as a liquid crystal display, a plasma display and an electronic paper have been developed as image forming media.
- Recently, image forming devices have been developed such as a video camera, a digital camera, a digital video movie camera, and a cellular TV telephone, which can capture voices together with images utilizing the above-mentioned image forming media.
- However, although the above-mentioned conventional image forming devices can photograph images and, at the same time, record voices, the captured voice data is simply reproduced as sounds directly. In addition, there is also a problem in that the conventional image forming devices aim principally at reproducing an image as faithfully as possible as it is photographed, and an entertaining aspect of an image is not taken into account at all.
- Further, in the present digital image processing technology, it is possible to adjust density or color tone of an image photographed and captured as digital image data, and image processing such as correction of the photographed image per se and composition or substitution with other images is also possible. Various digital image processing techniques have been thus proposed which include digital image processing technique conventionally performed in the field of photograph for correcting or adjusting a face of a person or a line of sight.
- For example, Japanese Patent Application Laid-open No. 2000-151985 discloses a technique in which portions of an image of a face of a person and adjustment parameters are set to correct the image to have a made-up face. However, ordinary people are not familiar with adjustment of color tone or gradation; such processing is difficult for amateurs and particularly so for a user unfamiliar with a personal computer (hereinafter also referred to as “PC”). In some cases, good adjustment results cannot be obtained, and there is also a problem in that images very often have an unnatural feeling after adjustment.
- Further, Japanese Patent Application Laid-open No. 5-205030 discloses a technique in which an image of eyes in a full-faced state is prepared from a three-dimensional model of an image of a face of a person by computer graphics (hereinafter referred to as “CG”) technology. There is however a problem in that unnatural feeling remains on the image of eyes prepared by the CG technology. There is also a problem in that arithmetic computations enormously increase in quantity because of the preparation from the three-dimensional model depending on the CG technology.
- On the other hand, among the digital image processing mentioned above, simple image processing can be performed with a personal computer, and to this end, various types of image processing software programs are commercially available. Nevertheless, the simple image processing by using these image processing software programs that are commercially available cannot provide a sufficient accuracy to finish an image of a face of a person so as to have a favorite face or to finish the image in a full-faced state by making a line of sight coincident with a photographing direction. There is also a problem in that the operation is difficult for amateurs.
- The present invention has been devised in view of the above drawbacks, and it is a first object of the present invention to provide an image processing method with which a reproduced image, on which preference of each subject person is reflected, can be automatically obtained.
- The present invention has been devised in view of the above drawbacks, and it is a second object of the present invention to provide an image processing method with which amusement aspect in image forming media such as a photograph, a video, a TV telephone and the like can be enhanced by visualizing a content that is desired to be emphasized according to a type of feeling of a person in a photographed image, particularly an image of the person, and forming an image.
- The present invention has been devised in view of the above drawbacks, and it is a third object of the present invention to provide an image processing method with which even inexperienced or unskilled persons in personal computer or image processing software can easily correct images so as to have a preferred made-up face or a favorite face and remove unnatural feeling due to noncoincidence of the line of sight.
- In order to attain the first object described above, the first aspect of the present invention provides an image processing method for applying image processing to inputted image data, comprising the steps of registering predetermined image processing conditions for each specific person in advance; extracting a person in the inputted image data; identifying the extracted person to find if the extracted person is the specific person; and selecting image processing conditions corresponding to the identified specific person to perform the image processing based on the selected image processing conditions.
- Preferably, the extracted person is identified using a face image of the specific person registered in advance or person designation information accompanying a photographed frame.
- Preferably, a plurality of kinds of image processing conditions are set for each specific person as the predetermined image processing conditions to be registered for each specific person in advance.
- Preferably, the image processing is performed by using at least one image processing condition selected from the plurality of kinds of image processing conditions.
- Preferably, it is set whether the image processing under the selected image processing conditions is applied to an image as a whole or applied only to the person or the person and a vicinity of the person.
- In order to attain the second object described above, the second aspect of the present invention provides an image processing method, comprising the steps of determining a type of feeling from types of feeling registered in advance based on at least one kind of information selected from among voice data accompanying a photographed image, an expression of a person extracted from the photographed image, and a gesture of the extracted person; and subjecting the photographed image to image processing which applies an image processing pattern corresponding to the determined type of feeling among image processing patterns set in advance.
- Preferably, the image processing pattern is set in association with the type of feeling, and the image processing to which the image processing pattern is applied is at least one processing selected from among composition processing for composing a specified mark corresponding to the type of feeling, substitution processing for substituting with an animation image or a computer graphics image corresponding to the type of feeling, image modification processing performed on the photographed image in correspondence with the type of feeling, and processing for changing a density and a color of the photographed image in correspondence with the type of feeling.
- Preferably, the composition processing is processing for composing the specified mark at a predetermined position in the photographed image or at a predetermined or relative position with respect to the person extracted in advance or during the composition processing from the photographed image, and in a predetermined or relative size and a predetermined or relative orientation with respect to the photographed image or the extracted person.
- Preferably, the substitution processing is processing for substituting a specified portion of the person extracted in advance or during the substitution processing from the photographed image with the animation image or the computer graphics image.
- Preferably, the photographed image is a photographed image by an image photographing device with a recording function, and the image processing pattern is registered in the image photographing device with the recording function in advance, and the image processing to which the image processing pattern is applied is performed by the image photographing device with the recording function.
- Preferably, the image processing to which the image processing pattern is applied is performed on a lab side that receives image photographing information including the voice data recorded by the image photographing device with the recording function.
- Preferably, the photographed image is a photographed image by a telephone call device with a photographing function, and the image processing to which the image processing pattern corresponding to the type of feeling of the person is applied is performed on the photographed image.
- Preferably, the image processing pattern is registered in the telephone call device with the photographing function in advance, and the image processing is performed by the telephone call device with the photographing function to transmit a processed image to a terminal on an opposite party side.
- Preferably, the image processing pattern is registered in a repeater station of the telephone call device with the photographing function in advance, and the image processing is performed in the repeater station to transmit a processed image to one terminal in a connected telephone line.
- Preferably, the image processing pattern is registered in the telephone call device with the photographing function in advance, and the image processing to which the image processing pattern is applied is performed by the telephone call device with the photographing function on an image that was photographed by a terminal on an opposite party side and received by the telephone call device with the photographing function.
- Preferably, if a specified mark corresponding to the type of feeling or a composing position of the mark is wrong in the image processing to which the image processing pattern is applied, the mark corresponding to the type of feeling, the composing position of the mark and a size or an orientation of the mark can be corrected.
- In order to attain the second object described above, the third aspect of the present invention provides an image processing method comprising the steps of capturing a television image in a personal computer; and performing image processing to which an image processing pattern is set in advance on the captured television image in the personal computer.
- In order to attain the third object described above, the fourth aspect of the present invention provides an image processing method comprising the steps of registering in advance an area image in a specific area of an image or an image characteristic amount; and composing on a corresponding area of a photographed image or adjusting a density and a color tone by using the area image or image characteristic amount registered in advance.
- Preferably, the corresponding area is extracted from the photographed image in accordance with the area image or the image characteristic amount registered in advance.
- Preferably, the specific area is at least one of a face of a person, at least one portion constituting the face of the person, an accessory that the person wears and a background.
- Preferably, the specific area is a face of a person, and the area image registered in advance is an image of a made-up face or best face of the person.
- Preferably, the specific area is an area of eyes constituting a face of a person, and determination is made as to whether the person as a subject in the photographed image is in a stationary state, and when the person is in the stationary state, the area image in the specific area registered in advance is composed on the area of eyes constituting the face of the person.
- Preferably, the area image registered in advance is an image of the area of eyes in which a line of sight of the person is coincident with a photographing direction.
- In the accompanying drawings:
-
FIG. 1 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus that implements an image processing method in accordance with a first aspect of the present invention; -
FIG. 2 is an explanatory illustration showing an example of registration data used in the present invention; -
FIG. 3 is a flow chart showing operations of a first embodiment of the first aspect of the present invention; -
FIG. 4 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus implementing an image processing method in accordance with a second aspect of the present invention; -
FIG. 5 is a flow chart showing a flow of processing of an image processing method in accordance with a first embodiment of the second aspect of the present invention; -
FIG. 6 is an explanatory illustration showing an example of an image processing pattern; -
FIG. 7 is an explanatory illustration schematically showing a cellular TV telephone system in accordance with a fourth embodiment of the second aspect of the present invention; -
FIG. 8 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus implementing an image processing method in accordance with a fourth aspect of the present invention; and -
FIG. 9 is a flow chart showing a flow of processing of image processing methods in accordance with first and second embodiments of the fourth aspect of the present invention. - An image processing method in accordance with the present invention will be hereinafter described in detail based on preferred embodiments of the present invention shown in the accompanying drawings.
- In the following description of the embodiments, reference is made to drawing figures. Like reference numerals used throughout the several figures refer to like or corresponding parts.
- An image processing method in accordance with a first aspect of the present invention will be described first with reference to FIGS. 1 to 3.
-
FIG. 1 is a block diagram schematically showing an example of a digital photoprinter including an image processing apparatus that implements an image processing method of automatically obtaining a reproduced image, on which preference of an individual customer is reflected, in accordance with a first aspect of the present invention. - A digital photoprinter (hereinafter referred to as a photoprinter) 10 shown in
FIG. 1 includes a scanner (image reading apparatus) 12 for photoelectrically reading an image photographed on a film F, an image processing apparatus 14 for performing image processing such as electronic magnification processing of image data read by the scanner 12, edge detection and sharpness emphasis of image data and smoothing processing (granularity restraining), or operation, control and the like of the entire photoprinter 10, and an image recording apparatus 16 for applying image exposure and developing processing to a photosensitive material (printing paper) by a light beam that is modulated according to the image data outputted from the image processing apparatus 14 to output a (finished) image as a print. - In addition, an
operation system 18 including a keyboard 18a and a mouse 18b for inputting a selection or an instruction of input, setting and processing of various conditions, an instruction of color/density correction, or the like, and a monitor 20 for displaying the image read by the scanner 12, various operational instructions, a setting/registration screen of various conditions, and the like are connected to the image processing apparatus 14. - The
scanner 12 is an apparatus for photoelectrically reading an image photographed on a film F or the like one frame after another, which includes a light source 22, a variable diaphragm 24, a diffusion box 26 for equalizing reading light incident on the film F in the surface direction of the film F, a carrier 28 of the film F, an imaging lens unit 30, an image sensor 32 having a three-line CCD sensor dealing with reading of each color image density of R (red), G (green) and B (blue), an amplifier 33 and an A/D (analog/digital) converter 34. - In the
photoprinter 10, the carrier 28, which is dedicated for use with and detachably attachable to the body of the scanner 12, is prepared according to a type and a size of the film F, such as a film of the Advanced Photo System or a negative (or reversal) film of the 135 size, and a form of the film, such as strips and slides. Thus, the photoprinter 10 can cope with various kinds of films and processing by replacing the carrier 28. An image (frame) that is photographed on a film and is used for preparing a print is conveyed to a predetermined reading position by the carrier 28. - In addition, as is well known, a magnetic recording medium is formed in a film of the Advanced Photo System, on which a cartridge ID, a film size, an ISO sensitivity, and the like are recorded. Further, various data such as a date and time of photographing or development, an exposure level, and a type of a camera or a developing machine can be recorded at the time of photographing, development or the like. A reader of this magnetic information is disposed on the
carrier 28 dealing with a film (cartridge) of the Advanced Photo System, which reads the magnetic information when the film is conveyed to a reading position. The various information thus read is sent to the image processing apparatus 14. - In the
scanner 12 as described above, when an image photographed on the film F is read, uniform reading light, which is irradiated from the light source 22 and whose amount is adjusted by the variable diaphragm 24 and the diffusion box 26, is incident on and transmits through the film F located in the predetermined reading position by the carrier 28. A projected light bearing the image photographed on the film F is thereby obtained. - Further, a color image signal is not limited to a signal inputted by reading light that has transmitted through a film, but a reflected original or an image photographed by a digital camera may be used. That is, an image (digital image signal) can be inputted in the
image processing apparatus 14 not only from the scanner 12 for reading an image of a film, but also from an image supplying source R, such as image photographing devices (e.g., a digital camera and a digital video camera), an image reading apparatus for reading an image of a reflected original, communication networks such as a LAN (Local Area Network) and a computer communication network, and a recording medium such as a memory card (e.g., smart media) or an MO (magneto-optical recording medium). - The illustrated
carrier 28 deals with an elongated film F (strips) such as a 24-exposure film of the 135 size and a cartridge of the Advanced Photo System. - The film F is placed in a reading position by the
carrier 28, on which reading light is irradiated while it is conveyed in a sub-scanning direction perpendicular to a main scanning direction that is the extending direction of the three-line CCD sensor of R, G and B. As a result, the film F is slit-scanned two-dimensionally and an image of each frame photographed on the film F is read. - Projected light of the film F is imaged on a light-receiving surface of the
image sensor 32 by the imaging lens unit 30. - Each output signal of R, G and B outputted from the
image sensor 32 is amplified by the amplifier 33 and sent to the A/D converter 34. In the A/D converter 34, each output signal is converted to R, G and B digital image data of, for example, 12 bits, and then outputted to the image processing apparatus 14. - Further, in the
scanner 12, image reading is performed twice, i.e., pre-scan for reading an image with low resolution (first image reading) and fine scan for obtaining image data of an output image (second image reading), when an image photographed on the film F is read. - Here, the pre-scan is performed under pre-scan reading conditions set in advance such that the
scanner 12 can read all the images on the film F, which are objects of the scanner 12, while the image sensor 32 does not saturate. - On the other hand, the fine scan is performed under reading conditions of the fine scan set for each frame such that the
image sensor 32 saturates at a density slightly lower than a lowest density of the image (frame). Further, output image signals of the pre-scan or the fine scan are basically similar image data except that resolutions and output image signal levels are different. - Further, the
scanner 12 used in the photoprinter 10 is not limited to a scanner that performs such slit-scan reading, but may be a scanner that performs plane reading for reading the entire surface of a film image of one frame at a time. - In this case, the
scanner 12 uses an area sensor such as an area CCD sensor, and an inserting device for the R, G and B color filters is provided between the light source 22 and the film F. A color filter is inserted in the optical path of the light irradiated from the light source 22, and the reading light that has transmitted through the color filter is irradiated on the entire surface of the film F, causing the transmitted light to focus on the area CCD sensor so that the entire image of the film is read. The scanner 12 separates an image photographed on the film F into the three primary colors by inserting each of the R, G and B color filters in turn and repeating this reading. - As described before, the digital image data signal outputted from the
scanner 12 is outputted to the image processing apparatus 14 that implements the image processing method in accordance with this aspect of the present invention. - In order to first correct dispersion of sensitivities for each pixel of the R, G and B digital image data and dark current due to the CCD sensor with respect to the image signals of R, G and B inputted in the
image processing apparatus 14 from the scanner 12, data correction of the read image data, such as DC offset correction, dark state correction, defective image correction and shading correction, is performed. Thereafter, the image data is subjected to logarithmic conversion processing and gradation conversion, and is converted into digital image density data. - The pre-scan image data and the fine scan image data in the digital image density data are separately memorized (stored). The pre-scan image data is subjected to predetermined image processing and displayed on the
monitor 20. In addition, a density histogram is prepared and image characteristic amounts such as an average density, an LATD (large area transmission density), a highlight (minimum density) and a shadow (maximum density) are calculated from the pre-scan image data to set the reading conditions and image processing conditions of the fine scan. In this aspect of the present invention, the image processing conditions are set such that the individual preference of each customer is reflected in them, as described later. Image processing is applied to the fine scan image data according to the set image processing conditions, and the result is outputted on color paper from the image recording apparatus 16 as an optimal, high quality image at a density, gradation and tone that are desired for a color print.
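The pre-scan analysis described above can be sketched as follows. This is a minimal illustration assuming the pre-scan data is available as a flat list of density values; the function name, bin count and return keys are invented for the example and are not part of the specification:

```python
def prescan_statistics(densities):
    """Compute simple image characteristic amounts from pre-scan density data.

    `densities` is a flat list of per-pixel density values (an illustrative
    stand-in for the pre-scan image data described above).
    """
    # Density histogram over 256 equal-width bins between min and max.
    lo, hi = min(densities), max(densities)
    span = (hi - lo) or 1.0
    histogram = [0] * 256
    for d in densities:
        bin_index = min(int((d - lo) / span * 256), 255)
        histogram[bin_index] += 1

    return {
        "average_density": sum(densities) / len(densities),
        "highlight": lo,   # highlight = minimum density
        "shadow": hi,      # shadow = maximum density
        "histogram": histogram,
    }
```

Statistics such as these are what the fine-scan reading conditions and image processing conditions would then be derived from.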
- Operations of a first embodiment of the first aspect of the present invention will be hereinafter described.
- As described above, the image processing according to individual preference of each customer is executed in the
image processing apparatus 14, together with processing such as conversion of density, color and gradation of image data, conversion of chroma and electronic magnification, with respect to the fine scan image data that is read under the image reading conditions set based on the pre-scan image data.
- First, as a precondition, data such as individual identification information and image processing conditions desired by each individual (request processing information) is registered in a lab in advance in order to identify a person in an image and execute image processing according to preference of the specific person.
- An example of registration data is shown in
FIG. 2. The example shown in FIG. 2 is a case in which data of all the members of a family is registered.
- Face image data may be captured by designating, for example, a face area in an already ordered photographed scene. In addition, a plurality of patterns for each person may be set and registered in order to increase accuracy of identification.
- In addition, as examples of request processing, there are adjustment of a density/color tint of a face, conversion of a color of hair, eyes, rouge or the like to a designated color, white finish of teeth, a special effect, slim body finish, image composition and the like.
- Here, as adjustment of a density/color tint of a face, a mode for changing skin to white, a mode for erasing wrinkles, a suntan mode and the like can be considered. As a special effect, there are use of a cloth filter, soft focus finish and the like. As image composition, there are frame decoration, composition of specific characters, special make-up and the like.
- A plurality of kinds of the above-mentioned request processing may be set for each person, and some selected kinds of request processing out of the plurality of kinds of the request processing may be combined to be applied to image processing of a specific person.
- Moreover, depending on processing contents, conditions such as whether the processing is applied to the entire image, or applied only to a pertinent person or only to the person and the vicinity of the person are set.
- Image processing on which individual preference of each customer is reflected will be hereinafter described with reference to a flow chart of
FIG. 3 . - In
step 110, a face region of a person is extracted. An extracting method of a face region is not specifically limited but there are a variety of methods. It is sufficient if a publicly known specific part extracting method (extraction algorithm) is used. - Such a specific part extracting method is exemplified by a method with which a plurality of different specific part (major part) extracting methods, such as a method of extracting a specific color, a method of extracting a specific shape pattern, and a method of removing a region that is estimated to be equivalent to a background disclosed in Japanese Patent Application Laid-open No. Hei 9-138470, are evaluated to define weights in advance, a specific part is extracted by each extracting device, the extracted specific part is weighted by the defined weight, and the major part is determined and extracted according to a result of the weighting.
- In addition, another specific part extracting method is exemplified by a method with which densities or luminance of a plurality of points in an image are measured to find variations among the points, a point with a variation equal to or more than a predetermined value is set as a standard point, then a retrieval area and a retrieval direction pattern are set using a variation or the like of a density or the like within a predetermined range from the standard point to retrieve a part with a variation of a density or the like in the direction indicated by the retrieval direction pattern within the retrieval area, and subsequently the retrieval is repeated with this part as a standard point to extract a specific part by connecting retrieved/set standard points as disclosed in Japanese Patent Application Laid-open No. Hei 9-138471.
- In this aspect of the present invention, specific part extracting methods disclosed in each of Japanese Patent Application Laid-open No. Hei 4-346333, 5-158164, 5-165120, 6-160993, 8-122944, 8-184925, 9-101579, 9-146194, 9-197575 and the like are preferably available other than the above-mentioned methods.
- When a face is extracted, a person is identified and a person in the image is specified in
step 120. - After normalizing sizes, matching is performed with respect to the extracted face image with registered face images as a template to find a coinciding degree, and a person is identified. At this point, if information indicating the person in the image exists as information relating to a comment accompanying the image, the information may be utilized for identification of a person.
- If a person is successfully specified as a result of the identification of a person, request processing for the registered person is executed with respect to the specific person in
step 130. - In this way, according to this embodiment, a finished image according to individual preference of each customer can be automatically obtained. For example, an image of the father is processed to be a suntanned face, soft focus and wrinkle erasing processing are applied to an image of the mother, and white skin and slim body finish is applied to an image of a daughter A. Thus, finished images as the members of the family wish to have are obtained, respectively.
- A second embodiment of the first aspect of the present invention will now be described.
- The second embodiment is to execute processing as described in the first embodiment in a case in which a customer performs reproduction processing of an image using a digital camera, a personal computer, a photo player or the like. That is, before the customer places an order with a lab, a photographed image is displayed on these devices to revise the image. Causing a digital camera or a photo player to have a revising function, or connecting the devices-with a personal computer can revise an image.
- In addition, if a displayed image is revised, although the revision can be automatically performed using image processing software for performing processing similar to that of the first embodiment, a customer may manually identify a person and revise a result of detecting a person while looking at the displayed image. In addition, an image of a result of the revision may be printed using a video printer for home use, or revision data of a frame may be recorded as information attached to the frame to order a print for the lab. For example, it is sufficient to display an image photographed by the APS on a photo player to revise the image, and if an order is placed with the lab to print from a negative film, to cause the photo player to have a magnetic recording function and to magnetically record the attached information in the negative film. In addition, if an order is placed with the lab using a communication line by a personal computer, it is sufficient to record the attached information in a file.
- A third embodiment of the first aspect of the present invention will now be described.
- As described in the first embodiment, this embodiment is to improve accuracy of identification of a person by adding a hair style, a dress and the like as person designation information to characteristic amounts for recognition of a person, when a face is extracted and template matching is performed between the face and the registered face image in identification of a person. Here, as the characteristic amounts for recognition of a person, a density, a distribution of color tint, a texture pattern or the like can be considered.
- In this embodiment, as in the first embodiment, at first a face region is extracted from an input image to perform matching with registered face images as a template, to find a coinciding degree and to identify a person. Then, if a person in an image is specified by the identification of a person, a hairstyle, a dress and the like of the person are extracted, which are registered as person designation information and as registration data-corresponding to the person.
- In the next processing on the input image, identification of a person is performed with reference to the hairstyle, the dress and the like that have been registered before in addition to by means of matching of the template of the extracted face image. When a person is specified by the identification of a person, a hairstyle and a dress are registered anew from the input image to automatically accumulate them as person designation information.
- For example, it is assumed that data accumulated to the present concerning color tint of dress areas is as shown in Table 1. This Table 1 classifies color tint of dress areas and indicates frequencies of colors appearing in each member of a family by pij.
TABLE 1 Classifications of color tint Achromatic i: Person Red Orange Yellow . . . . . . color 1. Father 0 . . . . . . p1j . . . p1J 2. Mother 3 . . . . . . p2j . . . p2J 3. 10 . . . . . . p3j . . . p3J Daughter A
J indicates the number of classifications of color tine.
If a dress is composed of a plurality of colors, an area ratio is used for classifications. For example, if a half of a dress is red and the other half is white, red=0.5 and white=0.5 are registered in accumulated data. - Here, it is assumed that Mi is a coinciding degree by the pattern matching of a face image represented by a point, and Ni is a point derived from a frequency of appearance of a dress. The large each coinciding degree, the large a point is. In Table 1, i is a number indicating a registered person. For example, the daughter A is i=3. It is assumed that, if color tint of a dress area of a subject person being an object of photographing corresponds to a classification number j, Ni=pij/(p1j+p2j+p3j).
- By finding a person determination point Di=a×Mi+b×Ni (a and b are weighting factors), it is determined that a person with the number i being the largest point is the person in the image.
- For example, it is assumed that M1<M2≈M3 as a result of the matching of a face image, thus the mother and the daughter A have substantially the same points. Here, if the color tint of a dress is red, since the number of times each person is photographed while dressed in red is 0 for the father, 3 for the mother and 10 for the daughter A, the daughter A has the largest point Di. Therefore, the person in the image is determined to be the daughter A.
- If there is an error in the identification of a person, an operator who has found the error or an operator who has been notified by a customer makes a revision manually, performs processing with respect to a correct person in the image and, at the same time, corrects accumulation processing of the person designation information.
- In addition, parameters for a determination algorithm for identification of a person are automatically revised at this point.
- In addition, a customer who has found an error in identification of a person may revise the processing by himself if the customer revises an image in person by downloading image processing software in the customer's own personal computer as in the second embodiment.
- According to this embodiment, in addition to simply performing the template matching of an image with a registered face image, person designation information such as a hairstyle and a dress is also added to characteristic amounts for recognition of an individual, and moreover, accumulated data is revised by adding an element of learning function to the accumulation processing of the person designation information. Therefore, accuracy of the identification of a person can be improved, and image processing according to preference of each person can be properly performed.
- The above description relates to a method of obtaining a print (output image) on which individual preference of each customer is reflected by applying image processing according to the individual preference of each customer to each person in an image. However, in some cases, it is desired that a print is outputted on which the tendency of preference according to locality of a customer or a season is reflected in addition to matching individual preference of each customer.
- For example, in a case in which an exposure control algorithm is set such that a photo print is finished in a skin color preferred by the Japanese in a photoprinter, if a printer of the same type is used in Europe without changing the exposure control algorithm, a skin color may not be a color preferred by the Europeans. In addition, in a case in which the exposure control algorithm is set based on photographing under the sunlight in Japan, if a photo printer of the same type is used in a region at a substantially different latitude, a print of the same quality as that in Japan cannot be realized because the sunlight is also different in this region from that in Japan.
- Thus, the applicant has already proposed an exposure control method with which local preference of a person or preference of an individual customer is properly reflected on a print in the case of the analog exposure method in Japanese Patent Application Laid-open No. Hei 6-308632 or Japanese Patent Application Laid-open No. Hei 8-137033.
- A print (output image) on which locality or the tendency of preference of a customer is reflected can also be realized in the case of the digital image processing as described below.
- That is, an example described below is a digital image processing method of accumulating the tendency of revising a setting of image processing conditions for each customer, lab or region to revise and optimize a setting parameter of image processing by. In addition, the method is also for classifying each frame by scene to accumulate the tendency of revision for each scene classification, and revise and optimize the setting parameter.
- For example, when performing image processing, if a correction tendency of an operator shows, after classifying each frame by scene, varying tendencies for a certain region such as that it is preferred to emphasize a contrast or to be moderate in a contrast, the operator revises a correction parameter in accordance with the tendencies.
- Describing the processing method more specifically, revisions of a processing parameter by an operator are accumulated for the number of N frames in the past to check relation with a specific scene classification. At this point, there are the following methods as methods of classifying scenes.
- For example, a scene is estimated as a strobe scene, a back-light scene or the like by estimating a normal scene, over exposure, under exposure or the like based on a distribution pattern found from a density histogram or the relation between density differences between a central part and a peripheral part of an image.
- Alternatively, a scene is estimated to be a person, a scenery or the like by extracting a face. There are some other methods such as a method with which an operator manually estimates a classification of a scene such as a person, a scenery, a night scene, underexposure or high contrast.
- In addition, as an example of changing a processing parameter, there is such a method with which a way of applying contrast emphasis or resolution (sharpness) is changed or a target value of color tint or a density of a skin color of a person is changed. Since a person and a preferable skin color varies depending on a region, this method takes into account the difference due to a region. Since, in Europe, a color of a face that is less dense than a skin color based on the Oriental people is preferred and an operator tends to reduce a density, an algorithm is automatically updated such that a density becomes slightly lower than a default value after a face is extracted judging from accumulated data.
- As described above, changes of a processing parameter by an operator are classified for the number of N frames in the past, relation between the classification and a specific scene classification is found. Then, an algorithm is updated and an image processing parameter is optimized in a fixed cycle by reflecting a statistic manual correction tendency for each image processing parameter in a predetermined scene classification.
- Further, in this case, it is preferable to also update accumulated data to latest one in a fixed cycle.
- In addition, in the above-mentioned processing, types of scene classifications or a data accumulation period may be set for each season. For example, around a ski resort, a “snow scene” classification that is determined according to a highlight ratio in a scene may be added only in the winter season, or a gradation characteristic emphasizing whiteness may be applied to the “snow scene”.
- In addition, in a fixed region, algorithm updating processing of the region may be performed by a unit of region in a lab management center. This can be attained by connecting each mini-lab shop in the region and the lab management center in the region by a network such that the lab management center collects information on revisions by an operator in each mini-lab shop, grasps a tendency of the entire region, revises image processing software of mini-lab shops all over the region at a certain timing, and distributes an updated algorithm to each mini-lab shop.
- In addition, when performing the above-mentioned processing, an image processing parameter may be optimized for each customer by recording the data together with customer IDs and managing the data for each customer.
- Further, the above-mentioned processing can be performed with respect to a printer for home use of each customer instead of a printer in the above-mentioned mini-lab.
- According to the example mentioned above, an appropriate image reproduction becomes possible which copes with the change of a tendency of preference for each scene due to an individual customer, locality, a season or the like.
- In addition, when a photoprinter is manufactured, it becomes unnecessary to set every parameter to conform to a tendency of each region being a destination of the product and operations for manufacturing and delivering the product is facilitated.
- In addition, request contents may be divided for each of print processing and monitor display processing, or registration data processing can be stopped or specific request processing can be designated only for a specific frame.
- The image processing method of the first aspect of the present invention is basically configured as described above.
- Image processing methods in accordance with second and third aspects of the present invention will now be described with reference to FIGS. 4 to 7.
- The image processing method in accordance with the second aspect of the present invention basically determines a type of feeling of a person in a photographed image scene based on voice data accompanying the image scene or based on an expression or gesture of the person extracted from the photographed image, and attaches (composes) a mark emphasizing the feeling corresponding to the type of feeling to an image of the person in the scene, thereby enhancing the amusement aspect of a photograph or of image expression. A case in which a type of feeling of a person in a photographed image scene is determined based on voice data accompanying the photographed image scene will be described below as a typical example, but the second aspect is not limited to this case. Further, objects to which the second aspect of the present invention is applied range widely from a photograph (still image) to real-time image display such as a video (animation), a TV telephone, or the like.
- A first embodiment of the second aspect of the present invention will be described first. The first embodiment is for applying predetermined image processing with respect to an image having voice data as accompanying information which is photographed by a digital camera or the like having a voice recording function.
-
FIG. 4 is a block diagram schematically showing a digital photoprinter including an image processing apparatus that implements the image processing method in accordance with the first embodiment of the second aspect of the present invention. - A digital photoprinter 50 shown in FIG. 4 mainly includes a photographing information inputting device 52, an image processing apparatus 54 and an image recording apparatus 16. Further, the image recording apparatus 16, the operation system 18 and the monitor 20 similar to those in the photoprinter 10 shown in FIG. 1 can be used. - The photographing
information inputting device 52 is to read image data and voice data from a recording medium in which the image data and the voice data are recorded by an image photographing device with a recording function such as a digital camera. The image processing apparatus 54 is to execute the image processing method in accordance with this aspect of the present invention and other various kinds of image processing. In addition, the operation system 18 including the keyboard 18a and the mouse 18b for inputting and setting conditions for various kinds of image processing, selecting and instructing a particular processing, and instructing color/density correction, or the like, and the monitor 20 for displaying the image inputted from the photographing information inputting device 52, a setting/registration screen for various conditions including various operational instructions, and the like are connected to the image processing apparatus 54. The image recording apparatus 16 is to apply image exposure and developing processing to a photosensitive material (printing paper) by a light beam that is modulated in accordance with the image data outputted from the image processing apparatus 54 to output a (finished) image as a print. - Image processing of the first embodiment will be hereinafter described with reference to a flow chart of FIG. 5. - First, in
step 200, image processing patterns corresponding to various types of feelings are set in advance. However, since human feelings vary and it is impossible to deal with all of them, only typical feelings that appear relatively clearly on the outside are dealt with here. -
FIG. 6 shows an example of setting image processing patterns. As shown in the column indicated as a mode, the types of feelings to be dealt with here include “fret”, “surprise”, “anger”, “sorrow”, “worry (doubt)”, “love”, “pleasure” and the like. - In this embodiment, a type of feeling is determined by voice data (voice information) accompanying a photographed image scene. In addition, as an image processing pattern, a mark (image composition pattern) is composed which emphasizes a feeling with respect to each type of feeling. Therefore, setting an image processing pattern corresponding to a type of feeling is eventually setting an image composition pattern corresponding to specific voice information.
- That is, items that are set in advance include a mode for representing a type of feeling, a word for drawing each mode (keyword, omitted in the table in
FIG. 6 ), and an image composition pattern corresponding to each mode. The image composition pattern includes a composition image (mark) to be composed, a position where an image should be composed (a position in an image, a specified or relative position with respect to a position of a person or his or her face in an image), a size of a composition image (a size, a size with respect to an image, a relative size with respect to a size of a person or his or her face in an image), an orientation of a composition image with respect to a person or his or her face and the like. - For example, if a keyword such as “ow”, “chickie” or “zex” is in voice information, the “fret” mode corresponds to the feeling, and a “sweat mark” is composed with respect to an image of a person. A composition image is not limited to one, but may be plural. In addition, a composing position is preferably designated by a position relative to a position of a face in an image and is more preferably normalized with a width of the face. A size of a composition image is also preferably designated based on a face. In addition, an orientation of composing a composition image is also preferably set based on a face of a person in an image.
- For example, in a case of
FIG. 6, two sweat marks (one set consists of two marks) are composed to the left of a face, with the coordinate of the center of the sweat marks at the position beside an eye (x1, y1) (with an eye as the origin) and with a size of 0.1 when the width of the face is 1. - In addition, in a case in which a keyword such as “whoof”, “jiminy” or “amazing” is in the voice information, the “surprise” mode corresponds to the feeling, and “marks of protruding eyes due to surprise” are composed at the positions where the coordinates of the centers of the eye marks are (x2, y2) and (x3, y3), respectively. In addition, an “orientation determination result” in
FIG. 6 indicates that the orientation of a person with respect to the camera is determined to be facing front, angled to the right or angled to the left by a pattern matching method or the like, and patterns with orientations of the protruding eyes that differ depending on the determination result are composed. - In addition, in a case in which a keyword such as “cold” or “chilly” is in the voice information, the “sorrow” mode corresponds to the feeling, and a “shading (slanted lines) mark” is placed on the face and a “falling leaves mark” is composed around the face. In such a case, the color of the face may be made white (pale) instead of placing slanted lines on the face. Moreover, a further touch of sorrow may be added by changing the background to monotone. In addition, other marks may be composed as follows. If there is a word such as “Mmm” or “well”, the mode is determined to be the “worry (doubt)” mode and a “question mark” is composed. If there is a word such as “aah” or “love”, the mode is determined to be the “love” mode and a “heart” is composed. If there is a word such as “yahoo” or “made it”, the mode is determined to be the “pleasure” mode and a “fireworks” mark is composed. A certain number of these image processing patterns may be prepared on the system side, and a customer may prepare and add data to the patterns. In this case, a word frequently used by the customer as a favorite phrase may be set as a word drawing a mode, or a composition image or a keyword for the various feeling modes may be registered as the customer likes.
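A minimal sketch of such a registration table and its keyword lookup is shown below; the mode names and keywords follow the examples above, while the anchor labels and all sizes other than the 0.1 sweat-mark size are hypothetical placeholders, and real voice data would of course first pass through speech recognition:

```python
# Illustrative image processing patterns in the spirit of FIG. 6.
# "anchor" names a position relative to the extracted face; "size" is a
# fraction of the face width (only the 0.1 sweat-mark size is from the text).
PATTERNS = {
    "fret":     {"keywords": ("ow", "chickie", "zex"),       "mark": "sweat",
                 "anchor": "beside_eye", "size": 0.1},
    "surprise": {"keywords": ("whoof", "jiminy", "amazing"), "mark": "protruding_eyes",
                 "anchor": "on_eyes",    "size": 0.2},
    "worry":    {"keywords": ("mmm", "well"),                "mark": "question",
                 "anchor": "above_head", "size": 0.3},
    "love":     {"keywords": ("aah", "love"),                "mark": "heart",
                 "anchor": "above_head", "size": 0.3},
    "pleasure": {"keywords": ("yahoo", "made it"),           "mark": "fireworks",
                 "anchor": "background", "size": 0.5},
}

def find_mode(recognized_text):
    """Return the first registered mode whose keyword occurs in the
    recognized voice text.  A plain substring match is used here;
    word-boundary handling is omitted for brevity."""
    text = recognized_text.lower()
    for mode, pattern in PATTERNS.items():
        if any(kw in text for kw in pattern["keywords"]):
            return mode
    return None
```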
- It is sufficient to set the above-mentioned image processing patterns once prior to respective image processing. The setting is performed with respect to a database on the lab side.
- Then, in
step 210, photographing information having voice data as accompanying information, which is photographed by a digital camera or the like having a recording function, is inputted from the photographing information inputting device 52. The voice data and image data in the inputted photographing information are each sent to the image processing apparatus 54. - Then, in
step 220, a type of feeling is specified from the voice data in the image processing apparatus 54. For this purpose, the voice data is recognized first and matching is performed to find if a keyword that draws a registered mode indicating a type of feeling is included in the voice data. If a specific keyword is detected from the voice data, a specific image processing pattern is specified by the mode corresponding to the keyword. - In the
next step 230, a composition image of the image processing pattern specified above is composed on the photographed image. - In a case in which the composing position of the composition image to be composed on the photographed image is predetermined with respect to the photographed image at a specified position such as any of the four corners including the upper right corner, and in which a specified or relative size of the composition image is predetermined with respect to the size of the photographed image, the composition image is composed on the photographed image at the predetermined composing position and in the predetermined size.
- When the composing position and the composing size of the composition image to be composed on the photographed image are predetermined with respect to a person or his or her face in the photographed image in a fixed or relative manner, the person or the face is preferably extracted from the photographed image by a publicly known method in advance. As a method of extracting a face, there is, for example, the method disclosed in Japanese Patent Application Laid-open No. Hei 8-122944 by the applicant of this application. Further, as a device for extracting a specific region such as a face region of a person, the methods disclosed in Japanese Patent Application Laid-open Nos. Hei 4-346333, 5-158164, 5-165120, 6-160993, 8-184925, 9-101579, 9-138470, 9-138471, 9-146194 and 9-197575 are also preferably available in addition to the above-mentioned method.
- When a face of a person is extracted from the photographed image, eyes are extracted from the face, and the width of the face, the positions of the eyes and the like are calculated. Then, based on these data, a composition image is composed in accordance with the composing position, composing size and orientation designated in the image processing pattern.
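The face-relative placement just described (position taken relative to an eye, size normalized by the face width) can be sketched as follows; the function name and the vertical stacking of multiple marks are illustrative assumptions:

```python
def place_marks(face_width, eye_xy, rel_offset, rel_size, count=1, spacing=0.15):
    """Convert a face-relative mark placement into absolute image coordinates.

    eye_xy:     (x, y) of the reference eye in image coordinates
    rel_offset: offset of the first mark's center from the eye, in face widths
    rel_size:   mark size as a fraction of the face width (e.g. 0.1 in FIG. 6)
    Returns a list of (center_x, center_y, size_px) tuples; multiple marks
    are stacked vertically with `spacing` face widths between them.
    """
    ex, ey = eye_xy
    size_px = rel_size * face_width
    return [(ex + rel_offset[0] * face_width,
             ey + (rel_offset[1] + i * spacing) * face_width,
             size_px)
            for i in range(count)]
```

For example, the two sweat marks of the “fret” mode would use `count=2`, a leftward `rel_offset` and `rel_size=0.1`.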
- In addition, if there are other designations such as to change a density or a color of the background or the entire screen, processing of such a designation is performed. Thereafter, normal image processing is performed and an output image is prepared.
- Determination of a type of feeling of a person (face) in the photographed image is not limited to the one depending on voice data as described above, but the type of feeling may be determined by using an expression or a gesture of the person, which will be described later in detail.
- Further, in determining a type of feeling only from voice data as described above, it is unnecessary to extract a person from the photographed image. On the other hand, if a type of feeling is determined from an expression or a gesture of a person or from voice data and an expression or a gesture of a person, it is necessary to extract a person from the image.
- In addition, although it is unnecessary to extract a person from the image for the determination of a feeling, the person is preferably extracted when composing an image processing pattern, since the image processing pattern is preferably composed with respect to the person or his or her face. However, if the image processing pattern is composed at a specified position in the photographed image, if the background is changed, if the density or color of the entire screen is changed, or if falling leaves are scattered without regard to position, it is unnecessary to extract a person specifically. In this way, extraction of a person from an image may be performed as necessary. It is therefore preferable to make it possible to select and set the determining method of a type of feeling and the like in advance. Lastly, in
step 240, the output image is outputted from the image recording apparatus 16. - Further, in the example described above, the lab side, which has received a request from a customer who performed photographing with a digital camera or the like, performs image processing by a printer in the lab and prepares a print on which a mark emphasizing a feeling is composed. However, the above-mentioned image processing may be performed on the digital camera side. If the processing is performed on the digital camera side, image processing patterns or the like are set on the camera side in advance.
- In addition, the above-mentioned processing may be entirely performed automatically based on voice data recognition and extraction of a face from the photographed image, or an operator may input by operation keys or the like and perform the processing in accordance with an instruction of a customer.
- Also when the processing is performed on the digital camera side, image composition processing may be performed automatically by incorporating software for executing the above-mentioned image processing in the digital camera, or a customer may instruct a camera by keys/buttons or the like to perform the composition processing at the time of photographing.
- In addition, image processing by the above-mentioned image processing patterns may be applied to an index print while a main print is normally processed.
- In addition, although an image (mark) emphasizing a feeling is composed in this embodiment, an expression may be changed by image modification processing including morphing processing so as to correspond to a type of feeling. For example, in a case of “anger”, the image modification processing including the morphing processing can be performed to slant eyes upwardly. However, in order to enhance amusement aspect of a photograph, a somewhat comical expression as in the example shown in
FIG. 6 is more fun and more effective than a change to an overly realistic expression. - Further, instead of composition corresponding to a person in a photographed image as described above, the person himself or herself, or portions constituting the person such as the face and body, may be entirely substituted with an animation image or a computer graphics (CG) image, in particular an animation image or a CG image corresponding to the type of feeling.
- The animation image or CG image used may be one image determined in accordance with the type of feeling, but a plurality of patterns can be preferably selected for each of a still image and an animation image.
- The animation image and CG image as described above may include a content prepared and registered in advance by a customer, or a content supplied by a company managing a repeater station as described later.
- As in the above case, conversion patterns can preferably be set based on at least one kind of information selected from an expression and a gesture of a subject person in a photographed image, and voice contents. It goes without saying that it is even more preferable if a customer can register his or her own requests.
- The processing as described above enables remarkable enhancement of the amusement aspect of image expression in a photograph, a video, a TV telephone and the like. Further, this method is convenient and does not offend the other party when one does not want to display one's face on a TV telephone or the like.
- A second embodiment of the second aspect of the present invention will now be described.
- The second embodiment is an application of the image processing method of the second aspect of the present invention to an animation such as that in a cellular TV telephone or a movie editing rather than to a still image such as a photograph.
- In a case of an image displayed on a display screen of a cellular TV telephone or an animation such as a displayed image of a movie, an image processing method itself is basically the same as that in the above-mentioned first embodiment.
- That is, the same image processing patterns as above are registered and image processing software is also incorporated in advance in a microcomputer of a cellular TV telephone terminal or the like. Then, in the case of a cellular TV telephone, if a registered keyword is detected in conversation during a call, processing for composing a composition image of the mode corresponding to the keyword is performed in the terminal. The composed image is then transmitted to the terminal of the other party of the call and displayed.
- In this case, it is more effective if the displayed image itself is also an animation, and a composed “sweat mark”, for example, as shown in
FIG. 6 does not stand still but, for example, flows downward little by little. - Further, provided that the CPU has a sufficient operation speed, it also becomes possible to perform image processing on a real-time basis while reproducing an image during a call on a cellular TV telephone or during movie editing.
- A third embodiment of the second aspect of the present invention will now be described.
- This embodiment also relates to an apparatus for performing image processing on a real time basis such as a TV telephone (or a cellular TV telephone). That is, this embodiment is to determine a type of feeling using expression recognition or gesture recognition instead of voice data as accompanying information for determining a type of feeling of a person.
- For expression recognition, a table associating images of expressions of emotions such as pleasure, anger, sorrow and joy with the modes corresponding to those expressions is registered for each individual in the cellular TV telephone terminal of each person in advance. In addition, for gesture recognition, a table associating gestures determined by each person with modes, in such a way that raising one finger means this mode and raising two fingers means that mode, is registered in the cellular TV telephone terminal of each person in advance.
- Then, during a call, a face is extracted from a photographed image of a caller and the type of expression is identified by pattern matching in the cellular TV telephone terminal. If an expression that coincides with a specific expression registered in advance is detected, image processing of the image processing pattern of the mode corresponding to the registered expression is performed, and the processed image is sent to the terminal of the other party of the call and displayed. Similar processing is performed if gesture recognition detects that the caller makes a specific registered gesture.
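As one possible reading of the pattern matching step, a nearest-template classifier over the per-user registered expression images (here reduced to feature vectors, an assumption since the text does not fix a representation) could look like this:

```python
def classify_expression(face_features, registered_templates):
    """Return the mode of the registered expression template nearest to the
    extracted face.

    registered_templates: dict mapping mode -> feature vector, as registered
    per individual in the terminal.  Squared Euclidean distance stands in
    for the pattern matching described in the text.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(registered_templates,
               key=lambda mode: sq_dist(face_features, registered_templates[mode]))
```

A gesture table can be handled the same way, or as a direct lookup (e.g. raised-finger count to mode).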
- Further, as examples of a publicly known technology concerning expression recognition and gesture recognition, there are, for example, the technical research report dated Nov. 18 and 19, 1999 of the Institute of Electronics, Information and Communication Engineers, PRMU99-106 “Gesture Recognition robust to Variations of a Position of Operation” Taiko Amada, Motoyuki Suzuki, Hideaki Goto, Shozo Makino (Tohoku University), PRMU99-138 “Automatic Extraction of Face Organ Outline and Automation of Expression Recognition” Hiroshi Kobayashi, Hisanori Takahasi, Kosei Kikuchi (Science University of Tokyo), PRMU99-139 “Estimation of Face Region and Expression Recognition using Potential Net” Hiroaki Bessho (Image Information Science Center), Yoshio Iwai, Masahiko Taniuchida (Osaka University), PRMU99-140 (Special Lecture) “Research on Face Expression Recognition and Image Processing Technology” Hiroshi Yamada (Nippon University/AIR), PRMU99-142 (Special Invitation Thesis) “Analysis and Recognition of Personal Movement for Interaction” Masahiko Taniuchida, Yoshio Iwai (Osaka University), which are preferably applicable to the present invention.
- A fourth embodiment of the second aspect of the present invention will now be described.
- This embodiment is to perform image composition processing on a repeater station side of the TV telephone, although it also relates to a TV telephone or a cellular TV telephone. A cellular TV telephone system in this embodiment is schematically shown in
FIG. 7. Face image data (an expression of each mode for determining the mode corresponding to a type of feeling) and voice data (a keyword for drawing each mode) for each user, as well as image composition patterns in each mode, are registered in a database 84 of a repeater station 80 of a communication provider through cellular TV telephone terminals
image sensor 62 and voices of A and a photographed image of B photographed by animage sensor 72 and voices of B are transmitted to therepeater station 80 from the A's cellularTV telephone terminal 60 and the B's cellularTV telephone terminal 70, respectively, as shown inFIG. 7 . - In a processing section 82 of the
repeater station 80, faces are always extracted from the photographed images of A and B, which are matched with the registered expressions, and at the same time, it is checked if registered keywords are heard in the voices in the call. - Then, for example, if the fret mode is detected in the conversation of A, processing for composing, for example, a sweat mark is applied to the photographed image of A, and the processed image is transmitted to the B's
terminal 70. In addition, at this point, a reduced-size image of the processed image of A may be transmitted to the A's terminal 60 for confirmation. The face image of B is displayed on a display screen 64 of the A's terminal 60, while the processed image of A is displayed on a display frame 66 that is provided at the corner of the display screen 64 for confirmation of the processed image of A. At this point, the processed image of A is similarly displayed on a display screen 74 of the B's terminal 70 and, at the same time, a reduced-size processed image of B transmitted from the repeater station 80 is displayed on a display frame 76 for confirmation at the corner of the display screen 74. - Further, face image data, voice data, image composition patterns and the like of each user may be registered in a terminal of a user to perform the process from the mode detection to the image composition processing on the cellular TV telephone terminal side. In addition, the above-mentioned data may be registered in both the repeater station and the terminal such that the processing can be performed in either of them.
- In addition, the image composition processing may be applied not only to voice and image data to be transmitted but also to received voice and image data. For example, in
FIG. 7, the composition processing with respect to an image received from B may be performed in the A's terminal. In this case, the amusement aspect is enhanced by adding a pattern that the receiver A desires. - A fifth embodiment of the second aspect of the present invention will now be described.
- This embodiment is to correct a position of a composition pattern if a misregistration occurs when the image composition processing is performed on a real time basis.
- Since most misregistrations of a composition pattern are due to a failure in extracting a face, if a misregistration of a composition pattern is found on the display screen, the face in the photographed image is designated by an electronic pen or the like. At this point, the outline of the face may be circled using the electronic pen, or the eyes may be connected by a line. Moreover, the position of the mouth or the like may be designated. Alternatively, a position to which the composition pattern should be shifted in parallel, an adjustment amount of its size or the like may be designated by key operation.
- Thereafter, based on the corrected position and size of the composition pattern, the face position candidate area is automatically corrected to coincide with the original composition pattern, and the face extraction processing is executed again. Thus, the position and size of a composition pattern are corrected in accordance with the movement of a face on a real-time basis even in the case of an animation as in a cellular TV telephone. Since the composition pattern automatically follows the face and is displayed at a predetermined position in the displayed image, the amusement aspect of an image display in an image displaying medium is enhanced.
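A sketch of this correction, under the assumption that the pen stroke is reduced to a circle (center and radius) and that the circled region is adopted as the new face candidate area; the names and the circle-to-face-width rule are illustrative:

```python
def correct_misregistration(pen_circle, pattern):
    """Re-anchor a composition pattern after the user circles the true face.

    pen_circle: (cx, cy, r) of the circle drawn with the electronic pen.
    pattern:    dict with face-relative "offset" and "size" (fractions of
                the face width).
    Returns the corrected face candidate area (from which face extraction
    can be re-run) plus the pattern's new absolute position and size.
    """
    cx, cy, r = pen_circle
    face_width = 2 * r                       # circled region taken as the face
    candidate_area = (cx - r, cy - r, cx + r, cy + r)
    position = (cx + pattern["offset"][0] * face_width,
                cy + pattern["offset"][1] * face_width)
    size = pattern["size"] * face_width
    return candidate_area, position, size
```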
- In addition, the image processing method according to the third aspect of the present invention applies the image processing method according to the second aspect of the present invention as described above to a TV image captured in a personal computer. Specifically, the TV image is captured in the personal computer and subjected to the composition processing, substitution processing, image modification processing and color/density modification processing as described above. Alternatively, a digital TV may have a receiver including the above-mentioned composition processing function and other functions so that a customer can further set composition patterns. The amusement aspect can thus be enhanced still more.
- The image processing methods according to the second and third aspects of the present invention are basically configured as described above.
- An image processing method in accordance with a fourth aspect of the present invention will now be described with reference to
FIGS. 8 and 9. - The image processing method in accordance with the fourth aspect of the present invention is basically to register in advance an area image in a specific area of a photographed image, or an image characteristic amount, and to compose the area image on a corresponding area of the photographed image or to adjust density and color tone by using the area image or image characteristic amount registered in advance.
- For example, in the image processing method of this aspect, reference face images are registered in advance as face images of a subject person in a photographed image; they include an image of a preferred made-up face of the person, an image of a favorite face of the person and image characteristic amounts thereof, and a face image in a full-faced state in which the photographing direction is coincident with the line of sight. When the person is a subject of the photographed image, the face of the person in the photographed image is corrected, composed or converted to one of these reference face images registered in advance. Alternatively, the image characteristic amounts registered in advance are employed to correct or adjust the density or color tone of the face so that the finished face image has the preferred made-up face or favorite face of the person. Thus, anyone can easily correct a face image to a reference face image such as the image of the preferred made-up face or the image of the favorite face, and the unnatural feeling due to noncoincidence of the line of sight can be removed.
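The characteristic-amount variant can be illustrated as a simple blend of the measured face characteristics toward the registered amounts; the dict representation and the strength parameter are assumptions, not part of the disclosure:

```python
def correct_toward_reference(face_stats, reference_stats, strength=0.7):
    """Shift measured face characteristics (e.g. mean density, color balance)
    part-way toward the registered reference amounts.

    Both arguments are dicts of named characteristic amounts; `strength`
    (0..1) controls how strongly the registered preference is applied.
    """
    return {name: value + strength * (reference_stats[name] - value)
            for name, value in face_stats.items()}
```

With `strength=1.0` the face is forced exactly to the registered amounts; smaller values give a more natural partial correction.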
- A case in which a face image of a subject person in a photographed image is corrected to an image having a preferred made-up face or favorite face or a face image in a full-faced state in which a photographing direction is coincident with a line of sight will be described below as a typical example. However, this is not the sole case of this aspect.
- Further, as in the second and third aspects of the present invention, objects to which the fourth aspect of the present invention is applied range widely from a photograph (still image) to real time image display such as a video (animation) and a TV telephone, or the like.
- A first embodiment of the fourth aspect of the present invention will be described first.
- The first embodiment is for applying predetermined image processing to an image photographed with a digital camera or the like by using registration patterns registered in advance.
-
FIG. 8 is a block diagram schematically showing a digital photoprinter including an image processing apparatus that implements the image processing method in accordance with the first embodiment of the fourth aspect of the present invention. A digital photoprinter 90 shown in FIG. 8 mainly includes a photographed image acquiring device 92, an image processing apparatus 94, a database 96 and an image recording apparatus 16. Basically, the image recording apparatus 16, the operation system 18 and the monitor 20 similar to those in the photoprinter 50 shown in FIG. 4 can be used. - The photographed
image acquiring device 92 is to read photographing image data from a recording medium in which the image data are recorded by an image photographing device such as a digital camera. The image processing apparatus 94 implements the image processing method in accordance with this aspect of the present invention and other various kinds of image processing. More specifically, the image processing apparatus 94 acquires, extracts or prepares reference face images to be registered in advance in the database 96 from the photographing image data captured by the photographed image acquiring device 92; selects any of the reference face images registered in advance in the database 96; and corrects, composes or converts the face image in the photographed image by using the selected reference face image. The database 96 is used to register in advance the reference face images including an image of a preferred made-up face of the subject person in the photographed image, an image of a favorite face of the person and image characteristic amounts thereof, and a face image in a full-faced state in which the photographing direction is coincident with the line of sight (hereinafter also referred to as “face image in which line of sight is toward camera”). Other functions of the image processing apparatus 94 and the functions of the operation system 18 including the keyboard 18a and the mouse 18b, the monitor 20, and the image recording apparatus 16 are basically the same as those in the digital photoprinter 50 shown in FIG. 4. Therefore, their description will be omitted. - Image processing of the first embodiment will be hereinafter described with reference to a flow chart of
FIG. 9 . - First, in
step 300, a face is photographed in advance in the most preferable situation by a photographing device such as a digital camera, and the photographed image of the face is acquired as a photographed image for registration in the photographed image acquiring device 92. - Then, in
step 310, a face extraction algorithm is applied in the image processing apparatus 94 to extract a face region for a reference face image from the photographed face image acquired by the photographed image acquiring device 92. Image characteristic amounts of the reference face image may be calculated in this process. It should be noted that the calculation of the image characteristic amounts of the reference face image, and hence the extracting process in step 310, may be omitted when the whole of the photographed face image is used as the reference face image. When step 310 is omitted, the calculation of the image characteristic amounts of the reference face image may be performed in the preceding step 300, the subsequent step 320, or step 360 for the correction process to be described later. - In
step 320, the whole of the photographed face image or the face region extracted therefrom is registered in the database 96 as the reference face image. Pre-processing is thus finished. - For the reference face images to be registered in the
database 96, any type of face may be used, including a neatly made-up face, a face with hair set in a beauty salon or barbershop, a shaven face and a face with a mustache set. A plurality of reference face images prepared for a specific person may also be switched in use, for example between morning and night, for each season or for each party, in the case of display on a TV telephone or a cellular telephone. - Each portion constituting the face image may be registered as a reference pattern in the reference face image. Namely, in the extracting process in
step 310, an area desired to be used for correction or composition may be automatically extracted or manually designated so as to register the thus extracted or designated area as the reference pattern. Examples of the reference pattern include the respective portions constituting the face, such as eyes, eyebrows, cheeks, lips, nose, ears, hair and mustache, and accessories such as earrings, a hat (bandanna and headband also included) and glasses (sunglasses also included). - In this case, it is more preferable that the face extraction algorithm can be applied to automatically recognize which portion of the face the designated area indicates. When the accuracy in face extraction is low, area setting is performed while designating the respective portions one by one.
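By way of illustration only, the registration flow of steps 300 to 320 and the per-portion reference patterns can be sketched as a small in-memory stand-in for the database 96. The class name, method names and string keys below are assumptions for the sketch, not part of the disclosed embodiment:

```python
class ReferenceDatabase:
    """Minimal in-memory stand-in for the database 96: whole reference
    face images and per-portion reference patterns, keyed by user ID."""

    def __init__(self):
        self._faces = {}     # user_id -> {label: face image}
        self._patterns = {}  # user_id -> {portion: pattern image}

    def register_face(self, user_id, label, image):
        # Several images per person, e.g. label = "morning", "party".
        self._faces.setdefault(user_id, {})[label] = image

    def register_pattern(self, user_id, portion, image):
        # portion = "eyes", "eyebrows", "lips", "earrings", ...
        self._patterns.setdefault(user_id, {})[portion] = image

    def select_patterns(self, user_id, portions):
        """Return the requested portions, possibly drawn from patterns
        registered at different times (new combinations)."""
        registered = self._patterns.get(user_id, {})
        return {p: registered[p] for p in portions if p in registered}
```

A caller would register patterns during pre-processing and later select an arbitrary combination of them, which is how a newly combined face (eyes from one registration, eyebrows from another) can be assembled.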
- Then, in
step 330, the photographed image is acquired by the photographed image acquiring device 92. The acquisition in step 330 precedes, for example, transmission or display of the photographed image to or on the monitor screen of a TV telephone or a cellular telephone. In communication with such a telephone, an owner or exclusive user of the TV telephone or cellular telephone is very often the subject person in the photographed image, so the subject person is specified in many cases. Therefore, it is preferable that information such as an ID can be acquired automatically. Further, when there are many users, each user is given an ID so that the ID of the subject person can be acquired in step 330 together with the photographed image by the photographed image acquiring device 92. - Then, in
step 340, the image processing apparatus 94 applies the face extraction algorithm to the photographed image acquired by the photographed image acquiring device 92 to extract the face region of the subject person from the photographed image as the face image. In this process, it is preferable that not only the entire face region but also the respective portions constituting the face image are determined. - For example, the contour of the face is first extracted and the areas of the respective portions are then restricted, after which those areas can finally be determined by means of configuration pattern matching.
- If the accuracy in face extraction is low, it is preferable that, when photographing is started with a photographing device such as a digital camera, the subject person takes a posture such that his or her face is located within a guide such as a face frame displayed on a viewfinder or display device. Extraction of the face image by means of configuration pattern matching is thus facilitated and, once the face of the subject has been caught, it can be followed into subsequent photographing frames by means of local matching between frames.
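By way of illustration only, the configuration pattern matching and the local matching between photographing frames mentioned above can be sketched with an exhaustive sum-of-squared-differences search; restricting the search to a small window around the previous position is what keeps the per-frame following cheap. The function names and the margin parameter are assumptions for the sketch, not the actual algorithm of the embodiment:

```python
import numpy as np

def match_template(image, template, region=None):
    """Find the top-left position of `template` in grayscale `image`
    by exhaustive sum-of-squared-differences search.  `region` is an
    optional inclusive (y0, y1, x0, x1) window restricting the search."""
    img = image.astype(np.float64)
    tpl = template.astype(np.float64)
    h, w = tpl.shape
    y0, y1, x0, x1 = region or (0, img.shape[0] - h, 0, img.shape[1] - w)
    best, pos = np.inf, (y0, x0)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            d = ((img[y:y + h, x:x + w] - tpl) ** 2).sum()
            if d < best:
                best, pos = d, (y, x)
    return pos

def track(prev_pos, frame, template, margin=4):
    """Local matching between photographing frames: once the face has
    been caught at prev_pos, search only a small window around it."""
    py, px = prev_pos
    h, w = template.shape
    y0 = max(py - margin, 0)
    x0 = max(px - margin, 0)
    y1 = min(py + margin, frame.shape[0] - h)
    x1 = min(px + margin, frame.shape[1] - w)
    return match_template(frame, template, (y0, y1, x0, x1))
```

In practice a normalized correlation measure would be more robust to lighting changes than raw SSD; the window-restricted search is the point being illustrated.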
- In
step 350, the image processing apparatus 94 searches or specifies the person in the database 96 based on the above-mentioned ID or the like, and selects and calls the registered reference face image of that person, a reference pattern and, optionally, image characteristic amounts thereof. It is also possible to make new combinations by selecting from different reference images for the respective portions of the face image of the specific person as described above. For example, a newly combined face image may include eyes from a first reference face image, eyebrows from a third reference face image and earrings from an eighth reference face image. - Either of
steps 340 and 350 may be performed first. - Then, in
step 360, the face image extracted in step 340 and the areas of the respective portions constituting the face image are corrected, composed or substituted by using the registered reference face image for the specific person, the reference pattern and their image characteristic amounts selected in step 350. The registered reference face image may be composed on or substituted for the entire face image, or a registered reference pattern image may be used for each portion of the face to perform correction, composition or substitution. Alternatively, image characteristic amounts of registered reference pattern images may be used to perform correction. - For example, cheeks and lips are corrected to have the color tone of the reference pattern of the corresponding portions. In particular, since lips move, only the color tone of the lips is preferably corrected, after the lip area has been separated from the skin area by color tone on a pixel basis.
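By way of illustration only, the pixel-basis correction of lip color tone can be sketched as shifting the masked lip pixels toward the reference color while leaving the surrounding skin untouched. The RGB representation and the precomputed boolean lip mask are assumptions for the sketch:

```python
import numpy as np

def correct_lip_tone(image, lip_mask, reference_rgb):
    """Shift the mean colour of the lip pixels to the reference lip
    colour on a pixel basis; pixels outside the mask are untouched."""
    out = image.astype(np.float32).copy()
    lips = out[lip_mask]                      # only masked lip pixels
    shift = np.asarray(reference_rgb, np.float32) - lips.mean(axis=0)
    out[lip_mask] = np.clip(lips + shift, 0.0, 255.0)
    return out.astype(np.uint8)
```

Shifting rather than overwriting preserves the per-pixel shading of the lips while moving their average tone to that of the reference pattern.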
- As for hair, eyebrows, cheeks and ears (earlobes), reference patterns are preferably overwritten for composition. In this case, the portion surrounding the composition area is preferably smoothed to give a blurred boundary.
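By way of illustration only, overwriting a reference pattern with a blurred boundary can be sketched as alpha compositing with a softened mask; the neighbour-averaging below is a crude stand-in for the smoothing the text calls for (note that np.roll wraps at the image borders, which is acceptable for a mask away from the edges):

```python
import numpy as np

def feathered_composite(base, patch, mask, passes=2):
    """Overwrite `patch` onto `base` where `mask` is set, softening the
    mask edge so the boundary of the composition area is blurred."""
    alpha = mask.astype(np.float32)
    for _ in range(passes):
        # Average each pixel with its four neighbours (simple blur;
        # np.roll wraps around the borders).
        alpha = (alpha
                 + np.roll(alpha, 1, axis=0) + np.roll(alpha, -1, axis=0)
                 + np.roll(alpha, 1, axis=1) + np.roll(alpha, -1, axis=1)) / 5.0
    alpha = alpha[..., None]  # broadcast over colour channels
    out = alpha * patch + (1.0 - alpha) * base
    return out.astype(base.dtype)
```

A Gaussian blur of the mask would give a smoother falloff; the partial alpha values near the mask edge are what remove the visible seam.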
- Finally, in
step 370, the photographed image in which the face image has been corrected in step 360 is outputted from the image processing apparatus 94 as an output image. The output image may be outputted from the printer 16 as a photographic print, displayed on the monitor 20, or outputted to a TV telephone or cellular telephone terminal (more specifically, its image processor; see FIG. 7) for transmission to the corresponding terminal on the opposite party side and display on its screen. Alternatively, one may directly transmit the photographed image to the opposite party side for display on its screen, or of course display it on the display screen of one's own apparatus. - Even persons inexperienced or unskilled in personal computers or image processing software can thus easily correct images so as to have a preferred made-up face or a favorite face.
- In addition, a clothing portion (an area set under the face) or the background (the region other than the face area) of the subject person may be enchased with decoration patterns so that intimate apparel and night clothes, including pajamas, or the interior of a room can be hidden without giving an unpleasant feeling to others.
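By way of illustration only, hiding the clothing portion or background can be sketched as tiling a decoration pattern over every pixel outside the face area; the face mask and pattern inputs are hypothetical:

```python
import numpy as np

def decorate_outside_face(image, face_mask, pattern):
    """Cover everything outside the face area with a tiled decoration
    pattern so that clothing or the room interior is hidden."""
    H, W = image.shape[:2]
    ph, pw = pattern.shape[:2]
    reps = (-(-H // ph), -(-W // pw), 1)  # ceil-divide tile counts
    tiled = np.tile(pattern, reps)[:H, :W]
    out = image.copy()
    out[~face_mask] = tiled[~face_mask]   # keep only the face pixels
    return out
```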
- Then, a second embodiment of the fourth aspect of the present invention will be described.
- The second embodiment applies predetermined image processing to an image photographed with a digital camera or the like by using registration patterns registered in advance, as in the first embodiment, but differs from the first embodiment in that the registration pattern registered in advance is a face image in which the photographing direction is coincident with the line of sight of the subject person (a face image in which the line of sight is toward the camera). Therefore, the second embodiment will also be described below with reference to
FIGS. 8 and 9 . - According to the second embodiment, in
step 300 shown in FIG. 9, a photographed image is taken with a photographing device such as a digital camera in a state in which the subject looks at the camera, and then recorded. The recorded photographed image is acquired by the photographed image acquiring device 92 (see FIG. 8) as a photographed image for registration. - Then, in
step 310, the image processing apparatus 94 applies the face extraction algorithm to the photographed image for registration acquired by the photographed image acquiring device 92 to extract the area of the eyes as a reference pattern. In step 320, the thus extracted area of the eyes is registered in the database 96 as a reference pattern. Pre-processing is thus finished. - Then, in
step 330, the photographed image is acquired by the photographed image acquiring device 92. The acquisition in step 330 precedes, for example, transmission or display of the photographed image to or on the monitor screen of a TV telephone or a cellular telephone. - Then, in
step 340, in the communication with such a telephone, the image processing apparatus 94 applies the face extraction algorithm to the photographed image acquired by the photographed image acquiring device 92 to extract the area of the eyes of the subject person from the photographed image. - In
step 350, the image processing apparatus 94 selects and calls a registered reference pattern from the database 96. Either of steps 340 and 350 may be performed first. - Then, in
step 360, the area of the eyes extracted in step 340 is corrected, composed or substituted by using the registered reference pattern selected in step 350. When the movement between image frames is small, or when the image can be considered to be in a full-faced state (as a result of a balance check between the left and right sides), the registered reference pattern image may be used as such for composition. - Finally, in
step 370, a photographed image in which the eyes have been corrected in step 360 so as to look at the camera is outputted from the image processing apparatus 94 as an output image. Alternatively, in a TV telephone or cellular telephone terminal, a stationary state of the photographed image (for example, an image photographed on startup in which the subject is correctly postured and looking at the monitor) is temporarily stored. When the difference between the temporarily stored image and the actual image exceeds a specified value, the photographed image can be transmitted as it is, in order to prevent the unnatural feeling caused by the large movement. - The output image may be outputted from the
printer 16 as a photographic print, displayed on the monitor 20, or outputted to a TV telephone or cellular telephone terminal for transmission to the corresponding terminal on the opposite party side and display on its screen. Alternatively, the photographed image may be directly transmitted to the opposite party side for display on the display screen. - Unnatural feeling due to noncoincidence of the line of sight can thus be removed.
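By way of illustration only, the large-movement fallback of step 370 can be sketched as a mean-absolute-difference test against the temporarily stored stationary image; the threshold value and the function name are assumptions for the sketch:

```python
import numpy as np

def frame_to_transmit(stationary, corrected, current, threshold=20.0):
    """Send the gaze-corrected frame while the subject stays close to
    the stored stationary state; when the mean absolute difference
    exceeds the threshold, fall back to the raw photographed frame so
    that large movements do not look unnatural."""
    diff = np.abs(stationary.astype(np.int16)
                  - current.astype(np.int16)).mean()
    return current if diff > threshold else corrected
```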
- In addition, when a reference pattern is registered, an animation of a photographed image in which the subject looks at the camera, for example an animation in which the subject winks at least once, may be recorded. For composition during communication with a TV telephone or the like, the composition can then be made as an animation irrespective of the actual timing of winking. Alternatively, and more preferably, winking detection may be performed when a photographed image is acquired so as to synchronize the animation.
- In addition, in a third embodiment of the fourth aspect of the present invention, if a transmission terminal such as a TV telephone or cellular telephone terminal (see
FIG. 7) has a high arithmetic capability, the various types of correction processing described in the first and second embodiments of this aspect can be executed in the transmission terminal. Alternatively, a reference face image and decoration pattern data may be registered in the repeater station (see FIG. 7) in combination with a customer ID, and the processing executed in the repeater station. - When family members use a TV telephone, a cellular telephone terminal or the like in the aspect under consideration, it is preferable to register a reference face image and a reference pattern for each family member, as in the first aspect described above.
- The image processing method of the fourth aspect of the present invention is basically configured as described above.
- As described above in detail, according to the first aspect of the present invention, an output image reflecting the individual preference of each customer can be obtained automatically, and the amusement aspect of photography can be enhanced.
- In addition, as described above in detail, according to the second aspect of the present invention, since content that one desires to emphasize according to a feeling is automatically visualized and represented in the display of, in particular, a person's face in an image, the amusement aspect of image representation such as a photograph, a video or a TV telephone can be significantly enhanced. The automatic visualization is also convenient, and does not offend the opposite party, when one does not want to display his or her face on a TV telephone.
- As also described above in detail, the present invention overcomes the above-mentioned prior art problems so that even persons inexperienced or unskilled in personal computers or image processing software can easily correct images so as to have a preferred made-up face or a favorite face, and can remove the unnatural feeling due to noncoincidence of the line of sight.
- Thus, it is seen that an image processing method is provided. One skilled in the art will appreciate that the present invention can be practiced by other than the preferred embodiments which are presented for the purposes of illustration and not of limitation, and the present invention is limited only by the claims which follow.
Claims (13)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/916,521 US20050008246A1 (en) | 2000-04-13 | 2004-08-12 | Image Processing method |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000112134 | 2000-04-13 | ||
JP2000-112134 | 2000-04-13 | ||
JP2000-179580 | 2000-06-15 | ||
JP2000179580 | 2000-06-15 | ||
US09/833,784 US7106887B2 (en) | 2000-04-13 | 2001-04-13 | Image processing method using conditions corresponding to an identified person |
US10/916,521 US20050008246A1 (en) | 2000-04-13 | 2004-08-12 | Image Processing method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/833,784 Division US7106887B2 (en) | 2000-04-13 | 2001-04-13 | Image processing method using conditions corresponding to an identified person |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050008246A1 true US20050008246A1 (en) | 2005-01-13 |
Family
ID=26590045
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/833,784 Expired - Lifetime US7106887B2 (en) | 2000-04-13 | 2001-04-13 | Image processing method using conditions corresponding to an identified person |
US10/916,521 Abandoned US20050008246A1 (en) | 2000-04-13 | 2004-08-12 | Image Processing method |
US11/458,312 Expired - Lifetime US7577310B2 (en) | 2000-04-13 | 2006-07-18 | Image processing method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/833,784 Expired - Lifetime US7106887B2 (en) | 2000-04-13 | 2001-04-13 | Image processing method using conditions corresponding to an identified person |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/458,312 Expired - Lifetime US7577310B2 (en) | 2000-04-13 | 2006-07-18 | Image processing method |
Country Status (1)
Country | Link |
---|---|
US (3) | US7106887B2 (en) |
KR101247147B1 (en) | 2007-03-05 | 2013-03-29 | 디지털옵틱스 코포레이션 유럽 리미티드 | Face searching and detection in a digital image acquisition device |
US7916971B2 (en) | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
JP4924264B2 (en) * | 2007-07-24 | 2012-04-25 | セイコーエプソン株式会社 | Image processing apparatus, image processing method, and computer program |
JP4663699B2 (en) * | 2007-09-27 | 2011-04-06 | 富士フイルム株式会社 | Image display device and image display method |
US7855737B2 (en) | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
US8098904B2 (en) * | 2008-03-31 | 2012-01-17 | Google Inc. | Automatic face detection and identity masking in images, and applications thereof |
US7953255B2 (en) | 2008-05-01 | 2011-05-31 | At&T Intellectual Property I, L.P. | Avatars in social interactive television |
CN103475837B (en) | 2008-05-19 | 2017-06-23 | 日立麦克赛尔株式会社 | Record reproducing device and method |
US8092215B2 (en) | 2008-05-23 | 2012-01-10 | Align Technology, Inc. | Smile designer |
JP4645685B2 (en) * | 2008-06-02 | 2011-03-09 | カシオ計算機株式会社 | Camera, camera control program, and photographing method |
JP5547730B2 (en) | 2008-07-30 | 2014-07-16 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Automatic facial and skin beautification using face detection |
DE102008038608A1 (en) * | 2008-08-21 | 2010-02-25 | Heidelberger Druckmaschinen Ag | Method and device for printing different uses on a printed sheet |
JP2010086178A (en) * | 2008-09-30 | 2010-04-15 | Fujifilm Corp | Image synthesis device and control method thereof |
KR101494388B1 (en) * | 2008-10-08 | 2015-03-03 | 삼성전자주식회사 | Apparatus and method for providing emotion expression service in mobile communication terminal |
JP4788792B2 (en) * | 2009-03-11 | 2011-10-05 | カシオ計算機株式会社 | Imaging apparatus, imaging method, and imaging program |
US8452599B2 (en) * | 2009-06-10 | 2013-05-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for extracting messages |
US8269616B2 (en) * | 2009-07-16 | 2012-09-18 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for detecting gaps between objects |
US8379917B2 (en) | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
US8337160B2 (en) * | 2009-10-19 | 2012-12-25 | Toyota Motor Engineering & Manufacturing North America, Inc. | High efficiency turbine system |
US8237792B2 (en) | 2009-12-18 | 2012-08-07 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and system for describing and organizing image data |
US8424621B2 (en) | 2010-07-23 | 2013-04-23 | Toyota Motor Engineering & Manufacturing North America, Inc. | Omni traction wheel system and methods of operating the same |
JP2012113677A (en) * | 2010-11-05 | 2012-06-14 | Aitia Corp | Information processing device and information processing program |
JP5779938B2 (en) * | 2011-03-29 | 2015-09-16 | ソニー株式会社 | Playlist creation device, playlist creation method, and playlist creation program |
EP2701123B1 (en) | 2011-04-20 | 2018-10-17 | NEC Corporation | Individual identification character display system, terminal device, individual identification character display method, and computer program |
EP2590140A1 (en) * | 2011-09-05 | 2013-05-08 | Morpho, Inc. | Facial authentication system, facial authentication method, and facial authentication program |
JP2013101242A (en) * | 2011-11-09 | 2013-05-23 | Sony Corp | Image processor, display control method and program |
WO2013138531A1 (en) * | 2012-03-14 | 2013-09-19 | Google, Inc. | Modifying an appearance of a participant during a video conference |
US9286509B1 (en) * | 2012-10-19 | 2016-03-15 | Google Inc. | Image optimization during facial recognition |
US9721010B2 (en) | 2012-12-13 | 2017-08-01 | Microsoft Technology Licensing, Llc | Content reaction annotations |
US9799099B2 (en) * | 2013-02-22 | 2017-10-24 | Cyberlink Corp. | Systems and methods for automatic image editing |
US9542595B2 (en) * | 2013-03-25 | 2017-01-10 | Brightex Bio-Photonics Llc | Systems and methods for recommending cosmetic products for users with mobile devices |
CN104284055A (en) * | 2013-07-01 | 2015-01-14 | 索尼公司 | Image processing method, device and electronic equipment thereof |
US10114532B2 (en) * | 2013-12-06 | 2018-10-30 | Google Llc | Editing options for image regions |
JP6550642B2 (en) * | 2014-06-09 | 2019-07-31 | パナソニックIpマネジメント株式会社 | Wrinkle detection device and wrinkle detection method |
CN105447022A (en) * | 2014-08-25 | 2016-03-30 | 英业达科技有限公司 | Method for rapidly searching target object |
US9456070B2 (en) | 2014-09-11 | 2016-09-27 | Ebay Inc. | Methods and systems for recalling second party interactions with mobile devices |
CN113778114A (en) * | 2014-11-07 | 2021-12-10 | 索尼公司 | Control system, control method, and storage medium |
JP6561996B2 (en) | 2014-11-07 | 2019-08-21 | ソニー株式会社 | Information processing apparatus, control method, and storage medium |
CN104539853A (en) * | 2015-01-05 | 2015-04-22 | 锦池媒体科技(北京)有限公司 | Image synthesis method |
CN106228978A (en) * | 2016-08-04 | 2016-12-14 | 成都佳荣科技有限公司 | A kind of audio recognition method |
CN106331569B (en) * | 2016-08-23 | 2019-08-30 | 广州华多网络科技有限公司 | Character facial transform method and system in instant video picture |
US10055871B2 (en) | 2016-10-12 | 2018-08-21 | International Business Machines Corporation | Applying an image overlay to an image based on relationship of the people identified in the image |
CN110119293A (en) * | 2018-02-05 | 2019-08-13 | 阿里巴巴集团控股有限公司 | Conversation processing method, device and electronic equipment |
US11157549B2 (en) * | 2019-03-06 | 2021-10-26 | International Business Machines Corporation | Emotional experience metadata on recorded images |
US11481940B2 (en) * | 2019-04-05 | 2022-10-25 | Adobe Inc. | Structural facial modifications in images |
US11295494B2 (en) * | 2019-06-26 | 2022-04-05 | Adobe Inc. | Image modification styles learned from a limited set of modified images |
JP2023056664A (en) * | 2021-10-08 | 2023-04-20 | 株式会社リコー | Image reading device and image forming apparatus |
JP2023165433A (en) * | 2022-05-06 | 2023-11-16 | コニカミノルタ株式会社 | Processing system, processing apparatus, processing method, and program |
Citations (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2003A (en) * | 1841-03-12 | Improvement in horizontal windmills | ||
US4791581A (en) * | 1985-07-27 | 1988-12-13 | Sony Corporation | Method and apparatus of forming curved surfaces |
US5208911A (en) * | 1990-09-28 | 1993-05-04 | Eastman Kodak Company | Method and apparatus for storing and communicating a transform definition which includes sample values representing an input/output relation of an image transformation |
US5253067A (en) * | 1991-12-16 | 1993-10-12 | Thomson Consumer Electronics, Inc. | Channel labeling apparatus for a television receiver wherein graphics and text labels may be selected from a preprogrammed list |
US5262765A (en) * | 1990-08-21 | 1993-11-16 | Ricos Co., Ltd. | Animation image composition and display device |
US5278921A (en) * | 1991-05-23 | 1994-01-11 | Fuji Photo Film Co., Ltd. | Method of determining exposure |
US5289227A (en) * | 1992-01-22 | 1994-02-22 | Fuji Photo Film Co., Ltd. | Method of automatically controlling taking exposure and focusing in a camera and a method of controlling printing exposure |
US5367454A (en) * | 1992-06-26 | 1994-11-22 | Fuji Xerox Co., Ltd. | Interactive man-machine interface for simulating human emotions |
US5393071A (en) * | 1990-11-14 | 1995-02-28 | Best; Robert M. | Talking video games with cooperative action |
US5404196A (en) * | 1991-09-12 | 1995-04-04 | Fuji Photo Film Co., Ltd. | Method of making photographic prints |
US5467168A (en) * | 1992-11-18 | 1995-11-14 | Fuji Photo Film Co., Ltd. | Photograph printing method |
US5563988A (en) * | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US5619619A (en) * | 1993-03-11 | 1997-04-08 | Kabushiki Kaisha Toshiba | Information recognition system and control system using same |
US5629752A (en) * | 1994-10-28 | 1997-05-13 | Fuji Photo Film Co., Ltd. | Method of determining an exposure amount using optical recognition of facial features |
US5649032A (en) * | 1994-11-14 | 1997-07-15 | David Sarnoff Research Center, Inc. | System for automatically aligning images to form a mosaic image |
US5689575A (en) * | 1993-11-22 | 1997-11-18 | Hitachi, Ltd. | Method and apparatus for processing images of facial expressions |
US5734794A (en) * | 1995-06-22 | 1998-03-31 | White; Tom H. | Method and system for voice-activated cell animation |
US5774172A (en) * | 1996-02-12 | 1998-06-30 | Microsoft Corporation | Interactive graphics overlay on video images for entertainment |
US5774591A (en) * | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
US5777252A (en) * | 1996-01-31 | 1998-07-07 | Yamaha Corporation | Atmosphere data generator and karaoke machine |
US5826234A (en) * | 1995-12-06 | 1998-10-20 | Telia Ab | Device and method for dubbing an audio-visual presentation which generates synthesized speech and corresponding facial movements |
US5835616A (en) * | 1994-02-18 | 1998-11-10 | University Of Central Florida | Face detection using templates |
US5854634A (en) * | 1995-12-26 | 1998-12-29 | Imax Corporation | Computer-assisted animation construction system using source poses within a pose transformation space |
US5870138A (en) * | 1995-03-31 | 1999-02-09 | Hitachi, Ltd. | Facial image processing |
US5881171A (en) * | 1995-09-13 | 1999-03-09 | Fuji Photo Film Co., Ltd. | Method of extracting a selected configuration from an image according to a range search and direction search of portions of the image with respect to a reference point |
US5907391A (en) * | 1994-03-09 | 1999-05-25 | Fuji Photo Film Co., Ltd. | Method and apparatus for accepting an order for photographic processing |
US5918222A (en) * | 1995-03-17 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information disclosing apparatus and multi-modal information input/output system |
US5926575A (en) * | 1995-11-07 | 1999-07-20 | Telecommunications Advancement Organization Of Japan | Model-based coding/decoding method and system |
US5943049A (en) * | 1995-04-27 | 1999-08-24 | Casio Computer Co., Ltd. | Image processor for displayed message, balloon, and character's face |
US5978100A (en) * | 1995-11-14 | 1999-11-02 | Fuji Photo Film Co., Ltd. | Method of determining a principal portion of an image and method of determining a copying condition |
US5995119A (en) * | 1997-06-06 | 1999-11-30 | At&T Corp. | Method for generating photo-realistic animated characters |
US6028626A (en) * | 1995-01-03 | 2000-02-22 | Arc Incorporated | Abnormality detection and surveillance system |
US6032025A (en) * | 1994-06-06 | 2000-02-29 | Casio Computer Co., Ltd. | Communication terminal and communication system |
US6072494A (en) * | 1997-10-15 | 2000-06-06 | Electric Planet, Inc. | Method and apparatus for real-time gesture recognition |
US6088040A (en) * | 1996-09-17 | 2000-07-11 | Atr Human Information Processing Research Laboratories | Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image |
US6097470A (en) * | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
US6128397A (en) * | 1997-11-21 | 2000-10-03 | Justsystem Pittsburgh Research Center | Method for finding all frontal faces in arbitrarily complex visual scenes |
US6154209A (en) * | 1993-05-24 | 2000-11-28 | Sun Microsystems, Inc. | Graphical user interface with method and apparatus for interfacing to remote devices |
US6181778B1 (en) * | 1995-08-30 | 2001-01-30 | Hitachi, Ltd. | Chronological telephone system |
US20010000126A1 (en) * | 1996-10-25 | 2001-04-05 | Naoto Kinjo | Photographic system for recording data and reproducing images using correlation data between frames |
US6219129B1 (en) * | 1997-09-11 | 2001-04-17 | Fuji Photo Film Co., Ltd. | Print system |
US6262790B1 (en) * | 1998-03-16 | 2001-07-17 | Fuji Photo Film Co., Ltd. | Printing method, printer and lens-fitted photo film unit |
US20010008417A1 (en) * | 2000-01-17 | 2001-07-19 | Naoto Kinjo | Image processing method, image processing apparatus, camera and photographing system |
US6285794B1 (en) * | 1998-04-17 | 2001-09-04 | Adobe Systems Incorporated | Compression and editing of movies by multi-image morphing |
US20010019620A1 (en) * | 2000-03-02 | 2001-09-06 | Honda Giken Kogyo Kabushiki Kaisha | Face recognition apparatus |
US20010024235A1 (en) * | 2000-03-16 | 2001-09-27 | Naoto Kinjo | Image photographing/reproducing system and method, photographing apparatus and image reproducing apparatus used in the image photographing/reproducing system and method as well as image reproducing method |
US20010042057A1 (en) * | 2000-01-25 | 2001-11-15 | Nec Corporation | Emotion expressing device |
US20010055035A1 (en) * | 2000-04-07 | 2001-12-27 | Naoto Kinjo | Image processing method and system using computer graphics |
US20020001036A1 (en) * | 2000-03-14 | 2002-01-03 | Naoto Kinjo | Digital camera and image processing method |
US6336865B1 (en) * | 1999-07-23 | 2002-01-08 | Fuji Photo Film Co., Ltd. | Game scene reproducing machine and game scene reproducing system |
US6344858B1 (en) * | 1993-02-11 | 2002-02-05 | Agfa-Gevaert | Method of evaluating image processing performed on a radiographic image |
US20020015514A1 (en) * | 2000-04-13 | 2002-02-07 | Naoto Kinjo | Image processing method |
US20020015019A1 (en) * | 2000-04-18 | 2002-02-07 | Naoto Kinjo | Image display apparatus and image display method |
US6347993B1 (en) * | 1999-05-13 | 2002-02-19 | Konami Co., Ltd. | Video game device, character growth control method for video game and readable storage medium storing growth control program |
US20020030831A1 (en) * | 2000-05-10 | 2002-03-14 | Fuji Photo Film Co., Ltd. | Image correction method |
US20020046100A1 (en) * | 2000-04-18 | 2002-04-18 | Naoto Kinjo | Image display method |
US20020047905A1 (en) * | 2000-10-20 | 2002-04-25 | Naoto Kinjo | Image processing system and ordering system |
US20020051577A1 (en) * | 2000-10-20 | 2002-05-02 | Naoto Kinjo | Method of preventing falsification of image |
US6396963B2 (en) * | 1998-12-29 | 2002-05-28 | Eastman Kodak Company | Photocollage generation and modification |
US6400835B1 (en) * | 1996-05-15 | 2002-06-04 | Jerome H. Lemelson | Taillight mounted vehicle security system employing facial recognition using a reflected image |
US20020070945A1 (en) * | 2000-12-08 | 2002-06-13 | Hiroshi Kage | Method and device for generating a person's portrait, method and device for communications, and computer product |
US20020113872A1 (en) * | 2001-02-16 | 2002-08-22 | Naoto Kinjo | Information transmitting system |
US6445819B1 (en) * | 1998-09-10 | 2002-09-03 | Fuji Photo Film Co., Ltd. | Image processing method, image processing device, and recording medium |
US6473535B1 (en) * | 1998-04-06 | 2002-10-29 | Fuji Photo Film Co., Ltd. | Image processing apparatus and method |
US6504944B2 (en) * | 1998-01-30 | 2003-01-07 | Kabushiki Kaisha Toshiba | Image recognition apparatus and method |
US6504620B1 (en) * | 1997-03-25 | 2003-01-07 | Fuji Photo Film Co., Ltd. | Print ordering method, printing system and film scanner |
US6519046B1 (en) * | 1997-03-17 | 2003-02-11 | Fuji Photo Film Co., Ltd. | Printing method and system for making a print from a photo picture frame and a graphic image written by a user |
US6526395B1 (en) * | 1999-12-31 | 2003-02-25 | Intel Corporation | Application of personality models and interaction with synthetic characters in a computing system |
US6529630B1 (en) * | 1998-03-02 | 2003-03-04 | Fuji Photo Film Co., Ltd. | Method and device for extracting principal image subjects |
US20030063575A1 (en) * | 2001-09-28 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Order processing apparatus, order processing system and image photographing device |
US20030065665A1 (en) * | 2001-09-28 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Device, method and recording medium for information distribution |
US20030069069A1 (en) * | 2001-09-28 | 2003-04-10 | Fuji Photo Film Co., Ltd. | Game device |
US20030068084A1 (en) * | 1998-05-29 | 2003-04-10 | Fuji Photo Film Co., Ltd. | Image processing method |
US20030112259A1 (en) * | 2001-12-04 | 2003-06-19 | Fuji Photo Film Co., Ltd. | Method and apparatus for registering modification pattern of transmission image and method and apparatus for reproducing the same |
US20030174213A1 (en) * | 1997-03-25 | 2003-09-18 | Nobuo Matsumoto | System for transferring image data from a camera to a printing system |
US20030202715A1 (en) * | 1998-03-19 | 2003-10-30 | Naoto Kinjo | Image processing method |
US6661906B1 (en) * | 1996-12-19 | 2003-12-09 | Omron Corporation | Image creating apparatus |
US6714660B1 (en) * | 1998-05-19 | 2004-03-30 | Sony Computer Entertainment Inc. | Image processing device and method, and distribution medium |
US6728428B1 (en) * | 1998-02-23 | 2004-04-27 | Fuji Photo Film Co., Ltd. | Image processing method |
US6801656B1 (en) * | 2000-11-06 | 2004-10-05 | Koninklijke Philips Electronics N.V. | Method and apparatus for determining a number of states for a hidden Markov model in a signal processing system |
US6813395B1 (en) * | 1999-07-14 | 2004-11-02 | Fuji Photo Film Co., Ltd. | Image searching method and image processing method |
US7015934B2 (en) * | 2000-11-08 | 2006-03-21 | Minolta Co., Ltd. | Image displaying apparatus |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2708263B2 (en) * | 1990-06-22 | 1998-02-04 | 富士写真フイルム株式会社 | Image reading device |
JP2878855B2 (en) * | 1991-02-21 | 1999-04-05 | 富士写真フイルム株式会社 | Image processing device |
US6259824B1 (en) * | 1991-03-12 | 2001-07-10 | Canon Kabushiki Kaisha | Image processing apparatus utilizing a neural network to improve printed image quality |
JPH0568262A (en) | 1991-03-13 | 1993-03-19 | Olympus Optical Co Ltd | Video id photo printer and face color converter |
JP2695067B2 (en) | 1991-05-23 | 1997-12-24 | 富士写真フイルム株式会社 | Method of extracting data of human face and method of determining exposure amount |
JPH05205030A (en) | 1992-01-27 | 1993-08-13 | Nippon Telegr & Teleph Corp <Ntt> | Display device for coincidence of eyes of photographed human figure |
JP3499305B2 (en) | 1994-10-28 | 2004-02-23 | 富士写真フイルム株式会社 | Face area extraction method and exposure amount determination method |
JP3576654B2 (en) | 1994-10-31 | 2004-10-13 | 富士写真フイルム株式会社 | Exposure determination method, figure extraction method, and face area determination method |
JP3516786B2 (en) | 1995-10-05 | 2004-04-05 | 富士写真フイルム株式会社 | Face area extraction method and copy condition determination method |
EP0813336B1 (en) * | 1996-06-12 | 2007-08-08 | FUJIFILM Corporation | Image processing method and apparatus |
JP3791635B2 (en) * | 1996-10-22 | 2006-06-28 | 富士写真フイルム株式会社 | Image reproduction method, image reproduction apparatus, image processing method, and image processing apparatus |
JPH10210473A (en) * | 1997-01-16 | 1998-08-07 | Toshiba Corp | Motion vector detector |
US6701011B1 (en) * | 1997-01-20 | 2004-03-02 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method and storage medium |
JPH10214344A (en) | 1997-01-31 | 1998-08-11 | Fujitsu Ltd | Interactive display device |
JPH10221773A (en) | 1997-02-06 | 1998-08-21 | Fuji Photo Film Co Ltd | Photograph producing device |
US6282330B1 (en) * | 1997-02-19 | 2001-08-28 | Canon Kabushiki Kaisha | Image processing apparatus and method |
JPH10268447A (en) * | 1997-03-21 | 1998-10-09 | Fuji Photo Film Co Ltd | Image processor and photographic printer |
EP0891075A3 (en) * | 1997-06-09 | 2002-03-06 | Seiko Epson Corporation | An image processing apparatus and method, and an image evaluation device and method |
JPH118774A (en) * | 1997-06-17 | 1999-01-12 | Konica Corp | Image processing system and image processing method |
JP2923894B1 (en) * | 1998-03-31 | 1999-07-26 | 日本電気株式会社 | Light source determination method, skin color correction method, color image correction method, light source determination device, skin color correction device, color image correction device, and computer-readable recording medium |
JPH11331570A (en) | 1998-05-14 | 1999-11-30 | Fuji Photo Film Co Ltd | Image processing method and device therefor |
JPH11341258A (en) * | 1998-05-28 | 1999-12-10 | Toshiba Corp | Device and method for picture processing |
US6560374B1 (en) * | 1998-06-11 | 2003-05-06 | Fuji Photo Film Co., Ltd. | Image processing apparatus |
US6123362A (en) * | 1998-10-26 | 2000-09-26 | Eastman Kodak Company | System and method of constructing a photo collage |
US6385346B1 (en) * | 1998-08-04 | 2002-05-07 | Sharp Laboratories Of America, Inc. | Method of display and control of adjustable parameters for a digital scanner device |
JP2000163196A (en) * | 1998-09-25 | 2000-06-16 | Sanyo Electric Co Ltd | Gesture recognizing device and instruction recognizing device having gesture recognizing function |
JP2000151985A (en) | 1998-11-12 | 2000-05-30 | Konica Corp | Picture processing method and recording medium |
JP2000175035A (en) * | 1998-12-07 | 2000-06-23 | Toshiba Corp | Image processing unit and image processing system |
JP2001008005A (en) * | 1999-06-24 | 2001-01-12 | Fuji Photo Film Co Ltd | Image reader |
JP2001014457A (en) * | 1999-06-29 | 2001-01-19 | Minolta Co Ltd | Image processor |
JP2001154631A (en) * | 1999-11-24 | 2001-06-08 | Fujitsu General Ltd | Method and device for controlling gradation in pdp |
JP4081219B2 (en) * | 2000-04-17 | 2008-04-23 | 富士フイルム株式会社 | Image processing method and image processing apparatus |
JP2002019195A (en) * | 2000-07-04 | 2002-01-23 | Fuji Photo Film Co Ltd | Image processor and customize print system using it |
US6788824B1 (en) * | 2000-09-29 | 2004-09-07 | Adobe Systems Incorporated | Creating image-sharpening profiles |
- 2001-04-13 | US 09/833,784 | patent US7106887B2 (en) | not_active Expired - Lifetime
- 2004-08-12 | US 10/916,521 | patent US20050008246A1 (en) | not_active Abandoned
- 2006-07-18 | US 11/458,312 | patent US7577310B2 (en) | not_active Expired - Lifetime
Patent Citations (96)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2003A (en) * | 1841-03-12 | Improvement in horizontal windmills | ||
US4791581A (en) * | 1985-07-27 | 1988-12-13 | Sony Corporation | Method and apparatus of forming curved surfaces |
US5262765A (en) * | 1990-08-21 | 1993-11-16 | Ricos Co., Ltd. | Animation image composition and display device |
US5208911A (en) * | 1990-09-28 | 1993-05-04 | Eastman Kodak Company | Method and apparatus for storing and communicating a transform definition which includes sample values representing an input/output relation of an image transformation |
US5393071A (en) * | 1990-11-14 | 1995-02-28 | Best; Robert M. | Talking video games with cooperative action |
US5278921A (en) * | 1991-05-23 | 1994-01-11 | Fuji Photo Film Co., Ltd. | Method of determining exposure |
US5404196A (en) * | 1991-09-12 | 1995-04-04 | Fuji Photo Film Co., Ltd. | Method of making photographic prints |
US5253067A (en) * | 1991-12-16 | 1993-10-12 | Thomson Consumer Electronics, Inc. | Channel labeling apparatus for a television receiver wherein graphics and text labels may be selected from a preprogrammed list |
US5289227A (en) * | 1992-01-22 | 1994-02-22 | Fuji Photo Film Co., Ltd. | Method of automatically controlling taking exposure and focusing in a camera and a method of controlling printing exposure |
US5367454A (en) * | 1992-06-26 | 1994-11-22 | Fuji Xerox Co., Ltd. | Interactive man-machine interface for simulating human emotions |
US5467168A (en) * | 1992-11-18 | 1995-11-14 | Fuji Photo Film Co., Ltd. | Photograph printing method |
US6344858B1 (en) * | 1993-02-11 | 2002-02-05 | Agfa-Gevaert | Method of evaluating image processing performed on a radiographic image |
US5619619A (en) * | 1993-03-11 | 1997-04-08 | Kabushiki Kaisha Toshiba | Information recognition system and control system using same |
US6154209A (en) * | 1993-05-24 | 2000-11-28 | Sun Microsystems, Inc. | Graphical user interface with method and apparatus for interfacing to remote devices |
US5689575A (en) * | 1993-11-22 | 1997-11-18 | Hitachi, Ltd. | Method and apparatus for processing images of facial expressions |
US5835616A (en) * | 1994-02-18 | 1998-11-10 | University Of Central Florida | Face detection using templates |
US5907391A (en) * | 1994-03-09 | 1999-05-25 | Fuji Photo Film Co., Ltd. | Method and apparatus for accepting an order for photographic processing |
US6032025A (en) * | 1994-06-06 | 2000-02-29 | Casio Computer Co., Ltd. | Communication terminal and communication system |
US5563988A (en) * | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US5629752A (en) * | 1994-10-28 | 1997-05-13 | Fuji Photo Film Co., Ltd. | Method of determining an exposure amount using optical recognition of facial features |
US5649032A (en) * | 1994-11-14 | 1997-07-15 | David Sarnoff Research Center, Inc. | System for automatically aligning images to form a mosaic image |
US6028626A (en) * | 1995-01-03 | 2000-02-22 | Arc Incorporated | Abnormality detection and surveillance system |
US5918222A (en) * | 1995-03-17 | 1999-06-29 | Kabushiki Kaisha Toshiba | Information disclosing apparatus and multi-modal information input/output system |
US5870138A (en) * | 1995-03-31 | 1999-02-09 | Hitachi, Ltd. | Facial image processing |
US5943049A (en) * | 1995-04-27 | 1999-08-24 | Casio Computer Co., Ltd. | Image processor for displayed message, balloon, and character's face |
US5734794A (en) * | 1995-06-22 | 1998-03-31 | White; Tom H. | Method and system for voice-activated cell animation |
US6181778B1 (en) * | 1995-08-30 | 2001-01-30 | Hitachi, Ltd. | Chronological telephone system |
US5881171A (en) * | 1995-09-13 | 1999-03-09 | Fuji Photo Film Co., Ltd. | Method of extracting a selected configuration from an image according to a range search and direction search of portions of the image with respect to a reference point |
US5930391A (en) * | 1995-09-13 | 1999-07-27 | Fuji Photo Film Co., Ltd. | Method of extracting a region of a specific configuration and determining copy conditions |
US5926575A (en) * | 1995-11-07 | 1999-07-20 | Telecommunications Advancement Organization Of Japan | Model-based coding/decoding method and system |
US5978100A (en) * | 1995-11-14 | 1999-11-02 | Fuji Photo Film Co., Ltd. | Method of determining a principal portion of an image and method of determining a copying condition |
US5826234A (en) * | 1995-12-06 | 1998-10-20 | Telia Ab | Device and method for dubbing an audio-visual presentation which generates synthesized speech and corresponding facial movements |
US5774591A (en) * | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
US5854634A (en) * | 1995-12-26 | 1998-12-29 | Imax Corporation | Computer-assisted animation construction system using source poses within a pose transformation space |
US5777252A (en) * | 1996-01-31 | 1998-07-07 | Yamaha Corporation | Atmosphere data generator and karaoke machine |
US5774172A (en) * | 1996-02-12 | 1998-06-30 | Microsoft Corporation | Interactive graphics overlay on video images for entertainment |
US6400835B1 (en) * | 1996-05-15 | 2002-06-04 | Jerome H. Lemelson | Taillight mounted vehicle security system employing facial recognition using a reflected image |
US6088040A (en) * | 1996-09-17 | 2000-07-11 | Atr Human Information Processing Research Laboratories | Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image |
US20030184668A1 (en) * | 1996-10-25 | 2003-10-02 | Fuji Photo Film Co., Ltd. | Photographic system linked with photographic data |
US6583811B2 (en) * | 1996-10-25 | 2003-06-24 | Fuji Photo Film Co., Ltd. | Photographic system for recording data and reproducing images using correlation data between frames |
US20010000126A1 (en) * | 1996-10-25 | 2001-04-05 | Naoto Kinjo | Photographic system for recording data and reproducing images using correlation data between frames |
US20020036693A1 (en) * | 1996-10-25 | 2002-03-28 | Fuji Photo Film Co., Ltd. | Photographic system for recording data and reproducing images using correlation data between frames |
US20030189645A1 (en) * | 1996-10-25 | 2003-10-09 | Fuji Photo Film Co., Ltd. | Photographic system linked with photographic data |
US20030184657A1 (en) * | 1996-10-25 | 2003-10-02 | Fuji Photo Film Co., Ltd. | Photographic system linked with photographic data |
US6661906B1 (en) * | 1996-12-19 | 2003-12-09 | Omron Corporation | Image creating apparatus |
US20030067631A1 (en) * | 1997-03-17 | 2003-04-10 | Fuji Photo Film Co., Ltd. | Printing method and system for making print from photo picture frame and graphic image written by user |
US20030063296A1 (en) * | 1997-03-17 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Printing method and system for making print from photo picture frame and graphic image written by user |
US6519046B1 (en) * | 1997-03-17 | 2003-02-11 | Fuji Photo Film Co., Ltd. | Printing method and system for making a print from a photo picture frame and a graphic image written by a user |
US20030063295A1 (en) * | 1997-03-17 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Printing method and system for making print from photo picture frame and graphic image written by user |
US20030038970A1 (en) * | 1997-03-25 | 2003-02-27 | Fuji Photo Film Co., Ltd. | Print ordering method, printing system and film scanner |
US6532080B1 (en) * | 1997-03-25 | 2003-03-11 | Fuji Photo Film Co., Ltd. | Print ordering method, printing system and film scanner |
US20030174213A1 (en) * | 1997-03-25 | 2003-09-18 | Nobuo Matsumoto | System for transferring image data from a camera to a printing system |
US6590671B1 (en) * | 1997-03-25 | 2003-07-08 | Fuji Photo Film Co., Ltd. | Print ordering method, printing system and film scanner |
US6504620B1 (en) * | 1997-03-25 | 2003-01-07 | Fuji Photo Film Co., Ltd. | Print ordering method, printing system and film scanner |
US5995119A (en) * | 1997-06-06 | 1999-11-30 | At&T Corp. | Method for generating photo-realistic animated characters |
US6219129B1 (en) * | 1997-09-11 | 2001-04-17 | Fuji Photo Film Co., Ltd. | Print system |
US6072494A (en) * | 1997-10-15 | 2000-06-06 | Electric Planet, Inc. | Method and apparatus for real-time gesture recognition |
US6128397A (en) * | 1997-11-21 | 2000-10-03 | Justsystem Pittsburgh Research Center | Method for finding all frontal faces in arbitrarily complex visual scenes |
US6504944B2 (en) * | 1998-01-30 | 2003-01-07 | Kabushiki Kaisha Toshiba | Image recognition apparatus and method |
US6728428B1 (en) * | 1998-02-23 | 2004-04-27 | Fuji Photo Film Co., Ltd. | Image processing method |
US6529630B1 (en) * | 1998-03-02 | 2003-03-04 | Fuji Photo Film Co., Ltd. | Method and device for extracting principal image subjects |
US6262790B1 (en) * | 1998-03-16 | 2001-07-17 | Fuji Photo Film Co., Ltd. | Printing method, printer and lens-fitted photo film unit |
US6798921B2 (en) * | 1998-03-19 | 2004-09-28 | Fuji Photo Film Co., Ltd. | Method for image designating and modifying process |
US20030202715A1 (en) * | 1998-03-19 | 2003-10-30 | Naoto Kinjo | Image processing method |
US6473535B1 (en) * | 1998-04-06 | 2002-10-29 | Fuji Photo Film Co., Ltd. | Image processing apparatus and method |
US6285794B1 (en) * | 1998-04-17 | 2001-09-04 | Adobe Systems Incorporated | Compression and editing of movies by multi-image morphing |
US6714660B1 (en) * | 1998-05-19 | 2004-03-30 | Sony Computer Entertainment Inc. | Image processing device and method, and distribution medium |
US6097470A (en) * | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
US6631208B1 (en) * | 1998-05-29 | 2003-10-07 | Fuji Photo Film Co., Ltd. | Image processing method |
US20030068084A1 (en) * | 1998-05-29 | 2003-04-10 | Fuji Photo Film Co., Ltd. | Image processing method |
US6445819B1 (en) * | 1998-09-10 | 2002-09-03 | Fuji Photo Film Co., Ltd. | Image processing method, image processing device, and recording medium |
US6396963B2 (en) * | 1998-12-29 | 2002-05-28 | Eastman Kodak Company | Photocollage generation and modification |
US6347993B1 (en) * | 1999-05-13 | 2002-02-19 | Konami Co., Ltd. | Video game device, character growth control method for video game and readable storage medium storing growth control program |
US6813395B1 (en) * | 1999-07-14 | 2004-11-02 | Fuji Photo Film Co., Ltd. | Image searching method and image processing method |
US6336865B1 (en) * | 1999-07-23 | 2002-01-08 | Fuji Photo Film Co., Ltd. | Game scene reproducing machine and game scene reproducing system |
US6526395B1 (en) * | 1999-12-31 | 2003-02-25 | Intel Corporation | Application of personality models and interaction with synthetic characters in a computing system |
US20010008417A1 (en) * | 2000-01-17 | 2001-07-19 | Naoto Kinjo | Image processing method, image processing apparatus, camera and photographing system |
US20010042057A1 (en) * | 2000-01-25 | 2001-11-15 | Nec Corporation | Emotion expressing device |
US20010019620A1 (en) * | 2000-03-02 | 2001-09-06 | Honda Giken Kogyo Kabushiki Kaisha | Face recognition apparatus |
US20020001036A1 (en) * | 2000-03-14 | 2002-01-03 | Naoto Kinjo | Digital camera and image processing method |
US20010024235A1 (en) * | 2000-03-16 | 2001-09-27 | Naoto Kinjo | Image photographing/reproducing system and method, photographing apparatus and image reproducing apparatus used in the image photographing/reproducing system and method as well as image reproducing method |
US20010055035A1 (en) * | 2000-04-07 | 2001-12-27 | Naoto Kinjo | Image processing method and system using computer graphics |
US20020015514A1 (en) * | 2000-04-13 | 2002-02-07 | Naoto Kinjo | Image processing method |
US20020015019A1 (en) * | 2000-04-18 | 2002-02-07 | Naoto Kinjo | Image display apparatus and image display method |
US20020046100A1 (en) * | 2000-04-18 | 2002-04-18 | Naoto Kinjo | Image display method |
US20020030831A1 (en) * | 2000-05-10 | 2002-03-14 | Fuji Photo Film Co., Ltd. | Image correction method |
US20020047905A1 (en) * | 2000-10-20 | 2002-04-25 | Naoto Kinjo | Image processing system and ordering system |
US20020051577A1 (en) * | 2000-10-20 | 2002-05-02 | Naoto Kinjo | Method of preventing falsification of image |
US6801656B1 (en) * | 2000-11-06 | 2004-10-05 | Koninklijke Philips Electronics N.V. | Method and apparatus for determining a number of states for a hidden Markov model in a signal processing system |
US7015934B2 (en) * | 2000-11-08 | 2006-03-21 | Minolta Co., Ltd. | Image displaying apparatus |
US20020070945A1 (en) * | 2000-12-08 | 2002-06-13 | Hiroshi Kage | Method and device for generating a person's portrait, method and device for communications, and computer product |
US20020113872A1 (en) * | 2001-02-16 | 2002-08-22 | Naoto Kinjo | Information transmitting system |
US20030063575A1 (en) * | 2001-09-28 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Order processing apparatus, order processing system and image photographing device |
US20030065665A1 (en) * | 2001-09-28 | 2003-04-03 | Fuji Photo Film Co., Ltd. | Device, method and recording medium for information distribution |
US20030069069A1 (en) * | 2001-09-28 | 2003-04-10 | Fuji Photo Film Co., Ltd. | Game device |
US20030112259A1 (en) * | 2001-12-04 | 2003-06-19 | Fuji Photo Film Co., Ltd. | Method and apparatus for registering modification pattern of transmission image and method and apparatus for reproducing the same |
Cited By (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020015514A1 (en) * | 2000-04-13 | 2002-02-07 | Naoto Kinjo | Image processing method |
US7106887B2 (en) * | 2000-04-13 | 2006-09-12 | Fuji Photo Film Co., Ltd. | Image processing method using conditions corresponding to an identified person |
US20060251299A1 (en) * | 2000-04-13 | 2006-11-09 | Fuji Photo Film Co., Ltd. | Image processing method |
US7577310B2 (en) | 2000-04-13 | 2009-08-18 | Fujifilm Corporation | Image processing method |
US20040120548A1 (en) * | 2002-12-18 | 2004-06-24 | Qian Richard J. | Method and apparatus for tracking features in a video sequence |
US7194110B2 (en) * | 2002-12-18 | 2007-03-20 | Intel Corporation | Method and apparatus for tracking features in a video sequence |
US20040208114A1 (en) * | 2003-01-17 | 2004-10-21 | Shihong Lao | Image pickup device, image pickup device program and image pickup method |
US20040228528A1 (en) * | 2003-02-12 | 2004-11-18 | Shihong Lao | Image editing apparatus, image editing method and program |
US7313280B2 (en) * | 2003-03-14 | 2007-12-25 | Seiko Epson Corporation | Image processing device, image processing method, and image processing program |
US20040247199A1 (en) * | 2003-03-14 | 2004-12-09 | Seiko Epson Corporation | Image processing device, image processing method, and image processing program |
US7864198B2 (en) | 2004-02-05 | 2011-01-04 | Vodafone Group Plc. | Image processing method, image processing device and mobile communication terminal |
US20070183679A1 (en) * | 2004-02-05 | 2007-08-09 | Vodafone K.K. | Image processing method, image processing device and mobile communication terminal |
US7660482B2 (en) | 2004-06-23 | 2010-02-09 | Seiko Epson Corporation | Method and apparatus for converting a photo to a caricature image |
US20050286799A1 (en) * | 2004-06-23 | 2005-12-29 | Jincheng Huang | Method and apparatus for converting a photo to a caricature image |
US7957588B2 (en) * | 2004-07-07 | 2011-06-07 | Nikon Corporation | Image processor and computer program product |
US20080123999A1 (en) * | 2004-07-07 | 2008-05-29 | Nikon Corporation | Image Processor and Computer Program Product |
US20090037278A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing visual substitution options in media works |
US9230601B2 (en) | 2005-07-01 | 2016-01-05 | Invention Science Fund I, Llc | Media markup system for content alteration in derivative works |
US20070294720A1 (en) * | 2005-07-01 | 2007-12-20 | Searete Llc | Promotional placement in media works |
US20070274519A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20080010083A1 (en) * | 2005-07-01 | 2008-01-10 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Approval technique for media content alteration |
US20080013859A1 (en) * | 2005-07-01 | 2008-01-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US20080052104A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Group content substitution in media works |
US20080052161A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Alteration of promotional content in media works |
US20080059530A1 (en) * | 2005-07-01 | 2008-03-06 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementing group content substitution in media works |
US20080077954A1 (en) * | 2005-07-01 | 2008-03-27 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Promotional placement in media works |
US20080086380A1 (en) * | 2005-07-01 | 2008-04-10 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Alteration of promotional content in media works |
US20070276757A1 (en) * | 2005-07-01 | 2007-11-29 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Approval technique for media content alteration |
US20080180538A1 (en) * | 2005-07-01 | 2008-07-31 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Image anonymization |
US8792673B2 (en) | 2005-07-01 | 2014-07-29 | The Invention Science Fund I, Llc | Modifying restricted images |
US8910033B2 (en) | 2005-07-01 | 2014-12-09 | The Invention Science Fund I, Llc | Implementing group content substitution in media works |
US20070294305A1 (en) * | 2005-07-01 | 2007-12-20 | Searete Llc | Implementing group content substitution in media works |
US9583141B2 (en) | 2005-07-01 | 2017-02-28 | Invention Science Fund I, Llc | Implementing audio substitution options in media works |
US9426387B2 (en) | 2005-07-01 | 2016-08-23 | Invention Science Fund I, Llc | Image anonymization |
US20080313233A1 (en) * | 2005-07-01 | 2008-12-18 | Searete Llc | Implementing audio substitution options in media works |
US20090037243A1 (en) * | 2005-07-01 | 2009-02-05 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Audio substitution options in media works |
US20070266049A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Implementation of media content alteration |
US8732087B2 (en) | 2005-07-01 | 2014-05-20 | The Invention Science Fund I, Llc | Authorization for media content alteration |
US20090150444A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for audio content alteration |
US20090150199A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Visual substitution options in media works |
US20090151004A1 (en) * | 2005-07-01 | 2009-06-11 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for visual content alteration |
US20090204475A1 (en) * | 2005-07-01 | 2009-08-13 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional visual content |
US20070263865A1 (en) * | 2005-07-01 | 2007-11-15 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
US20090210946A1 (en) * | 2005-07-01 | 2009-08-20 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional audio content |
US20090235364A1 (en) * | 2005-07-01 | 2009-09-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for promotional content alteration |
US20090300480A1 (en) * | 2005-07-01 | 2009-12-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media segment alteration with embedded markup identifier |
US20100017885A1 (en) * | 2005-07-01 | 2010-01-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup identifier for alterable promotional segments |
US20070005422A1 (en) * | 2005-07-01 | 2007-01-04 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Techniques for image generation |
US20100154065A1 (en) * | 2005-07-01 | 2010-06-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Media markup for user-activated content alteration |
US9092928B2 (en) * | 2005-07-01 | 2015-07-28 | The Invention Science Fund I, Llc | Implementing group content substitution in media works |
US9065979B2 (en) | 2005-07-01 | 2015-06-23 | The Invention Science Fund I, Llc | Promotional placement in media works |
US20070092153A1 (en) * | 2005-09-21 | 2007-04-26 | Fuji Photo Film Co., Ltd. | Person image correcting apparatus and method |
US7881504B2 (en) * | 2005-09-21 | 2011-02-01 | Fujifilm Corporation | Person image correcting apparatus and method |
US7911656B2 (en) * | 2006-03-13 | 2011-03-22 | Konica Minolta Business Technologies, Inc. | Image processing apparatus, image processing method, and computer readable recording medium storing program |
US20070211913A1 (en) * | 2006-03-13 | 2007-09-13 | Konica Minolta Business Technologies, Inc. | Image processing apparatus, image processing method, and computer readable recording medium storing program |
US20080180539A1 (en) * | 2007-01-31 | 2008-07-31 | Searete Llc, A Limited Liability Corporation | Image anonymization |
US20080226119A1 (en) * | 2007-03-16 | 2008-09-18 | Brant Candelore | Content image search |
US8861898B2 (en) * | 2007-03-16 | 2014-10-14 | Sony Corporation | Content image search |
US20080244755A1 (en) * | 2007-03-30 | 2008-10-02 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization for media content alteration |
US20080270161A1 (en) * | 2007-04-26 | 2008-10-30 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Authorization rights for substitute media content |
US9215512B2 (en) | 2007-04-27 | 2015-12-15 | Invention Science Fund I, Llc | Implementation of media content alteration |
US20080304749A1 (en) * | 2007-06-11 | 2008-12-11 | Sony Corporation | Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program |
US8085996B2 (en) * | 2007-06-11 | 2011-12-27 | Sony Corporation | Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program |
US20090046326A1 (en) * | 2007-08-13 | 2009-02-19 | Seiko Epson Corporation | Image processing device, method of controlling the same, and program |
US8675098B2 (en) * | 2009-03-25 | 2014-03-18 | Sony Corporation | Image processing device, image processing method, and program |
US20100245612A1 (en) * | 2009-03-25 | 2010-09-30 | Takeshi Ohashi | Image processing device, image processing method, and program |
US9131149B2 (en) | 2009-03-25 | 2015-09-08 | Sony Corporation | Information processing device, information processing method, and program |
CN101883230A (en) * | 2010-05-31 | 2010-11-10 | 中山大学 | Digital television actor retrieval method and system |
US20130162780A1 (en) * | 2010-09-22 | 2013-06-27 | Fujifilm Corporation | Stereoscopic imaging device and shading correction method |
US9369693B2 (en) * | 2010-09-22 | 2016-06-14 | Fujifilm Corporation | Stereoscopic imaging device and shading correction method |
US20120105676A1 (en) * | 2010-10-27 | 2012-05-03 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and method of controlling the same |
US8786749B2 (en) * | 2010-10-27 | 2014-07-22 | Samsung Electronics Co., Ltd. | Digital photographing apparatus for displaying an icon corresponding to a subject feature and method of controlling the same |
KR101755598B1 (en) * | 2010-10-27 | 2017-07-07 | 삼성전자주식회사 | Digital photographing apparatus and control method thereof |
US9117275B2 (en) | 2012-03-05 | 2015-08-25 | Panasonic Intellectual Property Corporation Of America | Content processing device, integrated circuit, method, and program |
CN106412285A (en) * | 2016-09-27 | 2017-02-15 | 维沃移动通信有限公司 | Method for entering expression mode and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
US7106887B2 (en) | 2006-09-12 |
US7577310B2 (en) | 2009-08-18 |
US20060251299A1 (en) | 2006-11-09 |
US20020015514A1 (en) | 2002-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7106887B2 (en) | | Image processing method using conditions corresponding to an identified person |
JP4291963B2 (en) | | Image processing method |
JP2007193824A (en) | | Image processing method |
JP4277534B2 (en) | | Image editing apparatus and image editing method |
US8599299B2 (en) | | System and method of processing a digital image for user assessment of an output image product |
EP1467549B1 (en) | | Image processing system associating information with identified subject persons of image |
JP4344925B2 (en) | | Image processing apparatus, image processing method, and printing system |
US7764828B2 (en) | | Method, apparatus, and computer program for processing image |
US20040208114A1 (en) | | Image pickup device, image pickup device program and image pickup method |
US20060257041A1 (en) | | Apparatus, method, and program for image processing |
US7251054B2 (en) | | Method, apparatus and recording medium for image processing |
JP2007087253A (en) | | Image correction method and device |
JPH11275351A (en) | | Image processing method |
CN103997593A (en) | | Image creating device, image creating method and recording medium storing program |
JP4090926B2 (en) | | Image storage method, registered image retrieval method and system, registered image processing method, and program for executing these methods |
JP3913520B2 (en) | | Image processing system and order system |
JP4795988B2 (en) | | Image processing method |
EP1443458A2 (en) | | Image processing method, apparatus and computer program |
JP2004240622A (en) | | Image processing method, image processor and image processing program |
JP4043708B2 (en) | | Image processing method and apparatus |
KR100422470B1 (en) | | Method and apparatus for replacing a model face of moving image |
JP2001218020A (en) | | Picture processing method |
JP2012003324A (en) | | Image processing system, imaging apparatus, image processing program and memory medium |
US20040150850A1 (en) | | Image data processing apparatus, method, storage medium and program |
JP7502711B1 (en) | | PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: FUJIFILM CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FUJIFILM HOLDINGS CORPORATION (FORMERLY FUJI PHOTO FILM CO., LTD.);REEL/FRAME:018904/0001 Effective date: 20070130 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |