JP2013069187A - Image processing system, image processing method, server and program - Google Patents

Image processing system, image processing method, server and program

Info

Publication number
JP2013069187A
Authority
JP
Japan
Prior art keywords
information
face
image data
person
means
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2011208456A
Other languages
Japanese (ja)
Inventor
Hitomi Hyuga
ひとみ 日向
Original Assignee
Dainippon Printing Co Ltd
大日本印刷株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dainippon Printing Co Ltd
Priority to JP2011208456A
Publication of JP2013069187A
Application status is Pending

Abstract

To provide an image processing system and the like capable of automatically performing a blindfold process for privacy protection without spoiling the atmosphere of an image or of the people in it.
The blindfold design setting information 30 associates blindfold design information 35 with each combination of attribute information 31 and sensitivity information 33 of a person in image data. The attribute information 31 is, for example, information such as the gender (Male, Female) of the person in the image data, and the sensitivity information 33 is, for example, smile information based on smile determination. Smile information is obtained from face information in the image data, described later, and is based on the accuracy with which the face is determined to be smiling. In the blindfold design setting information 30, blindfold design information 35 suited to the attribute information 31 and the sensitivity information 33 is associated with each of these combinations.
[Selected figure] FIG. 6

Description

  The present invention relates to an image processing system or the like for performing blindfold processing on an image for the purpose of protecting privacy.

  Conventionally, digital images taken with a digital camera or the like are in some cases uploaded to websites and used for blogs and the like. In such a case, in order to protect the privacy of the persons appearing in the image, the user may apply a blindfold process to faces as necessary so that individuals cannot be identified. Such image processing can be performed manually with image processing software, but the burden on the user is heavy and the finish is often poor.

  For this reason, methods have been proposed in which all faces in an image are detected and the detected faces are automatically blindfolded (for example, Patent Document 1 and Patent Document 2).

JP 2008-034963 A
JP 2010-170265 A

  However, in both the method of Patent Document 1 and that of Patent Document 2, the same blindfold process is applied to all faces in the image regardless of the mood of each person, so the atmosphere of the entire image may be significantly impaired. Further, since the blindfold process is applied to every detected face, it is also applied to faces (persons) that do not actually need it.

  The present invention has been made in view of the above problems, and an object thereof is to provide an image processing system capable of automatically performing a blindfold process for privacy protection without spoiling the atmosphere of an image or of the people in it.

  In order to achieve the above object, the first invention is a server comprising: a storage unit that stores human attribute information and blindfold designs in association with each other; first extraction means for analyzing image data and extracting human face information in the image data; estimation means for estimating the attribute information of a person from the face information in the image data extracted by the first extraction means; selection means for selecting the corresponding blindfold design from the storage unit using the attribute information of the person estimated by the estimation means; and arrangement means for arranging the selected blindfold design on the face extracted by the first extraction means in the image data.
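
  For illustration only, the cooperation of these means can be sketched as a Python class; the class and method names are hypothetical, each means is a stub, and this is not the patented implementation (concrete sketches of the individual steps follow later in the description).

```python
# Structural sketch of the first invention's server. Class and method names
# are hypothetical illustrations; each "means" is a stub to be filled in.

class BlindfoldServer:
    def __init__(self, design_store):
        # Storage unit: maps estimated attribute information to a design.
        self.design_store = design_store

    def extract_faces(self, image_data):
        # First extraction means (stub): return face information in the image.
        return []

    def estimate_attributes(self, face_info):
        # Estimation means (stub): e.g. return ("Male", smile_accuracy).
        return ("Male", 0.0)

    def place_design(self, image_data, face_info, design):
        # Arrangement means (stub): overlay the design on the face area.
        return image_data

    def process(self, image_data):
        # Selection means: look up the design for each face's attributes.
        for face in self.extract_faces(image_data):
            attributes = self.estimate_attributes(face)
            design = self.design_store.get(attributes)
            if design is not None:
                image_data = self.place_design(image_data, face, design)
        return image_data
```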

  The blindfold design information is preferably associated with both the attribute information and the sensitivity information of the person, and the selection means preferably selects the corresponding blindfold design information based on the person's attribute information and sensitivity information.

  It is preferable that the server further includes setting means for setting the targets of the blindfold process, and that the arrangement means arranges the selected blindfold design only on the faces of the persons set by the setting means.

  The storage unit may further store a person's face information and person-specific information corresponding to the face information in association with each other, the server may further comprise second extraction means for extracting the corresponding person-specific information from the storage unit using the face information extracted by the first extraction means, and the setting means may set the targets of the blindfold process for each piece of person-specific information extracted by the second extraction means.

  The arrangement means may adjust the size of the blindfold design according to the size of the face in the target image data and arrange the adjusted blindfold design on the corresponding face.

  According to the first invention, the attribute information of a person is estimated from the face information in the image data, and the blindfold design is selected according to the estimated attributes, so neither the mood of the person in the image nor the atmosphere of the entire image is spoiled. The blindfold process can therefore be applied to the target image with a natural appearance. In particular, if the blindfold design is selected based on both the attributes and the sensitivity of the person to be blindfolded, the process can match the person's mood.

  Further, if there is setting means for setting the targets of the blindfold process, the process can be applied only to arbitrarily chosen faces among those detected in the image. In this case, by associating face information with person-specific information in advance, the faces can be divided into groups based on the person-specific information, and the targets of the blindfold process can be selected by group, which makes the targets easy to set.

  In addition, since the size of the blindfold design is adjusted according to the size of the face in the image, the target face is reliably blindfolded and the blindfolded region is no larger than necessary.

  The second invention is an image processing system in which a server and a terminal are connected via a network and image processing is performed on a specific part of an image. The terminal has means for transmitting image data to the server and display means for displaying the image data in which a blindfold design has been arranged. The server has: a storage unit that stores human attribute information and blindfold designs in association with each other; first extraction means for analyzing the image data sent from the terminal and extracting human face information in the image data; estimation means for estimating the attribute information of a person from the extracted face information; selection means for selecting the corresponding blindfold design from the storage unit using the estimated attribute information; arrangement means for arranging the selected blindfold design on the extracted face in the image data; and means for transmitting the image data in which the blindfold design is arranged to the terminal.

  According to the second invention, an appropriate blindfold design is selected according to the mood of the person in the image data, so the optimal blindfold process for the image can be performed automatically.

  The third invention is an image processing method for performing image processing on a specific part of an image, comprising: a step of analyzing image data and extracting human face information in the image data; a step of estimating the attribute information of a person from the extracted face information; a step of selecting the corresponding blindfold design, using the estimated attribute information, from a storage unit that stores human attribute information and blindfold designs in association with each other; and a step of arranging the selected blindfold design on the extracted face in the image data.

  According to the third invention, the attribute information of a person in an image is estimated, and a blindfold design is selected based on the estimated attribute information. Therefore, a blindfold process suited to the person can be performed automatically.

  The fourth invention is a program that causes a computer to function as the server of the first invention.

  According to the fourth invention, the server of the first invention can be realized by installing the program on a general-purpose computer.

  The present invention can provide an image processing system and the like that can automatically perform a blindfold process for privacy protection without spoiling the atmosphere of an image or of the people in it.

FIG. 1 is a block diagram showing an outline of the image processing system 1.
FIG. 2 is a hardware block diagram of the server 3.
FIG. 3 shows the blindfold design setting information 30.
FIG. 4 shows the personal identification information 40.
FIG. 5 is a hardware block diagram of the terminal 5.
FIG. 6 is a flowchart showing image processing in the image processing system 1.
FIG. 7 is a flowchart showing the processing of step 104 in detail.
FIG. 8 shows the image data 50 and a face information extraction screen.
FIG. 9 shows the information 60 regarding a person.
FIG. 10 shows the blindfold processing setting screen.
FIG. 11 shows the blindfold processing image 80.

  DESCRIPTION OF EXEMPLARY EMBODIMENTS Hereinafter, preferred embodiments of an image processing system and the like according to the invention will be described in detail with reference to the accompanying drawings. In the following description and drawings, components having substantially the same functional configuration are given the same reference numerals, and redundant description is omitted.

  FIG. 1 is a block diagram showing an outline of the image processing system 1. In the image processing system 1, a server 3 and a terminal 5 are connected via a network 4.

  The server 3 stores the image processing program according to the present embodiment, and performs various processes by executing this program. The terminal 5 is a personal computer, for example, and can access the server 3 via a network 4 such as the Internet.

  FIG. 2 is a diagram illustrating a hardware configuration example of the server 3. The server 3 includes a control unit 7, a storage unit 9, a media input / output unit 11, a communication control unit 13, an input unit 15, a display unit 17, a peripheral device I / F unit 19, and the like connected via a bus 21.

The control unit 7 includes a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), and the like.
The CPU loads programs stored in the ROM, the storage unit 9, and the like into a work memory area on the RAM, executes them, drives and controls each device connected via the bus 21, and realizes the processing described below.
The ROM is a non-volatile memory that permanently holds the computer's boot program, programs such as the BIOS, data, and the like.
The RAM is a volatile memory that temporarily stores programs, data, and the like loaded from the storage unit 9, the ROM, a storage medium, and the like, and includes the work area used by the control unit 7 for various processes.

The storage unit 9 is an HDD (hard disk drive) and stores the programs executed by the control unit 7, the data necessary for program execution, the OS (Operating System), and the like. As for programs, a control program corresponding to the OS, application programs, files, and the like are stored.
These program codes are read by the control unit 7 as necessary, transferred to the RAM, and executed by the CPU as the various means.

The media input/output unit 11 is a drive device that inputs and outputs data to and from recording media, for example a floppy (registered trademark) disk drive, a CD drive (-ROM, -R, -RW, etc.), a DVD drive (-ROM, -R, -RW, etc.), or an MO drive.
The communication control unit 13 includes a communication control device, a communication port, and the like, and is a communication interface that mediates communication via the network 4 and controls communication with other computers.

The input unit 15 is used to input data and includes, for example, a keyboard, a pointing device such as a mouse, and a numeric keypad. Operation instructions, data input, and the like can be given to the computer via the input unit 15.
The display unit 17 includes a display device such as a CRT (Cathode Ray Tube) monitor or a liquid crystal panel, and a logic circuit (such as a video adapter) that realizes the video function in cooperation with the display device.
The peripheral device I / F unit 19 is a USB (Universal Serial Bus) port or the like for connecting peripheral devices.
The bus 21 is a path that mediates transmission / reception of control signals, data signals, and the like between the devices.

  FIG. 3A is a diagram illustrating an example of the blindfold design setting information 30 stored in the storage unit 9. The blindfold design setting information 30 associates the blindfold design information 35 with the attribute information 31 and sensitivity information 33 of a person in image data.

  The attribute information 31 is, for example, information such as the gender (Male, Female) of the person in the image data, and the sensitivity information 33 is, for example, smile information based on smile determination. Smile information is obtained from face information in the image data, described later, and is information such as the accuracy with which the face is determined to be smiling. That is, the higher the smile accuracy, the more "positive" the person's mood is determined to be, and the lower the smile accuracy, the more "negative" it is determined to be. In the blindfold design setting information 30, blindfold design information 35 suited to the attribute information 31 and the sensitivity information 33 is associated with each of these combinations.

  FIG. 3B is a diagram illustrating an example of the blindfold design information 35. As described above, in the blindfold design setting information 30, appropriate blindfold design information 35 is associated with each piece of attribute information 31 and sensitivity information 33; in the example shown in FIG. 3B, the upper row is set for males and the lower row for females. Here, blindfold design information is image information inserted over a face in order to hide the target face in the image data.

  For example, if it is determined that the person in the image data is male and the smile accuracy is 20%, "B.jpg" (the second blindfold design from the left in FIG. 3B) is selected as the blindfold design information 35. Note that blindfold design information may also be associated with a "default" standard setting for cases that do not correspond to any preset combination of the attribute information 31 and the sensitivity information 33.
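
  A minimal sketch of this selection follows; only the "male, 20% smile accuracy, B.jpg" entry comes from the example above, while the other entries, the band boundaries, and the default file name are illustrative assumptions.

```python
# Sketch of blindfold design selection from attribute and sensitivity
# information. Only the ("Male", "0-25%") -> "B.jpg" entry reflects the
# example above; everything else is an illustrative assumption.

DESIGNS = {
    ("Male", "0-25%"):   "B.jpg",
    ("Male", "76-100%"): "D.jpg",    # hypothetical
    ("Female", "0-25%"): "F.jpg",    # hypothetical
}
DEFAULT_DESIGN = "default.jpg"       # standard setting for unmatched cases

def smile_band(accuracy):
    """Quantize smile accuracy (0-100) into the bands used as table keys."""
    if accuracy <= 25:
        return "0-25%"
    if accuracy <= 50:
        return "26-50%"
    if accuracy <= 75:
        return "51-75%"
    return "76-100%"

def select_design(gender, smile_accuracy):
    return DESIGNS.get((gender, smile_band(smile_accuracy)), DEFAULT_DESIGN)

print(select_design("Male", 20))    # -> B.jpg
print(select_design("Female", 80))  # -> default.jpg (no matching entry)
```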

  FIG. 4 is a diagram illustrating an example of the personal identification information 40 stored in the storage unit 9. The personal identification information 40 is stored in the storage unit 9 in advance as necessary, and associates human face information 41 with person-specific information 43.

  The face information 41 is personal face information registered in advance. That is, image data of a person's face taken from the front or near the front is registered in the storage unit 9, and the registered face image is analyzed for the eyes, nose, mouth, and other features, so that feature points of the person's face are extracted and stored.

  The person-specific information 43 is information unique to the person; for example, the person's name and group information such as the person's relationship to the user are stored. When a face whose features are close to registered face information is extracted from certain image data, the person-specific information 43 associated with that person can be retrieved.
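
  A minimal sketch of this lookup, assuming the stored feature points are reduced to plain numeric vectors compared by Euclidean distance; the vectors, names, and the match threshold are illustrative assumptions.

```python
import math

# Sketch of the personal identification information 40: registered face
# features associated with person-specific information. The feature vectors,
# names, and the match threshold are illustrative assumptions.

PERSONAL_IDENTIFICATION_INFO = [
    {"features": [0.12, 0.80, 0.33], "name": "Taro",   "group": "friend"},
    {"features": [0.90, 0.10, 0.55], "name": "Hanako", "group": "person"},
]
MATCH_THRESHOLD = 0.25  # assumed maximum distance for a match

def lookup_person(features):
    """Return the person-specific info of the closest registered face, if any."""
    best = min(PERSONAL_IDENTIFICATION_INFO,
               key=lambda e: math.dist(features, e["features"]))
    if math.dist(features, best["features"]) <= MATCH_THRESHOLD:
        return {"name": best["name"], "group": best["group"]}
    return {"name": None, "group": "unregistered"}

print(lookup_person([0.13, 0.79, 0.35]))  # close to Taro -> friend group
```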

  FIG. 5 is a diagram illustrating a hardware configuration example of the terminal 5. The terminal 5 includes a control unit 8, a storage unit 10, a media input/output unit 12, a communication control unit 14, an input unit 16, a display unit 18, a peripheral device I/F unit 20, and the like connected via a bus 22. These have the same configurations as the control unit 7, storage unit 9, media input/output unit 11, communication control unit 13, input unit 15, display unit 17, peripheral device I/F unit 19, and bus 21 of the server 3, respectively, and redundant description is omitted.

  Next, image processing in the image processing system 1 will be described. FIG. 6 is a flowchart showing image processing.

  First, the control unit 8 of the terminal 5 transmits image data to the server 3 (step 100). The control unit 7 of the server 3 acquires the image data from the terminal 5, analyzes it, and extracts the face information in the image data (step 101).

  FIG. 8A is a diagram illustrating an example of the acquired image data 50, and FIG. 8B is a conceptual diagram illustrating the image data 50 during analysis. The control unit 7 extracts face information for all the faces in the image data 50. For example, the positions of each person's eyes, nose, mouth, and so on are acquired, and the face area of each part recognized as a face is estimated. In the example shown in FIG. 8B, three pieces of face information with ID = 0, 1, and 2 are extracted, together with the face areas 51a, 51b, and 51c of each face.
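
  The patent does not prescribe a particular detector; as one common way to obtain such face areas, here is a sketch using OpenCV's bundled frontal-face Haar cascade (the image file name is hypothetical).

```python
import cv2

# Sketch of step 101: extract the face areas of all faces in the image data.
# OpenCV's bundled Haar cascade is used as one possible detector; the patent
# itself does not prescribe a specific detection algorithm.

img = cv2.imread("image_data_50.jpg")  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection corresponds to a face area such as 51a, 51b, 51c in FIG. 8B.
for face_id, (x, y, w, h) in enumerate(faces):
    print(f"ID={face_id}: face area at ({x}, {y}), size {w}x{h}")
```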

  Next, the control unit 7 analyzes the face information and estimates information about the person for each piece of face information (step 102).

  FIG. 9 is a diagram showing the information 60 regarding a person estimated from face information. The information 60 regarding a person includes, for example, attribute information of the person in the image data such as gender and age group, sensitivity information indicating the degree of smiling, person-specific information, and the like. For example, for the face information of ID = 2 (the rightmost face in the image data of FIG. 8A), the figure shows an example in which the gender is estimated to be female, the age group to be the 20s, the expression to be a smile, and the person-specific information to be unregistered. The person-specific information will be described later.
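
  The information 60 can be pictured as one record per face ID; a minimal sketch with assumed field names and an assumed smile score.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the information 60 regarding a person, one record per face ID.
# The field names are illustrative assumptions.

@dataclass
class PersonInfo:
    face_id: int
    gender: str                        # attribute information
    age_group: str                     # attribute information
    smile_accuracy: float              # sensitivity information (0-100)
    person_name: Optional[str] = None  # person-specific info; None = unregistered

# The ID = 2 example from FIG. 9: female, 20s, smiling (score assumed),
# person-specific information unregistered.
info_id2 = PersonInfo(face_id=2, gender="Female", age_group="20s",
                      smile_accuracy=85.0)
```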

  As methods for extracting face information from the image data 50 and estimating face attributes and sensitivity (smile determination), known methods such as those of JP 2009-294925 A, JP 2005-165447 A, and JP 2007-336124 A may be used, for example as follows.

  First, a face area is detected from the face image by a face area detection unit, and face feature information is extracted by a face feature extraction unit. In addition, personal face feature information covering a wide range of age groups for each gender is created in advance and stored in a face feature holding unit together with age and gender information. The face feature information extracted by the face feature extraction unit is collated with the personal face feature information in the holding unit to obtain a similarity, and the age and gender of the person are determined from the obtained similarity and the age and gender attached to the matched information.
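
  In outline, that collation is a nearest-neighbor lookup over reference features labeled with age and gender; a minimal sketch follows, in which the reference vectors and labels are assumptions.

```python
import math

# Sketch of age/gender estimation by collating extracted face features with
# pre-stored personal face feature information labeled with age and gender.
# The reference vectors and labels are illustrative assumptions.

FACE_FEATURE_HOLDING_UNIT = [
    {"features": [0.2, 0.7], "gender": "Male",   "age_group": "30s"},
    {"features": [0.8, 0.3], "gender": "Female", "age_group": "20s"},
]

def estimate_age_gender(features):
    """Return the labels of the most similar stored personal face features."""
    best = min(FACE_FEATURE_HOLDING_UNIT,
               key=lambda ref: math.dist(features, ref["features"]))
    return best["gender"], best["age_group"]

print(estimate_age_gender([0.75, 0.35]))  # -> ('Female', '20s')
```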

  Smile recognition can be quantified based on how the mouth curves, how far the corners of the mouth rise, the size of the eyes, the degree of wrinkles, and the like. By comparing the quantified value with a preset threshold, it can be determined to what degree the subject is smiling.
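
  As one possible quantifier in the same spirit, the following sketch counts OpenCV smile-cascade detections inside a face region and compares the normalized score with a threshold; the normalization and the threshold value are assumptions, not the method of the cited documents.

```python
import cv2

# Sketch of smile determination: quantify smile cues inside a detected face
# region and compare the value with a preset threshold. The normalization
# and threshold below are illustrative assumptions.

SMILE_THRESHOLD = 0.5  # assumed cutoff between "negative" and "positive"

def smile_accuracy(gray_face):
    """Return a crude smile score in [0, 1] for a grayscale face region."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")
    # A high minNeighbors keeps only confident smile detections.
    hits = cascade.detectMultiScale(gray_face, scaleFactor=1.7, minNeighbors=20)
    return min(len(hits) / 3.0, 1.0)  # crude normalization (assumption)

def mood(score):
    return "positive" if score >= SMILE_THRESHOLD else "negative"
```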

  Further, as means for extracting face information from the image data 50 and estimating the person-specific information of a person in the image from pre-registered person-specific information, known methods such as that of Non-Patent Document 1 ("A face recognition system using face images", IEICE Technical Report PRMU 97-50, June 1997, Yamaguchi, Fukui, Maeda) and JP 2003-141542 A may be used.

  For example, as described in JP 2003-141542 A, in order to maintain a certain matching performance and security level even when similar face patterns exist in the face matching dictionary, similar face patterns registered in the dictionary are grouped into similarity groups, and face patterns belonging to a similarity group are verified by a special process different from the normal verification process, thereby improving the accuracy of face recognition.

  Next, the control unit 7 extracts the blindfold design information 35 corresponding to the estimated information 60 on each person from the blindfold design setting information 30 (FIG. 3A) in the storage unit 9 (step 103). If the attributes and sensitivity of a person in the image cannot be estimated, a preset standard blindfold design is selected.

  Next, the control unit 7 sets the faces in the image data into which the blindfold design is to be inserted (step 104). The targets of the blindfold process may be selected from the terminal 5 face by face, but the selection may also be performed, for example, as follows.

  FIG. 7 is a flowchart showing the details of step 104. First, the control unit 7 extracts the person-specific information corresponding to each piece of face information in the image data from the storage unit 9 (step 200). Next, the blindfold processing targets are grouped based on the extracted person-specific information (step 201). For example, when there are multiple persons in the image data and the group in the person-specific information of each face can be classified into "person", "friend", and "unregistered", the faces are divided into a "person" group, a "friend" group, and an "unregistered" group.
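
  The grouping of step 201 amounts to bucketing face IDs by the group label in their person-specific information; a minimal sketch with assumed labels:

```python
from collections import defaultdict

# Sketch of step 201: group the blindfold processing targets by the group
# label in each face's person-specific information.

face_groups = {0: "person", 1: "friend", 2: "unregistered"}  # face ID -> group

groups = defaultdict(list)
for face_id, group in face_groups.items():
    groups[group].append(face_id)

print(dict(groups))  # {'person': [0], 'friend': [1], 'unregistered': [2]}
```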

  Next, the control unit 7 creates a selection screen for selecting the blindfold processing targets and displays it on the terminal 5 (step 202).

  FIG. 10 is a diagram showing the blindfold processing setting screen 70. The blindfold processing setting screen 70 displays the image data together with a blindfold processing selection unit 71, a setting button 73, and a person-specific information display unit 75. The blindfold processing selection unit 71 allows the blindfold processing targets to be selected based on the person-specific information of each person grouped as described above. For example, in the example shown in FIG. 10, all the persons in the image are divided into "person", "friend", and "unregistered" groups, and the blindfold processing targets are selected by group.

  In addition, the person-specific information of each person is displayed on the person-specific information display unit 75 in correspondence with each ID in the image. It is therefore also possible to newly input person-specific information from this screen and add it to the personal identification information 40 in the storage unit 9. If the person-specific information has been estimated incorrectly, entering the correct information here makes it possible to correct it or to perform more detailed grouping (for example, a "friend A group", a "friend B group", and so on).

  When blindfold processing targets are selected and the setting button 73 is pressed, the selection is transmitted from the terminal 5 to the server 3 (step 203 in FIG. 7). On receiving the selection from the terminal 5, the control unit 7 sets the face information in the image data into which the blindfold design information is to be inserted (step 204).

  Next, the control unit 7 adjusts the size of the selected blindfold design information according to the size of the target face in the image data (step 105). The size adjustment may stretch or shrink the design vertically or horizontally as necessary, and the design may be rotated according to the orientation of the face. That is, the size of the blindfold design information is adjusted to fit the face area of the target face in the image data.

  Next, the control unit 7 inserts the set blindfold design information over the set face information and transmits the image with the blindfold design information inserted to the terminal 5 (step 106). The terminal 5 displays the image on the display unit 18 (step 107). In this way, the image with the blindfold design information inserted onto the image data is delivered to the user.
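
  Steps 105 and 106 can be sketched with Pillow: resize (and, if needed, rotate) the design to the face area, then composite it in place. The file names and the source of the rotation angle are assumptions.

```python
from PIL import Image

# Sketch of steps 105-106: fit the selected blindfold design information to
# the target face area and composite it onto the image data. File names and
# the rotation angle are illustrative assumptions.

def apply_blindfold(image_path, design_path, face_box, angle=0.0):
    """face_box = (x, y, w, h) of the target face area; angle in degrees."""
    img = Image.open(image_path).convert("RGBA")
    design = Image.open(design_path).convert("RGBA")

    x, y, w, h = face_box
    design = design.resize((w, h))                  # fit the face area
    if angle:
        design = design.rotate(angle, expand=True)  # follow face orientation

    # Paste using the design's own alpha channel as the mask.
    img.paste(design, (x, y), design)
    return img.convert("RGB")

result = apply_blindfold("image_data_50.jpg", "B.jpg", (120, 80, 60, 60))
result.save("blindfold_processing_image_80.jpg")
```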

  FIG. 11 is a diagram illustrating an example of the blindfold processing image 80. In the blindfold processing image 80, blindfold design information is inserted on the faces selected as targets of the blindfold process (in the figure, the middle person = friend and the right person = unregistered). Since the blindfold design information is selected based on the attributes and sensitivity in the original image data, the blindfold process has little influence on the atmosphere of the image data. In addition, since the size of the blindfold design information is adjusted according to the face area in the image data, each face is reliably hidden while no unnecessary parts are hidden.

  The user can also manually correct the automatically set blindfold design information via the terminal 5. For example, the automatically set design may be changed to another design, and its size and position may be adjusted.

  As described above, according to the present invention, the user can have blindfold design information inserted into an image simply by transmitting the target image data to the server, without manually performing the troublesome blindfold process. The burden on the user can therefore be reduced.

  In particular, since the person's attribute information and sensitivity information are estimated from the face information in the image data, the system does not merely hide the face but can automatically set a blindfold design that matches the person (expression) in the image data. Therefore, more natural blindfold design information corresponding to the person in the image data can be inserted automatically.

  In addition, since the person-specific information of each person can be estimated from the face information and the blindfold processing targets can be grouped and selected accordingly, there is no need to select targets one by one even when many people appear in the image data.

  In addition, since the size of the blindfold design information is adjusted according to the size of the face area, each face can be reliably hidden without excessively hiding other parts, and the blindfold process can be performed without spoiling the atmosphere of the image.

  Embodiments of the present invention have been described above with reference to the accompanying drawings, but the technical scope of the present invention is not limited to the embodiments described above. It is obvious to those skilled in the art that various changes and modifications can be conceived within the scope of the technical idea described in the claims, and it is understood that these naturally belong to the technical scope of the present invention.

  For example, as the information regarding a person that is associated with the blindfold design information, different blindfold design information may be associated not only with gender and smile determination but also with age and with each group.

DESCRIPTION OF SYMBOLS
1: Image processing system
3: Server
4: Network
5: Terminal
30: Blindfold design setting information
31: Attribute information
33: Sensitivity information
35: Blindfold design information
40: Personal identification information
41: Face information
43: Person-specific information
50: Image data
51a, 51b, 51c: Face area
60: Information regarding a person
70: Blindfold processing setting screen
71: Blindfold processing selection unit
73: Setting button
75: Person-specific information display unit
80: Blindfold processing image

Claims (8)

  1. A server comprising:
    a storage unit for storing human attribute information and blindfold design information in association with each other;
    first extraction means for analyzing image data and extracting human face information in the image data;
    estimation means for estimating the attribute information of a person from the face information in the image data extracted by the first extraction means;
    selection means for selecting the corresponding blindfold design information from the storage unit using the attribute information of the person estimated by the estimation means; and
    arrangement means for arranging the selected blindfold design information on the face extracted by the first extraction means in the image data.
  2. The server according to claim 1, wherein the blindfold design information is associated with the attribute information and sensitivity information of the person, and the selection means selects the corresponding blindfold design information according to the attribute information and sensitivity information of the person.
  3. The server according to claim 1, further comprising setting means for setting targets of the blindfold process, wherein the arrangement means arranges the selected blindfold design information only on the faces of the persons set by the setting means.
  4. The server according to claim 3, wherein the storage unit further stores a person's face information and person-specific information corresponding to the face information in association with each other, the server further comprises second extraction means for extracting the corresponding person-specific information from the storage unit using the face information extracted by the first extraction means, and the setting means sets the targets of the blindfold process for each piece of person-specific information extracted by the second extraction means.
  5. The server according to any one of claims 1 to 4, wherein the arrangement means adjusts the size of the blindfold design information according to the size of the face in the target image data and arranges the adjusted blindfold design information on the corresponding face.
  6. An image processing system in which a server and a terminal are connected via a network and image processing is performed on a specific part of an image, wherein
    the terminal includes means for transmitting image data to the server;
    the server includes:
    a storage unit for storing human attribute information and blindfold design information in association with each other;
    first extraction means for analyzing the image data sent from the terminal and extracting human face information in the image data;
    estimation means for estimating the attribute information of a person from the face information in the image data extracted by the first extraction means;
    selection means for selecting the corresponding blindfold design information from the storage unit using the attribute information of the person estimated by the estimation means;
    arrangement means for arranging the selected blindfold design information on the face extracted by the first extraction means in the image data; and
    means for transmitting the image data in which the blindfold design information is arranged to the terminal; and
    the terminal includes display means for displaying the image data in which the blindfold design information is arranged.
  7. An image processing method for performing image processing on a specific part of an image, comprising the steps of:
    analyzing image data and extracting human face information in the image data;
    estimating the attribute information of a person from the extracted face information;
    selecting the corresponding blindfold design information, using the estimated attribute information, from a storage unit that stores human attribute information and blindfold design information in association with each other; and
    arranging the selected blindfold design information on the extracted face in the image data.
  8.   A program for causing a computer to function as the server according to claim 1.
JP2011208456A 2011-09-26 2011-09-26 Image processing system, image processing method, server and program Pending JP2013069187A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011208456A JP2013069187A (en) 2011-09-26 2011-09-26 Image processing system, image processing method, server and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011208456A JP2013069187A (en) 2011-09-26 2011-09-26 Image processing system, image processing method, server and program

Publications (1)

Publication Number Publication Date
JP2013069187A true JP2013069187A (en) 2013-04-18

Family

ID=48474807

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011208456A Pending JP2013069187A (en) 2011-09-26 2011-09-26 Image processing system, image processing method, server and program

Country Status (1)

Country Link
JP (1) JP2013069187A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0668224A (en) * 1992-08-24 1994-03-11 Casio Comput Co Ltd Montage preparing device
JP2002049912A (en) * 2000-08-04 2002-02-15 Nri & Ncc Co Ltd System for acquiring person image
JP2005049939A (en) * 2003-07-29 2005-02-24 Casio Comput Co Ltd Image outputting device, image outputting method, image output processing program, image distributing server, and image distribution processing program
JP2008034963A (en) * 2006-07-26 2008-02-14 Fujifilm Corp Imaging apparatus and method therefor
JP2008236141A (en) * 2007-03-19 2008-10-02 Sony Corp Image processing device and image processing method
JP2009033738A (en) * 2007-07-04 2009-02-12 Sanyo Electric Co Ltd Imaging apparatus, data structure of image file
JP2010086178A (en) * 2008-09-30 2010-04-15 Fujifilm Corp Image synthesis device and control method thereof
JP2011516965A (en) * 2008-03-31 2011-05-26 グーグル インコーポレイテッド Automatic facial detection and identity masking in images and its applications

Similar Documents

Publication Publication Date Title
US7158657B2 (en) Face image recording system
EP2339498B1 (en) Biometric authentication method and biometric authentication apparatus
CN1320485C (en) Image searching device and key word providing method therefor
KR101326221B1 (en) Facial feature detection
US20120083294A1 (en) Integrated image detection and contextual commands
KR101346539B1 (en) Organizing digital images by correlating faces
JP2007272896A (en) Digital image processing method and device for performing adapted context-aided human classification
TW201003539A (en) Method, apparatus and computer program product for providing gesture analysis
US9977952B2 (en) Organizing images by correlating faces
CN101178768B (en) Image processing apparatus, image processing method and person identification apparatus,
US8655029B2 (en) Hash-based face recognition system
US8242881B2 (en) Method of adjusting reference information for biometric authentication and apparatus
DE112011101927T5 (en) Semantic parsing of objects in videos
DE102012216191A1 (en) authentication system
EP3254232A2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
WO2005122093A1 (en) Image recognition device, image recognition method, and program for causing computer to execute the method
TWI496092B (en) Face identifying device, character image searching system, program for controlling face identifying device, computer readable recording medium, and method for controlling face identifying device
US8379937B1 (en) Method and system for robust human ethnicity recognition using image feature-based probabilistic graphical models
US9489574B2 (en) Apparatus and method for enhancing user recognition
CN102073807A (en) Information processing apparatus, information processing method, and program
US20140036099A1 (en) Automated Scanning
US8929595B2 (en) Dictionary creation using image similarity
US20080304749A1 (en) Image processing apparatus, image display apparatus, imaging apparatus, method for image processing therefor, and program
Abdelrahman et al. Stay cool! understanding thermal attacks on mobile-based user authentication
US20120294496A1 (en) Face recognition apparatus, control method thereof, and face recognition method

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140718

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20150414

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20150421

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20150818