WO2017078793A1 - Automatic image product creation for user accounts comprising large number of images - Google Patents
- Publication number
- WO2017078793A1 (PCT/US2016/035436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- chunk
- images
- face images
- computer
- Prior art date
Links
- 238000000034 method Methods 0.000 claims abstract description 84
- 238000013461 design Methods 0.000 claims abstract description 28
- 230000006870 function Effects 0.000 claims description 57
- 239000011159 matrix material Substances 0.000 claims description 42
- 238000005315 distribution function Methods 0.000 claims description 28
- 238000012549 training Methods 0.000 claims description 26
- 239000013598 vector Substances 0.000 claims description 19
- 238000012360 testing method Methods 0.000 claims description 15
- 238000009826 distribution Methods 0.000 description 9
- 238000012706 support-vector machine Methods 0.000 description 8
- 238000005516 engineering process Methods 0.000 description 7
- 238000003384 imaging method Methods 0.000 description 6
- 238000001514 detection method Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 238000007639 printing Methods 0.000 description 5
- 238000013500 data storage Methods 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000000875 corresponding effect Effects 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000000491 multivariate analysis Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000007619 statistical method Methods 0.000 description 1
- 238000000859 sublimation Methods 0.000 description 1
- 230000008022 sublimation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/772—Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/30—Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
Definitions
- This application relates to technologies for automatically creating image-based products, and more specifically, to creating image-based products that best present people's faces.
- a photo book can include a cover page and a plurality of image pages each containing one or more images.
- Designing a photobook can include many iterative steps such as selecting suitable images, selecting a layout, selecting images for each page, selecting backgrounds, picture frames, and an overall style, adding text, choosing text fonts, and rearranging the pages, images, and text, which can be quite time-consuming. It is desirable to provide methods that allow users to design and produce image albums in a time-efficient manner.
- Common face detection methods include: knowledge-based methods; feature-invariant approaches, including the identification of facial features, texture and skin color; template matching methods, both fixed and deformable; and appearance based methods.
- face images of each individual can be categorized into a group regardless of whether the identity of the individual is known or not. For example, if two individuals, Person A and Person B, are detected in ten images, each of the images can be categorized or tagged as one of four types: A only; B only; A and B; or neither A nor B.
- the tagging of face images requires training based on face images of known persons (or face models), for example, the face images of family members or friends of a user who uploaded the images.
- the methods are based on the statistics of the face images to be categorized, and do not require prior retraining with known people's faces or supervision during the grouping of face images. Acceptance criteria in the methods are based on probabilistic description and can be adjusted.
- the disclosed methods are applicable to different similarity functions, and are compatible with different types of face analyses and face descriptors.
- the present application discloses robust and accurate methods that can automatically sort and group images in a user account that includes a large number of photos comprising a large number of face images that are difficult to process using conventional technologies.
- the disclosed method is scalable with the size of the user account.
- the disclosed methods of grouping a large number of face images are not limited by the number of photos per account and are applicable to the increasingly larger user accounts of the future.
- the disclosed method is compatible with different face grouping techniques including methods based on training faces of known persons.
- the disclosed methods are cognitive and flexible because they can adapt to properties of the large user account.
- the disclosed methods can accurately determine which faces are relevant to the owner of the user account and which faces may be only those of acquaintances or strangers.
- the disclosed methods allow learning and knowledge accumulation even on less frequently appearing friends and family members, so that they can be grouped, recognized, and used in image product creation.
- the disclosed methods also effectively remove faces of strangers and acquaintances, which significantly increases computational efficiency in face grouping.
- the present invention relates to a computer-implemented method of grouping faces in a large user account for creating an image product.
- the method includes: acquiring face images from an image album in a user's account by a computer processor; adding the face images obtained from the image album into a first chunk;
- comparing the chunk size of the first chunk with a maximum chunk value for an optimal chunk size range by the computer processor; if the chunk size of the first chunk is smaller than the maximum chunk value, keeping the face images from the image album in the first chunk; if the chunk size of the first chunk is larger than the maximum chunk value, automatically separating, by the computer processor, the face images from the image album into a first portion and one or more second portions; keeping the first portion in the first chunk to keep the current chunk size below the maximum chunk value; automatically moving the one or more second portions of face images to one or more subsequent chunks by the computer processor; automatically grouping face images in the first chunk by the computer processor to form face groups; assigning at least some of the face groups in the first chunk to known face models associated with the user account; and creating a design for an image-based product based at least in part on the face images in the first chunk associated with the face models.
- Implementations of the system may include one or more of the following.
- the computer-implemented method can further include setting up new face models by the computer processor for at least some of the face groups that cannot be assigned to existing face models, wherein the design for an image-based product can be created based on the face images associated with the known face models and the new face models.
- the computer- implemented method can further include moving the ungrouped face images in the first chunk to one or more subsequent chunks that have not been processed with face grouping.
- the computer-implemented method can further include discarding ungrouped face images that have been moved to subsequent chunks for more than a predetermined number of times.
- the computer-implemented method can further include repeating steps from acquiring face images in the image album or additional image albums to automatically grouping face images, to group images in a second chunk subsequent to the first chunk; assigning at least some of the face groups in the second chunk to known face models associated with the user account; and creating the design for the image-based product based at least in part on the face images in the first chunk and the second chunk associated with the face models.
- the step of automatically grouping face images in the first chunk can include: calculating similarity functions between pairs of face images in the first chunk by the computer processor; joining face images that have values of the similarity functions above a predetermined threshold into a hypothetical face group, wherein the face images in the hypothetical face group hypothetically belong to the same person.
- the computer-implemented method can further include rejecting the hypothetical face group as a true face group if a percentage of the associated similarity functions being true is below a threshold.
- the step of conducting non-negative matrix factorization can include: forming a non-negative matrix using values of similarity functions between all different pairs of face images in the hypothetical face group, wherein the non-negative matrix factorization is conducted over the non-negative matrix.
- the computer-implemented method can further include joining two true face groups to form a joint face group; conducting non-negative matrix factorization on values of similarity functions in the joint face group; and merging the two true face groups if a percentage of the associated similarity functions being true is above a threshold in the joint face group.
- the step of automatically grouping face images in the first chunk can include: receiving an initial set of n* face groups in the face images in the first chunk, wherein n* is a positive integer greater than 1; training classifiers between pairs of face groups in the initial set of face groups using image-product statistics; classifying the plurality of face images by n*(n*-1)/2 classifiers to output binary vectors for the face images by the computer processor; calculating a value for an improved similarity function using the binary vectors for each pair of the face images; and grouping the face images in the first chunk into modified face groups based on values of the improved similarity functions by the computer processor.
- the computer-implemented method can further include comparing a difference between the modified face groups and the initial face groups to a threshold value, wherein the image product is created based at least in part on the modified face groups if the difference is smaller than the threshold value.
- There can be an integer m number of face images in the plurality of face images, wherein the step of classifying the plurality of face images by n*(n*-1)/2 classifiers outputs m binary vectors.
- the face images can be grouped into modified face groups using non-negative matrix factorization based on values of the improved similarity functions.
- the step of assigning at least some of the face groups in the first chunk to known face models can include: storing training faces associated with the known face models of known persons in a computer storage; joining the face images in the first chunk with a group of training faces associated with the known face models; calculating similarity functions between pairs of the face images or the training faces in the joint group by a computer processor; conducting non-negative matrix factorization on values of the similarity functions in the joint face group to test truthfulness of the joint face group; and identifying the face images in the first chunk that belong to the known face models if a percentage of the associated similarity functions being true is above a threshold based on the non-negative matrix factorization.
- the computer-implemented method can further include merging the face images with the training faces of the known face model to form a new set of training faces for the known face model.
- the step of conducting non-negative matrix factorization can include: forming a non-negative matrix using values of similarity functions between all different pairs of the face images and the training faces in the joint face group, wherein the non-negative matrix factorization is conducted over the non-negative matrix.
- the similarity functions in the joint face group can be described in a similarity distribution function, wherein the step of non-negative matrix factorization outputs a True similarity distribution function and a False similarity distribution function.
- the step of identifying can include comparing the similarity distribution function to the True similarity distribution function and the False similarity distribution function.
- Figure 1 is a block diagram for a network-based system for producing personalized image products, image designs, or image projects compatible with the present invention.
- Figure 2 is a flow diagram for categorizing face images that belong to different persons for image product creation in accordance with the present invention.
- Figure 3 is a flow diagram for identifying face images in accordance with the present invention.
- Figure 4 is a flow diagram for identifying face images in accordance with the present invention.
- Figure 5 is a flow diagram for grouping face images for image product creation in user accounts comprising a large number of photos in accordance with the present invention.
- a network-based imaging service system 10 can enable users 70, 71 to organize and share images via a wired network or a wireless network 51.
- the network-based imaging service system 10 is operated by an image service provider such as Shutterfly, Inc.
- the network-based imaging service system 10 can also fulfill image products ordered by the users 70, 71.
- the network-based imaging service system 10 includes a data center 30, one or more product fulfillment centers 40, 41, and a computer network 80 that facilitates the communications between the data center 30 and the product fulfillment centers 40, 41.
- the data center 30 includes one or more servers 32 for communicating with the users 70, 71, a data storage 34 for storing user data, image and design data, and product information, and computer processor(s) 36 for rendering images and product designs, organizing images, and processing orders.
- the user data can include account information, discount information, and order information associated with the user.
- a website can be powered by the servers 32 and can be accessed by the user 70 using a computer device 60 via the Internet 50, or by the user 71 using a wireless device 61 via the wireless network 51.
- the servers 32 can also support a mobile application to be downloaded onto wireless devices 61.
- the network-based imaging service system 10 can provide products that require user participation in design and personalization. Examples of these products include the personalized image products that incorporate photos provided by the users, the image service provider, or other sources.
- the term "personalized” refers to information that is specific to the recipient, the user, the gift product, and the occasion, which can include personalized content, personalized text messages, personalized images, and personalized designs that can be incorporated in the image products.
- the content of personalization can be provided by a user or selected by the user from a library of content provided by the service provider.
- the term "personalized information" can also be referred to as "individualized information" or "customized information".
- Personalized image products can include users' photos, personalized text, personalized designs, and content licensed from a third party.
- Examples of personalized image products may include photobooks, personalized greeting cards, photo stationery, photo or image prints, photo posters, photo banners, photo playing cards, photo T-shirts, photo mugs, photo aprons, photo magnets, photo mouse pads, photo phone cases, cases for tablet computers, photo key-chains, photo collectors, photo coasters, or other types of photo gifts or novelty items.
- the term photobook generally refers to a bound multi-page product that includes at least one image on a book page.
- Photobooks can include image albums, scrapbooks, bound photo calendars, or photo snap books, etc.
- An image product can include a single page or multiple pages. Each page can include one or more images, text, and design elements. Some of the images may be laid out in an image collage.
- the user 70 or his/her family may own multiple cameras 62, 63.
- the user 70 transfers images from cameras 62, 63 to the computer device 60.
- the user 70 can edit, organize images from the cameras 62, 63 on the computer device 60.
- the computer device 60 can take many different forms: a personal computer, a laptop or tablet computer, a mobile phone, etc.
- the camera 62 can include an image capture device integrated in or connected with the computer device 60.
- laptop computers or computer monitors can include a built-in camera for picture taking.
- the user 70 can also print pictures using a printer 65 and make image products based on the images from the cameras 62, 63.
- Examples for the cameras 62, 63 include a digital camera, a camera phone, a video camera capable of taking motion and still images, a laptop computer, or a tablet computer.
- Images in the cameras 62, 63 or stored on the computer device 60 and the wireless device 61 can be uploaded to the server 32 to allow the user 70 to organize and render images at the website, share the images with others, and design or order image products using the images from the cameras 62, 63.
- the wireless device 61 can include a mobile phone, a tablet computer, or a laptop computer, etc.
- the wireless device 61 can include a built-in camera (e.g. in the case of a camera phone).
- the pictures taken by the user 71 using the wireless device 61 can be uploaded to the data center 30. If users 70, 71 are members of a family or are associated in a group,
- the images from the cameras 62, 63 and the mobile device 61 can be grouped together to be incorporated into an image product such as a photobook, or used in a blog page for an event such as a soccer game.
- the users 70, 71 can order a physical product based on the design of the image product, which can be manufactured by the printing and finishing facilities 40 and 41.
- a recipient receives the physical product with messages from the users at locations 80, 85.
- the recipient can also receive a digital version of the design of the image product over the Internet 50 and/or a wireless network 51.
- the recipient can receive, on her mobile phone, an electronic version of the greeting card signed by handwritten signatures from her family members.
- the images stored in the data storage 34 can be associated with metadata that characterize the images.
- metadata can also include user input parameters such as the occasions for which the images were taken, favorite rating of the photo, keyword, and the folder or the group to which the images are assigned, etc.
- In image applications, especially those for creating personalized image products or digital photo stories, it is beneficial to recognize and identify people's faces in the images stored in the data storage 34, the computer device 60, or the mobile device 61. For example, when a family photobook is to be created, it would be very helpful to be able to automatically find photos that include members of that family.
- faces are detected and grouped by individual persons before images are selected and incorporated into image products.
- m faces can be detected in the digital images (step 210) by a computer processor (such as the computer processor 36, the computer device 60, or the mobile device 61).
- the portions of the images that contain the detected faces are cropped out to produce face images, each of which usually includes a single face.
- m feature vectors are then obtained by the computer processor for the m face images (step 220).
- a feature vector is an n-dimensional vector of numerical features that represent some objects (i.e. a face image in the present disclosure). Representing human faces by numerical feature vectors can facilitate processing and statistical analysis of the human faces. The vector space associated with these vectors is often called the feature space.
- A similarity function S(i,j) for each pair of face images i and j among the detected faces is then calculated (step 230) automatically by the computer processor.
- the disclosed method is generally not restricted to the specific design of similarity function S(i,j).
- the similarity function can be based on inner products of the feature vectors of two face images.
- two face images can also be compared to an etalon (reference) set of faces: similar faces will be similar to the same third-party faces and dissimilar to the others. An eigen-space that best describes all album faces is calculated, and the similarity between two face images is the exponent of the negative distance between the two face feature vectors in this space.
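As an illustration only, this eigen-space similarity might be sketched as follows; the patent provides no code, so the function name, the `n_components` choice, and the assumption that feature vectors have already been extracted are all illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

def eigen_space_similarity(album_features, n_components=50):
    """Pairwise similarity S(i, j) = exp(-distance) in an eigen-space
    that best describes all faces in the album."""
    m, d = album_features.shape
    pca = PCA(n_components=min(n_components, m, d))
    projected = pca.fit_transform(album_features)
    # Euclidean distances between all pairs of projected feature vectors
    diffs = projected[:, None, :] - projected[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)
    return np.exp(-distances)  # the exponent of minus distance

# Example: 20 faces with (hypothetical) 128-dimensional feature vectors
# S = eigen_space_similarity(np.random.rand(20, 128))
```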
- the similarity value between a pair of face images is related to the probability that the two face images belong to the same person, but it does not tell which face images together belong to a hypothetical person (identifiable or not).
- the presently disclosed method statistically assesses the probability that a group of face images are indeed faces of the same person.
- the values of similarity functions for different pairs of face images are compared to a threshold value T.
- the face images that are connected through a chain of similarity values higher than T are automatically joined by the computer processor into a hypothetical face group g that potentially belongs to a single person (step 240). This process is generally known as a greedy join.
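A minimal sketch of this greedy join using a union-find structure, assuming the similarity matrix S and threshold T come from the previous steps (the function and variable names are illustrative, not the patent's):

```python
def greedy_join(S, T):
    """Join faces connected through a chain of similarity values
    above threshold T into hypothetical face groups (step 240)."""
    m = len(S)
    parent = list(range(m))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(m):
        for j in range(i + 1, m):
            if S[i][j] > T:
                parent[find(i)] = find(j)  # union the two chains

    groups = {}
    for i in range(m):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())  # each list of face indices is a group g
```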
- the similarity distribution function {P(S(i_g, j_g))} has a plurality of similarity function values S(i_g, j_g) for different pairs of face images i_g, j_g.
- the use of the similarity distribution function P(S(i,j)) to describe a group of face images in the disclosed method is based on several empirical observations: in a given small (<100) set of face images, the similarities inside true face groups (face images of the same person) follow the same similarity distribution Ptrue(S), where both i and j are faces in the same face group. The similarities between faces of different persons are distributed with a similarity distribution Pfalse(S). For larger face sets, several Ptrue(S) distributions are established. Thus, when Ptrue and Pfalse are known, we can assess how many of the face pairs in a group of face images are of the same persons by solving a linear regression.
- non-negative matrix factorization is performed by the computer processor on the similarity distribution functions {P(S(i_g, j_g))} to estimate {Ptrue, Pfalse} and test the truthfulness of the face groups {g} (step 260).
- each similarity distribution function {P(S(i_g, j_g))} has non-negative values for the different S(i_g, j_g)'s; organized as vectors, they form a non-negative matrix.
- Non-negative matrix factorization (NMF) is a group of algorithms in multivariate analysis and linear algebra in which a matrix V is factorized into two or more non-negative matrices. This non-negativity makes the resulting matrices easier to analyze.
- NMF in general is not exactly solvable; it is commonly approximated numerically.
- the factor matrices are initialized with random values, or using some problem-tied heuristic. Then, all but one of the factors are fixed, and the remaining matrix values are solved for, e.g., by regression. This process is continued for each factor matrix, and the iterations continue until convergence.
- An output of NMF is a matrix having columns Ptrue and Pfalse. Another result of NMF is a matrix for determining the similarities of the hypothesized face groups to the Ptrue and Pfalse distributions. Face groups that are similar to the "true" distribution are accepted as good face groups; other face groups are ignored. It should be noted that the Ptrue and Pfalse distributions can be different for each group of face images. Thus the NMF needs to be performed for every group of user images of interest, such as each user album.
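A sketch of steps 250-260 using scikit-learn's NMF; the histogram binning, the component orientation, and the acceptance threshold are illustrative assumptions rather than the patent's exact procedure:

```python
import numpy as np
from sklearn.decomposition import NMF

def test_face_groups(groups, S, bins=20, threshold=0.7):
    """Factor per-group similarity histograms into two components that
    approximate Ptrue and Pfalse, then accept the groups whose weight
    on the 'true' component exceeds the threshold."""
    V = []  # one normalized similarity histogram per hypothetical group
    for g in groups:
        sims = [S[i][j] for a, i in enumerate(g) for j in g[a + 1:]]
        hist, _ = np.histogram(sims, bins=bins, range=(0.0, 1.0))
        V.append(hist / max(hist.sum(), 1))
    V = np.array(V)

    # V ~ W @ H: the rows of H approximate Ptrue and Pfalse, and W gives
    # each group's affinity to the two distributions.
    model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
    W = model.fit_transform(V)
    H = model.components_

    # Assume the component with the higher mean similarity is Ptrue.
    centers = np.linspace(0.0, 1.0, bins)
    true_idx = int(np.argmax(H @ centers))
    weights = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    return [g for g, w in zip(groups, weights) if w[true_idx] > threshold]
```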
- the presently disclosed method characterizes a face image by a distribution of its similarities to all other face images in the same face group.
- P(S(i,j)) can be tested to see how close it is to Ptrue and Pfalse by solving a linear equation.
- the obtained weights give the "truthfulness" of the group, i.e. its precision in the data-analysis sense.
- a face group g is identified as a true face group by the computer processor if the percentage of its similarity distribution function P(S(i,j)) being true is above a threshold (step 270).
- a face group is rejected if it has P(S(i,j)) values that have "truthfulness" less than a predetermined percentage value.
- a wrong face may be highly similar to a single face in a face group, yet dissimilar to all the other face images in the same face group.
- in that case, P(S(i,j)) is more similar to Pfalse, and the merge between the wrong face and the face group is rejected.
- conversely, a face may have relatively low similarity to all face images in a group, but P(S(i,j)) can still be more similar to Ptrue, and the merge is accepted.
- the main benefit of the presently disclosed approach is that it does not define rules on similarities or dissimilarities between a pair of individual faces.
- the determination of whether a face image belongs to a face group is statistical, based on the collective similarity properties of the face group as a whole.
- n face groups representing n hypothetical persons are obtained from the m face images (step 290).
- An image-based product can then be created based in part on the n face groups (step 300).
- the m face images that are grouped can be extracted from images contained in one or more image albums.
- a design for an image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 ( Figure 1), then presented to a user 70 or 71 ( Figure 1).
- the face images of the people who appear most frequently in the user account can be selected for creating image-product designs over others.
- the image product creation can also include partial user input or selection on styles, themes, format, or sizes of an image product, or text to be incorporated into an image product.
- the design of the image product is sent from the servers 32 to the server 42 in the printing and finishing facilities 40 and 41 ( Figure 1) wherein hardcopies of the image-based products are manufactured.
- the detection and grouping of face images can significantly reduce the time used for design and creation, and improve the accuracy and appeal of an image product. For example, the most important people can be determined and emphasized in an image product. Redundant face images of the same person can be filtered and selected before being incorporated into an image product. Irrelevant persons can be minimized or avoided in the image product.
- initial face groups are evaluated; the ones having undesirable/improbable distributions are first eliminated using image-product statistics (step 310).
- Each face can be described by a feature vector of several hundred values.
- the initial face groups can be obtained in different ways, including the fully automated computer methods such as the one described above in relation to Figure 2, or partially and fully manual methods with assistance of the users.
- Leading image product service providers such as Shutterfly, Inc. have accumulated a vast amount of statistics about the appearance of people's faces in image products. For example, it has been discovered that most family albums or family photobooks typically include 2-4 main characters who appear at high frequencies in each photobook, while the frequencies of other people's faces drastically decrease.
- the people whose faces appear in pictures can be assigned as VIP persons and non-VIP persons. It is highly improbable that a non-VIP person will be associated with the largest face group in an image album.
- products ordered by the customer are tracked and stored in a database. The largest groups in the image albums are cross-referenced with, and found to be highly correlated with, the most frequent faces in already purchased products.
- support vector machine (SVM) classifiers are trained between pairs of the n* face groups (g_i, g_j) using image-product statistics (step 320).
- Each of the n* face groups represents a potentially unique person.
- For n* face groups, there are n*(n*-1)/2 such classifiers.
- the n* face groups are the same as the initial input face groups.
- the number n* of face groups as well as face compositions within the face groups can vary as the face grouping converges in consecutive iterations.
- face similarity functions can be built based on different features, such as two-dimensional or three-dimensional features obtained with the aid of different filters, biometric distances, image masks, etc.
- In face categorization technologies, it is often a challenge to properly define and normalize the similarity or distance between faces in Euclidean (or other) spaces.
- face similarity functions are defined using SVM in the presently disclosed method.
- Each image album or photobook can include several hundreds, or even several thousands of faces.
- SVM is a suitable tool for classifying faces at this scale. The task of face grouping does not use training information, which is different from face recognition.
- transductive support vector machines (TSVM)
- the face models created by the initial grouping can be used to improve the face grouping itself.
- Other knowledge about an image album and image collection can include titles, keywords, occasions, as well as time and geolocations associated or input in association with each image album or image collection.
- the m faces f_1, ..., f_m are classified by the n*(n*-1)/2 classifiers to output m binary vectors c_1, ..., c_m for the m faces (step 330).
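A sketch of steps 320-330 under the assumption that scikit-learn's SVC stands in for the SVM classifiers and that faces are rows of a feature matrix; taking the improved similarity as the fraction of classifiers on which two faces agree (step 340) is one plausible reading of the text, not a definitive implementation:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def pairwise_binary_vectors(features, groups):
    """Train one SVM per pair of the n* face groups (step 320), then
    classify all m faces with the n*(n*-1)/2 classifiers to produce
    m binary vectors c_1 ... c_m (step 330)."""
    classifiers = []
    for a, b in combinations(range(len(groups)), 2):
        X = np.vstack([features[groups[a]], features[groups[b]]])
        y = np.array([0] * len(groups[a]) + [1] * len(groups[b]))
        classifiers.append(SVC(kernel="linear").fit(X, y))
    # Rows are faces, columns are the binary outputs of each classifier.
    return np.array([clf.predict(features) for clf in classifiers]).T

def improved_similarity(c_i, c_j):
    """Step 340: similarity of two faces as the fraction of the
    pairwise classifiers whose binary outputs agree."""
    return float(np.mean(c_i == c_j))
```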
- This operation (step 350) is similar to that described above in step 260 (Figure 2), but with improved accuracy in face grouping.
- the initial face groups in this iteration may be split or merged to form new face groups or to alter the composition of existing face groups.
- the difference between the modified face groups {g*} and the initial face groups {g} in the same iteration is calculated (e.g. using the norm of the similarity matrices for the m faces) and compared to a threshold value (step 360).
- the threshold value can be a constant and/or found empirically.
- Steps 320-360 are repeated (step 370) if the difference is larger than the threshold value. In other words, the process of training SVM classifiers, calculating binary functions, and grouping based on the binary functions is repeated until the face groups converge to a stable set of groups.
- Once a stable set of modified face groups {g*} is obtained, it is used to create image products (step 380) such as photobooks, photo calendars, photo greeting cards, or photo mugs.
- the image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 (Figure 1), then presented to a user 70 or 71 (Figure 1), which allows the image product to be ordered and made by the printing and finishing facilities 40 and 41 (Figure 1).
- the image product creation can also include partial user input or selection on styles, themes, format, or sizes of an image product, or text to be incorporated into an image product.
- the detection and grouping of face images can significantly reduce time used for design and creation, and improve the accuracy and appeal of an image product.
- the modified face groups are more accurate than those produced by the method shown in Figure 2.
- the modified face groups can be used in different ways when incorporating photos into image products. For example, the most important people (in a family or close friend circle) can be determined and emphasized in an image product, such as automatic VIPs in an image cloud recognition service. Redundant face images of the same person (from the same or similar scenes) can be filtered and selected before being incorporated into an image product. Unimportant people or strangers can be minimized or avoided in the image product.
- face recognition can include one or more of the following steps. Face images of known persons (sometimes denoted as face models) are stored (step 410) in computer storage in the data storage (34 in Figure 1) or user devices (60, 61 in Figure 1) as training faces. Examples of the known persons can include family members and friends of the user who uploaded or stored the images from which the face images are extracted. The face images to be identified in the face groups are called testing faces.
- a group of testing faces is then automatically, hypothetically joined by a computer processor with the training faces of a known person to form a joint group (step 420).
- the group of testing faces can already have been tested to be true, as described in step 270 (Figure 2).
- Similarity functions S(i,j) are calculated by the computer processor between each pair of testing or training face images in the joint face group (step 430). The collection of the similarity functions S(i,j) in the joint face group is described by a similarity distribution function P(S(i,j)).
- non-negative matrix factorization is performed by the computer processor on the similarity function values to estimate Ptrue(S) and Pfalse(S) of the pairs of training and testing face images in the joint face group (step 440).
- the similarity distribution function P(S(i,j)) is compared to Ptrue(S) and Pfalse(S), and the precision (similarity to Ptrue) is tested against a predetermined threshold (step 440).
- the testing faces in the joint face group are identified as the known person if the similarity distribution function P(S(i,j)) is True at a percentage higher than a threshold (step 450), that is, when the precision is above the threshold.
- the group of testing face images can be merged with the known person's face images (step 460), thus producing a new set of training faces for the known person.
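Steps 430-460 can be summarized numerically as below; `P_true` and `P_false` are the basis distributions estimated by NMF (e.g., the H rows in the earlier sketch), and solving the non-negative linear regression with scipy's `nnls` is an assumption about the "linear equation" mentioned earlier, not the patent's stated method:

```python
import numpy as np
from scipy.optimize import nnls

def model_assignment_precision(S_joint, P_true, P_false, bins=20):
    """Describe the joint group (testing faces plus a known model's
    training faces) by its similarity histogram, then estimate the
    fraction attributable to Ptrue -- the precision tested in step 450."""
    m = S_joint.shape[0]
    sims = S_joint[np.triu_indices(m, k=1)]  # all pairwise similarities
    hist, _ = np.histogram(sims, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    A = np.column_stack([P_true, P_false])   # basis distributions
    w, _ = nnls(A, p)                        # non-negative weights
    return w[0] / max(w.sum(), 1e-12)

# If the precision exceeds the predetermined threshold, the testing
# faces are merged into the model's training faces (step 460).
```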
- Figure 5 discloses an improved method for grouping face images for image product creation in user accounts comprising a large number of photos.
- the disclosed method analyzes and groups face images in a large user account in working batches called chunks.
- Chunk size refers to the number of photos in a chunk.
- An optimal chunk size range for a chunk is first defined (step 510).
- the optimal chunk size range can be defined by a minimum number Cmin and a maximum number Cmax of face images in a chunk.
- a user account typically includes multiple albums each including one or more photos, typically arranged based on the occasions in which the photos were taken.
- the chunk size can be larger than most of the albums and may be smaller than some (the very large ones).
- face images are automatically acquired by a computer processor from an image album in the user account (step 520).
- the computer processor adds the face images from the image album into a first chunk (step 530).
- the computer processor can be a computer server (32 in Figure 1) tasked for the processing functions, a computer processor (36 in Figure 1) coupled to a cloud image storage system, or a local processor to a user device (60, 61 in Figure 1).
- the current chunk size is compared with the optimal chunk size range (step 540). If the current chunk size is smaller than Cmax, face images from additional image albums continue to be added to the current chunk (step 550). If the current chunk size becomes larger than Cmax, face images from the image album are separated into multiple portions (step 560). A first portion is included in the current chunk, keeping the current chunk size below Cmax (step 570). The other portion(s) of the face images are added to subsequent chunk(s) (step 580). For example, the other portions of face images can be distributed to four or more subsequent chunks.
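A sketch of this chunk-building loop (steps 520-580), assuming each album is a list of face images and Cmax comes from step 510; splitting the remainder into equal-sized portions is an illustrative policy, not one prescribed by the patent:

```python
def build_chunks(albums, c_max):
    """Pack face images from albums into chunks, splitting an album
    whenever it would push the current chunk past Cmax."""
    chunks, current = [], []
    for album_faces in albums:
        if len(current) + len(album_faces) <= c_max:
            current.extend(album_faces)          # steps 540-550
            continue
        room = c_max - len(current)              # step 560: split the album
        current.extend(album_faces[:room])       # step 570: first portion
        chunks.append(current)
        current = []
        rest = album_faces[room:]                # step 580: later portions
        for start in range(0, len(rest), c_max):
            portion = list(rest[start:start + c_max])
            if len(portion) == c_max:
                chunks.append(portion)
            else:
                current = portion                # seed the next chunk
    if current:
        chunks.append(current)
    return chunks
```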
- the face images in the current chunk are grouped into face groups (step 590) using methods such as the process disclosed in Figure 2 (steps 210-290) and the process disclosed in Figure 3 (steps 310-370). Then the computer processor will attempt to assign the face groups in the current chunk to known face models (step 600). In this step, the known face models are pre-established for the faces identified as family members, friends, or key members associated with the user account. This step can be implemented using the process disclosed in Figure 4, in which the face models are used as the training faces in steps 410-460.
- New face models are set up for those face groups in the current chunk that cannot be assigned to existing face models associated with the user account (step 610). People associated with the new face models can be identified automatically by information such as metadata and image tags or by a user.
- The face images that cannot be grouped in the current chunk are moved to one or more subsequent chunks that have not yet been processed for face grouping (step 620).
- the purpose of this step is to accumulate these ungrouped faces until there are enough face images of sufficient quality to allow them to be grouped.
- Steps 520-620 are then repeated to first build the subsequent chunks of face images, and then group the face images in the subsequent chunk (step 630).
- the face images can be acquired from the same image album or additional image albums.
- the face groups in the subsequent chunk are then assigned to existing face models, and if that is not successful, new face models are set up for the unassigned face groups. Again, ungrouped face images can be moved to subsequent chunks to be analyzed with other face images later.
- Ungrouped face images that have been moved to subsequent chunks more than a predetermined number of times are discarded (step 640). This step is especially important for large user accounts because, as the number of images increases, the number of face images from strangers and people unimportant to the user increases significantly, which often becomes a heavy burden on face-grouping computations.
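Steps 620 and 640 might be combined with a per-face move counter, as sketched below; `max_moves` (the predetermined number of times) and the use of hashable face IDs are assumptions for illustration:

```python
def carry_over_ungrouped(ungrouped_ids, next_chunk, move_counts, max_moves=3):
    """Move ungrouped faces to a not-yet-processed chunk (step 620),
    discarding faces already moved more than max_moves times (step 640)."""
    for face_id in ungrouped_ids:
        move_counts[face_id] = move_counts.get(face_id, 0) + 1
        if move_counts[face_id] <= max_moves:
            next_chunk.append(face_id)  # accumulate until groupable
        # else: discarded -- likely a stranger's face that would only
        # burden subsequent face-grouping computations
    return next_chunk
```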
- an image-based product can be created based in part on the face images associated with the face models (step 650), including the known face models and the new face models.
- the face images can be from the first chunk or subsequent chunks. For example, the face images of the people who appear most frequently in the user account (indicating that they are significant to the owner of the user account) can be selected for creating image-product designs over others.
- a design for an image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 ( Figure 1), then presented to a user 70 or 71 ( Figure 1).
- the image product creation can also include partial user input or selection on styles, themes, format, or sizes of an image product, or text to be incorporated into an image product.
- the detection and grouping of face images can significantly reduce time used for design and creation, and improve the accuracy and appeal of an image product. For example, the most important people can be determined and to be emphasized in an image product.
- the design of the image product is sent from the servers 32 to the server 42 in the printing and finishing facilities 40 and 41 ( Figure 1) wherein hardcopies of the image- based products are manufactured.
- the disclosed methods can include one or more of the following advantages.
- the disclosed method can automatically group faces in user accounts that contain a large number of faces in photos that are difficult to process using conventional technologies.
- the disclosed method is scalable to any number of photos in a user account.
- the disclosed method is compatible with different face grouping techniques including methods based on training faces of known persons.
- the disclosed face grouping method does not rely on prior knowledge about who is in the image album or photo collection, and is thus more flexible and easier to use.
- the disclosed face grouping method has the benefits of improved accuracy in grouping faces (more accurate merging and splitting), improved relevance of grouping faces to image products, and improved relevance of grouping faces to families and close friends.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Artificial Intelligence (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Image Analysis (AREA)
Abstract
A computer-implemented method of grouping faces in a large user account for creating an image product includes adding the face images obtained from an image album in a user's account into a first chunk; if the chunk size of the first chunk is smaller than a maximum chunk value, keeping the face images from the image album in the first chunk; otherwise, automatically separating the face images from the image album into a first portion and one or more second portions; keeping the first portion in the first chunk; automatically moving the second portions to subsequent chunks; automatically grouping face images in the first chunk to form face groups; assigning the face groups to known face models associated with the user account; and creating a design for an image-based product based on the face images in the first chunk associated with the face models.
Description
AUTOMATIC IMAGE PRODUCT CREATION
FOR USER ACCOUNTS COMPRISING LARGE NUMBER OF IMAGES
TECHNICAL FIELD
[0001] This application relates to technologies for automatically creating image-based products, and more specifically, to creating image-based products that best present people's faces.
BACKGROUND OF THE INVENTION
[0002] In recent years, photography has been rapidly transformed from chemical-based technologies to digital imaging technologies. Images captured by digital cameras can be stored in computers and viewed on display devices. Users can also produce image prints based on the digital images. Such image prints can be generated locally using output devices such as an inkjet printer or a dye sublimation printer, or remotely by a photo printing service provider. Other products that can be produced using the digital images include photo books, photo calendars, photo mugs, photo T-shirts, and so on. A photo book can include a cover page and a plurality of image pages each containing one or more images. Designing a photobook can include many iterative steps such as selecting suitable images, selecting a layout, selecting images for each page, selecting backgrounds, picture frames, and an overall style, adding text, choosing text fonts, and rearranging the pages, images, and text, which can be quite time-consuming. It is desirable to provide methods that allow users to design and produce image albums in a time-efficient manner.
[0003] Many digital images contain people's faces; creating high-quality image products naturally requires proper consideration of people's faces. For example, the most important and relevant people, such as family members, should have their faces shown in image products, while strangers' faces should be minimized. In another example, while pictures of different faces at a same scene can be included in an image-based product, the pictures of a same person at a same scene should normally be filtered to allow the best one(s) to be presented in the image product.
[0004] Faces need to be detected and grouped based on persons' identities before they can be properly selected and placed in image products. Most conventional face detection techniques concentrate on face recognition, assuming that a region of an image containing a single face has already been detected and extracted and will be provided as an input. Common face detection methods include: knowledge-based methods; feature-invariant approaches, including the identification of facial features, texture, and skin color; template matching methods, both fixed and deformable; and appearance-based methods. After faces are detected, face images of each individual can be categorized into a group regardless of whether the identity of the individual is known or not. For example, if two individuals, Person A and Person B, are detected in ten images, each of the images can be categorized or tagged as one of four types: A only; B only; A and B; or neither A nor B. Algorithmically, the tagging of face images requires training based on face images of known persons (or face models), for example, the face images of family members or friends of a user who uploaded the images.
[0005] To save users' time, technologies have been developed by Shutterfly, Inc. and others to automatically create image products using users' images. These automatic methods face increasing challenges as people take more and more digital photos. A person or a family can easily take thousands of photos in an average vacation trip. A user often has hundreds of thousands to even millions of photos in his or her account. Automatically sorting, analyzing, grouping, and laying out such a great number of photos in a correct and meaningful manner is an immense task.
[0006] There is still a need for methods to accurately group face images of different persons and incorporate the face images in image products.
SUMMARY OF THE INVENTION
[0007] The present application discloses computer-implemented methods that automatically categorize face images that belong to different persons. The methods are based on the statistics of the face images to be categorized, and do not require prior retraining with known people's faces or supervision during the grouping of face images. Acceptance criteria in the methods are based on probabilistic description and can be adjusted. The disclosed methods are applicable to different similarity functions, and are compatible with different types of face analyses and face descriptors.
[0008] Furthermore, the present application discloses robust and accurate methods that can automatically sort and group images in a user account that includes a large number of photos comprising a large number of face images that are difficult to process using conventional technologies. The disclosed method is scalable with the size of the user account. The disclosed methods of grouping a large number of face images are not limited by the number of photos per account and are applicable to the increasingly larger user accounts of the future.
[0009] The disclosed method is compatible with different face grouping techniques including methods based on training faces of known persons.
[0010] The disclosed methods are cognitive and flexible because they can adapt to the properties of the large user account. The disclosed methods can accurately determine which faces are relevant to the owner of the user account and which faces may be only those of acquaintances or strangers. The disclosed methods allow learning and knowledge accumulation even on less frequently appearing friends and family members, so that they can be grouped, recognized, and used in image product creation. The disclosed methods also effectively remove faces of strangers and acquaintances, which significantly increases computational efficiency in face grouping.
[0011] In a general aspect, the present invention relates to a computer-implemented method of grouping faces in a large user account for creating an image product. The method includes: acquiring face images from an image album in a user's account by a computer processor; adding the face images obtained from the image album into a first chunk; comparing the chunk size of the first chunk with a maximum chunk value for an optimal chunk size range by the computer processor; if the chunk size of the first chunk is smaller than the maximum chunk value, keeping the face images from the image album in the first chunk; if the chunk size of the first chunk is larger than the maximum chunk value, automatically separating, by the computer processor, the face images from the image album into a first portion and one or more second portions; keeping the first portion in the first chunk to keep the current chunk size below the maximum chunk value; automatically moving the one or more second portions of face images to one or more subsequent chunks by the computer processor; automatically grouping face images in the first chunk by the computer processor to form face groups; assigning at least some of the face groups in the first chunk to known face models associated with the user account; and creating a design for an image-based product based at least in part on the face images in the first chunk associated with the face models.
[0012] Implementations of the system may include one or more of the following. The computer-implemented method can further include setting up new face models by the computer processor for at least some of the face groups that cannot be assigned to existing face models, wherein the design for an image-based product can be created based on the face images associated with the known face models and the new face models. The computer-implemented method can further include moving the ungrouped face images in the first chunk to one or more subsequent chunks that have not been processed with face grouping. The computer-implemented method can further include discarding ungrouped face images that have been moved to subsequent chunks for more than a predetermined number of times. The computer-implemented method can further include repeating steps from acquiring face images in the image album or additional image albums to automatically grouping face images, to group images in a second chunk subsequent to the first chunk; assigning at least some of the face groups in the second chunk to known face models associated with the user account; and creating the design for the image-based product based at least in part on the face images in the first chunk and the second chunk associated with the face models. The step of automatically grouping face images in the first chunk can include: calculating similarity functions between pairs of face images in the first chunk by the computer processor; joining face images that have values of the similarity functions above a predetermined threshold into a hypothetical face group, wherein the face images in the hypothetical face group
hypothetically belong to a same person; conducting non-negative matrix factorization on values of the similarity functions in the hypothetical face group to test truthfulness of the hypothetical face group; and identifying the hypothetical face group as a true face group if a percentage of the associated similarity functions being true is above a threshold based on the non-negative matrix factorization. The computer-implemented method can further include rejecting the hypothetical face group as a true face group if a percentage of the associated
similarity functions being true is below a threshold. The step of conducting non-negative matrix factorization can include: forming a non-negative matrix using values of similarity functions between all different pairs of face images in the hypothetical face group, wherein the non-negative matrix factorization is conducted over the non-negative matrix. The similarity functions in the hypothetical face group can be described in a similarity
distribution function, wherein the step of non-negative matrix factorization outputs a True similarity distribution function and a False similarity distribution function. Every pair of face images in the hypothetical face group has a similarity function above the predetermined threshold. The computer-implemented method can further include joining two true face groups to form a joint face group; conducting non-negative matrix factorization on values of similarity functions in the joint face group; and merging the two true face groups if a percentage of the associated similarity functions being true is above a threshold in the joint face group. The step of automatically grouping face images in the first chunk can include: receiving an initial set of n* face groups in the face images in the first chunk, wherein n* is a positive integer bigger than 1; training classifiers between pairs of face groups in the initial set of face groups using image-product statistics; classifying the plurality of face images by n*(n*-1)/2 classifiers to output binary vectors for the face images by the computer processor; calculating a value for an improved similarity function using the binary vectors for each pair of the face images; and grouping the face images in the first chunk into modified face groups based on values of the binary similarity functions by the computer processor. The computer-implemented method can further include comparing a difference between the modified face groups and the initial face groups to a threshold value, wherein the image product is created based at least in part on the modified face groups if the difference is smaller than the threshold value. There can be an integer m number of face images in the plurality of face images, wherein the step of classifying the plurality of face images by n*(n*-1)/2 classifiers outputs m number of binary vectors. The face images can be grouped into modified face groups using non-negative matrix factorization based on values of the improved similarity functions. The step of assigning at least some of the face groups in the first chunk to known face models can include: storing training faces associated with the known face models of known persons in a computer storage; joining the face images in the first chunk with a group of training faces associated with the known face models; calculating
similarity functions between pairs of the face images or the training faces in the joint group by a computer processor; conducting non-negative matrix factorization on values of the similarity functions in the joint face group to test truthfulness of the joint face group; and identifying the face images in the first chunk that belong to the known face models if a percentage of the associated similarity functions being true is above a threshold based on the non-negative matrix factorization. The computer-implemented method can further include merging the face images with the training faces of the known face model to form a new set of training faces for the known face model. The step of conducting non-negative matrix factorization can include: forming a non-negative matrix using values of similarity functions between all different pairs of the face images and the training faces in the joint face group, wherein the non-negative matrix factorization is conducted over the non-negative matrix. The similarity functions in the joint face group can be described in a similarity distribution function, wherein the step of non-negative matrix factorization outputs a True similarity distribution function and a False similarity distribution function. The step of identifying can include comparing the similarity distribution function to the True similarity distribution function and the False similarity distribution function.
[0013] These and other aspects, their implementations and other features are described in detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] Figure 1 is a block diagram for a network-based system for producing personalized image products, image designs, or image projects compatible with the present invention.
[0015] Figure 2 is a flow diagram for categorizing face images that belong to different persons for image product creation in accordance with the present invention.
[0016] Figure 3 is a flow diagram for identifying face images in accordance with the present invention.
[0017] Figure 4 is a flow diagram for identifying face images in accordance with the present invention.
[0018] Figure 5 is a flow diagram for grouping face images for image product creation in user accounts comprising a large number of photos in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] Referring to Figure 1, a network-based imaging service system 10 can enable users 70, 71 to organize and share images via a wired network or a wireless network 51. The network-based imaging service system 10 is operated by an image service provider such as Shutterfly, Inc. Optionally, the network-based imaging service system 10 can also fulfill image products ordered by the users 70, 71. The network-based imaging service system 10 includes a data center 30, one or more product fulfillment centers 40, 41, and a computer network 80 that facilitates the communications between the data center 30 and the product fulfillment centers 40, 41.
[0020] The data center 30 includes one or more servers 32 for communicating with the users 70, 71, a data storage 34 for storing user data, image and design data, and product information, and computer processor(s) 36 for rendering images and product designs, organizing images, and processing orders. The user data can include account information, discount information, and order information associated with the user. A website can be powered by the servers 32 and can be accessed by the user 70 using a computer device 60 via the Internet 50, or by the user 71 using a wireless device 61 via the wireless network 51. The servers 32 can also support a mobile application to be downloaded onto wireless devices 61.
[0021] The network-based imaging service system 10 can provide products that require user participation in design and personalization. Examples of these products include the personalized image products that incorporate photos provided by the users, the image service provider, or other sources. In the present disclosure, the term "personalized" refers to information that is specific to the recipient, the user, the gift product, and the occasion, which can include personalized content, personalized text messages, personalized images, and personalized designs that can be incorporated in the image products. The content of personalization can be provided by a user or selected by the user from a library of content provided by the service provider. The term "personalized information" can also be referred to as "individualized information" or "customized information".
[0022] Personalized image products can include users' photos, personalized text, personalized designs, and content licensed from a third party. Examples of personalized image products may include photobooks, personalized greeting cards, photo stationeries, photo or image prints, photo posters, photo banners, photo playing cards, photo T-shirts, photo mugs, photo aprons, photo magnets, photo mouse pads, a photo phone case, a case for a tablet computer, photo key-chains, photo collectors, photo coasters, or other types of photo gift or novelty item. The term photobook generally refers to a bound multi-page product that includes at least one image on a book page. Photobooks can include image albums, scrapbooks, bound photo calendars, or photo snap books, etc. An image product can include a single page or multiple pages. Each page can include one or more images, text, and design elements. Some of the images may be laid out in an image collage.
[0023] The user 70 or his/her family may own multiple cameras 62, 63. The user 70 transfers images from cameras 62, 63 to the computer device 60. The user 70 can edit and organize images from the cameras 62, 63 on the computer device 60. The computer device 60 can be in many different forms: a personal computer, a laptop or tablet computer, a mobile phone, etc. The camera 62 can include an image capture device integrated in or connected with the computer device 60. For example, laptop computers or computer monitors can include a built-in camera for picture taking. The user 70 can also print pictures using a printer 65 and make image products based on the images from the cameras 62, 63. Examples of the cameras 62, 63 include a digital camera, a camera phone, a video camera capable of taking motion and still images, a laptop computer, or a tablet computer.
[0024] Images in the cameras 62, 63 or stored on the computer device 60 and the wireless device 61 can be uploaded to the server 32 to allow the user 70 to organize and render images at the website, share the images with others, and design or order image products using the images from the cameras 62, 63. The wireless device 61 can include a mobile phone, a tablet computer, or a laptop computer, etc. The wireless device 61 can include a built-in camera (e.g. in the case of a camera phone). The pictures taken by the user 71 using the wireless device 61 can be uploaded to the data center 30. If users 70, 71 are members of a family or associated in a group (e.g. a soccer team), the images from the cameras 62, 63 and the mobile device 61 can be grouped together to be incorporated into an image product such as a photobook, or used in a blog page for an event such as a soccer game.
[0025] The users 70, 71 can order a physical product based on the design of the image product, which can be manufactured by the printing and finishing facilities 40 and 41. A recipient receives the physical product with messages from the users at locations 80, 85. The recipient can also receive a digital version of the design of the image product over the Internet 50 and/or a wireless network 51. For example, the recipient can receive, on her mobile phone, an electronic version of the greeting card signed by handwritten signatures from her family members.
[0026] The creation of personalized image products, however, can take a considerable amount of time and effort. On some occasions, several people may want to contribute to a common image product. For example, a group of people may want or need to jointly sign their names and write comments on a get-well card, a baby-shower card, or a wedding-gift card. The group of people may be at different locations. In particular, it would be desirable to enable the group of people to quickly write their names and messages in the common image product using mobile devices.
[0027] The images stored in the data storage 34 (e.g. a cloud image storage), the computer device 60, or the mobile device 61 can be associated with metadata that characterize the images. Examples of such data include image size or resolution, image colors, image capture time and locations, image exposure conditions, image editing parameters, image borders, etc. The metadata can also include user input parameters such as the occasions for which the images were taken, the favorite rating of the photo, keywords, and the folder or the group to which the images are assigned, etc. For many image applications, especially for creating personalized image products or digital photo stories, it is beneficial to recognize and identify people's faces in the images stored in the data storage 34, the computer device 60, or the mobile device 61. For example, when a family photobook is to be created, it would be very helpful to be able to automatically find photos that include members within that family.
[0028] Referring to Figures 1 and 2, faces are detected and grouped by individual persons before images are selected and incorporated into image products. An integer m number of faces can be detected in the digital images (step 210) by a computer processor (such as the computer processor 36, the computer device 60, or the mobile device 61). The portions of the images
that contain the detected faces are cropped out to produce face images, each of which usually includes a single face.
[0029] Then, m feature vectors are obtained by the computer processor for the m face images (step 220). In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represents an object (i.e., a face image in the present disclosure). Representing human faces by numerical feature vectors can facilitate processing and statistical analysis of the human faces. The vector space associated with these vectors is often called the feature space.
[0030] A similarity function S(i,j) for each pair of face images i and j among the detected faces is then calculated (step 230) automatically by the computer processor. The disclosed method is generally not restricted to the specific design of the similarity function S(i,j). The similarity function can be based on inner products of feature vectors from two face images.
[0031] In another example, two face images can be compared to an etalon set of faces. Similar faces will be similar to the same third-party faces and dissimilar from the others. An eigen-space best describing all album faces is calculated. The similarity between the two face images is the exponent of the negative distance between the two face feature vectors in this space.
[0032] For ease of computation, the similarity function can be scaled to a numeric range between -1 and 1, that is, -1 ≤ S(i,j) ≤ 1. For two identical face images i, S(i,i) = 1. In general, the average similarity value between face images of a same person is larger than the average similarity function value between face images of different people.
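As an illustration only, a minimal sketch of such a similarity function, assuming the feature vectors have already been extracted; L2 normalization keeps the inner product within the [-1, 1] range discussed above:

```python
import numpy as np

def similarity(f_i: np.ndarray, f_j: np.ndarray) -> float:
    """One possible similarity function S(i, j): the inner product of
    L2-normalized feature vectors, which falls in [-1, 1]."""
    f_i = f_i / np.linalg.norm(f_i)
    f_j = f_j / np.linalg.norm(f_j)
    return float(np.dot(f_i, f_j))
```

For two identical face images the function returns 1, matching the S(i,i) = 1 convention above.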
[0033] The similarity value between a pair of face images is related to the probability that the two face images belong to a same person, but it does not tell which face images together belong to a hypothetical person (identifiable or not). The presently disclosed method statistically assesses the probability that a group of face images are indeed faces of the same person. In some embodiments, the values of similarity functions for different pairs of face images are compared to a threshold value T. The face images that are connected through a chain of similarity values higher than T are automatically joined by the computer processor into a hypothetical face group g that potentially belongs to a single person (step 240).
[0034] This process is generally known as greedy join. In principle, if ground truth is known, the hypotheses created this way can be assessed using basic analysis, and the overall precision and recall associated with T can be estimated. Since the ground truth is not known, the quality of the hypothesis will be estimated in a different way, as described below. Moreover, by repeating greedy join for different thresholds, we can find the T associated with the best estimate. Applying greedy join for this threshold results in good face groups.
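A minimal sketch of greedy join (step 240), assuming a precomputed m-by-m similarity matrix S; union-find links every chain of pairwise similarities above the threshold T into one hypothetical group:

```python
import numpy as np

def greedy_join(S: np.ndarray, T: float) -> list[set[int]]:
    """Join face images connected through a chain of similarity values
    above threshold T into hypothetical face groups."""
    m = S.shape[0]
    parent = list(range(m))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(m):
        for j in range(i + 1, m):
            if S[i, j] > T:
                parent[find(i)] = find(j)  # union the two chains

    groups: dict[int, set[int]] = {}
    for i in range(m):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())
```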
[0035] Once the groups {g} are constructed by greedy join for different values of T, a similarity distribution function {P(S(ig, jg))} between different pairs of face images in each face group g is obtained by the computer processor (step 250). Face images in each face group g are characterized by a similarity distribution function P(S(i,j)), which is the probability distribution of similarity function values for all different pairs of face images in the face group g. The similarity distribution function {P(S(ig, jg))} has a plurality of similarity function values S(ig, jg) for different pairs of face images i, j.
[0036] In some aspects, the use of the similarity distribution function P(S(i,j)) to describe a group of face images in the disclosed method is based on several empirical observations: In a given small (<100) set of face images, the similarities inside true face groups (face images of the same person) have the same similarity distribution Ptrue(S), where both i and j are faces in the same face group. The similarities between faces of different persons are distributed with a similarity distribution Pfalse(S). For larger face sets, several Ptrue(S) distributions are established. Thus, when Ptrue and Pfalse are known, we can assess how many of the face pairs in a group of face images are of the same persons by solving a linear regression.
[0037] Next, non-negative matrix factorization is performed by the computer processor on the similarity distribution function {P(S(ig,jg))} to estimate {Ptrue, Pfalse} and test the truthfulness of the face groups {g} (step 260). The similarity distribution function {P(S(ig,jg))} has non-negative values for different S(ig,jg)'s. Organized in vectors, they form a non-negative matrix. Non-negative matrix factorization (NMF) is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into two or more non-negative matrices. This non-negativity makes the resulting matrices easier to analyze. NMF in general is not exactly solvable; it is commonly approximated numerically. Specifically, the resulting factor matrices are initialized with random values, or using some problem-tied heuristic. Then, all but one of the factors are fixed, and the remaining matrix values are solved, e.g., by regression. This process is continued for each factor matrix. The iterations continue until convergence.
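A sketch of this step using scikit-learn's iterative NMF solver; the rank-2 choice mirrors the two sought distributions (Ptrue and Pfalse), and the binning of similarity values into histogram columns is an assumption of this sketch:

```python
import numpy as np
from sklearn.decomposition import NMF

# V: one column per hypothetical face group, each column the binned
# similarity distribution P(S(ig, jg)) for that group (non-negative).
n_bins, n_groups = 40, 12
V = np.abs(np.random.rand(n_bins, n_groups))   # placeholder histograms

model = NMF(n_components=2, init="random", max_iter=500, random_state=0)
W = model.fit_transform(V)   # (n_bins, 2): candidate Ptrue / Pfalse columns
H = model.components_        # (2, n_groups): how much of each group's
                             # distribution each factor explains
```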
[0038] An output of NMF is a matrix having columns Ptrue and Pfalse. Another result of NMF is a matrix for determining similarities of the hypothesized face groups to the Ptrue and Pfalse distributions. Face groups that are similar to the "true" distribution are accepted as good face groups. Other face groups are ignored. It should be noted that the Ptrue and Pfalse distributions can be different for each group of face images. Thus the NMF needs to be performed for every group of user images of interest, such as each user album.
[0039] In one general aspect, rather than characterizing each face separately, the presently disclosed method characterizes a face image by a distribution of its similarities to all other face images in the same face group. Thus, when Ptrue(S) and Pfalse(S) are known, P(S(i,j)) can be tested to see how close it is to Ptrue and Pfalse by solving a linear equation. Furthermore, the obtained weights (i.e. precision in data analysis) specify how many pairs in P(S(i,j)) belong to Ptrue(S), with the rest of P(S(i,j)) belonging to Pfalse(S). A face group g is identified as a true face group by the computer processor if the percentage of its similarity distribution function P(S(i,j)) being true is above a threshold (step 270). A face group is rejected if it has P(S(i,j)) values whose "truthfulness" is less than a predetermined percentage value.
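One way to carry out this test, sketched here with a non-negative least-squares fit; the function names and the 0.8 acceptance threshold are placeholders, not values from the disclosure:

```python
import numpy as np
from scipy.optimize import nnls

def group_precision(p_group: np.ndarray,
                    p_true: np.ndarray,
                    p_false: np.ndarray) -> float:
    """Fraction of a group's similarity distribution P(S(i, j)) explained
    by the 'true' distribution, via a non-negative mixture fit."""
    A = np.column_stack([p_true, p_false])
    weights, _ = nnls(A, p_group)
    total = weights.sum()
    return float(weights[0] / total) if total > 0 else 0.0

# step 270: accept the group when the true fraction clears a threshold
# is_true_group = group_precision(p_g, p_true, p_false) > 0.8
```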
[0040] In an often-occurring example, a wrong face is highly similar to a single face in a face group, but is dissimilar to all other face images in that face group. In this case, P(S(i,j)) is similar to Pfalse, and the merge between the wrong face and the face group is rejected. In another example, a face has relatively low similarity to all face images in a group, but P(S(i,j)) can still be more similar to Ptrue, and the merge is accepted. The main benefit of the presently disclosed approach is that it does not define rules on similarities or dissimilarities between a pair of individual faces. The determination of whether a face image belongs to a face group is statistical and based on the collective similarity properties of a whole group of face images.
[0041] After accepting some of the initial groups, there can still be true face groups and single faces that need to be joined. For every group pair (g1, g2), a joint hypothesis group h12 is considered (g1 can be a single face). Ptrue(S) and Pfalse(S) are calculated using NMF as described above to test whether the face pair similarities of h12 have high precision (i.e. the similarity functions in the joint face group are true above a predetermined threshold) and, thus, whether groups g1 and g2 should be merged (step 280). Accurate hypotheses are accepted and the overall recall rises. This enhancement method allows merging faces that are associated by relatively low similarity between them, without merging all faces associated with this similarity, as done by the greedy join method.
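A sketch of the merge test of step 280, reusing the group_precision() helper sketched above; the bin count and threshold are assumed values:

```python
import numpy as np
from itertools import combinations

def try_merge(g1: set, g2: set, S: np.ndarray,
              p_true: np.ndarray, p_false: np.ndarray,
              thresh: float = 0.8) -> bool:
    """Test the joint hypothesis group h12 = g1 | g2 (g1 may be a single
    face); merge only if the joint group's precision clears thresh."""
    joint = sorted(g1 | g2)
    sims = [S[i, j] for i, j in combinations(joint, 2)]
    # bin the joint group's similarities the same way as P(S(ig, jg))
    p_joint, _ = np.histogram(sims, bins=40, range=(-1.0, 1.0), density=True)
    return group_precision(p_joint, p_true, p_false) > thresh
```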
[0042] As a result, n face groups representing n hypothetical persons are obtained from the m face images (step 290).
[0043] An image-based product can then be created based in part on the n face groups (step 300). The m face images that are grouped can be extracted from images contained in one or more image albums. A design for an image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 (Figure 1), then presented to a user 70 or 71 (Figure 1). For example, the face images of the people who appear most frequently in the user account (indicating they are significant to the owner of the user account) can be selected for creating image-product designs over others. The image product creation can also include partial user input or selection on styles, themes, format, or sizes of an image product, or text to be incorporated into an image product. When an order of the image product is received from a user, the design of the image product is sent from the servers 32 to the server 42 in the printing and finishing facilities 40 and 41 (Figure 1), wherein hardcopies of the image-based products are manufactured. The detection and grouping of face images can significantly reduce the time used for design and creation, and improve the accuracy and appeal of an image product. For example, the most important people can be determined and emphasized in an image product. A redundant person's face images can be filtered and selected before being incorporated into an image product. Irrelevant persons can be minimized or avoided in the image product.
[0044] Although the method shown in Figure 2 and described above can provide a rather effective way of grouping faces for creating image products, it can be further improved by incorporating knowledge or intelligence about the general nature and statistics of image products, and about the users (the product designers or orderers, the recipients, and the people who appear in the photos in the image products) of the image products.
[0045] In some embodiments, referring to Figure 3, initial face groups are evaluated; the ones having undesirable/improbable distributions are first eliminated using image-product statistics (step 310). Each face can be described by a feature vector of several hundred values. The initial
face groups can be obtained in different ways, including fully automated computer methods such as the one described above in relation to Figure 2, or partially or fully manual methods with the assistance of the users. Leading image product service providers (such as Shutterfly, Inc.) have accumulated a vast amount of statistics about the appearance of people's faces in image products. For example, it has been discovered that most family albums or family photobooks typically include 2-4 main characters that appear at high frequencies in each photobook, and the frequencies for other people's faces drastically decrease in the photobook. In another example, the people whose faces appear in pictures can be assigned as VIP persons and non-VIP persons. It is highly improbable that a non-VIP person will be associated with the largest face group in an image album. In another example, products ordered by the customer are tracked and stored in a database. The largest groups in the image albums are cross-referenced with and found to be highly correlated with the most frequent faces in already purchased products.
[0046] Next, support vector machine (SVM) classifiers are trained between pairs of the n* face groups (gi, gj) using image-product statistics (step 320). Each of the n* face groups represents a potentially unique person. For the n* face groups, there are n*(n*-1)/2 such classifiers. In the first iteration, the n* face groups are the same as the initial input face groups. As described in steps 330-370 below, the number n* of face groups, as well as the face compositions within the face groups, can vary as the face grouping converges in consecutive iterations.
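A sketch of step 320 using scikit-learn SVMs, assuming each face group is supplied as an array of its members' feature vectors; the linear kernel is a placeholder choice:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def train_pairwise_classifiers(groups: list) -> dict:
    """Train one binary classifier per pair of face groups: for n* groups,
    n*(n*-1)/2 classifiers in total. groups[k] is an (m_k, d) array of
    feature vectors for hypothetical person k."""
    classifiers = {}
    for a, b in combinations(range(len(groups)), 2):
        X = np.vstack([groups[a], groups[b]])
        y = np.concatenate([np.zeros(len(groups[a])),
                            np.ones(len(groups[b]))])
        classifiers[(a, b)] = SVC(kernel="linear").fit(X, y)
    return classifiers
```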
[0047] In general, face similarity functions can be built based on different features such as two-dimensional or three-dimensional features obtained with the aid of different filters, biometric distances, image masks, etc. In conventional face categorization technologies, it is often a challenge to properly define and normalize similarity or distance between the faces in Euclidean (or other) spaces. To address this issue, face similarity functions are defined using SVM in the presently disclosed method. Each image album or photobook can include several hundred, or even several thousand, faces. SVM is a suitable tool for classifying faces at this scale. The task of face grouping does not use training information, which is different from face recognition. If the identities of people in the photos of an image album or photo collection are known beforehand and their face images are available, face recognition instead of face grouping can be conducted using SVM.
[0048] In the disclosed method, external knowledge on general properties and statistics of faces in image albums or photo collections is combined with the methodology of transductive support vector machines (TSVM). TSVM allows using non-labeled (test) data points for SVM training, thereby improving the separation of the test data during learning. A piece of prior knowledge about image albums or collections is that they contain face pairs that are more likely to belong to the same person than other pairs (from different photo collections). Moreover, the frequencies of people's appearances in an image album or a photo collection are usually distributed exponentially, meaning that the main face groups are built by 2-3 main characters and the rest of the participants appear only several times at most. Thus, iterative grouping and learning from the most probable recognitions can help classify faces in ambiguous cases. The face models created by the initial grouping can be used to improve the face grouping itself. Other knowledge about an image album or image collection can include titles, keywords, and occasions, as well as times and geolocations associated with or input in association with each image album or image collection.
[0049] Next, the m faces f1, ..., fm are classified by the n*(n*-1)/2 classifiers to output m binary vectors c1, ..., cm for the m faces (step 330). The components of the binary vectors c1, ..., cm have values of 0 or 1: the k-th component of ci is 1 if the k-th classifier classifies face i as similar to the corresponding face group, and 0 otherwise.
[0050] An improved similarity function is calculated using the m binary vectors for each pair of the m faces (step 340):

S(i, j) = [4 / (n(n-1))] · Σ_{k=1}^{n(n-1)/2} δ(ci^k, cj^k) − 1    (2)

where δ(ci^k, cj^k) equals 1 when the k-th classifier gives faces i and j the same binary output and 0 otherwise, so that S(i, j) is the fraction of agreeing classifiers rescaled to the range between -1 and 1.
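Under the reconstruction of equation (2) given above, step 340 reduces to an agreement rate over the classifiers, rescaled to [-1, 1]; a sketch:

```python
import numpy as np

def improved_similarity(c_i: np.ndarray, c_j: np.ndarray) -> float:
    """Equation (2), as reconstructed above: the fraction of the
    n(n-1)/2 pairwise classifiers that give faces i and j the same
    binary output, mapped onto the range [-1, 1]."""
    agreement = np.mean(c_i == c_j)    # fraction of agreeing classifiers
    return float(2.0 * agreement - 1.0)
```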
[0051] The m faces are then grouped into modified face groups using non-negative matrix factorization based on values of the improved similarity functions (step 350). The operation is similar to that described above in step 260 (Figure 2), but with improved accuracy in face grouping. In this step, the initial face groups of this iteration may be split or merged to form new face groups or to alter the compositions of existing face groups.
[0052] The difference between the modified face groups {g*} and the initial face groups {g} in the same iteration is calculated (e.g. using the norm of the similarity matrices for the m faces) and compared to a threshold value (step 360). The threshold value can be a constant and/or found empirically. Steps 320-360 are repeated (step 370) if the difference is larger than the threshold value. In other words, the process of training SVM classifiers, calculating binary similarity functions, and grouping based on the binary similarity functions is repeated until the face groups converge to a stable set of groups.
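The outer loop of steps 320-370 can be sketched as follows; improved_similarity_matrix() and regroup() are assumed helpers standing in for the step-330/340 computation and the step-350 NMF-based grouping, and the tolerance is a placeholder:

```python
import numpy as np

def iterate_grouping(faces, groups, max_iter: int = 20, tol: float = 1e-3):
    """Repeat classifier training, improved-similarity computation, and
    regrouping until the face groups stabilize (steps 320-370)."""
    prev_S = None
    for _ in range(max_iter):
        classifiers = train_pairwise_classifiers(groups)      # step 320
        S = improved_similarity_matrix(faces, classifiers)    # steps 330-340
        groups = regroup(faces, S)                            # step 350
        if prev_S is not None and np.linalg.norm(S - prev_S) < tol:
            break                                             # steps 360-370
        prev_S = S
    return groups
```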
[0053] When a stable set of modified face groups {g*} is obtained, they are used to create image products (step 380) such as photobooks, photo calendars, photo greeting cards, or photo mugs. The image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 (Figure 1), then presented to a user 70 or 71 (Figure 1), which allows the image product to be ordered and made by the printing and finishing facilities 40 and 41 (Figure 1). The image product creation can also include partial user input or selection on styles, themes, format, or sizes of an image product, or text to be incorporated into an image product. The detection and grouping of face images can significantly reduce the time used for design and creation, and improve the accuracy and appeal of an image product.
[0054] With the input of knowledge about the image products and users, the modified face groups are more accurate than those produced by the method shown in Figure 2. The modified face groups can be used in different ways when incorporating photos into image products. For example, the most important people (in a family or close friend circle) can be determined and emphasized in an image product, such as automatic VIPs in an image cloud recognition service. A redundant person's face images (of the same or similar scenes) can be filtered and selected before being incorporated into an image product. Unimportant people or strangers can be minimized or avoided in the image product.
[0055] In some embodiments, referring to Figure 4, face recognition can include one or more of the following steps. Face images of known persons (sometimes denoted as face models) are stored (step 410) in computer storage in the data storage (34 in Figure 1) or user devices (60, 61 in Figure 1) as training faces. Examples of the known persons can include family members and friends of the user who uploaded or stored the images from which the face images are extracted. The face images to be identified in the face groups are called testing faces.
[0056] A group of testing faces is then automatically and hypothetically joined by a computer processor with the training faces of a known person to form a joint group (step 420). The group of testing faces can already have been tested to be true as described in step 270 (Figure 2).
[0057] Similarity functions S(i,j) are calculated by the computer processor between each pair of testing or training face images in the joint face group (step 430). The collection of the
similarity functions S(i,j) in the joint face group is described in a similarity distribution function
P(S(i,j)).
[0058] Similar to the previous discussion relating to steps 260-270, non-negative matrix factorization is performed by the computer processor on the similarity function values to estimate Ptrue(S) and Pfalse(S) of the pairs of training and testing face images in the joint face group (step 440). The similarity distribution function P(S(i,j)) is compared to Ptrue(S) and Pfalse(S), and the precision (similarity to Ptrue) is tested against a predetermined threshold (step 440).
[0059] The testing faces in the joint face group are identified to be a known person if the similarity distribution function P(S(i,j)) is True at a percentage higher than a threshold (step 450), that is, when the precision is above a threshold.
[0060] The group of testing face images can be merged with the known person's face images (step 460), thus producing a new set of training faces for the known person.
[0061] As described above, users have increasingly large numbers of images in their accounts. Some users now have thousands to tens of thousands of photos taken from just one event, and may have hundreds of thousands to millions of photos in their accounts. Grouping faces and organizing them in a meaningful way for creating photo products presents new challenges to automated methods of creating image products. One reason for this challenge is that, as the number of photos per account increases, the pair-comparison type of calculations, such as the similarity functions mentioned above, increases as a power function of the number of photos in the user account. The power is typically higher than 2, resulting from the number of combinations in the possible comparative calculations; the number of different faces will also increase with the number of photos in a user account. The faces may include the family members and friends of the user, whose coverage of the user's family and friend circle becomes more complete as the number of photos increases, but will also include an increased number of casual acquaintances and strangers in the background.
[0062] In some embodiments, Figure 5 discloses an improved method for grouping face images for image product creation in user accounts comprising a large number of photos. The disclosed method analyzes and groups face images in a large user account in working batches called chunks. Chunk size refers to the number of photos in a chunk. An optimal chunk size range for a chunk is first defined (step 510). The optimal chunk size can be defined by a minimum Cmin and a maximum Cmax number of face images in a chunk.
[0063] A user account typically includes multiple albums each including one or more photos, typically arranged based on the occasions in which the photos were taken. The chunk size can be larger than most of the albums and may be smaller than some (the very large ones).
[0064] As described above, face images are automatically acquired by a computer processor from an image album in the user account (step 520). The computer processor adds the face images from the image album into a first chunk (step 530). The computer processor can be a computer server (32 in Figure 1) tasked with the processing functions, a computer processor (36 in Figure 1) coupled to a cloud image storage system, or a processor local to a user device (60, 61 in Figure 1).
[0065] For each addition of face images from a new image album into the first chunk, the current chunk size is compared with the optimal chunk size (step 540). If the current chunk size is smaller than Cmax, face images from additional image albums continue to be added to the current chunk (step 550). If the current chunk size becomes larger than Cmax, face images from this image album are separated into multiple portions (step 560). A first portion is included in the current chunk, with the current chunk size kept below Cmax (step 570). The other portion(s) of the face images are added to subsequent chunk(s) (step 580). For example, the other portions of face images can be distributed to four or more subsequent chunks.
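A sketch of this chunk-building loop (steps 530-580); the Cmax value is assumed, and each album is taken as a list of face images:

```python
C_MAX = 2000  # assumed maximum chunk size; the optimal range is left open here

def build_chunks(albums: list) -> list:
    """Pack face images album by album into chunks, splitting an album
    when it would push the current chunk past C_MAX (steps 530-580)."""
    chunks, current = [], []
    for album_faces in albums:
        if len(current) + len(album_faces) <= C_MAX:
            current.extend(album_faces)            # steps 540-550: album fits
            continue
        room = C_MAX - len(current)
        current.extend(album_faces[:room])         # step 570: first portion
        chunks.append(current)
        rest = album_faces[room:]                  # step 580: later portions
        current = []
        for i in range(0, len(rest), C_MAX):
            chunks.append(rest[i:i + C_MAX])
        if chunks and len(chunks[-1]) < C_MAX:
            current = chunks.pop()                 # keep last chunk open
    if current:
        chunks.append(current)
    return chunks
```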
[0066] Once the current chunk is completed, the face images in the current chunk are grouped into face groups (step 590) using methods such as the process disclosed in Figure 2 (steps 210-290) and the process disclosed in Figure 3 (steps 310-370). Then the computer processor will attempt to assign the face groups in the current chunk to known face models (step 600). In this step, the known face models are pre-established for the faces identified as the family members, friends, or key members associated with the user account. This step can be implemented using the process disclosed in Figure 4, in which face models are used as the training faces in steps 410-460.
[0067] New face models are set up for those face groups in the current chunk that cannot be assigned to existing face models associated with the user account (step 610). People associated
with the new face models can be identified automatically by information such as metadata and image tags or by a user.
[0068] The face images that cannot be grouped in the current chunk are moved to one or more subsequent chunks that have not yet been processed for face grouping (step 620). The purpose of this step is to accumulate these ungrouped faces until there are enough face images of sufficient quality to allow them to be grouped.
[0069] Steps 520-620 are then repeated to first build the subsequent chunks of face images, and then group the face images in the subsequent chunk (step 630). The face images can be acquired from the same image album or additional image albums. The face groups in the subsequent chunk are then assigned to existing face models, and if that is not successful, new face models are set up for the unassigned face groups. Again, ungrouped face images can be moved to subsequent chunks to be analyzed with other face images later.
[0070] If ungrouped face images have been moved down more than a predetermined number of times, the people corresponding to these faces are likely to be strangers or casual acquaintances who are not important to the owner of the user account. Those images are discarded (step 640). This step is especially important for large user accounts, because as the number of images increases, the number of face images from strangers and people unimportant to the user increases significantly, which often becomes a heavy burden on face grouping computations. By effectively removing faces of strangers and acquaintances, the computation efficiency of the computer processor in face grouping and the efficiency in
computer storage are significantly increased.
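Steps 620 and 640 amount to simple bookkeeping; in this sketch the cap of three carry-overs and the use of face identifiers as dictionary keys are assumptions:

```python
MAX_MOVES = 3  # assumed cap on carry-overs before a face is discarded

def carry_over(ungrouped: list, next_chunk: list, move_counts: dict) -> list:
    """Move ungrouped face images into the next unprocessed chunk (step 620),
    discarding faces moved more than MAX_MOVES times (step 640)."""
    for face_id in ungrouped:
        move_counts[face_id] = move_counts.get(face_id, 0) + 1
        if move_counts[face_id] <= MAX_MOVES:
            next_chunk.append(face_id)  # retry with more accumulated faces
        # else: likely a stranger or casual acquaintance -- drop it
    return next_chunk
```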
[0071] Once the face images in the user account are grouped and assigned to face models, an image-based product can be created based in part on the face images associated with the face models (step 650), including the known face models and the new face models. The face images can be from the first chunk or subsequent chunks. For example, the face images of the people who appear most frequently in the user account (indicating they are significant to the owner of the user account) can be selected for creating image-product designs over others. A design for an image product can be automatically created by the computer processor 36, the computer device 60, or the mobile device 61 (Figure 1), then presented to a user 70 or 71 (Figure 1). The image product creation can also include partial user input or selection on styles, themes, format, or sizes
of an image product, or text to be incorporated into an image product. The detection and grouping of face images can significantly reduce the time used for design and creation, and improve the accuracy and appeal of an image product. For example, the most important people can be determined and emphasized in an image product. When an order of the image product is received from a user, the design of the image product is sent from the servers 32 to the server 42 in the printing and finishing facilities 40 and 41 (Figure 1), wherein hardcopies of the image-based products are manufactured.
[0072] The disclosed methods can include one or more of the following advantages. The disclosed method can automatically group faces in user accounts that contain a large number of faces in photos, which are difficult to process using conventional technologies. The disclosed method is scalable to any number of photos in a user account. The disclosed method is compatible with different face grouping techniques, including methods based on training faces of known persons.
[0073] The disclosed face grouping method does not rely on prior knowledge about who is in the image album or photo collection, and thus is more flexible and easier to use. The disclosed face grouping method has the benefits of improved accuracy in grouping faces (more accurate merging and splitting), improved relevance of grouped faces to image products, and improved relevance of grouped faces to families and close friends.
[0074] It should be understood that the presently disclosed systems and methods can be compatible with devices or applications other than the examples described above. For example, the disclosed method is suitable for desktop computers, tablet computers, mobile phones, and other types of network-connectable computer devices.
Claims
1. A computer-implemented method of grouping faces in a large user account for creating an image product, comprising:
acquiring face images from an image album in a user's account by a computer processor;
adding the face images obtained from the image album into a first chunk;
comparing chunk size of the first chunk with a maximum chunk value for an optimal chunk size range by the computer processor;
if the chunk size of the first chunk is smaller than the maximum chunk value, keeping the face images from the image album in the first chunk;
if the chunk size of the first chunk is larger than the maximum chunk value, automatically separating, by the computer processor, the face images from the image album into a first portion and one or more second portions;
keeping the first portion in the first chunk to keep the current chunk size below the maximum chunk value;
automatically moving the one or more second portions of face images to one or more subsequent chunks by the computer processor;
automatically grouping face images in the first chunk by the computer processor to form face groups;
assigning at least some of the face groups in the first chunk to known face models associated with the user account; and
creating a design for an image-based product based at least in part on the face images in the first chunk associated with the face models.
2. The computer-implemented method of claim 1, further comprising:
setting up new face models by the computer processor for at least some of the face groups that cannot be assigned to existing face models,
wherein the design for an image-based product is created based on the face images associated with the known face models and the new face models.
3. The computer-implemented method of claim 1, further comprising:
moving the ungrouped face images in the first chunk to one or more subsequent chunks that have not been processed with face grouping.
4. The computer-implemented method of claim 3, further comprising:
discarding ungrouped face images that have been moved to subsequent chunks for more than a predetermined number of times.
5. The computer-implemented method of claim 1, further comprising:
repeating steps from acquiring face images in the image album or additional image albums to automatically grouping face images, to group images in a second chunk subsequent to the first chunk;
assigning at least some of the face groups in the second chunk to known face models associated with the user account; and
creating the design for the image-based product based at least in part on the face images in the first chunk and the second chunk associated with the face models.
6. The computer-implemented method of claim 1, wherein the step of automatically grouping face images in the first chunk comprises:
calculating similarity functions between pairs of face images in the first chunk by the computer processor;
joining face images that have values of the similarity functions above a predetermined threshold into a hypothetical face group, wherein the face images in the hypothetical face group hypothetically belong to a same person;
conducting non-negative matrix factorization on values of the similarity functions in the hypothetical face group to test truthfulness of the hypothetical face group; and
identifying the hypothetical face group as a true face group if a percentage of the associated similarity functions being true is above a threshold based on the non-negative matrix factorization.
7. The computer-implemented method of claim 6, further comprising:
rejecting the hypothetical face group as a true face group if a percentage of the associated similarity functions being true is below a threshold.
8. The computer-implemented method of claim 6, wherein the step of conducting non-negative matrix factorization comprises:
forming a non-negative matrix using values of similarity functions between all different pairs of face images in the hypothetical face group,
wherein the non-negative matrix factorization is conducted over the non-negative matrix.
9. The computer-implemented method of claim 6, wherein the similarity functions in the hypothetical face group are described in a similarity distribution function, wherein the step of non-negative matrix factorization outputs a True similarity distribution function and a False similarity distribution function.
10. The computer-implemented method of claim 6, wherein every pair of face images in the hypothetical face group has a similarity function above the predetermined threshold.
11. The computer-implemented method of claim 6, further comprising:
joining two true face groups to form a joint face group;
conducting non-negative matrix factorization on values of similarity functions in the joint face group; and
merging the two true face groups if a percentage of the associated similarity functions being true is above a threshold in the joint face group.
12. The computer-implemented method of claim 1, wherein the step of automatically grouping face images in the first chunk comprises:
receiving an initial set of n* face groups in the face images in the first chunk, wherein n* is a positive integer bigger than 1;
training classifiers between pairs of face groups in the initial set of face groups using image-product statistics;
classifying the plurality of face images by n*(n*-1)/2 classifiers to output binary vectors for the face images by the computer processor;
calculating a value for an improved similarity function using the binary vectors for each pair of the face images; and
grouping the face images in the first chunk into modified face groups based on values of the binary similarity functions by the computer processor.
13. The computer-implemented method of claim 12, further comprising:
comparing a difference between the modified face groups and the initial face groups to a threshold value, wherein the image product is created based at least in part on the modified face groups if the difference is smaller than the threshold value.
14. The computer-implemented method of claim 12, wherein there are an integer m number of face images in the plurality of face images, wherein the step of classifying the plurality of face images by n*(n*-1)/2 classifiers outputs m number of binary vectors.
15. The computer-implemented method of claim 12, wherein the face images are grouped into modified face groups using non-negative matrix factorization based on values of the improved similarity functions.
16. The computer-implemented method of claim 1, wherein the step of assigning at least some of the face groups in the first chunk to known face models comprises:
storing training faces associated with the known face models of known persons in a computer storage;
joining the face images in the first chunk with a group of training faces associated with the known face models;
calculating similarity functions between pairs of the face images or the training faces in the joint group by a computer processor;
conducting non-negative matrix factorization on values of the similarity functions in the joint face group to test truthfulness of the joint face group; and
identifying the face images in the first chunk that belong to the known face models if a percentage of the associated similarity functions being true is above a threshold based on the non-negative matrix factorization.
17. The computer-implemented method of claim 16, further comprising:
merging the face images with the training faces of the known face model to form a new set of training faces for the known face model.
18. The computer-implemented method of claim 16, wherein the step of conducting non-negative matrix factorization comprises:
forming a non-negative matrix using values of similarity functions between all different pairs of the face images and the training faces in the joint face group,
wherein the non-negative matrix factorization is conducted over the non-negative matrix.
19. The computer-implemented method of claim 16, wherein the similarity functions in the joint face group are described in a similarity distribution function, wherein the step of non-negative matrix factorization outputs a True similarity distribution function and a False similarity distribution function.
20. The computer-implemented method of claim 16, wherein the step of identifying comprises:
comparing the similarity distribution function to the True similarity distribution function and the False similarity distribution function.