WO2017020476A1 - Method and apparatus for determining associated users - Google Patents

Method and apparatus for determining associated users

Info

Publication number
WO2017020476A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
associated user
user
gender
age
Prior art date
Application number
PCT/CN2015/097611
Other languages
English (en)
French (fr)
Inventor
张涛
龙飞
陈志军
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司
Priority to MX2016006745A priority Critical patent/MX361672B/es
Priority to KR1020167013623A priority patent/KR101771153B1/ko
Priority to JP2016532556A priority patent/JP6263263B2/ja
Priority to RU2016119495A priority patent/RU2664003C2/ru
Publication of WO2017020476A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/01Social networking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/30Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/179Human faces, e.g. facial parts, sketches or expressions metadata assisted face recognition

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular, to a method and apparatus for determining an associated user.
  • the embodiments of the present disclosure provide a method and an apparatus for determining an associated user, which are used to solve the problem that the operation of determining an associated user is relatively cumbersome.
  • a method for determining an associated user including:
  • the face album comprising a set of faces of a plurality of users
  • the screening of at least one candidate associated user of the target user from the face album includes:
  • taking, as the candidate associated users, users whose number of identical face source photos shared with the target user is greater than a preset number.
  • the acquiring the attribute information of the candidate associated user, and determining the associated user of the target user according to the attribute information includes:
  • the obtaining gender and age information of the candidate associated user includes:
  • the classifier is used to acquire the gender of the candidate associated user and the age group to which it belongs.
  • the determining the associated user according to a preset condition includes:
  • the acquiring, by the classifier, the gender of the candidate associated user and the age group to which the candidate belongs includes:
  • For each candidate associated user, the classifier is used to obtain the age of each face of the current candidate associated user, the photo shooting time corresponding to each face is obtained, the birth time corresponding to each face is calculated according to the age and the photo shooting time, and the age group to which the current candidate associated user belongs is determined according to the calculated birth times;
  • For each candidate associated user, the classifier is used to obtain the gender corresponding to each face of the current candidate associated user. If the obtained genders are all the same, that gender is used as the gender of the current candidate associated user; if they differ, the number of the current candidate associated user's faces belonging to each gender is counted, and the gender with the larger face count is used as the gender of the current candidate associated user.
  • the using the classifier to obtain the age and gender of each face of the current candidate associated user including:
  • a determining apparatus for an associated user including:
  • An obtaining module configured to obtain a face album, the face album comprising a plurality of user's face sets
  • a determining screening module configured to determine a target user in the face album acquired by the obtaining module, and to screen at least one candidate associated user of the target user from the face album;
  • a determining setting module configured to acquire attribute information of the candidate associated users screened by the determining screening module, determine the associated user of the target user according to the attribute information, and set label information for the associated user.
  • the determining the screening module comprises:
  • an obtaining comparison sub-module configured to acquire the face source photos of all users in the face album, and compare the acquired face source photos of users other than the target user with the face source photos of the target user;
  • a determining sub-module configured to take, as the candidate associated users, users whose number of face source photos shared with the target user is greater than a preset number.
  • the determining the setting module comprises:
  • an obtaining deletion sub-module configured to acquire the gender and age information of the candidate associated users, and to delete candidate associated users that do not meet the age requirement according to the age information;
  • the determination determining sub-module is configured to determine, according to the genders of the remaining candidate associated users, whether the number of remaining candidate associated users exceeds the number of associated users; if not, the remaining candidate associated users are determined to be the associated users, and if so, the associated users are determined according to a preset condition.
  • the obtaining and deleting submodule comprises:
  • a collection and extraction training unit configured to collect training samples, extract features of the training samples, and train a classifier according to the features, where the features include a Gabor feature and the classifier includes an SVM classifier;
  • an obtaining unit configured to acquire, by using the classifier trained by the collection and extraction training unit, the gender of the candidate associated user and an age group to which the candidate belongs.
  • the determination determining submodule is configured to:
  • the obtaining unit is configured to:
  • For each candidate associated user, the classifier is used to obtain the age of each face of the current candidate associated user, the photo shooting time corresponding to each face is obtained, the birth time corresponding to each face is calculated according to the age and the photo shooting time, and the age group to which the current candidate associated user belongs is determined according to the calculated birth times;
  • For each candidate associated user, the classifier is used to obtain the gender corresponding to each face of the current candidate associated user. If the obtained genders are all the same, that gender is used as the gender of the current candidate associated user; if they differ, the number of the current candidate associated user's faces belonging to each gender is counted, and the gender with the larger face count is used as the gender of the current candidate associated user.
  • the obtaining unit is configured to:
  • a determining apparatus for an associated user including:
  • a memory for storing processor-executable instructions;
  • wherein the processor is configured to:
  • the face album comprising a set of faces of a plurality of users
  • determining a target user in the face album, and screening at least one candidate associated user of the target user from the face album;
  • The technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: a face album is acquired, a target user in the face album is determined, and at least one candidate associated user of the target user is screened from the face album; the attribute information of the candidate associated users is then acquired, the associated user of the target user is determined according to the attribute information, and finally label information is set for the associated user.
  • the implementation process is fast and simple, and the user does not need to perform cumbersome operations, thereby saving a lot of time for the user.
  • the way to identify candidate associated users is simple and easy to implement.
  • Determining the associated user according to the attribute information of the candidate associated user is simple and easy to implement.
  • the method of determining the associated user according to the preset condition is simple and accurate.
  • the accuracy is high.
  • When the current face meets the illumination and posture requirements, the obtained age and gender are directly used as the age and gender of the current face; otherwise, the matching face of the current face is obtained from the database, and the age and gender of the matching face are taken as the age and gender of the current face, thus ensuring the recognition accuracy of the current face's gender and age.
  • FIG. 1 is a flowchart of a method for determining an associated user, according to an exemplary embodiment
  • FIG. 2a is a schematic diagram of a face album according to an exemplary embodiment
  • FIG. 2b is a schematic diagram of a face set of a user, according to an exemplary embodiment
  • FIG. 3 is a scene diagram of a method for determining an associated user, according to an exemplary embodiment
  • FIG. 4a is a flowchart of acquiring user attribute information according to an exemplary embodiment
  • FIG. 4b is a flowchart illustrating acquiring age information of a current candidate associated user, according to an exemplary embodiment
  • FIG. 5 is a flowchart of acquiring the age and gender of a face, according to an exemplary embodiment
  • FIG. 6 is a block diagram of a determining device for an associated user, according to an exemplary embodiment
  • FIG. 7 is a block diagram of another determining device for an associated user, according to an exemplary embodiment.
  • FIG. 8 is a block diagram of still another determining device for an associated user according to an exemplary embodiment
  • FIG. 9 is a block diagram of still another determining device for an associated user, according to an exemplary embodiment.
  • FIG. 10 is a block diagram of a determining apparatus suitable for an associated user, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a method for determining an associated user according to an exemplary embodiment. As shown in FIG. 1, the method for determining the associated user is applicable to a mobile terminal, including but not limited to a mobile phone, and includes the following steps S101-S103:
  • In step S101, a face album is acquired, the face album containing a set of faces of a plurality of users.
  • the face album may be obtained from the server, and the face album may include a set of faces of the plurality of users.
  • Figure 2a shows an example of a face album containing the face sets of a plurality of users; the face set of a single user is shown in Figure 2b.
  • In step S102, the target user in the face album is determined, and at least one candidate associated user of the target user is screened out from the face album.
  • the target user may be a baby.
  • the babies' face sets may be identified from the face album, and the target user may be determined according to the number of faces in each baby face set. For example, suppose the current face album contains the face sets of two babies, where the face set of the first baby contains 4 faces and the face set of the second baby contains 50 faces; it can then be determined that the second baby is the target user.
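  • The selection above can be sketched as follows (the function name and data are illustrative assumptions, not part of the disclosure): the target user is the baby whose face set contains the most faces.

```python
# Hypothetical sketch: pick the baby with the largest face set as the target user.
def pick_target_user(baby_face_sets):
    """baby_face_sets maps a user id to the list of that user's face ids."""
    return max(baby_face_sets, key=lambda user: len(baby_face_sets[user]))

face_sets = {"baby_1": list(range(4)), "baby_2": list(range(50))}
print(pick_target_user(face_sets))  # baby_2
```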
  • At least one candidate associated user of the target user may be screened from the face album in, but not limited to, the following manner: the face source photos of all users in the face album are obtained, the face source photos of users other than the target user are compared with the face source photos of the target user, and users whose number of face source photos shared with the target user is greater than a preset number are taken as candidate associated users.
  • A face source photo refers to the photo in which a face appears. For example, assuming photo 1 includes face 1 and face 2, the source photo of both face 1 and face 2 is photo 1; if photo 2 includes face 3, the source photo of face 3 is photo 2.
  • The preset number can be flexibly set as needed, for example, 10 photos, 15 photos, or the like.
  • For example, suppose the current face album contains the face sets of five users, user 1 to user 5, where user 1 is the target user. The face source photos of the five users are obtained, and the face source photos of users 2-5 are each compared with the face source photos of user 1. Suppose the number of identical face source photos between user 2 and user 1 is 2, that is, user 2 appears together with user 1 in 2 photos; between user 3 and user 1 it is 30; between user 4 and user 1 it is 33; and between user 5 and user 1 it is 20. If the preset number is 10, it can be determined that user 3, user 4, and user 5 are candidate associated users of the target user.
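  • The screening step above can be sketched as follows (the data layout, names, and threshold are illustrative assumptions): each user maps to the set of photo ids their faces came from, and a user becomes a candidate when the overlap with the target user exceeds the preset number.

```python
# Hypothetical sketch of candidate screening by shared face source photos.
def candidate_associated_users(source_photos, target, preset_number=10):
    """source_photos maps a user id to the set of photo ids containing that user's face."""
    target_photos = source_photos[target]
    candidates = []
    for user, photos in source_photos.items():
        if user == target:
            continue
        shared = len(target_photos & photos)  # photos the two users appear in together
        if shared > preset_number:
            candidates.append(user)
    return candidates

photos = {
    "user1": set(range(100)),  # target user
    "user2": set(range(2)),    # 2 shared photos
    "user3": set(range(30)),   # 30 shared photos
    "user4": set(range(33)),   # 33 shared photos
    "user5": set(range(20)),   # 20 shared photos
}
print(candidate_associated_users(photos, "user1"))  # ['user3', 'user4', 'user5']
```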
  • In step S103, the attribute information of the candidate associated users is acquired, the associated user of the target user is determined according to the attribute information, and label information is set for the associated user.
  • the attribute information of the candidate associated user may be acquired to determine the associated user of the target user according to the attribute information.
  • For example, the gender and age information of the candidate associated users may be obtained, and candidate associated users that do not meet the age requirement are deleted according to the age information; it is then determined, according to the genders of the remaining candidate associated users, whether their number exceeds the number of associated users. If not, the remaining candidate associated users are determined to be the associated users; if so, the associated users are determined according to preset conditions, such as the number of faces of each candidate associated user.
  • For example, suppose the gender of user 3 is male and his age group is 10-15, the gender of user 4 is female and her age group is 25-30, and the gender of user 5 is male and his age group is 28-35. Since the age of user 3 does not meet the age requirement, user 3 is deleted; user 4 and user 5 meet the requirement on the number of associated users, so it is determined that user 4 and user 5 are the associated users of the target user, for example, user 4 is the mother of the target user and user 5 is the father of the target user.
  • In another example, suppose users 3-5 all meet the age requirement. Since both user 3 and user 5 are male, the number of associated users is exceeded, and user 3 and user 5 are further filtered according to preset conditions. For example, the face counts of user 3 and user 5 can be obtained; since the number of faces of user 3 (30) is larger than the number of faces of user 5 (20), user 3 is determined to be the associated user of the target user.
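  • The filtering described above can be sketched as follows (the field names, the minimum age, and the one-per-gender rule are illustrative assumptions): candidates outside the age requirement are dropped, and when two candidates share a gender, the one with more faces is kept.

```python
# Hypothetical sketch of step S103: age filter, then face-count tie-break per gender.
def determine_associated_users(candidates, min_age=18):
    """candidates: list of dicts with 'name', 'gender', 'age', 'faces' keys."""
    eligible = [c for c in candidates if c["age"] >= min_age]
    best = {}  # gender -> candidate with the most faces
    for c in eligible:
        cur = best.get(c["gender"])
        if cur is None or c["faces"] > cur["faces"]:
            best[c["gender"]] = c
    return sorted(best.values(), key=lambda c: c["name"])

candidates = [
    {"name": "user3", "gender": "male", "age": 12, "faces": 30},
    {"name": "user4", "gender": "female", "age": 27, "faces": 33},
    {"name": "user5", "gender": "male", "age": 30, "faces": 20},
]
print([c["name"] for c in determine_associated_users(candidates)])  # ['user4', 'user5']
```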
  • the manner of determining the associated user according to the preset condition is simple and accurate.
  • After the associated user is determined, label information may be set for the associated user so that subsequent operations can be performed based on the label information.
  • the label information may be “baby's father” or “baby's mother”, or may be a mark indicating “baby's father” or “baby's mother”.
  • the mobile terminal can display the set label information; for example, the label information can be displayed below or above the user's face in the face album, or on the user's face itself, for example, in the upper right corner of the user's face. It should be noted that the style of the label information and its position are not specifically limited herein.
  • In this way, the mobile terminal can extract the faces of the target user and of the target user's associated users at the same time, instead of requiring the user to manually find the associated users of the target user and then extract their faces one by one; this is simple and fast.
  • For example, the user can use the mobile phone 31 to take many photos of the baby and himself, and upload the photos to the server 32. When the current user taps the “face album” option, the mobile phone 31 can obtain the face album from the server 32.
  • the mobile phone 31 can automatically identify the target user, for example, the current user's baby, and screen at least one candidate associated user of the target user from the face album. The mobile phone then obtains the attribute information of the candidate associated users and determines the associated users of the target user, that is, the baby's father and mother, and sets label information for the baby's father and mother, thereby facilitating subsequent operations according to the label information.
  • In this embodiment, a face album is acquired, the target user in the face album is determined, and at least one candidate associated user of the target user is screened out from the face album; the attribute information of the candidate associated users is then obtained, the associated user of the target user is determined according to the attribute information, and finally label information is set for the associated user.
  • FIG. 4a is a flowchart of acquiring user attribute information, according to an exemplary embodiment; through this process, the gender and age information of a candidate associated user can be obtained. As shown in FIG. 4a, the process includes:
  • In step S401, training samples are collected, features of the training samples are extracted, and the classifiers are trained according to the features.
  • Specifically, gender training samples and age training samples need to be collected, and the features of the corresponding training samples are extracted; the features may include, but are not limited to, the Gabor feature. The corresponding classifiers are then trained according to the features.
  • the above classifiers may include, but are not limited to, a Support Vector Machine (SVM) classifier.
  • The Gabor feature is a local feature descriptor for images, mainly used to characterize local texture.
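  • A minimal sketch of step S401 follows. The kernel parameters, the mean-response feature, and the synthetic flat-vs-striped “training samples” are all illustrative assumptions, not the patented implementation; the sketch only shows the general shape of the pipeline: Gabor responses as features, an SVM trained on them.

```python
import numpy as np
from sklearn.svm import SVC

def gabor_kernel(size=9, theta=0.0, sigma=2.0, lam=4.0):
    """Real part of a Gabor filter, oriented by the cosine carrier at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean filter response per orientation: a tiny texture descriptor for one patch."""
    return np.array([np.mean(patch * gabor_kernel(patch.shape[0], t)) for t in thetas])

# Synthetic "training samples": flat patches (class 0) vs. vertically striped patches
# (class 1) whose frequency matches the kernel, so the classes are separable.
rng = np.random.default_rng(0)
x_grid = np.arange(9)
flat = [np.ones((9, 9)) + 0.01 * rng.standard_normal((9, 9)) for _ in range(10)]
striped = [np.cos(np.pi * (x_grid - 4) / 2)[None, :].repeat(9, axis=0)
           + 0.01 * rng.standard_normal((9, 9)) for _ in range(10)]
X = np.array([gabor_features(p) for p in flat + striped])
y = [0] * 10 + [1] * 10
clf = SVC(kernel="linear").fit(X, y)  # the SVM classifier of the embodiment
```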
  • In step S402, the gender of each candidate associated user and the age group to which he or she belongs are obtained by using the classifiers.
  • In this embodiment, the gender and age information of all faces of each candidate associated user can be obtained through the corresponding classifiers; statistics are then performed on the gender and age information of all the faces, and the gender and age group of the corresponding candidate associated user are obtained according to the statistical results.
  • the process of obtaining the age information of the current candidate associated user may be as shown in FIG. 4b, and the process includes:
  • In step S4031, the age of each face of the current candidate associated user is acquired by the classifier, and the photo shooting time corresponding to each face is acquired.
  • In step S4032, the birth time corresponding to each face is calculated from the acquired age and photo shooting time.
  • In step S4033, the age segment to which the current candidate associated user belongs is determined according to the calculated birth times.
  • For example, suppose the current candidate associated user has 40 faces, of which 10 faces correspond to a birth time of 1988, 8 faces to 1990, 7 faces to 1989, 8 faces to 1987, 2 faces to 1980, 2 faces to 1981, 2 faces to 1995, and 1 face to 1996; it is then determined that the age segment to which the current candidate associated user belongs is 25-28.
  • the age range is determined by the above method, and the accuracy is high.
  • For gender, the classifier is used to obtain the gender corresponding to each face of the current candidate associated user. If the obtained genders are all the same, that gender is used as the gender of the current candidate associated user; if they differ, the number of the current candidate associated user's faces belonging to each gender is counted, and the gender with the larger face count is used as the gender of the current candidate associated user.
  • For example, suppose the current candidate associated user has 40 faces, of which 38 faces are recognized as male and 2 faces as female; the gender of the current candidate associated user is then determined to be male.
  • the gender is determined by the above method, and the accuracy is high.
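  • The majority vote described above can be sketched in a few lines (the function name and label strings are illustrative):

```python
from collections import Counter

# Hypothetical sketch: each recognized face casts one vote; the gender
# with the larger face count wins.
def user_gender(face_genders):
    return Counter(face_genders).most_common(1)[0][0]

print(user_gender(["male"] * 38 + ["female"] * 2))  # male
```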
  • In this embodiment, the gender and age information corresponding to all faces of each candidate associated user is obtained, and the gender and age range of the corresponding candidate associated user are then determined according to that information, with high accuracy.
  • FIG. 5 is a flowchart of obtaining face age and gender according to an exemplary embodiment. As shown in FIG. 5, for each face of the current candidate associated user, obtaining the age and gender of the current face may include:
  • In step S501, the age and gender of the current face of the current candidate associated user are acquired by the classifier, and the illumination and posture information of the current face is calculated to obtain a calculation result.
  • the illumination information can be calculated by using the mean and variance of the pixel gray values.
  • In step S502, it is determined whether the calculation result meets the illumination and posture requirements; if yes, step S503 is executed, and if not, step S504 is performed.
  • Whether the user's posture is a frontal posture can be judged in various ways. For example, the positions of several key points on the current face, such as the left eye and the right eye, can be extracted, and it is then checked whether the left eye and the right eye are symmetric; if they are, the posture is frontal.
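  • The two checks can be sketched as follows. All thresholds are illustrative assumptions: illumination is judged from the mean and variance of the pixel gray values, and posture from whether the two eyes are symmetric about the face's vertical midline.

```python
import numpy as np

def illumination_ok(gray, mean_range=(60, 200), min_var=100.0):
    """gray: 2-D array of pixel gray values in [0, 255]; mean catches over/under-
    exposure, variance catches a lack of contrast."""
    return mean_range[0] <= gray.mean() <= mean_range[1] and gray.var() >= min_var

def posture_ok(left_eye_x, right_eye_x, face_width, tolerance=0.1):
    """Frontal if the eyes' distances to the midline differ by at most
    tolerance * face_width (a simple symmetry test)."""
    mid = face_width / 2
    return abs((mid - left_eye_x) - (right_eye_x - mid)) <= tolerance * face_width

dark = np.full((64, 64), 20.0)  # too dark and no contrast
print(illumination_ok(dark))    # False
print(posture_ok(30, 70, 100))  # True: eyes symmetric about x = 50
```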
  • In step S503, the acquired age and gender are taken as the age and gender of the current face, and the current face and its corresponding age and gender are saved in the database.
  • the obtained age and gender can be used as the age and gender of the current face, and the current face and its corresponding age and gender can be saved in the database for subsequent matching.
  • In step S504, the matching face of the current face is obtained from the database, and the age and gender of the matching face are taken as the age and gender of the current face.
  • If the current face does not meet the illumination and posture requirements, for example, the current face is a side view or the lighting is too dark, the acquired age and gender would be inaccurate if used as the age and gender of the current face; in this case, the matching face of the current face needs to be obtained from the database, and the age and gender of the matching face are taken as the age and gender of the current face to improve accuracy.
  • In this embodiment, when the current face meets the illumination and posture requirements, the acquired age and gender are directly used as the age and gender of the current face; when it does not, the matching face of the current face is obtained from the database, and the age and gender of the matching face are taken as the age and gender of the current face, thereby ensuring the recognition accuracy of the current face's gender and age.
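  • The fallback in step S504 can be sketched as a nearest-neighbor lookup. The feature representation and the Euclidean distance are illustrative assumptions; the disclosure only requires that the closest stored face supplies its age and gender.

```python
import numpy as np

# Hypothetical sketch: reuse the age and gender of the closest face in the database.
def nearest_face(db, query_feature):
    """db: list of (feature_vector, age, gender); returns (age, gender) of the
    record whose feature vector is closest to the query."""
    _, age, gender = min(db, key=lambda rec: np.linalg.norm(rec[0] - query_feature))
    return age, gender

db = [(np.array([0.1, 0.9]), 27, "female"), (np.array([0.8, 0.2]), 30, "male")]
print(nearest_face(db, np.array([0.75, 0.25])))  # (30, 'male')
```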
  • Corresponding to the foregoing method embodiments, the embodiments of the present disclosure further provide embodiments of a determining device for an associated user.
  • FIG. 6 is a block diagram of a determining device for an associated user, according to an exemplary embodiment. As shown in FIG. 6, the determining device of the associated user includes an obtaining module 61, a determining screening module 62, and a determining setting module 63.
  • the obtaining module 61 is configured to acquire a face album, and the face album includes a set of faces of a plurality of users.
  • the determining screening module 62 is configured to determine the target user in the face album acquired by the obtaining module 61, and select at least one candidate associated user of the target user from the face album.
  • the determining setting module 63 is configured to acquire the attribute information of the candidate associated user selected by the determining screening module 62, determine the associated user of the target user according to the attribute information, and set the tag information for the associated user.
  • In this embodiment, the obtaining module acquires the face album, the determining screening module determines the target user in the face album and screens at least one candidate associated user of the target user from the face album, and the determining setting module then obtains the attribute information of the candidate associated users, determines the associated user of the target user according to the attribute information, and sets label information for the associated user.
  • FIG. 7 is a block diagram of another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 7, on the basis of the embodiment shown in FIG. 6, the determining-and-screening module 62 may include an obtaining-and-comparison sub-module 621 and a determining sub-module 622.
  • The obtaining-and-comparison sub-module 621 is configured to obtain the source photos of the faces of all users in the face album, and to compare the source photos of users other than the target user with the source photos of the target user.
  • The determining sub-module 622 is configured to take, as candidate associated users, users who share more than a preset number of source photos with the target user.
  • This way of determining candidate associated users is simple and easy to implement.
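The screening rule above, keeping every user who appears in more than a preset number of the same source photos as the target user, can be sketched in a few lines of Python. This is an illustrative sketch only: the `source_photos` mapping, the user IDs, and the threshold of 10 are assumptions for the example, not part of the disclosed implementation.

```python
def candidate_associated_users(source_photos, target, preset_count=10):
    """Return users sharing more than `preset_count` source photos with `target`.

    source_photos: dict mapping user id -> set of photo ids in which that
    user's faces appear (the user's "face source photos").
    """
    target_photos = source_photos[target]
    candidates = []
    for user, photos in source_photos.items():
        if user == target:
            continue
        shared = len(photos & target_photos)  # photos where both users appear
        if shared > preset_count:
            candidates.append(user)
    return candidates

# Mirrors the example in the description: users 2-5 share 2, 30, 33, and 20
# source photos with the target user 1.
albums = {
    "user1": {f"p{i}" for i in range(60)},           # target (the baby)
    "user2": {"p0", "p1"},                           # 2 shared photos
    "user3": {f"p{i}" for i in range(30)},           # 30 shared photos
    "user4": {f"p{i}" for i in range(33)},           # 33 shared photos
    "user5": {f"p{i}" for i in range(20)} | {"x1"},  # 20 shared photos
}
print(candidate_associated_users(albums, "user1"))   # user3, user4, user5
```

With the numbers from the description's example and a preset number of 10, users 3, 4, and 5 survive the screening, matching the text.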
  • FIG. 8 is a block diagram of yet another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 8, on the basis of the embodiment shown in FIG. 6, the determining-and-setting module 63 may include an obtaining-and-deletion sub-module 631 and a judgment-and-determination sub-module 632.
  • The obtaining-and-deletion sub-module 631 is configured to acquire the gender and age information of the candidate associated users, and to delete, according to the age information, candidates that do not meet the age requirement.
  • The judgment-and-determination sub-module 632 is configured to judge, according to the genders of the remaining candidates, whether the remaining candidates exceed the number of associated users; if not, the remaining candidates are determined to be the associated users; if so, the associated users are determined according to a preset condition.
  • In an embodiment, the judgment-and-determination sub-module 632 may be configured to obtain the face counts of the remaining candidates and take the candidate with the largest face count as the associated user.
  • This way of determining the associated user according to the candidates' attribute information is simple and easy to implement.
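The filter-then-tie-break logic of sub-modules 631 and 632 can be sketched as follows. The candidate dictionaries, the `adult` age predicate, and the one-per-gender limit are illustrative assumptions matching the father/mother example in the description, not a data model prescribed by the patent.

```python
def pick_associated_users(candidates, age_ok, face_count, max_per_gender=1):
    """Filter candidates by age, then resolve same-gender conflicts.

    candidates: dict user -> {"gender": ..., "age_band": (lo, hi)}
    age_ok: predicate on an age band (the "age requirement")
    face_count: dict user -> number of faces of that user in the album
    max_per_gender: allowed associated users per gender (one father and
    one mother in the family-album example).
    """
    remaining = {u: a for u, a in candidates.items() if age_ok(a["age_band"])}
    chosen = []
    for gender in {a["gender"] for a in remaining.values()}:
        same = [u for u, a in remaining.items() if a["gender"] == gender]
        if len(same) <= max_per_gender:
            chosen += same
        else:  # preset condition: keep the candidate(s) with the most faces
            same.sort(key=lambda u: face_count[u], reverse=True)
            chosen += same[:max_per_gender]
    return sorted(chosen)

# Example from the description: two male candidates in the right age band,
# so the one with more faces (user3, 30 faces vs. 20) wins.
cands = {
    "user3": {"gender": "male", "age_band": (25, 30)},
    "user4": {"gender": "female", "age_band": (25, 30)},
    "user5": {"gender": "male", "age_band": (28, 35)},
}
faces = {"user3": 30, "user4": 33, "user5": 20}
adult = lambda band: band[0] >= 18  # illustrative "age requirement"
print(pick_associated_users(cands, adult, faces))  # ['user3', 'user4']
```

If user3's age band were 10-15 instead, the age filter would remove it and users 4 and 5 would be returned, as in the description's first example.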
  • FIG. 9 is a block diagram of still another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 9, on the basis of the embodiment shown in FIG. 8, the obtaining-and-deletion sub-module 631 may include a collection-extraction-training unit 6311 and an obtaining unit 6312.
  • The collection-extraction-training unit 6311 is configured to collect training samples, extract features of the training samples, and train classifiers according to the features; the features include Gabor features, and the classifiers include SVM classifiers.
  • The obtaining unit 6312 is configured to acquire the gender and age group of each candidate associated user using the classifiers trained by the collection-extraction-training unit 6311.
  • In an embodiment, the obtaining unit 6312 may be configured to: for each candidate associated user, use the classifier to obtain the age of each of the current candidate's faces, obtain the shooting time of the photo corresponding to each face, compute each face's birth time from the age and the shooting time, and determine the current candidate's age group from the computed birth times; and, for each candidate associated user, use the classifier to obtain the gender corresponding to each of the current candidate's faces; if the obtained genders are all the same, that gender is taken as the current candidate's gender; if they differ, the current candidate's faces belonging to each gender are counted, and the gender with the larger face count is taken as the current candidate's gender.
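The per-face aggregation described above (birth time from predicted age and photo timestamp, then an age band; gender by majority vote over faces) can be sketched as below. The median-based outlier trimming is an assumption added so that the stray faces in the description's example (birth years such as 1980 or 1996 among a 1987-1990 cluster) do not stretch the band; the patent itself only states that the age segment is determined from the computed birth times.

```python
import statistics
from collections import Counter

def birth_years(face_records):
    """face_records: (predicted_age, photo_year) pairs for one user's faces."""
    return [year - age for age, year in face_records]

def age_band(face_records, current_year):
    """Age band implied by the dominant cluster of birth years; years more
    than 2 from the median are treated as outliers and dropped (an
    illustrative rule, not mandated by the text)."""
    years = birth_years(face_records)
    med = statistics.median(years)
    core = [y for y in years if abs(y - med) <= 2]
    return current_year - max(core), current_year - min(core)

def majority_gender(genders):
    """If all face-level predictions agree, that gender is used; otherwise
    the gender with the larger face count wins."""
    return Counter(genders).most_common(1)[0][0]

# The 40-face example from the description, as (age, photo_year) pairs:
records = []
for year, count in [(1988, 10), (1990, 8), (1989, 7), (1987, 8),
                    (1980, 2), (1981, 2), (1995, 2), (1996, 1)]:
    records += [(2015 - year, 2015)] * count
print(age_band(records, 2015))                          # (25, 28)
print(majority_gender(["male"] * 38 + ["female"] * 2))  # male
```

Fed the 40-face example from the description, `age_band` yields the 25-28 band the text cites, and a 38-to-2 male majority yields "male", as in the gender example.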
  • In another embodiment, the obtaining unit 6312 may be configured to: for each face of the current candidate associated user, use the classifier to obtain the age and gender of the current face, and compute the illumination and posture information of the current face; if the result meets the illumination and posture requirements, take the obtained age and gender as the age and gender of the current face and save the current face together with its age and gender in a database; if the result does not meet the requirements, obtain a matching face of the current face from the database and take the matching face's age and gender as the age and gender of the current face.
  • These ways of obtaining the attribute information of associated users are flexible and varied, and achieve high accuracy.
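A minimal sketch of this quality gate: estimate age and gender, accept the estimate when illumination (mean gray value in a preset range) and posture (eyes roughly symmetric about the face midline) are acceptable and cache the face, otherwise fall back to the best matching cached face. The face dictionary layout, the dot-product similarity, and the symmetry tolerance are assumptions for illustration; the patent specifies only the mean-gray-value range (e.g., 50-100) and the eye-symmetry idea.

```python
DB = []  # cached (embedding, age, gender) triples for good-quality faces

def usable_quality(face, lo=50, hi=100):
    """Illumination: mean gray value of the face pixels must lie in a preset
    range (the text cites 50-100).  Posture: treat the face as frontal when
    the two eyes sit roughly symmetrically about the vertical midline.
    The 10%-of-width tolerance is an illustrative assumption."""
    brightness = sum(face["pixels"]) / len(face["pixels"])
    mid = face["width"] / 2
    offset = abs((mid - face["left_eye_x"]) - (face["right_eye_x"] - mid))
    return lo <= brightness <= hi and offset < 0.1 * face["width"]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def face_age_gender(face, classify, embed, match_threshold=0.8):
    """classify(face) -> (age, gender); embed(face) -> feature vector."""
    age, gender = classify(face)
    if usable_quality(face):
        DB.append((embed(face), age, gender))  # cache for later matching
        return age, gender
    # Poor lighting or pose: trust the best matching cached face instead.
    query = embed(face)
    scored = [(dot(e, query), a, g) for e, a, g in DB]
    if scored:
        score, a, g = max(scored)
        if score >= match_threshold:
            return a, g
    return age, gender  # no usable match: fall back to the direct estimate

good = {"pixels": [80] * 100, "width": 100, "left_eye_x": 30, "right_eye_x": 70}
dark = {"pixels": [20] * 100, "width": 100, "left_eye_x": 10, "right_eye_x": 55}
classify = lambda f: (30, "male") if f["pixels"][0] > 50 else (55, "female")
embed = lambda f: [0.5, 0.5, 0.5, 0.5]  # stand-in for a real face embedding
print(face_age_gender(good, classify, embed))  # accepted directly and cached
print(face_age_gender(dark, classify, embed))  # resolved via the DB match
```

The dark profile face fails the gate, so its noisy direct estimate is discarded in favor of the cached result from the well-lit frontal face of the same person.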
  • FIG. 10 is a block diagram of a device suitable for determining an associated user, according to an exemplary embodiment. For example, the device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, an aircraft, and the like.
  • Referring to FIG. 10, the device 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1016.
  • The processing component 1002 typically controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communication, camera operation, and recording.
  • The processing component 1002 may include one or more processors 1020 to execute instructions so as to perform all or part of the steps of the methods described above.
  • In addition, the processing component 1002 may include one or more modules to facilitate interaction between the processing component 1002 and the other components.
  • For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
  • The memory 1004 is configured to store various types of data to support operation at the device 1000. Examples of such data include instructions for any application or method operating on the device 1000, contact data, phone-book data, messages, pictures, videos, and the like.
  • The memory 1004 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • Power component 1006 provides power to various components of device 1000.
  • Power component 1006 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1000.
  • The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 1008 includes a front camera and/or a rear camera. When the device 1000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 1010 is configured to output and/or input an audio signal.
  • the audio component 1010 includes a microphone (MIC) that is configured to receive an external audio signal when the device 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 1004 or transmitted via communication component 1016.
  • the audio component 1010 also includes a speaker for outputting an audio signal.
  • the I/O interface 1012 provides an interface between the processing component 1002 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • The sensor component 1014 includes one or more sensors for providing status assessments of various aspects of the device 1000.
  • For example, the sensor component 1014 can detect the open/closed state of the device 1000 and the relative positioning of components (e.g., the display and keypad of the device 1000); it can also detect a change in position of the device 1000 or of a component of the device 1000, the presence or absence of user contact with the device 1000, the orientation or acceleration/deceleration of the device 1000, and changes in its temperature.
  • Sensor assembly 1014 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 1014 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 1016 is configured to facilitate wired or wireless communication between device 1000 and other devices.
  • the device 1000 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • In an exemplary embodiment, the communication component 1016 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 1016 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • In an exemplary embodiment, the apparatus 1000 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1004 including instructions executable by the processor 1020 of the apparatus 1000 to perform the above method.
  • For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
  • Embodiments of the present disclosure acquire a face album, determine a target user in the face album, and select at least one candidate associated user of the target user from the album; the attribute information of the candidates is then acquired, the associated user of the target user is determined from that information, and finally tag information is set for the associated user.
  • The process is fast and simple, requires no tedious user operations, and saves the user a great deal of time.
  • The way of identifying candidate associated users is simple and easy to implement.
  • Determining the associated user according to the attribute information of the candidates is simple and easy to implement.
  • Determining the associated user according to a preset condition is simple and highly accurate.
  • Determining each candidate's gender and age group from the gender and age information of all of that candidate's faces achieves high accuracy.
  • When the current face meets the illumination and posture requirements, the obtained age and gender are used directly as the age and gender of the current face; when it does not, a matching face is obtained from the database and its age and gender are used instead, thereby ensuring the recognition accuracy of the current face's gender and age.


Abstract

A method and device for determining an associated user. The method includes: acquiring a face album containing face sets of a plurality of users (S101); determining a target user in the face album and selecting at least one candidate associated user of the target user from the face album (S102); and acquiring attribute information of the candidate associated users, determining the associated user of the target user according to the attribute information, and setting tag information for the associated user (S103).

Description

Method and Device for Determining an Associated User
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based upon and claims priority to Chinese Patent Application No. 201510463635.X, filed on July 31, 2015, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of image processing technology, and more particularly to a method and device for determining an associated user.
BACKGROUND
With the rapid development of mobile terminal technology, mobile terminals such as mobile phones have become ubiquitous and increasingly powerful. For example, a user can take photos with a phone and share them with family and friends.
As the number of photos keeps growing, when the user wants to perform an operation such as creating a family album, the users associated with one or more given users must be found manually, and subsequent operations such as creating the family album are then performed based on the associated users. For example, if only a baby's album currently exists, the baby's associated users, i.e., the baby's father and mother, must be found among a huge number of photos in order to create a family album.
However, manually determining associated users is a cumbersome operation that wastes a great deal of the user's time.
SUMMARY
Embodiments of the present disclosure provide a method and device for determining an associated user, so as to solve the problem that determining an associated user currently requires cumbersome operations.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for determining an associated user, including:
acquiring a face album, the face album containing face sets of a plurality of users;
determining a target user in the face album, and selecting at least one candidate associated user of the target user from the face album;
acquiring attribute information of the candidate associated users, determining the associated user of the target user according to the attribute information, and setting tag information for the associated user.
In an embodiment, selecting at least one candidate associated user of the target user from the face album includes:
obtaining the source photos of the faces of all users in the face album, and comparing the obtained source photos of users other than the target user with the source photos of the target user;
taking, as the candidate associated users, users who share more than a preset number of source photos with the target user.
In an embodiment, acquiring the attribute information of the candidate associated users and determining the associated user of the target user according to the attribute information includes:
acquiring the gender and age information of the candidate associated users, and deleting, according to the age information, candidates that do not meet the age requirement;
judging, according to the genders of the remaining candidates, whether the remaining candidates exceed the number of associated users; if not, determining the remaining candidates to be the associated users; if so, determining the associated users according to a preset condition.
In an embodiment, acquiring the gender and age information of the candidate associated users includes:
collecting training samples, extracting features of the training samples, and training classifiers according to the features, the features including Gabor features and the classifiers including SVM classifiers;
acquiring the gender and age group of each candidate associated user using the classifiers.
In an embodiment, determining the associated user according to a preset condition includes:
obtaining the face counts of the remaining candidate associated users, and taking the candidate associated user with the largest face count as the associated user.
In an embodiment, acquiring the gender and age group of each candidate associated user using the classifiers includes:
for each candidate associated user, using the classifier to obtain the age of each face of the current candidate associated user, obtaining the shooting time of the photo corresponding to each face, computing each face's corresponding birth time from the age and the shooting time, and determining the age group of the current candidate associated user from the computed birth times;
for each candidate associated user, using the classifier to obtain the gender corresponding to each face of the current candidate associated user; if the obtained genders are the same, taking the obtained gender as the gender of the current candidate associated user; if the obtained genders differ, counting the current candidate's faces belonging to each gender and taking the gender with the larger face count as the gender of the current candidate associated user.
In an embodiment, using the classifier to obtain the age and gender of each face of the current candidate associated user includes:
for each face of the current candidate associated user, using the classifier to obtain the age and gender of the current face, and computing the illumination and posture information of the current face; if the computation result meets the illumination and posture requirements, taking the obtained age and gender as the age and gender of the current face and saving the current face together with its age and gender in a database; if the computation result does not meet the illumination and posture requirements, obtaining a matching face of the current face from the database and taking the matching face's age and gender as the age and gender of the current face.
According to a second aspect of the embodiments of the present disclosure, there is provided a device for determining an associated user, including:
an obtaining module configured to acquire a face album, the face album containing face sets of a plurality of users;
a determining-and-screening module configured to determine a target user in the face album acquired by the obtaining module, and to select at least one candidate associated user of the target user from the face album;
a determining-and-setting module configured to acquire attribute information of the candidate associated users selected by the determining-and-screening module, determine the associated user of the target user according to the attribute information, and set tag information for the associated user.
In an embodiment, the determining-and-screening module includes:
an obtaining-and-comparison sub-module configured to obtain the source photos of the faces of all users in the face album, and to compare the obtained source photos of users other than the target user with the source photos of the target user;
a determining sub-module configured to take, as the candidate associated users, users who share more than a preset number of source photos with the target user.
In an embodiment, the determining-and-setting module includes:
an obtaining-and-deletion sub-module configured to acquire the gender and age information of the candidate associated users, and to delete, according to the age information, candidates that do not meet the age requirement;
a judgment-and-determination sub-module configured to judge, according to the genders of the remaining candidates, whether the remaining candidates exceed the number of associated users; if not, to determine the remaining candidates to be the associated users; if so, to determine the associated users according to a preset condition.
In an embodiment, the obtaining-and-deletion sub-module includes:
a collection-extraction-training unit configured to collect training samples, extract features of the training samples, and train classifiers according to the features, the features including Gabor features and the classifiers including SVM classifiers;
an obtaining unit configured to acquire the gender and age group of each candidate associated user using the classifiers trained by the collection-extraction-training unit.
In an embodiment, the judgment-and-determination sub-module is configured to:
obtain the face counts of the remaining candidate associated users, and take the candidate associated user with the largest face count as the associated user.
In an embodiment, the obtaining unit is configured to:
for each candidate associated user, use the classifier to obtain the age of each face of the current candidate associated user, obtain the shooting time of the photo corresponding to each face, compute each face's corresponding birth time from the age and the shooting time, and determine the age group of the current candidate associated user from the computed birth times;
for each candidate associated user, use the classifier to obtain the gender corresponding to each face of the current candidate associated user; if the obtained genders are the same, take the obtained gender as the gender of the current candidate associated user; if the obtained genders differ, count the current candidate's faces belonging to each gender and take the gender with the larger face count as the gender of the current candidate associated user.
In an embodiment, the obtaining unit is configured to:
for each face of the current candidate associated user, use the classifier to obtain the age and gender of the current face, and compute the illumination and posture information of the current face; if the computation result meets the illumination and posture requirements, take the obtained age and gender as the age and gender of the current face and save the current face together with its age and gender in a database; if the computation result does not meet the illumination and posture requirements, obtain a matching face of the current face from the database and take the matching face's age and gender as the age and gender of the current face.
According to a third aspect of the embodiments of the present disclosure, there is provided a device for determining an associated user, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire a face album, the face album containing face sets of a plurality of users;
determine a target user in the face album, and select at least one candidate associated user of the target user from the face album;
acquire attribute information of the candidate associated users, determine the associated user of the target user according to the attribute information, and set tag information for the associated user.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: by acquiring a face album, determining a target user in it, selecting at least one candidate associated user of the target user from it, then acquiring the candidates' attribute information, determining the associated user of the target user according to that information, and finally setting tag information for the associated user, the process is fast and simple, requires no tedious user operations, and saves the user a great deal of time.
The way of determining candidate associated users is simple and easy to implement.
The way of determining the associated user according to the candidates' attribute information is simple and easy to implement.
The way of acquiring the attribute information of associated users is simple and flexible.
The way of determining the associated user according to a preset condition is simple and highly accurate.
Obtaining the gender and age information of all faces of each candidate associated user, and then determining the candidate's gender and age group from the information of all faces, achieves high accuracy.
When the current face meets the illumination and posture requirements, the obtained age and gender are used directly as the age and gender of the current face; when it does not, a matching face is obtained from the database and its age and gender are used instead, thereby ensuring the recognition accuracy of the current face's gender and age.
It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart of a method for determining an associated user, according to an exemplary embodiment;
FIG. 2a is a schematic diagram of a face album, according to an exemplary embodiment;
FIG. 2b is a schematic diagram of a face set, according to an exemplary embodiment;
FIG. 3 is a scene diagram of a method for determining an associated user, according to an exemplary embodiment;
FIG. 4a is a flowchart of acquiring user attribute information, according to an exemplary embodiment;
FIG. 4b is a flowchart of acquiring the age information of a current candidate associated user, according to an exemplary embodiment;
FIG. 5 is a flowchart of acquiring the age of a face, according to an exemplary embodiment;
FIG. 6 is a block diagram of a device for determining an associated user, according to an exemplary embodiment;
FIG. 7 is a block diagram of another device for determining an associated user, according to an exemplary embodiment;
FIG. 8 is a block diagram of yet another device for determining an associated user, according to an exemplary embodiment;
FIG. 9 is a block diagram of still another device for determining an associated user, according to an exemplary embodiment;
FIG. 10 is a block diagram of a device suitable for determining an associated user, according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of devices and methods consistent with some aspects of the invention as detailed in the appended claims.
FIG. 1 is a flowchart of a method for determining an associated user, according to an exemplary embodiment. As shown in FIG. 1, the method may be applied to a mobile terminal, including but not limited to a mobile phone, and includes the following steps S101-S103:
In step S101, a face album is acquired, the face album containing face sets of a plurality of users.
In this embodiment, if a mobile terminal such as a phone has enabled the "face album" feature, the face album may be acquired from a server; the face album may contain face sets of a plurality of users.
FIG. 2a shows an example of a face album containing the face sets of several users; the face set of one user may be as shown in FIG. 2b.
In step S102, a target user in the face album is determined, and at least one candidate associated user of the target user is selected from the face album.
In this embodiment, the target user may be a baby. After the face album is acquired, the baby's face set may be recognized in the album, and the target user may be determined according to the number of faces in the baby's face set. For example, suppose the current face album contains the face sets of two babies, the first containing 4 faces and the second containing 50 faces; the second baby may then be determined to be the target user.
After the target user is determined, at least one candidate associated user of the target user may be selected from the face album in, but not limited to, the following way: obtain the source photos of the faces of all users in the album, compare the source photos of users other than the target user with the source photos of the target user, and take as candidate associated users those users who share more than a preset number of source photos with the target user.
Here, a face's source photo is the photo in which the face appears. Suppose photo 1 contains face 1 and face 2; then the source photo of both face 1 and face 2 is photo 1. If photo 2 contains face 3, the source photo of face 3 is photo 2. The preset number may be set flexibly as needed, for example 10 or 15.
Suppose the current face album contains the face sets of five users, users 1-5, where user 1 is the target user. The source photos of these five users are obtained, and the source photos of users 2-5 are each compared with those of user 1. Suppose user 2 shares 2 source photos with user 1, i.e., they appear together in 2 photos; user 3 shares 30 source photos with user 1; user 4 shares 33; and user 5 shares 20. If the preset number is 10, users 3, 4, and 5 are determined to be candidate associated users of the target user.
In step S103, attribute information of the candidate associated users is acquired, the associated user of the target user is determined according to the attribute information, and tag information is set for the associated user.
After the candidate associated users of the target user are determined, their attribute information may be acquired so that the associated user of the target user can be determined from it.
For example, the gender and age information of the candidates may be acquired, and candidates that do not meet the age requirement may be deleted according to the age information; it is then judged, according to the genders of the remaining candidates, whether they exceed the number of associated users. If not, the remaining candidates are determined to be the associated users; if so, the associated users are determined according to a preset condition, for example the candidates' face counts.
Continuing the example above, suppose user 3 is male with age group 10-15, user 4 is female with age group 25-30, and user 5 is male with age group 28-35. Since user 3's age group does not meet the age requirement, user 3 is deleted. Since users 4 and 5, by gender, satisfy the required number of associated users, they are determined to be the associated users of the target user; for example, user 4 is the target user's mother and user 5 the target user's father.
However, suppose user 3 is male with age group 25-30; then users 3-5 all meet the age requirement. Since users 3 and 5 are both male, the number of associated users is exceeded, and users 3 and 5 must be screened further according to a preset condition. For example, the face counts of users 3 and 5 may be obtained; since user 3's face count (30) is greater than user 5's (20), user 3 is determined to be the associated user of the target user.
This way of determining the associated user according to a preset condition is simple and highly accurate.
After the associated user of the target user is determined, tag information may be set for the associated user so that subsequent operations can be performed according to it.
The tag information may be "baby's dad" or "baby's mom", or a mark representing "baby's dad" or "baby's mom". The mobile terminal may display the set tag information, for example at the bottom or top of the corresponding user's face in the face album, or on the user's face itself, for example in its upper-right corner; the style and position of the tag information are not specifically limited here.
Further, after the tag information is set for the associated users, if the user triggers an operation such as creating a family album, the mobile terminal can extract the faces of the target user and of the target user's associated users at the same time, instead of manually finding the target user's associated users and extracting their faces one by one, which is simple and fast.
The present disclosure is illustrated below with reference to FIG. 3. As shown in FIG. 3, a user may use a phone 31 to take many photos of the baby and of themselves, and may upload the photos to a server 32. After the user taps to enable the "face album" option, the phone 31 may acquire the face album from the server 32. The phone 31 may automatically recognize the target user, for example the user's baby, select at least one candidate associated user of the target user from the album, acquire the candidates' attribute information, determine from it the target user's associated users, i.e., the baby's dad and mom, and set tag information for them, facilitating subsequent operations based on the tags.
In the above method embodiment, by acquiring a face album, determining the target user in it, selecting at least one candidate associated user of the target user from it, then acquiring the candidates' attribute information, determining the associated user of the target user from that information, and finally setting tag information for the associated user, the process is fast and simple, requires no tedious user operations, and saves the user a great deal of time.
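The target-user choice in step S102 (among the recognized baby face sets, prefer the one with more faces) can be sketched as follows; the `is_baby` predicate stands in for the baby-recognition step, which the text does not detail.

```python
def pick_target_user(face_sets, is_baby):
    """Among users recognized as babies, pick the one with the most faces,
    as in the example where a 50-face baby beats a 4-face baby.

    face_sets: dict user -> list of that user's faces
    is_baby: predicate deciding whether a user's face set is a baby's
    """
    babies = [u for u in face_sets if is_baby(u)]
    return max(babies, key=lambda u: len(face_sets[u]))

sets_ = {"babyA": ["f"] * 4, "babyB": ["f"] * 50, "adult": ["f"] * 30}
print(pick_target_user(sets_, lambda u: u.startswith("baby")))  # babyB
```

The adult's 30 faces are ignored because only users the `is_baby` predicate accepts compete for the target-user role.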
FIG. 4a is a flowchart of acquiring user attribute information, according to an exemplary embodiment; the gender and age information of the candidate associated users can be acquired through this embodiment. As shown in FIG. 4a, the process includes:
In step S401, training samples are collected, features of the training samples are extracted, and classifiers are trained according to the features.
Since gender and age information is acquired in this embodiment, gender training samples and age training samples must be collected and the features of the corresponding samples extracted; the features may include but are not limited to Gabor features. The corresponding classifiers are then trained on the features; the classifiers may include but are not limited to Support Vector Machine (SVM) classifiers.
The Gabor feature is a local feature measure of an image, mainly used to characterize local texture.
In step S402, the gender and age group of each candidate associated user are acquired using the classifiers.
Since the face album contains multiple faces of each candidate associated user, this embodiment may use the corresponding classifier to obtain the gender and age information of all faces of each candidate, aggregate the per-face gender and age information statistically, and obtain the candidate's gender and age group from the statistics.
For example, for each candidate associated user, the process of acquiring the current candidate's age information may be as shown in FIG. 4b and includes:
In step S4031, the age of each face of the current candidate associated user is acquired using the classifier, and the shooting time of the photo corresponding to each face is obtained.
In step S4032, the birth time corresponding to each face is computed from the acquired age and the photo's shooting time.
In step S4033, the age group of the current candidate associated user is determined from the computed birth times.
Suppose the current candidate has 40 faces, of which 10 correspond to a birth year of 1988, 8 to 1990, 7 to 1989, 8 to 1987, 2 to 1980, 2 to 1981, 2 to 1995, and 1 to 1996; the current candidate's age group is then determined to be 25-28.
Determining the age group in this way achieves high accuracy.
As another example, for each candidate associated user, the classifier is used to obtain the gender corresponding to each of the current candidate's faces. If the obtained genders are all the same, that gender is taken as the current candidate's gender; if they differ, the current candidate's faces belonging to each gender are counted, and the gender with the larger face count is taken as the current candidate's gender.
Suppose the current candidate has 40 faces, of which 38 are classified as male and 2 as female; the current candidate's gender is then determined to be male.
Determining the gender in this way achieves high accuracy.
In the above embodiment, the gender and age information of all faces of each candidate associated user is obtained, and the candidate's gender and age group are then determined from the information of all faces, achieving high accuracy.
FIG. 5 is a flowchart of acquiring a face's age and gender, according to an exemplary embodiment. As shown in FIG. 5, for each face of the current candidate associated user, acquiring the age and gender of the current face may include:
In step S501, the age and gender of the current face of the current candidate associated user are acquired using the classifier, and the illumination and posture information of the current face is computed to obtain a computation result.
Shooting angle, lighting, and similar factors usually make the recognized gender and age of different faces of the same user inconsistent. To solve this problem, the illumination and posture information of the current face is computed in this embodiment.
The illumination information may be computed from the mean and variance of the pixel gray values.
In step S502, it is judged whether the computation result meets the illumination and posture requirements; if so, step S503 is executed; if not, step S504 is executed.
In this embodiment, it may be judged whether the user's posture is frontal and whether the mean pixel gray value lies in a preset range, for example 50-100. If the posture is frontal and the mean gray value lies in 50-100, the current face is determined to meet the illumination and posture requirements; otherwise, it is determined not to.
Whether the user's posture is frontal may be judged in various ways. For example, the positions of several points on the current face, such as the left and right eyes, may be extracted, and it may be judged whether the left and right eyes are symmetric; if they are, the posture is frontal.
In step S503, the acquired age and gender are taken as the age and gender of the current face, and the current face together with its age and gender is saved in a database.
If the current face meets the illumination and posture requirements, the acquired age and gender may be taken as its age and gender, and the current face with its corresponding age and gender may be saved in the database for later matching.
In step S504, a matching face of the current face is obtained from the database, and the matching face's age and gender are taken as the age and gender of the current face.
If the current face does not meet the illumination and posture requirements, for example it is in profile and the lighting is too dark, taking the acquired age and gender as the age and gender of the current face would be inaccurate; a matching face of the current face must therefore be obtained from the database, and the matching face's age and gender are taken as the age and gender of the current face, improving accuracy.
In the above embodiment, when the current face meets the illumination and posture requirements, the acquired age and gender are used directly as the age and gender of the current face; when it does not, a matching face is obtained from the database and its age and gender are used instead, thereby ensuring the recognition accuracy of the current face's gender and age.
Corresponding to the foregoing embodiments of the method for determining an associated user, embodiments of the present disclosure further provide embodiments of a device for determining an associated user.
FIG. 6 is a block diagram of a device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 6, the device includes an obtaining module 61, a determining-and-screening module 62, and a determining-and-setting module 63.
The obtaining module 61 is configured to acquire a face album, the face album containing face sets of a plurality of users.
The determining-and-screening module 62 is configured to determine the target user in the face album acquired by the obtaining module 61, and to select at least one candidate associated user of the target user from the face album.
The determining-and-setting module 63 is configured to acquire attribute information of the candidate associated users selected by the determining-and-screening module 62, determine the associated user of the target user according to the attribute information, and set tag information for the associated user.
For the process by which the device shown in FIG. 6 determines the associated user, reference may be made to the embodiment shown in FIG. 1, which is not repeated here.
In the above device embodiment, the obtaining module acquires the face album, the determining-and-screening module determines the target user in the album and selects at least one candidate associated user of the target user from it, and the determining-and-setting module acquires the candidates' attribute information, determines the associated user of the target user from that information, and sets tag information for the associated user. The process is fast and simple, requires no tedious user operations, and saves the user a great deal of time.
FIG. 7 is a block diagram of another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 7, on the basis of the embodiment shown in FIG. 6, the determining-and-screening module 62 may include an obtaining-and-comparison sub-module 621 and a determining sub-module 622.
The obtaining-and-comparison sub-module 621 is configured to obtain the source photos of the faces of all users in the face album, and to compare the obtained source photos of users other than the target user with the source photos of the target user.
The determining sub-module 622 is configured to take, as the candidate associated users, users who share more than a preset number of source photos with the target user.
For the process by which the device shown in FIG. 7 determines the associated user, reference may be made to the corresponding part of the embodiment shown in FIG. 1, which is not repeated here.
In the above embodiment, the way of determining candidate associated users is simple and easy to implement.
FIG. 8 is a block diagram of yet another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 8, on the basis of the embodiment shown in FIG. 6, the determining-and-setting module 63 may include an obtaining-and-deletion sub-module 631 and a judgment-and-determination sub-module 632.
The obtaining-and-deletion sub-module 631 is configured to acquire the gender and age information of the candidate associated users, and to delete, according to the age information, candidates that do not meet the age requirement.
The judgment-and-determination sub-module 632 is configured to judge, according to the genders of the remaining candidates, whether the remaining candidates exceed the number of associated users; if not, to determine the remaining candidates to be the associated users; if so, to determine the associated users according to a preset condition.
In an embodiment, the judgment-and-determination sub-module 632 may be configured to obtain the face counts of the remaining candidates and take the candidate with the largest face count as the associated user.
For the process by which the device shown in FIG. 8 determines the associated user, reference may be made to the corresponding part of the embodiment shown in FIG. 1, which is not repeated here.
In the above embodiment, the way of determining the associated user according to the candidates' attribute information is simple and easy to implement.
FIG. 9 is a block diagram of still another device for determining an associated user, according to an exemplary embodiment. As shown in FIG. 9, on the basis of the embodiment shown in FIG. 8, the obtaining-and-deletion sub-module 631 may include a collection-extraction-training unit 6311 and an obtaining unit 6312.
The collection-extraction-training unit 6311 is configured to collect training samples, extract features of the training samples, and train classifiers according to the features; the features include Gabor features, and the classifiers include SVM classifiers.
The obtaining unit 6312 is configured to acquire the gender and age group of each candidate associated user using the classifiers trained by the collection-extraction-training unit 6311.
In an embodiment, the obtaining unit 6312 may be configured to: for each candidate associated user, use the classifier to obtain the age of each face of the current candidate, obtain the shooting time of the photo corresponding to each face, compute each face's corresponding birth time from the age and the shooting time, and determine the current candidate's age group from the computed birth times; and, for each candidate associated user, use the classifier to obtain the gender corresponding to each face of the current candidate; if the obtained genders are the same, take the obtained gender as the current candidate's gender; if the obtained genders differ, count the current candidate's faces belonging to each gender and take the gender with the larger face count as the current candidate's gender.
In another embodiment, the obtaining unit 6312 may be configured to: for each face of the current candidate associated user, use the classifier to obtain the age and gender of the current face, and compute the illumination and posture information of the current face; if the computation result meets the illumination and posture requirements, take the obtained age and gender as the age and gender of the current face and save the current face together with its age and gender in a database; if the computation result does not meet the illumination and posture requirements, obtain a matching face of the current face from the database and take the matching face's age and gender as the age and gender of the current face.
For the process by which the device shown in FIG. 9 determines the associated user, reference may be made to the corresponding parts of the embodiments shown in FIG. 4a, FIG. 4b, and FIG. 5, which are not repeated here.
In the above embodiment, the ways of acquiring the attribute information of associated users are flexible and varied, and achieve high accuracy.
With respect to the devices in the above embodiments, the specific manners in which the respective modules and sub-modules perform operations have been described in detail in the embodiments relating to the method, and will not be elaborated here.
FIG. 10 is a block diagram of a device suitable for determining an associated user, according to an exemplary embodiment. For example, the device 1000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, an aircraft, and the like.
Referring to FIG. 10, the device 1000 may include one or more of the following components: a processing component 1002, a memory 1004, a power component 1006, a multimedia component 1008, an audio component 1010, an input/output (I/O) interface 1012, a sensor component 1014, and a communication component 1016.
The processing component 1002 typically controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1002 may include one or more processors 1020 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 1002 may include one or more modules to facilitate interaction between the processing component 1002 and the other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operation at the device 1000. Examples of such data include instructions for any application or method operating on the device 1000, contact data, phone-book data, messages, pictures, videos, and the like. The memory 1004 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 1006 provides power to the various components of the device 1000. The power component 1006 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 1008 includes a front camera and/or a rear camera. When the device 1000 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, the audio component 1010 includes a microphone (MIC) configured to receive external audio signals when the device 1000 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, the audio component 1010 also includes a speaker for outputting audio signals.
The I/O interface 1012 provides an interface between the processing component 1002 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 1014 includes one or more sensors for providing status assessments of various aspects of the device 1000. For example, the sensor component 1014 can detect the open/closed state of the device 1000 and the relative positioning of components (e.g., the display and keypad of the device 1000); it can also detect a change in position of the device 1000 or of a component of the device 1000, the presence or absence of user contact with the device 1000, the orientation or acceleration/deceleration of the device 1000, and changes in its temperature. The sensor component 1014 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1014 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate wired or wireless communication between the device 1000 and other devices. The device 1000 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1000 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1004 including instructions executable by the processor 1020 of the device 1000 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the embodiments of the present disclosure following their general principles and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the embodiments of the present disclosure being indicated by the following claims.
It will be appreciated that the embodiments of the present disclosure are not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the embodiments of the present disclosure is limited only by the appended claims.
INDUSTRIAL APPLICABILITY
Embodiments of the present disclosure acquire a face album, determine a target user in the face album, and select at least one candidate associated user of the target user from the album; the candidates' attribute information is then acquired, the associated user of the target user is determined according to that information, and finally tag information is set for the associated user. The process is fast and simple, requires no tedious user operations, and saves the user a great deal of time.
The way of determining candidate associated users is simple and easy to implement.
The way of determining the associated user according to the candidates' attribute information is simple and easy to implement.
The way of acquiring the attribute information of associated users is simple and flexible.
The way of determining the associated user according to a preset condition is simple and highly accurate.
Obtaining the gender and age information of all faces of each candidate associated user, and then determining the candidate's gender and age group from the information of all faces, achieves high accuracy.
When the current face meets the illumination and posture requirements, the obtained age and gender are used directly as the age and gender of the current face; when it does not, a matching face is obtained from the database and its age and gender are used instead, thereby ensuring the recognition accuracy of the current face's gender and age.

Claims (15)

  1. A method for determining an associated user, the method comprising:
    acquiring a face album, the face album containing face sets of a plurality of users;
    determining a target user in the face album, and selecting at least one candidate associated user of the target user from the face album;
    acquiring attribute information of the candidate associated users, determining the associated user of the target user according to the attribute information, and setting tag information for the associated user.
  2. The method for determining an associated user according to claim 1, wherein selecting at least one candidate associated user of the target user from the face album comprises:
    obtaining the source photos of the faces of all users in the face album, and comparing the obtained source photos of users other than the target user with the source photos of the target user;
    taking, as the candidate associated users, users who share more than a preset number of source photos with the target user.
  3. The method for determining an associated user according to claim 1, wherein acquiring the attribute information of the candidate associated users and determining the associated user of the target user according to the attribute information comprises:
    acquiring the gender and age information of the candidate associated users, and deleting, according to the age information, candidate associated users that do not meet the age requirement;
    judging, according to the genders of the remaining candidate associated users, whether the remaining candidates exceed the number of associated users; if not, determining the remaining candidates to be the associated users; if so, determining the associated users according to a preset condition.
  4. The method for determining an associated user according to claim 3, wherein acquiring the gender and age information of the candidate associated users comprises:
    collecting training samples, extracting features of the training samples, and training classifiers according to the features, the features comprising Gabor features and the classifiers comprising SVM classifiers;
    acquiring the gender and age group of each candidate associated user using the classifiers.
  5. The method for determining an associated user according to claim 3, wherein determining the associated user according to a preset condition comprises:
    obtaining the face counts of the remaining candidate associated users, and taking the candidate associated user with the largest face count as the associated user.
  6. The method for determining an associated user according to claim 4, wherein acquiring the gender and age group of each candidate associated user using the classifiers comprises:
    for each candidate associated user, using the classifier to obtain the age of each face of the current candidate associated user, obtaining the shooting time of the photo corresponding to each face, computing each face's corresponding birth time from the age and the shooting time, and determining the age group of the current candidate associated user from the computed birth times;
    for each candidate associated user, using the classifier to obtain the gender corresponding to each face of the current candidate associated user; if the obtained genders are the same, taking the obtained gender as the gender of the current candidate associated user; if the obtained genders differ, counting the current candidate's faces belonging to each gender and taking the gender with the larger face count as the gender of the current candidate associated user.
  7. The method for determining an associated user according to claim 6, wherein using the classifier to obtain the age and gender of each face of the current candidate associated user comprises:
    for each face of the current candidate associated user, using the classifier to obtain the age and gender of the current face, and computing the illumination and posture information of the current face; if the computation result meets the illumination and posture requirements, taking the obtained age and gender as the age and gender of the current face and saving the current face together with its age and gender in a database; if the computation result does not meet the illumination and posture requirements, obtaining a matching face of the current face from the database and taking the matching face's age and gender as the age and gender of the current face.
  8. A device for determining an associated user, the device comprising:
    an obtaining module configured to acquire a face album, the face album containing face sets of a plurality of users;
    a determining-and-screening module configured to determine a target user in the face album acquired by the obtaining module, and to select at least one candidate associated user of the target user from the face album;
    a determining-and-setting module configured to acquire attribute information of the candidate associated users selected by the determining-and-screening module, determine the associated user of the target user according to the attribute information, and set tag information for the associated user.
  9. The device for determining an associated user according to claim 8, wherein the determining-and-screening module comprises:
    an obtaining-and-comparison sub-module configured to obtain the source photos of the faces of all users in the face album, and to compare the obtained source photos of users other than the target user with the source photos of the target user;
    a determining sub-module configured to take, as the candidate associated users, users who share more than a preset number of source photos with the target user.
  10. The device for determining an associated user according to claim 8, wherein the determining-and-setting module comprises:
    an obtaining-and-deletion sub-module configured to acquire the gender and age information of the candidate associated users, and to delete, according to the age information, candidate associated users that do not meet the age requirement;
    a judgment-and-determination sub-module configured to judge, according to the genders of the remaining candidate associated users, whether the remaining candidates exceed the number of associated users; if not, to determine the remaining candidates to be the associated users; if so, to determine the associated users according to a preset condition.
  11. The device for determining an associated user according to claim 10, wherein the obtaining-and-deletion sub-module comprises:
    a collection-extraction-training unit configured to collect training samples, extract features of the training samples, and train classifiers according to the features, the features comprising Gabor features and the classifiers comprising SVM classifiers;
    an obtaining unit configured to acquire the gender and age group of each candidate associated user using the classifiers trained by the collection-extraction-training unit.
  12. The device for determining an associated user according to claim 10, wherein the judgment-and-determination sub-module is configured to:
    obtain the face counts of the remaining candidate associated users, and take the candidate associated user with the largest face count as the associated user.
  13. The device for determining an associated user according to claim 11, wherein the obtaining unit is configured to:
    for each candidate associated user, use the classifier to obtain the age of each face of the current candidate associated user, obtain the shooting time of the photo corresponding to each face, compute each face's corresponding birth time from the age and the shooting time, and determine the age group of the current candidate associated user from the computed birth times;
    for each candidate associated user, use the classifier to obtain the gender corresponding to each face of the current candidate associated user; if the obtained genders are the same, take the obtained gender as the gender of the current candidate associated user; if the obtained genders differ, count the current candidate's faces belonging to each gender and take the gender with the larger face count as the gender of the current candidate associated user.
  14. The device for determining an associated user according to claim 13, wherein the obtaining unit is configured to:
    for each face of the current candidate associated user, use the classifier to obtain the age and gender of the current face, and compute the illumination and posture information of the current face; if the computation result meets the illumination and posture requirements, take the obtained age and gender as the age and gender of the current face and save the current face together with its age and gender in a database; if the computation result does not meet the illumination and posture requirements, obtain a matching face of the current face from the database and take the matching face's age and gender as the age and gender of the current face.
  15. A device for determining an associated user, comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to:
    acquire a face album, the face album containing face sets of a plurality of users;
    determine a target user in the face album, and select at least one candidate associated user of the target user from the face album;
    acquire attribute information of the candidate associated users, determine the associated user of the target user according to the attribute information, and set tag information for the associated user.
PCT/CN2015/097611 2015-07-31 2015-12-16 Method and device for determining an associated user WO2017020476A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
MX2016006745A MX361672B (es) 2015-07-31 2015-12-16 Método y dispositivo para determinar un usuario asociado.
KR1020167013623A KR101771153B1 (ko) 2015-07-31 2015-12-16 연관 사용자의 확정 방법 및 장치
JP2016532556A JP6263263B2 (ja) 2015-07-31 2015-12-16 関連ユーザー確定方法および装置
RU2016119495A RU2664003C2 (ru) 2015-07-31 2015-12-16 Способ и устройство для определения ассоциированного пользователя

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510463635.X 2015-07-31
CN201510463635.XA CN105069083B (zh) 2015-07-31 2015-07-31 Method and device for determining associated user

Publications (1)

Publication Number Publication Date
WO2017020476A1 true WO2017020476A1 (zh) 2017-02-09

Family

ID=54498453

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/097611 WO2017020476A1 (zh) 2015-07-31 2015-12-16 Method and device for determining associated user

Country Status (8)

Country Link
US (1) US9892314B2 (zh)
EP (1) EP3125188A1 (zh)
JP (1) JP6263263B2 (zh)
KR (1) KR101771153B1 (zh)
CN (1) CN105069083B (zh)
MX (1) MX361672B (zh)
RU (1) RU2664003C2 (zh)
WO (1) WO2017020476A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069083B (zh) * 2015-07-31 2019-03-08 小米科技有限责任公司 Method and device for determining associated user
CN105488470A (zh) * 2015-11-30 2016-04-13 小米科技有限责任公司 Method and device for determining person attribute information
CN106295499B (zh) * 2016-07-21 2019-10-11 北京小米移动软件有限公司 Age estimation method and device
CN110020155A (zh) * 2017-12-06 2019-07-16 广东欧珀移动通信有限公司 User gender identification method and device
CN110162956B (zh) * 2018-03-12 2024-01-19 华东师范大学 Method and device for determining associated accounts
CN108806699B (zh) * 2018-05-30 2021-03-23 Oppo广东移动通信有限公司 Voice feedback method and device, storage medium, and electronic device
US11928181B2 (en) * 2018-12-27 2024-03-12 Nec Corporation Information processing apparatus, information processing method, and program
CN109886158B (zh) * 2019-01-30 2023-01-10 广州轨道交通建设监理有限公司 Method and device for monitoring the wearing of positioning tags on construction sites
CN110351389B (zh) * 2019-08-07 2020-12-25 北京瑞策科技有限公司 Method and device for uploading user community association data to a blockchain
CN112256982B (zh) * 2020-09-15 2022-08-16 中国科学院信息工程研究所 Method and electronic device for analyzing target companion relationships based on sparsely sampled spatio-temporal data
CN112528842A (zh) * 2020-12-07 2021-03-19 北京嘀嘀无限科技发展有限公司 Method, device, apparatus and storage medium for pose detection
CN112817920A (zh) * 2021-03-03 2021-05-18 深圳市知小兵科技有限公司 Cleaning method for distributed big data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248703A1 (en) * 2008-03-26 2009-10-01 Fujifilm Corporation Saving device for image sharing, image sharing system, and image sharing method
CN103399896A (zh) * 2013-07-19 2013-11-20 广州华多网络科技有限公司 Method and system for identifying association relationships between users
CN104021150A (zh) * 2009-08-07 2014-09-03 谷歌公司 Facial recognition with social network assistance
CN104299001A (zh) * 2014-10-11 2015-01-21 小米科技有限责任公司 Method and device for generating a photo album
CN105069083A (zh) * 2015-07-31 2015-11-18 小米科技有限责任公司 Method and device for determining associated user

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SU1755268A1 (ru) * 1989-01-06 1992-08-15 Рижское Высшее Военно-Политическое Краснознаменное Училище Им.Бирюзова С.С. Image analyzer
JP2004227158A (ja) * 2003-01-21 2004-08-12 Omron Corp Information providing device and information providing method
US8041082B1 (en) 2007-11-02 2011-10-18 Google Inc. Inferring the gender of a face in an image
CN104866553A 2007-12-31 2015-08-26 应用识别公司 Method, system and computer program for identifying and sharing digital images using face signatures
US7953690B2 (en) * 2008-01-25 2011-05-31 Eastman Kodak Company Discovering social relationships from personal photo collections
US9135277B2 (en) 2009-08-07 2015-09-15 Google Inc. Architecture for responding to a visual query
US9087059B2 (en) 2009-08-07 2015-07-21 Google Inc. User interface for presenting search results for multiple regions of a visual query
CN102043820A (zh) * 2009-10-26 2011-05-04 鸿富锦精密工业(深圳)有限公司 Social-connection analysis system and method
KR101138822B1 (ko) 2009-11-19 2012-05-10 한국과학기술원 Method and system for managing the names of persons attached to digital photos
US8805079B2 (en) 2009-12-02 2014-08-12 Google Inc. Identifying matching canonical documents in response to a visual query and in accordance with geographic information
US8462224B2 (en) * 2010-06-01 2013-06-11 Hewlett-Packard Development Company, L.P. Image retrieval
JP2013069024A (ja) * 2011-09-21 2013-04-18 Fuji Xerox Co Ltd Image retrieval program and image retrieval device
US8929615B2 (en) * 2011-11-03 2015-01-06 Facebook, Inc. Feature-extraction-based image scoring
JP2012079354A (ja) * 2012-01-26 2012-04-19 Casio Comput Co Ltd Image display control device, image display control method and program
US20150032535A1 (en) * 2013-07-25 2015-01-29 Yahoo! Inc. System and method for content based social recommendations and monetization thereof
KR101479260B1 (ko) * 2013-09-10 2015-01-09 부산대학교 산학협력단 Photo-based person-intimacy search method
US9420442B2 (en) * 2014-10-06 2016-08-16 Facebook, Inc. Ping compensation factor for location updates
CN104408402B (zh) * 2014-10-29 2018-04-24 小米科技有限责任公司 Face recognition method and device
CN104715007A (zh) * 2014-12-26 2015-06-17 小米科技有限责任公司 User identification method and device


Also Published As

Publication number Publication date
MX361672B (es) 2018-12-13
KR20170023768A (ko) 2017-03-06
RU2016119495A (ru) 2017-11-23
CN105069083B (zh) 2019-03-08
RU2664003C2 (ru) 2018-08-14
US20170032180A1 (en) 2017-02-02
US9892314B2 (en) 2018-02-13
KR101771153B1 (ko) 2017-08-24
JP6263263B2 (ja) 2018-01-17
CN105069083A (zh) 2015-11-18
MX2016006745A (es) 2017-06-28
EP3125188A1 (en) 2017-02-01
JP2017526989A (ja) 2017-09-14


Legal Events

ENP  Entry into the national phase (Ref document number: 2016532556; Country of ref document: JP; Kind code of ref document: A)
ENP  Entry into the national phase (Ref document number: 2016119495; Country of ref document: RU; Kind code of ref document: A)
ENP  Entry into the national phase (Ref document number: 20167013623; Country of ref document: KR; Kind code of ref document: A)
WWE  Wipo information: entry into national phase (Ref document number: MX/A/2016/006745; Country of ref document: MX)
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15900243; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 15900243; Country of ref document: EP; Kind code of ref document: A1)