WO2019233086A1 - Face matching method and device, and storage medium - Google Patents

Face matching method and device, and storage medium

Info

Publication number
WO2019233086A1
WO2019233086A1, PCT/CN2018/122887, CN2018122887W
Authority
WO
WIPO (PCT)
Prior art keywords
face information
priority
matching
range
attribute
Prior art date
Application number
PCT/CN2018/122887
Other languages
English (en)
French (fr)
Inventor
张帆 (ZHANG Fan)
彭彬绪 (PENG Binxu)
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to SG11202007986PA
Priority to JP2020560795A (published as JP7136925B2)
Publication of WO2019233086A1
Priority to US17/004,761 (published as US11514716B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172: Classification, e.g. identification
    • G06V40/178: Estimating age from face image; using age information for improving recognition
    • G06V40/179: Metadata assisted face recognition

Definitions

  • The present disclosure relates to, but is not limited to, the field of information technology, and in particular to a face matching method and device, and a storage medium.
  • Face matching compares a collected face with a previously acquired face so as to identify the person whose face was collected; it is a portrait recognition or face recognition technology. In the related art, however, face matching has been found to be inefficient, which causes a large feedback delay.
  • Embodiments of the present application are expected to provide a face matching method and device, and a storage medium.
  • An embodiment of the present application provides a face matching method, including the steps described below.
  • An embodiment of the present application provides a face matching device, including:
  • a first acquisition module configured to obtain a first attribute of first face information to be matched;
  • a determining module configured to determine a priority matching range according to the first attribute;
  • a first matching module configured to match the first face information with the second face information in the priority matching range.
  • An embodiment of the present application provides an electronic device, including a memory and a processor, where:
  • the processor is connected to the memory and is configured to implement the face matching method provided by one or more of the foregoing technical solutions by executing computer-executable instructions stored in the memory.
  • An embodiment of the present application provides a computer storage medium, where the computer storage medium stores computer-executable instructions; after the computer-executable instructions are executed, the face matching method provided by one or more of the foregoing technical solutions can be implemented.
  • An embodiment of the present application provides a computer program product, where the computer program product includes computer-executable instructions; after the computer-executable instructions are executed, the face matching method provided by one or more of the foregoing technical solutions can be implemented.
  • With the face matching method, device, and storage medium provided in the embodiments of the present application, before face matching is performed, a first attribute of the first face information to be matched is obtained first, and the second face information to be matched against is then filtered based on the first attribute to determine a priority matching range; the first face information is preferentially matched with the second face information in the priority matching range.
  • Since the first attribute reflects an attribute of the first face information, the attributes corresponding to the second face information in the priority matching range are adapted to the first attribute.
  • The second face information located in the priority matching range therefore has a higher probability of matching the first face information than the second face information located outside the priority matching range, so the priority matching range is matched first.
  • If second face information matching the first face information is quickly found within the priority matching range, the matching can be terminated, thereby reducing the number of matching operations and the amount of information processed, which reduces the load of the server performing face information matching and removes unnecessary matching.
  • FIG. 1 is a schematic flowchart of a first face matching method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a face matching system according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a second face matching method according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • This embodiment provides a face matching method, including:
  • Step S110: obtain a first attribute of the first face information to be matched;
  • Step S120: determine a priority matching range according to the first attribute;
  • Step S130: match the first face information with the second face information in the priority matching range.
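  • The three steps above can be sketched as follows. This is a minimal illustration in Python; the attribute names, the gallery structure, the age tolerance, and the similarity stub are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of steps S110-S130; not the patented implementation.

def get_first_attribute(face_info):
    """Step S110: obtain the first attribute of the face to be matched."""
    return {"gender": face_info["gender"], "age": face_info["age"]}

def determine_priority_range(first_attr, gallery):
    """Step S120: filter the gallery of second face information by attribute."""
    return [g for g in gallery
            if g["gender"] == first_attr["gender"]
            and abs(g["age"] - first_attr["age"]) <= 5]  # assumed age tolerance

def match_face(face_info, gallery, threshold=0.8):
    """Step S130: compare only against the priority matching range."""
    def similarity(a, b):
        # Stand-in for a real face-feature comparison model.
        return 1.0 if a == b else 0.0
    for candidate in determine_priority_range(get_first_attribute(face_info), gallery):
        if similarity(face_info["features"], candidate["features"]) >= threshold:
            return candidate
    return None
```

Matching terminates as soon as a candidate in the priority range meets the threshold, which is the source of the reduced matching count described in this disclosure.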
  • FIG. 2 shows an information processing system, which includes a service background consisting of one or more servers; the method can be applied to a server in the service background.
  • Terminal 1, Terminal 2, and Terminal 3 are shown in FIG. 2.
  • In specific implementations, the terminals connected to the server can be various types of mobile or fixed terminals, and are not limited to those shown in FIG. 2.
  • The terminal may submit to the server various kinds of information, including image information containing the first face information, login information, the MAC address, the IP address and connection information, the collection location, and collection parameters.
  • The server may receive the first face information from the terminal device, extract the first attribute according to an attribute extraction rule, and delineate a priority matching range based on the first attribute.
  • The first attribute may include one or more attribute values.
  • One or more attribute values may determine the priority matching range. If N attribute values determine one priority matching range while M attribute values determine two or more priority matching ranges, the intersection of the multiple priority matching ranges can be taken as the final priority matching range.
  • N and M are both positive integers; M may be larger than N.
  • Alternatively, the at least two priority matching ranges determined by the M attribute values may be combined (as a union) to obtain a combined priority matching range, which is used as the required or final matching range.
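  • The intersection and union described above can be sketched as follows; record IDs stand in for pieces of second face information (an illustrative assumption).

```python
# Hypothetical sketch of combining per-attribute priority matching ranges.

def combine_ranges(ranges, mode="intersection"):
    """Combine several priority matching ranges into a single range."""
    sets = [set(r) for r in ranges]
    if mode == "intersection":
        return set.intersection(*sets)  # smaller, higher-probability pool
    return set.union(*sets)             # broader, combined pool

range_by_gender = ["a", "b", "c"]   # IDs selected by gender
range_by_age = ["b", "c", "d"]      # IDs selected by age
final_range = combine_ranges([range_by_gender, range_by_age])
combined_range = combine_ranges([range_by_gender, range_by_age], mode="union")
```

The intersection is the smaller pool where every attribute agrees; the union keeps any candidate that matched at least one attribute.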
  • The priority matching range corresponding to the intersection may be the matching range of the first priority level; when the priority matching range corresponding to the union is not the final priority matching range, it may be the matching range of the second priority level.
  • The first priority level is higher than the second priority level.
  • The priority levels also order the matching results: if second face information matching the first face information is found in several priority matching ranges of different priority levels, the second face information matched in the higher-priority matching range is preferentially output as the final matching result.
  • In other implementations, the priority matching ranges of different priority levels do not constrain the matching timing: the ranges may be matched at the same time, and if multiple pieces of second face information in different priority matching ranges match the first face information, the second face information in the priority matching range with the higher priority is preferentially selected as the final matching result.
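  • Selecting the final result across priority levels can be sketched as follows, assuming level 1 is the highest priority (an assumption consistent with the description above).

```python
# Hypothetical sketch: prefer the hit from the highest-priority matching range.

def select_final_match(hits_by_level):
    """hits_by_level maps a priority level (1 = highest) to a hit or None."""
    for level in sorted(hits_by_level):
        if hits_by_level[level] is not None:
            return hits_by_level[level]
    return None
```

If both the intersection range (level 1) and a broader range (level 2) produce a match, the level-1 match is output as the final result.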
  • The priority matching range may include second face information from a plurality of information databases.
  • The amount of second face information included in the priority matching range is smaller than the amount of second face information included in the full database.
  • The priority matching range has a higher probability of including second face information that satisfies the matching condition with the first face information.
  • The priority matching range reduces the amount of second face information that needs to be matched, and has a higher probability of providing the second face information that matches the first face information; it therefore requires fewer matching operations, achieves higher matching efficiency, and finds the second face information matching the first face information faster.
  • Both the first face information and the second face information may be an image including a face graphic, or text describing face features.
  • the method further includes:
  • Step S140: if the first face information fails to match the second face information in the priority matching range, match the first face information with the second face information outside the priority matching range.
  • If the first face information is not successfully matched within the priority matching range (that is, the first face information fails to match any second face information within the priority matching range), the first face information continues to be matched with the second face information outside the priority matching range, until second face information that matches the first face information is found, or until all second face information in the full database has been matched.
  • If the first face information fails to match the second face information within the priority matching range, matching is also performed against the second face information outside the priority matching range. This avoids the misses that would be caused by matching only within the priority matching range, and ensures that the matching succeeds whenever second face information matching the first face information exists in the full database.
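  • Step S140's fallback can be sketched as follows; the similarity stub and the threshold are illustrative assumptions.

```python
# Hypothetical sketch of step S140: priority range first, then the rest.

def match_with_fallback(probe_features, priority_range, full_db, threshold=0.8):
    def similarity(a, b):
        # Stand-in for a real face-feature comparison model.
        return 1.0 if a == b else 0.0

    def scan(candidates):
        for c in candidates:
            if similarity(probe_features, c["features"]) >= threshold:
                return c
        return None

    hit = scan(priority_range)
    if hit is None:  # fall back to second face information outside the range
        hit = scan([c for c in full_db if c not in priority_range])
    return hit
```

A match inside the priority range short-circuits the scan of the full database; only a miss triggers the fallback, so no true match in the full database is ever skipped.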
  • the step S110 may include at least one of the following:
  • The first face information may be information formed by collecting a first collected object, and the first attribute may be an object attribute of the first collected object.
  • The object attribute is information describing the first collected object.
  • The object attributes may include:
  • the gender, age, height, and build (fat or thin) of the first collected object.
  • The terminal device collects image information such as a photo or a video; the image information may be a full-body image or a bust that includes the first face information. Based on the face image, an electronic device such as a server can identify the gender of the first collected object; for example, the server may identify the gender of the first collected object using a recognition model.
  • The recognition model may include: a data model obtained by training on sample data.
  • The data model may include a big data model such as a neural network, a binary tree model, a linear regression model, or a support vector machine.
  • The recognition model includes, but is not limited to, a data model.
  • The weight range of the first collected object (corresponding to its build) can be inferred from the bust in combination with the terminal device's acquisition parameters (for example, the focal length and acquisition angle).
  • The height range of the first collected object can be inferred from the whole-body image combined with the acquisition parameters of the terminal device.
  • Information such as the hair length, dress, and jewelry of the first collected object may also be determined from a bust or full-length image.
  • The worn jewelry may include earrings, headwear, necklaces, hand jewelry, and the like.
  • the object attribute may be information describing characteristics of the collected object.
  • the acquisition attribute may include: a collection position, a collection time, and scene characteristic information of a space corresponding to the collection position of the image information.
  • For example, the resident address of the first collected object may be country A.
  • obtaining the object attributes of the first collected object according to the first face information includes at least one of the following:
  • the step S120 may include at least one of the following:
  • A first priority matching range is determined according to the gender of the first collected object, where the second collected object corresponding to the second face information included in the first priority matching range has the same gender as the first collected object;
  • a second priority matching range is determined according to the age of the first collected object, where the second collected object corresponding to the second face information included in the second priority matching range is of an age matching that of the first collected object;
  • a third priority matching range is determined according to the hair length of the first collected object, where the second collected object corresponding to the second face information included in the third priority matching range has a hair length matching that of the first collected object;
  • a fourth priority matching range is determined according to the wearing of the first collected object, where the second collected object corresponding to the second face information included in the fourth priority matching range wears the same specific accessories as the first collected object.
  • The age of the first collected object may be determined through analysis of the first face information or of image information including the first face information; for example, it may be determined whether the first collected object is an infant, a child, an adolescent, a young adult, or a senior.
  • A specific age range may also be distinguished; for example, age ranges of 5 years each may be determined through analysis of the first face information or the image information containing the first face information.
  • If the first face information comes from a video, the age and gender information may also be determined based on the sound information collected in the video.
  • The hair length can be divided into at least short hair and long hair. In other implementations, the hair length can be further divided into ultra-short hair, short hair, medium-long hair, long hair, and extra-long hair.
  • The first collected object's own body landmarks can be used as the dividing points for hair length.
  • A buzz cut is considered ultra-short hair; other hair above the ear is short hair; hair above the shoulder is medium-long hair; hair below the shoulder and above the waist is long hair; and hair below the waist is extra-long hair.
  • the hair length can be determined by a bust or full-length image containing the first face information.
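  • The five-way split above can be sketched as a lookup keyed to the body landmark the hair reaches; the landmark names and class labels below are illustrative assumptions.

```python
# Hypothetical sketch of hair-length classification by body landmark.

def hair_length_class(hair_reaches):
    """Map the landmark the hair reaches to a hair-length class."""
    buckets = {
        "scalp": "ultra-short",       # buzz-cut length
        "ear": "short",               # above the ear
        "shoulder": "medium-long",    # above the shoulder
        "waist": "long",              # below the shoulder, above the waist
        "below_waist": "extra-long",  # below the waist
    }
    return buckets.get(hair_reaches, "unknown")
```

The resulting class can then serve as the attribute value that delimits the third priority matching range.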
  • the method further includes:
  • Update the second face information regularly or irregularly. For example, periodically or aperiodically, the device held by the second collected object is requested to provide the face information of the second collected object or an image including the face information.
  • the wearing includes at least one of: glasses; clothing; accessories; bags.
  • A user (e.g., a person) may be one of the collected objects.
  • Some users have vision problems and may need to wear vision-correcting glasses, such as glasses for nearsightedness or farsightedness. In this way, glasses can serve as a reference for determining the priority matching range.
  • the priority matching range may also be defined according to clothing.
  • the priority matching range is determined according to a clothing style.
  • Accessories can include various items worn on the body, such as earrings and necklaces. In some cases, an accessory is of particular significance to a certain user and is therefore worn for a long time. In this way, the priority matching range can also be defined according to accessories.
  • Wear whose actual wearing frequency is higher than a specific frequency is selected to delimit the priority matching range.
  • Such frequently worn items may include glasses, jewelry with a specific meaning, and the like.
  • The object attributes can be used for initial screening to select the priority matching range, removing part of the second face information that does not match the first face information; this reduces the number of formal face information matches and the amount of information involved, and improves the matching rate.
  • The acquisition attributes of the first face information include at least one of the following:
  • the collection position may include: collected latitude and longitude information, collected place names (the collected place names may include: collected community-level or building-level identification information), and the like.
  • the spatial attributes of the space where the collection location is located may include: use information describing the purpose of the space, for example, the collection location is a store in a shopping mall, and the use attributes of the store may include: information describing services or products sold in the store . For example, coffee shops, clothing stores, etc.
  • the usage information may be used to indicate the type of entertainment activities that can take place in a recreation venue, or the name of the venue, for example, Beijing Workers' Stadium.
  • the step S120 may include:
  • A fifth priority matching range is determined according to the collection position, where the last recorded appearance location of the second collected object corresponding to the second face information included in the fifth priority matching range is within a first distance range of the collection position;
  • a sixth priority matching range is determined according to the collection position, where the collection position is a residential address, an office address, or a place whose appearance frequency in the record of the second collected object corresponding to the second face information included in the sixth priority matching range is above a frequency threshold.
  • The places where the appearance frequency is higher than the frequency threshold may include places such as shopping malls, restaurants, or gyms that the user frequents.
  • The last appearance place of the second collected object may include the place where the second collected object last connected to the network and the last consumption place of the second collected object.
  • The time of the last appearance and the current time are within a specific time range; the specific time range may be 30 minutes, 1 hour, 2 hours, or half a day. If the second collected object appeared near the collection position of the face information, the second collected object has a higher probability of being the first collected object.
  • Since the probability that the second collected object is the first collected object is relatively high, it can be included in the priority matching range for priority matching; on the one hand this improves matching efficiency, and on the other hand it appropriately reduces the number of matches and eliminates unnecessary invalid matching operations.
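  • The recency-and-proximity test behind the fifth priority matching range can be sketched as follows; the 1 km radius and 1 hour window are illustrative assumptions standing in for the first distance range and the specific time range.

```python
# Hypothetical sketch of the fifth-range membership test.
from datetime import datetime, timedelta

def in_fifth_priority_range(distance_km, last_seen, now,
                            max_km=1.0, window=timedelta(hours=1)):
    """distance_km: distance from the last recorded appearance location
    to the collection position; last_seen: time of that appearance."""
    return distance_km <= max_km and (now - last_seen) <= window

now = datetime(2018, 12, 21, 12, 0)
near_and_recent = in_fifth_priority_range(0.4, datetime(2018, 12, 21, 11, 30), now)
too_long_ago = in_fifth_priority_range(0.4, datetime(2018, 12, 21, 9, 0), now)
```

Only subjects whose last recorded appearance is both near the collection position and within the time window enter the fifth priority matching range.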
  • the step S120 may include:
  • A seventh priority matching range is determined according to the spatial attributes, where the device identification information of the device held by the second collected object corresponding to the second face information included in the seventh priority matching range is among the device identification information collected in the space corresponding to the spatial attributes.
  • A specific space serves a specific purpose and may issue identity information such as a membership card or an application (APP) account.
  • For example, a sports equipment store serving a specific group of people collects face information and submits it to the server.
  • The images of the clerks of the sports equipment store can then preferentially be used as the second face information included in the priority matching range.
  • A lot of device identification information is collected in the predetermined space, and the correspondence between the spatial attribute, the collected device identification information, and the face information of the corresponding user is established in the information database; the spatial attributes can therefore be used to determine that the face information having this correspondence is the second face information in the priority matching range.
  • The device identification information may include information such as an Internet Protocol (IP) address, a media access control (MAC) address, or an international mobile equipment identity (IMEI). In specific implementations, the device identification information is not limited to any one of the foregoing.
  • The device identification information may further include a communication identifier corresponding to the subscriber identity module (SIM) card installed in the device, for example, a mobile phone number.
  • the step S130 may include:
  • The intersection of the at least two priority matching ranges is taken, and the first face information is matched with the second face information in the intersection. For example, an intersection operation is performed on the foregoing first through seventh priority matching ranges to obtain a final priority matching range (that is, the intersection), and the first face information is matched with the second face information in the final priority matching range at a first priority level. Because second face information that falls within at least two priority matching ranges obviously has a higher probability of matching the first face information, matching at the first priority level can determine the second face information that matches the first face information at the fastest speed.
  • the method further includes:
  • the first face information is matched with the second face information in a priority matching range other than the intersection at a second priority level.
  • the second priority level is lower than the first priority level.
  • the step S140 may include:
  • the first face information is matched with the second face information outside the priority matching range at a third priority level.
  • the third priority level is lower than the second priority level.
  • If the first face information fails to match the second face information in the intersection, the first face information is matched with the second face information in the priority matching ranges outside the intersection; in this way, unnecessary matching is reduced while matching efficiency is improved as much as possible.
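  • The three-tier order described above (the intersection first, then the remaining priority ranges, then everything else) can be sketched as follows; record IDs stand in for second face information, and set membership stands in for feature matching (both assumptions).

```python
# Hypothetical sketch of tiered matching at three priority levels.

def tiered_match(probe_id, intersection, priority_union, full_db):
    """Return the priority level (1-3) at which the probe matches, or None."""
    tiers = [
        set(intersection),                        # level 1: the intersection
        set(priority_union) - set(intersection),  # level 2: rest of the ranges
        set(full_db) - set(priority_union),       # level 3: outside all ranges
    ]
    for level, tier in enumerate(tiers, start=1):
        if probe_id in tier:  # stand-in for a feature comparison
            return level
    return None
```

Each tier is scanned only if every higher tier missed, which is what keeps the expected number of comparisons low.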
  • the method further includes:
  • The second face information whose second attribute has the highest similarity to the attribute tag is selected.
  • Matching the first face information with the second face information may include comparing their facial features: if the facial features reach a preset matching degree, the two pieces of face information may be considered matched; otherwise, they may be considered not matched.
  • any method in the prior art may be used to determine whether the facial features have reached a preset matching degree, and no further examples are given here.
  • the second face information that has the highest degree of matching with the first face information may be used as the final matching result to identify the first collection object corresponding to the first face information.
  • Multiple pieces of second face information with the same matching degree, or multiple pieces with matching degrees above the matching degree threshold, may appear; in such cases the second attribute can further assist face recognition.
  • the second attribute here may be the same attribute parameter corresponding to the first attribute, or may be a different attribute parameter.
  • the second attribute may include at least one of the following:
  • the social relationship may include: family status, marital status and other information.
  • identity attributes describing the second collection object can be used as auxiliary information to help determine the final matching second face information.
  • the possible consumption level of the first collection object may be obtained according to the spatial attributes. For example, if the collection location is a certain hotel, the average consumption or minimum consumption of the hotel can be used as the second attribute to match the consumption level of the second collection object.
  • The interest may be any information describing the preferences and dislikes of the second collected object.
  • the first collection object may have a pet, and the favorite pet is used as the second attribute to assist in matching.
  • the interests may be determined according to the activities of the club. For example, for a basketball club, the second attribute may be favorite basketball.
  • Whether the first collected object has a partner or a family can be determined from the body movements between the first collected object corresponding to the first face information and other objects in the image information including the first face information, and such social relationships can be used as the second attribute to further determine the second face information that matches the first face information.
  • The second attribute here may include an object attribute and the like, so that the second face information that matches the first face information can finally be determined accurately, achieving accurate face information matching.
  • The priority matching range is determined first, thereby reducing the probability that the first face information matches multiple pieces of second face information, and reducing the delay and operations required to finally determine the matching second face information, which further improves the efficiency of face matching.
  • If the second attribute includes multiple attribute values, the multiple attribute values are matched with the identity attributes of the second collected object, and the second face information of the second collected object with the most successfully matched attribute values is selected as the final match.
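  • The tie-break by second attribute can be sketched as follows; the attribute names are illustrative assumptions.

```python
# Hypothetical sketch: among tied candidates, keep the one whose identity
# attributes agree with the most second-attribute values.

def break_tie(second_attr, candidates):
    def overlap(candidate):
        # Count attribute values the candidate shares with the second attribute.
        return sum(1 for key, value in second_attr.items()
                   if candidate.get(key) == value)
    return max(candidates, key=overlap)

second_attr = {"consumption_level": "high", "interest": "basketball"}
tied = [
    {"id": 1, "consumption_level": "low", "interest": "basketball"},
    {"id": 2, "consumption_level": "high", "interest": "basketball"},
]
best = break_tie(second_attr, tied)
```

Candidate 2 agrees on both attribute values and so is selected as the final match.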
  • The matching result may be used to identify the identity of the first collected object, and/or the identity attribute tag of the second collected object corresponding to the final matching result may be output as the identity attribute tag of the first collected object or transferred to other devices.
  • The identity information and/or identity attribute tags corresponding to the second face information that matches the first face information can thus be determined, so as to achieve targeted delivery of precise services, which may include accurate pushing of content such as news and/or advertisements, and, for example, friend recommendations.
  • The identity attribute tag may be combined with the location attribute tag (the geographic location and the spatial attribute of the geographic location) where the first collected object appears, to provide accurate services.
  • this embodiment provides a face matching device, including:
  • a first obtaining module 110 configured to obtain a first attribute of first face information to be matched
  • the determining module 120 is configured to determine a priority matching range according to the first attribute
  • the first matching module 130 is configured to match the first face information with the second face information in the priority matching range.
  • the face matching device may be applied to the aforementioned server or terminal device, and may correspond to a client or software development kit installed in various electronic devices.
  • the first obtaining module 110, the determining module 120, and the first matching module 130 may all be program modules; after the program modules are executed by a processor, acquisition of the first attribute, determination of the priority matching range, and matching of the face information can be implemented.
  • the device further includes: a second matching module 140 configured to, if the first face information fails to match the second face information in the priority matching range, match the first face information with second face information outside the priority matching range.
  • the first obtaining module 110 is configured to perform at least one of the following: obtaining object attributes of a first collection object according to the first face information; obtaining collection attributes of the first face information.
  • the first obtaining module 110 is configured to perform at least one of the following: obtaining the gender of the first collection object according to the first face information; obtaining the age of the first collection object according to the first face information; obtaining the hair length of the first collection object according to the first face information; obtaining the wearing of the first collection object according to the first face information.
  • the determining module 120 is configured to perform at least one of the following: determine a first priority matching range according to the gender of the first collection object, where the second collection objects corresponding to the second face information included in the first priority matching range have the same gender as the first collection object; determine a second priority matching range according to the age of the first collection object, where the second collection objects corresponding to the second face information included in the second priority matching range are adapted to the age of the first collection object; determine a third priority matching range according to the hair length of the first collection object, where the second collection objects corresponding to the second face information included in the third priority matching range are adapted to the hair length of the first collection object; determine a fourth priority matching range according to the wearing of the first collection object, where the second collection objects corresponding to the second face information included in the fourth priority matching range have the same specific accessories as the first collection object.
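The first two range-determination rules above can be sketched as simple filters over a record library. This is a minimal sketch under assumed record fields (`id`, `gender`, `age`) and an assumed age tolerance; it is not the patented implementation:

```python
def first_priority_range(records, gender):
    # First priority matching range: records whose collection object has the
    # same gender as the first collection object.
    return [r for r in records if r["gender"] == gender]

def second_priority_range(records, age, tolerance=5):
    # Second priority matching range: records whose recorded age is "adapted
    # to" the probe's age, interpreted here as within a fixed tolerance.
    return [r for r in records if abs(r["age"] - age) <= tolerance]

records = [
    {"id": 1, "gender": "M", "age": 30},
    {"id": 2, "gender": "F", "age": 31},
    {"id": 3, "gender": "M", "age": 52},
]
by_gender = first_priority_range(records, "M")  # ids 1 and 3
by_age = second_priority_range(records, 29)     # ids 1 and 2
```

The hair-length and wearing ranges would follow the same pattern with different record fields.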
  • the wearing includes at least one of: glasses; apparel; accessories; bags.
  • the first obtaining module 110 is specifically configured to execute at least one of the following: obtaining the collection position of the first face information; obtaining the spatial attribute of the space where the collection position of the first face information is located.
  • the determining module 120 is configured to execute at least one of the following:
  • a fifth priority matching range is determined according to the collection position, where the last recorded appearance location of the second collection object corresponding to the second face information included in the fifth priority matching range is within a first distance range of the collection position;
  • a sixth priority matching range is determined according to the collection position, where the collection position is: the recorded home address or office address of a second collection object corresponding to the second face information included in the fifth priority matching range, or a place where that object's recorded appearance frequency is above a frequency threshold.
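The distance-based fifth range can be sketched as a filter on each record's last recorded appearance location. This is a minimal illustration with assumed fields (`id`, `last_seen` as planar coordinates) and a plain Euclidean distance standing in for a real geographic distance:

```python
import math

def fifth_priority_range(records, collection_pos, max_distance=1.0):
    """Fifth priority matching range: records whose last recorded appearance
    location lies within a first distance range of the collection position."""
    def distance(a, b):
        # Euclidean stand-in for a geographic distance computation.
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return [r for r in records
            if distance(r["last_seen"], collection_pos) <= max_distance]

records = [{"id": 1, "last_seen": (0.0, 0.5)},
           {"id": 2, "last_seen": (3.0, 4.0)}]
nearby = fifth_priority_range(records, (0.0, 0.0))  # only id 1 is within 1.0
```

A production system would use geodesic distance on latitude/longitude rather than planar coordinates.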
  • the determining module 120 is configured to execute at least one of the following:
  • a sixth priority matching range is determined according to the spatial attribute, wherein at least one piece of identity identification information of the second collection object corresponding to the second face information included in the sixth priority matching range is associated with the spatial attribute;
  • a seventh priority matching range is determined according to the spatial attribute, wherein the device identification information of the device held by the second collection object corresponding to the second face information included in the seventh priority matching range is included in the device identification information collected in the space corresponding to the spatial attribute.
  • the first matching module 130 is configured to, if there are at least two priority matching ranges, take the intersection of the at least two priority matching ranges, and match the first face information with the second face information in the intersection.
  • the first matching module is configured to merge at least two priority matching ranges to obtain a union of the at least two priority matching ranges, and match the first face information with the second face information in the union.
  • the first matching module is configured to merge at least two priority matching ranges to obtain their union; if there are at least two priority matching ranges, take their intersection; match the first face information with the second face information in the intersection at a first priority level; and match the first face information with the second face information in the union at a second priority level, where the second priority level is lower than the first priority level.
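The intersection-first, union-second strategy above can be sketched as follows. This is a toy illustration, not the patented implementation: candidates are reduced to plain identifiers, and the `is_match` predicate stands in for the real face comparison:

```python
def match_with_priorities(probe, ranges, is_match):
    """Try the intersection of the priority matching ranges first (first
    priority level); if no match is found there, try the remainder of the
    union (second priority level)."""
    sets = [set(r) for r in ranges]
    intersection = set.intersection(*sets)
    union = set.union(*sets)
    for candidate in sorted(intersection):          # first priority level
        if is_match(probe, candidate):
            return candidate
    for candidate in sorted(union - intersection):  # second priority level
        if is_match(probe, candidate):
            return candidate
    return None  # the caller may then fall back to the full database

# Toy example: candidate IDs, with "matching" reduced to equality.
result = match_with_priorities("c", [{"a", "b", "c"}, {"b", "c", "d"}],
                               lambda probe, cand: probe == cand)
```

A candidate in every range is the most likely match, so it is tried first; the `None` return corresponds to the fall-through to full-database matching described elsewhere in the document.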
  • the apparatus further includes: a second obtaining module configured to obtain a second attribute of the first face information when there are at least two pieces of second face information matching the first face information; a third matching module configured to match the second attribute with attribute tags of the second face information that matches the first face information; and a selection module configured to select the second face information whose attribute tag has the greatest similarity to the second attribute.
  • the face information to be matched can also be referred to as a Face ID for short; offline, any captured head portrait can be matched against its records in the database.
  • for example, face information collected by monitoring equipment at locations such as hotels, shopping malls, and transportation hubs (airports or high-speed rail stations).
  • Step 1: photo extraction; the photos can be the information source of the face information to be matched;
  • Extract high-quality photos (a photo quality evaluation algorithm is required); for example, use various photo evaluation algorithms to extract photos whose sharpness is higher than a preset sharpness, or the sharpest photos; or, according to the angle at which the face appears in the photo, extract photos in which the face deformation is smaller than a preset deformation value, or extract photos of a face collected at a predetermined collection angle;
  • the predetermined collection angle is a collection perspective at which the angle between the front of the collection object and the collection surface of the collection device is less than a specific angle;
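The threshold-then-sharpest selection in Step 1 can be sketched as below. This is a minimal sketch assuming each photo already carries a `sharpness` score produced by some photo quality evaluation algorithm; the field names are invented for the example:

```python
def extract_best_photo(photos, min_sharpness=0.5):
    """Keep photos whose sharpness score exceeds a preset threshold, then
    return the sharpest of those (or None if nothing qualifies)."""
    qualified = [p for p in photos if p["sharpness"] >= min_sharpness]
    return max(qualified, key=lambda p: p["sharpness"], default=None)

photos = [{"name": "a.jpg", "sharpness": 0.3},
          {"name": "b.jpg", "sharpness": 0.8},
          {"name": "c.jpg", "sharpness": 0.6}]
best = extract_best_photo(photos)  # "b.jpg" is the sharpest qualifying photo
```

An analogous filter could be applied to the face-deformation or collection-angle criteria mentioned above.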
  • Step 2: first-attribute extraction. Before matching is performed, user information should be extracted to narrow the matching range.
  • the first attribute may be extracted by analyzing the image information or the collection space of the image information, and may include, but is not limited to, any of the following:
  • Gender;
  • Age;
  • Clothing (glasses, apparel, bags, etc.);
  • Step 3: set the priority matching range. For example, after excluding records with a different gender or a large age difference, face information matching can be performed.
  • when FaceID matching is performed in offline scenarios, the priority matching range can be set according to scenario characteristics and needs; for example, a priority matching range has one of the following characteristics:
  • the target ID database serves as the aforementioned priority matching range (for example, after an online marketing campaign, international mobile equipment identity (IMEI) data of the reached user group has been accumulated, and statistics on visiting customers are expected); this scenario characteristic can be a type of the aforementioned spatial attribute, describing the service-provision attribute or purpose of the space;
  • the MAC address collected on site is in the library (suitable for sites with MAC collection equipment);
  • the terminal device carried by the matched collection object recently appeared near the collection position of the FaceID to be matched;
  • the frequent consumption locations of the matched collection object are near the collection position of the FaceID to be matched;
  • the resident location of the matched collection object and the collection location of FaceID are in the same city;
  • the office location of the matched collection object and the collection location of FaceID are in the same city;
  • the age range of the matched collection object is close to that of the first collection object corresponding to the FaceID;
  • the usual clothing of the matched collection object is the same as or similar to that of the first collection object corresponding to the FaceID;
  • Step 4: full-database matching ("whole database" here is short for the full database); when no match is found within the priority matching range, full-database matching should be performed;
  • Step 5: screen the population to obtain the result. Whether priority matching or full-database matching is used, multiple records whose similarity exceeds the set threshold may appear; these can be screened by population attribute tags such as consumption level, hobbies, and family status.
  • after screening, the remaining record with the highest similarity to the target is taken as the matching result.
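Step 5 can be sketched as a tag filter followed by an argmax over similarity. This is a minimal illustration with assumed fields (`id`, `similarity`, `tags`), not the patented implementation:

```python
def screen_results(records, tags, similarity_threshold):
    """Among records whose similarity exceeds the threshold, keep those whose
    population-attribute tags all match, then return the most similar one."""
    remaining = [
        r for r in records
        if r["similarity"] > similarity_threshold
        and all(r["tags"].get(k) == v for k, v in tags.items())
    ]
    return max(remaining, key=lambda r: r["similarity"], default=None)

records = [
    {"id": "A", "similarity": 0.92, "tags": {"consumption": "high"}},
    {"id": "B", "similarity": 0.95, "tags": {"consumption": "low"}},
    {"id": "C", "similarity": 0.90, "tags": {"consumption": "high"}},
]
result = screen_results(records, {"consumption": "high"}, 0.85)
# "B" is more similar but fails the tag filter; "A" wins among the remainder.
```

Returning `None` when no record survives corresponds to a failed match, after which a caller might relax the tag filter or report no result.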
  • this embodiment provides a terminal device, including:
  • a processor connected to the memory and configured to implement the face matching method provided by one or more of the foregoing technical solutions by executing computer-executable instructions located on the memory, for example, one or more of the face matching methods shown in FIG. 1 and FIG. 3.
  • the memory can be various types of memory, such as random access memory, read-only memory, flash memory, and the like.
  • the memory may be used for information storage, for example, storing computer executable instructions and the like.
  • the computer-executable instructions may be various program instructions, such as target program instructions and / or source program instructions.
  • the processor may be various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
  • the processor may be connected to the memory through a bus.
  • the bus may be an integrated circuit bus or the like.
  • the terminal device may further include a communication interface, which may include a network interface, for example, a local area network interface, a transceiver antenna, and the like.
  • the communication interface is also connected to the processor and can be used for information transmission and reception.
  • the terminal device further includes a human-machine interaction interface.
  • the human-machine interaction interface may include various input and output devices, such as a keyboard, a touch screen, and the like.
  • This embodiment provides a computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed, the face matching method provided by one or more of the foregoing technical solutions can be implemented, for example, one or more of the face matching methods shown in FIG. 1 and FIG. 3.
  • the computer storage medium may include various recording media having a recording function, for example, various storage media such as a CD, a floppy disk, a hard disk, a magnetic tape, an optical disk, a U disk, or a mobile hard disk.
  • optionally, the computer storage medium may be a non-transitory storage medium that can be read by a processor, so that after the computer-executable instructions stored on the computer storage medium are obtained and executed by a first processor, the face matching method provided by any one of the foregoing technical solutions can be implemented, for example, a face matching method applied to a terminal device or a face matching method in an application server.
  • This embodiment also provides a computer program product, where the computer program product includes computer-executable instructions; after the computer-executable instructions are executed, the information processing method provided by one or more of the foregoing technical solutions can be implemented, for example, one or more of the methods shown in FIG. 1 and/or FIG. 3.
  • the computer program product includes a computer program tangibly embodied on a computer storage medium.
  • the computer program includes program code for executing the method shown in the flowchart.
  • the program code may include instructions corresponding to execution of the method steps provided in the embodiments of the present application.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the units is only a logical function division; in actual implementation, there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connections between the displayed or discussed components may be implemented through some interfaces; the indirect coupling or communication connections between devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes: a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

一种人脸匹配方法及装置、存储介质。所述人脸匹配方法,其中,包括:获得待匹配的第一人脸信息的第一属性(S110);根据所述第一属性,确定优先匹配范围(S120);将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配(S130)。

Description

人脸匹配方法及装置、存储介质
相关申请的交叉引用
本申请基于申请号为201810569921.8、申请日为2018年06月05日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本公开涉及信息技术领域但不限于信息技术领域,尤其涉及一种人脸匹配方法及装置、存储介质。
背景技术
人脸匹配是将采集到人脸与预先获取的人脸进行匹配,从而实现采集到人脸的身份识别,是一种人像识别或面部识别技术。但是在相关技术中发现,人脸匹配的效率很低、导致人脸匹配的反馈延时大。
发明内容
有鉴于此,本申请实施例期望提供一种人脸匹配方法及装置、存储介质。
本申请的技术方案是这样实现的:
第一方面,本申请实施例提供一种人脸匹配方法,包括:
获得待匹配的第一人脸信息的第一属性;
根据所述第一属性,确定优先匹配范围;
将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配。
第二方面,本申请实施例提供一种人脸匹配装置,包括:
第一获取模块,配置为获得待匹配的第一人脸信息的第一属性;
确定模块,配置为根据所述第一属性,确定优先匹配范围;
第一匹配模块,配置为将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配。
第三方面,本申请实施例提供一种电子设备,包括:
存储器;
处理器,与所述存储器连接,用于通过执行位于所述存储器上的计算机可执行指令,能够实现前述一个或多个技术方案提供的人脸匹配方法。
第四方面,本申请实施例提供一种计算机存储介质,所述计算机存储介质存储有计算机可执行指令;所述计算机可执行指令被执行后,能够实现前述一个或多个技术方案提供的人脸匹配方法。
第五方面,本申请实施例提供一种计算机程序产品,其中,所述计算机程序产品包括计算机可执行指令;所述计算机可执行指令被执行后,能 够实现前述一个或多个技术方案提供的人脸匹配方法。
本申请实施例提供的人脸匹配方法及装置、存储介质,在进行人脸匹配之前,先获得待匹配的第一人脸信息的第一属性,然后基于第一属性进行被匹配的第二人脸信息筛选,从而确定出优先匹配范围;优先将第一人脸信息与优先匹配范围中的第二人脸信息进行匹配。由于第一属性反映的是第一人脸信息的属性,因此,优先匹配范围中的第二人脸信息所对应的属性与第一属性相适配。
第一方面,位于优先匹配范围内第二人脸信息相较于位于优先匹配范围外的第二人脸信息,与第一人脸信息匹配成功的概率更高,显然,优先进行优先匹配范围内的第二人脸信息进行匹配,可以更快的找到与第一人脸信息匹配成功的第二人脸信息,从而提升了匹配效率,实现了人脸快速匹配。
第二方面,利用优先匹配范围进行第一人脸信息的匹配,若在优先匹配范围内快速找与第一人脸信息匹配的第二人脸信息,则可以终止匹配,从而减少需要匹配次数和信息量,从而减少服务器在执行人脸信息匹配的负荷量,去除不必要的匹配。
附图说明
图1为本申请实施例提供的第一种人脸匹配方法的流程示意图;
图2为本申请实施例提供的一种人脸匹配系统的结构示意图;
图3为本申请实施例提供的第二种人脸匹配方法的流程示意图;
图4为本申请实施例提供的一种信息处理装置的结构示意图;
图5为本申请实施例提供的另一种信息处理装置的结构示意图
图6为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
以下结合说明书附图及具体实施例对本申请的技术方案做进一步的详细阐述。
如图1所示,本实施例提供一种人脸匹配方法,包括:
步骤S110:获得待匹配的第一人脸信息的第一属性;
步骤S120:根据所述第一属性,确定优先匹配范围;
步骤S130:将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配。
本实施例提供的人脸匹配方法可应用于数据库中或后台服务器等电子设备中。图2所示为一种信息处理系统，包括：服务后台，在该服务后台中包括一台或多台服务器，该方法可应用于服务器上。在图2中显示有终端1、终端2及终端3等，值得注意的是该图2中显示有三种终端，但具体实现时与服务器对接的终端可为各种类型的终端，例如，各种类型的移动终端或固定终端等，不局限于图2所示。终端可能会向服务器提交包含第一人脸信息的图像信息、登录信息、MAC地址、IP地址及连接信息、采集位置及采集参数等各种信息。
在步骤S110中服务器可以从终端设备接收所述第一人脸信息,然后按照属性提取规则提取出所述第一属性,基于第一属性圈定优先匹配范围。
所述第一属性可包括：一个或多个属性值。一个或多个属性值可以确定出所述优先匹配范围，若N个属性值确定出一个优先匹配范围，M个属性值确定出两个以上的优先匹配范围，则可以通过取交集的形式得到多个优先匹配范围的交集作为最终的优先匹配范围。所述N和所述M均为正整数；所述M可大于所述N。在另一些实施例中为了减少漏配，可以将M个属性值确定出的至少两个优先匹配范围进行合并，得到合并后的优先匹配范围，将合并后的优先匹配范围作为需要匹配或最终匹配的范围。在一些实施例中，交集对应的优先匹配范围可为第一优先等级的匹配范围，并集对应的优先匹配范围不是最终匹配的优先匹配范围时，可为第二优先等级的优先匹配范围。所述第一优先等级高于所述第二优先等级。总之，所述优先匹配范围可以有多个，多个优先匹配范围之间可以设定优先等级。
此处优先匹配等级的优先性可以体现在以下几个方面的至少一个:
匹配时序,若多个优先匹配范围是按照先后顺序进行匹配的,则优先等级高的优先匹配范围先匹配,优先等级低的优先匹配范围后匹配;
匹配结果的优先顺序,若多个优先等级的优先匹配范围中均匹配到与第一人脸信息匹配的第二人脸信息,优先选择优先等级高的优先匹配范围中与第一人脸信息匹配的第二人脸信息作为最终的匹配结果输出。例如,在一些实施例中,不同优先等级的优先匹配范围没有限定匹配时序,同时对不同优先等级的优先匹配范围进行匹配得到不同优先匹配范围中有多张与第一人脸信息匹配的第二人脸信息时,优先选择优先等级高的优先匹配范围中的第二人脸信息作为最终的匹配结果。
该优先匹配范围可以包括:多个第二人脸信息的信息库。通常情况下,所述优先匹配范围包括的第二人脸信息的数据量小于全量数据库包括的第二人脸信息的数据量。该优先匹配范围相对于全量数据库中除所述优先匹配范围以外的非优先匹配范围,有更高的概率包含有与所述第一人脸信息满足匹配条件的第二人脸信息。
由于优先匹配范围相对于全量数据库缩小了需要匹配的第二人脸信息,且有较高的概率提供与所述第一人脸信息相匹配的第二人脸信息,故具有匹配的次数少,匹配的效率高,且可以更快的找到与所述第一人脸信息匹配的第二人脸信息。
在一些实施例中第一人脸信息和所述第二人脸信息都可以是包括人脸图形的图像,也可以是包括人脸特征的文本。
在另一些实施例中,如图3所示,所述方法还包括:
步骤S140:若所述第一人脸信息与所述优先匹配范围中的第二人脸信息匹配失败时,则将所述第一人脸信息与所述优先匹配范围以外的第二人脸信息进行匹配。
若在优先匹配范围内未成功匹配所述第一人脸信息(即第一人脸信息与所述优先匹配范围内第二人脸信息匹配失败),则将第一人脸信息与非优先匹配范围内的第二人脸信息继续匹配,直到找到与所述第一人脸信息匹配成功的第二人脸信息,或者,直到与所述全量数据库的第二人脸信息匹配完毕。
在本实施例中在第一人脸信息与优先匹配范围内的第二人脸信息匹配失败时,还与优先匹配范围以外的第二人脸信息进行匹配,可以避免因为仅与优先匹配范围内的第二人脸信息匹配导致的遗漏问题,确保在全量数据库中存在与第一人脸信息相匹配的第二人脸信息时的匹配成功。
在一些实施例中,所述步骤S110可包括以下至少之一:
根据所述第一人脸信息获得第一采集对象的对象属性;
获取所述第一人脸信息采集属性。
所述第一属性可为第一采集对象的第一人脸信息,所述第一采集对象可为被采集的对象,所述第一人脸信息可为采集所述第一采集对象形成的信息。所述对象属性为描述所述第一采集对象的信息。
在一些实施例中,所述对象属性可包括:
第一采集对象的性别、年龄、身高、胖瘦等信息。
例如，终端设备采集了一个照片或视频等图像信息，该图像信息可为包括第一人脸信息的全身像或半身像；首先根据人脸图形可以由服务器等电子设备识别出该人脸图形对应的第一采集对象的性别，例如，服务器可以基于识别模型识别第一采集对象的性别。该识别模型可包括：通过样本数据训练得到的数据模型。所述数据模型可包括：神经网络、二叉树模型、线性回归模型或支持向量机等大数据模型。所述识别模型包括数据模型，但是不限于数据模型。
再例如,所述图像信息包含半身像,则可以根据半身像结合终端设备的采集参数(例如,焦距及采集视角)等,推测出所述第一采集对象的体重范围(对应于其胖瘦)。
又例如,所述图像信息包含全身像,则可以根据全身像结合终端设备的采集参数,推测出第一采集对象的身高范围。
在一些实施例中还可以根据半身像或全身像,确定出第一采集对象的头发长度、穿着打扮、佩戴首饰等信息。该佩戴首饰可包括:耳钉、耳环、头饰或者项链、手部饰品等。
总之,所述对象属性可为描述采集对象的特点的信息。
所述采集属性可包括:所述图像信息的采集位置、采集时间、采集位置对应的空间的场景特点信息。
例如,图像信息的采集位置为A国,则被采集的第一采集对象的常驻住址可能为A国。
在一些实施例中,所述根据所述第一人脸信息获得第一采集对象的对象属性,包括以下至少之一:
根据所述第一人脸信息,获得所述第一采集对象的性别;
根据所述第一人脸信息,获得所述第一采集对象的年龄;
根据所述第一人脸信息,获得所述第一采集对象的发长;
根据所述第一人脸信息,获得所述第一采集对象的穿戴。
在一些实施例中,所述步骤S120可包括以下至少之一:
根据所述第一采集对象的性别,确定出第一优先匹配范围,其中,所述第一优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的性别相同;
根据所述第一采集对象的年龄,确定出第二优先匹配范围,其中,所述第二优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的年龄相适配;
根据所述第一采集对象的发长,确定出第三优先匹配范围,其中,所述第三优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的发长相适配;
根据所述第一采集对象的穿戴,确定出第四优先匹配范围,其中,所述第四优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象均有相同的特定配饰。
在一些实施例中可以通过第一人脸信息或包含所述第一人脸信息的图像信息的解析,确定出所述第一采集对象的年龄阶段,例如,确定出第一采集对象是婴幼儿、儿童、少年、青年或老年人等。在一些实施例中还可以区分具体的年龄区间,例如,以5岁为一个年龄段的年龄区间,通过第一人脸信息或包含所述第一人脸信息的图像信息的解析确定出所述年龄区间。在另一些实施例中,若所述第一人脸信息来自视频,还可以基于视频中采集的声音信息确定所述年龄阶段及性别等信息。
所述发长至少可以分为:短发及长发。在另一些实施中,所述发长还可分为:超短发、短发、中长发、长发及特长发等。例如,可以将第一采集对象自身的身体节点作为区分发长的区分点,例如将寸头等视为超短发、超短发以外发长在耳朵以上为短发、发长在耳朵以下肩部以上的为中长发,发长在肩部以下腰部以上的为长发,发长在腰部以下的超长发。发长可以通过包含第一人脸信息的半身像或全身像来确定。
在一些实施例中为了确保优先匹配范围的圈定的精准性,所述方法还包括:
定期或不定期的更新第二人脸信息。例如,定期或不定期的向第二采集对象所持有的设备请求第二采集对象的人脸信息或包括人脸信息的图 像。
所述穿戴包括以下至少之一:眼镜;服饰;饰品;包。
例如，用户（例如，人）可为被采集的对象之一。有的用户有视力障碍问题，可能需要佩戴近视眼镜、远视眼镜等各种视力矫正的眼镜，如此，眼镜可以作为确定优先匹配范围的一个参考依据。
不同的用户的穿衣风格可能不同,喜欢穿的衣服也可能不同,在本实施中还可以根据服饰来圈定所述优先匹配范围。例如,根据服装风格确定所述优先匹配范围。
饰品可包括:头部佩戴的各种饰品,例如,耳环、项链。在一些情况下,有的饰品对于某一个用户而言是有特定意义的,故长期佩戴,如此,同样可以根据饰品来圈定所述优先匹配范围。
有的用户喜欢背包或挎包、有的用户不喜欢携带包,如此,包或包的特点也可以成为圈定优先匹配范围的依据。
在本公开实施例中所述方法还包括:
选择实际频次高于特定频次的穿戴用于圈定所述优先匹配范围。例如,通常实际被佩戴的频次高于特定频次的穿戴可包括:眼镜、有特定含义的饰品等。
在本实施例中利用对象属性可以进行初次筛选,可以选择出所述优先匹配范围,达到去除部分与第一人脸信息不匹配的第二人脸信息的目的,从而减少正式人脸信息匹配的信息量,减少匹配量,提升匹配速率。
在一些实施例中,所述获取所述第一人脸信息的采集属性,包括以下至少之一:
获取所述第一人脸信息的采集位置;
获取所述第一人脸信息的采集位置所在空间的空间属性。
所述采集位置可包括:采集的经纬度信息、采集的地名(该采集的地名可包括:采集的小区级别或者楼宇级别的标识信息)等。
所述采集位置所在空间的空间属性,可包括:描述该空间用途的用途信息,例如,采集位置为商场的某一个店面,该店面的用途属性可包括:描述该店面出售的服务或商品的信息。例如,咖啡店、服装店等。
所述用途信息可以用于指示文娱场所可发生的文娱活动类型、或场所的名称,例如,北京工人体育馆。
在一些实施例中,所述步骤S120可包括:
根据所述采集位置,确定出第五优先匹配范围,其中,所述第五优先匹配范围包括的第二人脸信息所对应的第二采集对象的记录的最后出现地点,与所述采集位置之间的距离在第一距离范围内;
根据所述采集位置,确定出第六优先匹配范围,其中,所述采集位置为:第五优先匹配范围包括的第二人脸信息所对应的第二采集对象记录的住址、办公地址或者出现频次高于频次阈值的地点。
此处出现频次高于频次阈值的地点,可包括:用户常去的商场、常去的餐馆或健身馆等地方。
在本实施例中，第二采集对象的最后出现地点可包括：第二采集对象最后连接到网络的地点、第二对象的最后消费地点。当然此处的最后出现地点的出现时间与当前时间在特定时间范围内，该特定时间范围可为30分钟、1个小时、2个小时或者半天等。若第二采集对象出现在人脸信息的采集位置，则说明第二采集对象有较高的概率为第一采集对象。
若采集位置为第二采集对象的住址、办公地址或常去的地方,则该第二采集对象为第一采集对象的概率相对较高,从而可以列入到优先匹配范围,进行优先匹配,一方面提高匹配效率,另一方面可以适当的减少匹配数量,去除不必要的无效匹配操作。
在一些实施例中,所述步骤S120可包括:
根据所述空间属性,确定第六优先匹配范围,其中,所述第六优先匹配范围包括的第二人脸信息所对应的第二采集对象的至少一个身份标识信息,与所述空间属性关联;
根据所述空间属性,确定第七优先匹配范围,其中,所述第七优先匹配范围包括的第二人脸信息所对应的第二采集对象所持有设备的设备标识信息,包括在所述空间属性所对应空间采集到的设备标识信息内。
在一些实施例中特定空间是有特定用途,其自身发行了一些会员卡或应用(Application,APP)账号的身份标识信息。例如,某一个针对特定人群的运动装备店采集到一个人脸信息并提交到了服务器,为了缩小匹配范围,可以优先将该运动装备店的店员的图像作为优先匹配范围内包括的第二人脸信息。
在另一些实施例中,预定空间采集了很多设备标识信息,在信息库内建立了该空间属性与采集的设备标识信息、及对应用户的人脸信息的对应关系,故此时,可以以该空间属性为依据,确定出存在上述对应关系的人脸信息为优先匹配范围内的第二人脸信息。所述设备标识信息可包括:网络协议(Internet Protocol,IP)地址或媒体访问控制(Media Access Control,MAC)地址、国际移动设备标识(International Mobile Equipment Identity,IMEI)等信息。在具体实现时,所述设备标识信息不局限于上述任意一种。在一些实施例中,所述设备标识信息还可包括:该设备内安装的用户身份标识(SIM)卡所对应的通信标识,例如,手机号。
在一些实施例中,所述步骤S130可包括:
若存在至少两个优先匹配范围时,取所述至少两个优先匹配范围的交集;将所述第一人脸信息与所述交集中的所述第二人脸信息进行匹配。例如,将前述第一优先匹配范围至第七优先匹配范围进行取交集操作,得到最终的优先匹配范围(即所述交集),以第一优先等级的将第一人脸信息与该最终的优先匹配范围内的第二人脸信息进行匹配。由于同时位于至少两 个优先匹配范围内的第二人脸信息显然是有更高的概率与所述第一人脸信息匹配成功,故以第一优先等级进行最优先的匹配,可以以最快的速度确定出与第一人脸信息匹配的第二人脸信息。
在一些实施例中所述方法还包括:
以第二优先级等级将第一人脸信息与交集以外的优先匹配范围中的第二人脸信息进行匹配。所述第二优先等级低于所述第一优先等级。
在还有一些实施中,所述步骤S140可包括:
以第三优先等级将第一人脸信息与所述优先匹配范围外的第二人脸信息进行匹配。所述第三优先等级低于所述第二优先等级。
在一些实施例中,若第一人脸信息与交集内的第二人脸信息匹配失败,则将第一人脸信息与交集以外优先匹配范围中的第二人脸信息进行匹配;如此,可以尽可能的提升匹配效率的同时,减少不必要的匹配量。
在一些实施例中,所述方法还包括:
当存在至少两个与所述第一人脸信息匹配的第二人脸信息时,获取第一人脸信息的第二属性;
将所述第二属性和与所述第一人脸信息匹配的所述第二人脸信息的属性标签进行匹配;
选择出第二属性与所述属性标签相似度最大的第二人脸信息。
所述第一人脸信息与第二人脸信息进行匹配,可包括:
将两个人脸信息中的人脸特征进行匹配,
统计达到预设匹配度的人脸特征数;
若所述人脸特征数达到数量阈值,则可认为这两个人脸信息匹配,否则可认为这两个人脸信息不匹配。
在一些实施例中,判断人脸特征是否达到预设匹配度可以采用现有技术的任意一种方法,在此就不再一一举例了。
在一些实施中,可以将与第一人脸信息的匹配度最高的第二人脸信息作为最终的匹配结果,用于来识别所述第一人脸信息所对应的第一采集对象。
在还有一些实施例中,出现了多个匹配度相同的第二人脸信息或者出现了多个匹配度均高于匹配度阈值的第二人脸信息,此时,可以根据第二属性来进一步辅助人脸识别。
此处的第二属性可以与所述第一属性对应的同样的属性参数,也可以是不同的属性参数。
在一些实施例中,所述第二属性可包括以下至少之一:
消费水平、
兴趣爱好、
社交关系等,该社交关系可包括:家庭状况、婚姻状况等信息。
这些描述第二采集对象的身份属性可以作为辅助确定最终所匹配的第 二人脸信息的辅助信息。
例如,可以根据所述空间属性来获取第一采集对象可能的消费水平。例如,采集位置为某一个酒店,则可以根据该酒店的平均消费或最低消费等可以作为第二属性与第二采集对象的消费水平进行匹配。
所述兴趣爱好可以为任意描述第二采集对象的喜好和厌恶的信息,例如,可以在采集的图像信息中第一采集对象有携带宠物,则喜好宠物作为第二属性进行辅助匹配。
再例如,所述采集位置为一个俱乐部,则所述兴趣爱好可为根据该俱乐部的活动来确定,例如,篮球俱乐部,则所述第二属性可为喜好篮球。
又例如,可以根据包含有第一人脸信息的图像信息中第一人脸信息对应的第一采集对象与其他对象的之间的肢体动作等确定出第一采集对象是否有伴侣或是否有家庭等社交关系作为所述第二属性,来进一步确定与第一人脸信息匹配的第二人脸信息。
故在本实施例中此处的第二属性可包括对象属性等，可以最终精准确定出与第一人脸信息相互匹配的第二人脸信息，从而实现精准的人脸信息匹配，解决一个第一人脸信息与多个第二人脸信息匹配最终无法确定第一采集对象的身份的问题。且在本实施例中，由于率先确定了优先匹配范围，从而减少了第一人脸信息与多个第二人脸信息匹配的出现概率，减少了因为需要最终确定匹配第二人脸信息导致的延迟及操作，进一步提升了人脸匹配的效率。
例如,第二属性可包括多个属性值,将多个属性值与第二采集对象的身份属性进行匹配,选择匹配成功的属性值最多的第二采集对象的第二人脸信息作为最终的匹配结果,该匹配结果可以用于识别所述第一采集对象的身份,和/或,将最终的匹配结果对应的第二采集对象的身份属性标签作为所述第一采集对象的身份属性标签输出或传输给其他设备。
总之,在本申请实施例中通过所述第一人脸信息与第二人脸信息的匹配,可以将与第一人脸信息匹配的第二人脸信息对应的身份信息和/或身份属性标签确定出来,从而实现精准服务的定向提供,该精准服务可包括:新闻和/或广告等内容数据的精准推送,再比如,好友推荐等。
在一些实施例中,可以结合所述身份属性标签及所述第一采集对象所出现位置的位置属性标签(地理位置、所在地理位置的空间属性)等组合进行精准服务的提供。
如图4所示,本实施例提供一种人脸匹配装置,包括:
第一获取模块110,配置为获得待匹配的第一人脸信息的第一属性;
确定模块120,配置为根据所述第一属性,确定优先匹配范围;
第一匹配模块130,配置为将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配。
该人脸匹配装置可为应用于前述的服务器或终端设备,可对应于安装 于各种电子设备中的客户端或软件开发工具包等。
所述第一获取模块110、确定模块120及第一匹配模块130均可为程序模块,所述程序模块被处理器执行后可实现所述第一属性的获取、优先匹配范围的确定及人脸信息的匹配。
在一些实施例中,如图5所示,所述装置还包括:第二匹配模块140,配置为若所述第一人脸信息与所述优先匹配范围中的第二人脸信息匹配失败时,则将所述第一人脸信息与所述优先匹配范围以外的第二人脸信息进行匹配。
在另一些实施例中,所述第一获取模块110,配置为执行以下至少之一:根据所述第一人脸信息获得第一采集对象的对象属性;获取所述采集属性第一人脸信息的采集属性。
在还有一些实施例中,所述第一获取模块110,配置为执行以下至少之一:根据所述第一人脸信息,获得所述第一采集对象的性别;根据所述第一人脸信息,获得所述第一采集对象的年龄;根据所述第一人脸信息,获得所述第一采集对象的发长;根据所述第一人脸信息,获得所述第一采集对象的穿戴。
在还有一些实施例中,所述确定模块120,配置为执行以下至少之一:根据所述第一采集对象的性别,确定出第一优先匹配范围,其中,所述第一优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的性别相同;根据所述第一采集对象的年龄,确定出第二优先匹配范围,其中,所述第二优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的年龄相适配;根据所述第一采集对象的发长,确定出第三优先匹配范围,其中,所述第三优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的发长相适配;根据所述第一采集对象的穿戴,确定出第四优先匹配范围,其中,所述第四优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象均有相同的特定配饰。
在一些实施例中,所述穿戴包括以下至少之一:眼镜;服饰;饰品;包。
在一些实施例中,所述第一获取模块110,具体用于执行以下至少之一:
获取所述第一人脸信息的采集位置;
获取所述第一人脸信息的采集位置所在空间的空间属性。
进一步地,在一些实施例中,所述确定模块120,配置为执行以下至少之一:
根据所述采集位置,确定出第五优先匹配范围,其中,所述第五优先匹配范围包括的第二人脸信息所对应的第二采集对象的记录的最后出现地点,与所述采集位置之间的距离在第一距离范围内;
根据所述采集位置,确定出第六优先匹配范围,其中,所述采集位置 为:第五优先匹配范围包括的第二人脸信息所对应的第二采集对象记录的住址、办公地址或者出现频次高于频次阈值的地点。
进一步地,所述确定模块120,配置为执行以下至少之一:
根据所述空间属性,确定第六优先匹配范围,其中,所述第六优先匹配范围包括的第二人脸信息所对应的第二采集对象的至少一个身份标识信息,与所述空间属性关联;
根据所述空间属性,确定第七优先匹配范围,其中,所述第七优先匹配范围包括的第二人脸信息所对应的第二采集对象所持有设备的设备标识信息,包括在所述空间属性所对应空间采集到的设备标识信息内。
在一些实施例中,所述第一匹配模块130,配置为若存在至少两个优先匹配范围时,取所述至少两个优先匹配范围的交集;将所述第一人脸信息与所述交集中的所述第二人脸信息进行匹配。
在一些实施例中,所述第一匹配模块,配置为合并至少两个优先匹配范围,获得所述至少两个优先匹配范围的并集;将所述第一人脸信息与所述并集中的所述第二人脸信息进行匹配。
在还有一些实施例中,所述第一匹配模块,配置为合并至少两个优先匹配范围,获得所述至少两个优先匹配范围的并集若存在至少两个优先匹配范围时,取所述至少两个优先匹配范围的交集;以第一优先等级将所述第一人脸信息与所述交集中的第二人脸信息进行匹配;以第二优先等级将所述第一人脸信息与所述并集中的所述第二人脸信息进行匹配;其中,所述第二优先等级低于所述第一优先等级。
在另一些实施例中,所述装置还包括:第二获得模块,配置为当存在至少两个与所述第一人脸信息匹配的第二人脸信息时,获取第一人脸信息的第二属性;第三匹配模块,配置为将所述第二属性和与所述第一人脸信息匹配的所述第二人脸信息的属性标签进行匹配;选择模块,用于选择出第二属性与所述属性标签相似度最大的第二人脸信息。
以下结合上述任意实施例提供几个具体示例:
示例1:
本示例提供一种人脸匹配方法，在该方法中待匹配的人脸信息（即前述第一人脸信息）又可以称为脸部（Face）标识（Identity，ID），简称为FaceID，该人脸信息是指在线下任意取一个头像，即可匹配出其在库内的记录。例如，酒店、商场、交通站点（飞机场或高铁站）等位置的监控设备等采集的人脸信息。
为保证准确性,有必要尽可能地缩小匹配范围。
第一步:照片抽取,该照片可为前述待匹配的人脸信息的信息来源;
抽取质量较高的照片(需照片质量评估算法);例如,利用各种照片评估算法抽取清晰度高于预设清晰度或者最高清晰度的照片;或者根据人脸在照片中的呈现角度,抽取人脸变形小于预设形变值的照片,或者,抽取 预定采集角度采集的人脸的照片;所述预定采集角度为:采集对象正面与采集设备的采集面之间的夹角小于特定角度的采集视角;
第二步:第一属性抽取,进行匹配前,应先对用户信息进行抽取,以缩小匹配范围。抽取的信息可通过图像信息的分析或者图像信息的采集空间的分析,来抽取第一属性,该第一属性可包括但不限于下列任意一个:
性别;
年龄;
服饰(眼镜、服装、包等);
第三步:设定优先匹配范围:例如,排除性别不同、年龄段差异大的记录后,可进行人脸信息匹配。
在线下场景进行FaceID匹配时,可根据场景特点和需要,设定优先匹配范围。例如,优先匹配范围具有以下特点之一:
目标ID库作为前述的优先匹配范围（如开展线上营销活动后，积累了已触达用户群的国际移动设备标识（IMEI）数据，期望统计到访客户情况的）；该场景特点可以为前述的空间属性的一种，可以描述该空间的服务提供属性或者用途等信息；
现场采集到的MAC地址在库内的(适合现场有MAC采集设备的);
被匹配的采集对象携带的终端设备的最近出现地点在待匹配的FaceID的采集位置的附近；
被匹配的采集对象的频繁消费地点在待匹配的FaceID的采集位置的附近；
被匹配的采集对象的常住地与FaceID的采集位置在同城;
被匹配的采集对象的办公地与FaceID的采集位置在同城;
被匹配的采集对象的年龄范围与FaceID对应的第一采集对象接近;
被匹配的采集对象的常用服饰与FaceID对应的第一采集对象相同或相似;
第四步:全库匹配,此处的全库可为全量数据库的简称;
当在优先匹配范围内未匹配到,应进行全库匹配。
第五步:筛选人群取结果
不管是优先匹配还是全库匹配,都可能出现多条相似度超过设定阈值的记录。此时可通过设定人群属性标签进行筛选,如:
消费水平
兴趣爱好
家庭状况
经过筛选后,取剩余记录中与目标相似度最高的作为匹配结果。
如图6所示,本实施例提供了一种终端设备,包括:
存储器;
处理器,与所述存储器连接,用于通过执行位于所述存储器上的计算机可执行指令,能够实现前述一个或多个技术方案提供的人脸匹配方法,例如,图1及图3所示人脸匹配方法中的一个或多个。
该存储器可为各种类型的存储器,可为随机存储器、只读存储器、闪存等。所述存储器可用于信息存储,例如,存储计算机可执行指令等。所述计算机可执行指令可为各种程序指令,例如,目标程序指令和/或源程序指令等。
所述处理器可为各种类型的处理器,例如,中央处理器、微处理器、数字信号处理器、可编程阵列、数字信号处理器、专用集成电路或图像处理器等。
所述处理器可以通过总线与所述存储器连接。所述总线可为集成电路总线等。
在一些实施例中,所述终端设备还可包括:通信接口,该通信接口可包括:网络接口、例如,局域网接口、收发天线等。所述通信接口同样与所述处理器连接,能够用于信息收发。
在一些实施例中,所述终端设备还包括人机交互接口,例如,所述人机交互接口可包括各种输入输出设备,例如,键盘、触摸屏等。
本实施例提供一种计算机存储介质,所述计算机存储介质存储有计算机可执行指令;所述计算机可执行指令被执行后,能够实现前述一个或多个技术方案提供的人脸匹配方法,例如,图1及图3所示人脸匹配方法中的一个或多个。
所述计算机存储介质可为包括具有记录功能的各种记录介质，例如，CD、软盘、硬盘、磁带、光盘、U盘或移动硬盘等各种存储介质。可选的所述计算机存储介质可为非瞬间存储介质，该计算机存储介质可被处理器读取，从而使得存储在计算机存储介质上的计算机可执行指令被第一处理器获取并执行后，能够实现前述任意一个技术方案提供的人脸匹配方法，例如，执行应用于终端设备中的人脸匹配方法或应用服务器中的人脸匹配方法。
本实施例还提供一种计算机程序产品,所述计算机程序产品包括计算机可执行指令;所述计算机可执行指令被执行后,能够实现前述一个或多个技术方案提供的信息处理方法,例如,例如,图1和/或图3所示方法中的一个或多个。
所述计算机程序产品包括有形地包含在计算机存储介质上的计算机程序，计算机程序包含用于执行流程图所示的方法的程序代码，程序代码可包括与执行本申请实施例提供的方法步骤对应的指令。
在本申请所提供的几个实施例中，应该理解到，所揭露的设备和方法，可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，如：多个单元或组件可以结合，或可以集成到另一个系统，或一些特征可以忽略，或不执行。另外，所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口，设备或单元的间接耦合或通信连接，可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理模块中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (29)

  1. 一种人脸匹配方法,包括:
    获得待匹配的第一人脸信息的第一属性;
    根据所述第一属性,确定优先匹配范围;
    将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    若所述第一人脸信息与所述优先匹配范围中的第二人脸信息匹配失败时,则将所述第一人脸信息与所述优先匹配范围以外的第二人脸信息进行匹配。
  3. 根据权利要求1或2所述的方法,其中,所述获得待匹配的第一人脸信息的第一属性,包括以下至少之一:
    根据所述第一人脸信息获得第一采集对象的对象属性;
    获取所述第一人脸信息采集属性。
  4. 根据权利要求3所述的方法,其中,所述根据所述第一人脸信息获得第一采集对象的对象属性,包括以下至少之一:
    根据所述第一人脸信息,获得所述第一采集对象的性别;
    根据所述第一人脸信息,获得所述第一采集对象的年龄;
    根据所述第一人脸信息,获得所述第一采集对象的发长;
    根据所述第一人脸信息,获得所述第一采集对象的穿戴。
  5. 根据权利要求4所述的方法,其中,
    所述根据所述第一属性,确定优先匹配范围,包括以下至少之一:
    根据所述第一采集对象的性别,确定出第一优先匹配范围,其中,所述第一优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的性别相同;
    根据所述第一采集对象的年龄,确定出第二优先匹配范围,其中,所述第二优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的年龄相适配;
    根据所述第一采集对象的发长,确定出第三优先匹配范围,其中,所述第三优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象的发长相适配;
    根据所述第一采集对象的穿戴,确定出第四优先匹配范围,其中,所述第四优先匹配范围包括的第二人脸信息所对应的第二采集对象,与所述第一采集对象均有相同的特定配饰。
  6. 根据权利要求4或5所述的方法,其中,所述穿戴包括以下至少之一:眼镜;服饰;饰品;包。
  7. 根据权利要求3至6中任一项所述的方法,其中,所述获取所述第一人脸信息的采集属性,包括以下至少之一:
    获取所述第一人脸信息的采集位置;
    获取所述第一人脸信息的采集位置所在空间的空间属性。
  8. 根据权利要求7所述的方法,其中,所述根据所述第一属性,确定优先匹配范围,包括以下至少之一:
    根据所述采集位置,确定出第五优先匹配范围,其中,所述第五优先匹配范围包括的第二人脸信息所对应的第二采集对象的记录的最后出现地点,与所述采集位置之间的距离在第一距离范围内;
    根据所述采集位置,确定出第六优先匹配范围,其中,所述采集位置为:第五优先匹配范围包括的第二人脸信息所对应的第二采集对象记录的住址、办公地址或者出现频次高于频次阈值的地点。
  9. 根据权利要求7或8所述的方法,其中,所述根据所述第一属性,确定优先匹配范围,包括以下至少之一:
    根据所述空间属性,确定第六优先匹配范围,其中,所述第六优先匹配范围包括的第二人脸信息所对应的第二采集对象的至少一个身份标识信息,与所述空间属性关联;
    根据所述空间属性,确定第七优先匹配范围,其中,所述第七优先匹配范围包括的第二人脸信息所对应的第二采集对象所持有设备的设备标识信息,包括在所述空间属性所对应空间采集到的设备标识信息内。
  10. 根据权利要求1至9任一项所述的方法,其中,
    所述将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配,包括:
    若存在至少两个优先匹配范围时,取所述至少两个优先匹配范围的交集;
    将所述第一人脸信息与所述交集中的所述第二人脸信息进行匹配。
  11. 根据权利要求1至10任一项所述的方法,其中,所述将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配,包括:
    合并至少两个优先匹配范围,获得所述至少两个优先匹配范围的并集;
    将所述第一人脸信息与所述并集中的所述第二人脸信息进行匹配。
  12. 根据权利要求1至10任一项所述的方法,其中,所述将所述第一人脸信息与所述优先匹配范围中的第二人脸信息进行匹配,包括:
    合并至少两个优先匹配范围，获得所述至少两个优先匹配范围的并集；
    若存在至少两个优先匹配范围时,取所述至少两个优先匹配范围的交集;
    以第一优先等级将所述第一人脸信息与所述交集中的第二人脸信息进行匹配;
    以第二优先等级将所述第一人脸信息与所述并集中的所述第二人脸信息进行匹配；其中，所述第二优先等级低于所述第一优先等级。
  13. 根据权利要求1至12任一项所述的方法,其中,
    所述方法还包括:
    当存在至少两个与所述第一人脸信息匹配的第二人脸信息时,获取第一人脸信息的第二属性;
    将所述第二属性和与所述第一人脸信息匹配的所述第二人脸信息的属性标签进行匹配;
    选择出第二属性与所述属性标签相似度最大的第二人脸信息。
  14. A face matching apparatus, comprising:
    a first acquisition module configured to obtain a first attribute of first face information to be matched;
    a determination module configured to determine a priority matching range according to the first attribute; and
    a first matching module configured to match the first face information against second face information in the priority matching range.
  15. The apparatus according to claim 14, further comprising:
    a second matching module configured to, if matching the first face information against the second face information in the priority matching range fails, match the first face information against second face information outside the priority matching range.
  16. The apparatus according to claim 14 or 15, wherein
    the first acquisition module is configured to perform at least one of the following:
    obtaining an object attribute of a first captured subject according to the first face information;
    acquiring a capture attribute of the first face information.
  17. The apparatus according to claim 16, wherein
    the first acquisition module is configured to perform at least one of the following:
    obtaining, according to the first face information, the gender of the first captured subject;
    obtaining, according to the first face information, the age of the first captured subject;
    obtaining, according to the first face information, the hair length of the first captured subject;
    obtaining, according to the first face information, the wearing articles of the first captured subject.
  18. The apparatus according to claim 17, wherein
    the determination module is configured to perform at least one of the following:
    determining a first priority matching range according to the gender of the first captured subject, wherein the second captured subjects corresponding to the second face information included in the first priority matching range have the same gender as the first captured subject;
    determining a second priority matching range according to the age of the first captured subject, wherein the second captured subjects corresponding to the second face information included in the second priority matching range match the age of the first captured subject;
    determining a third priority matching range according to the hair length of the first captured subject, wherein the second captured subjects corresponding to the second face information included in the third priority matching range match the hair length of the first captured subject;
    determining a fourth priority matching range according to the wearing articles of the first captured subject, wherein the second captured subjects corresponding to the second face information included in the fourth priority matching range wear the same specific accessory as the first captured subject.
  19. The apparatus according to claim 17 or 18, wherein the wearing articles include at least one of the following: glasses; clothing; accessories; bags.
  20. The apparatus according to any one of claims 16 to 19, wherein
    the first acquisition module is configured to perform at least one of the following:
    acquiring the capture location of the first face information;
    acquiring the spatial attribute of the space in which the capture location of the first face information is located.
  21. The apparatus according to claim 20, wherein
    the determination module is configured to perform at least one of the following:
    determining a fifth priority matching range according to the capture location, wherein the distance between the recorded last appearance location of the second captured subjects corresponding to the second face information included in the fifth priority matching range and the capture location is within a first distance range;
    determining a sixth priority matching range according to the capture location, wherein the capture location is the recorded home address or office address of the second captured subjects corresponding to the second face information included in the sixth priority matching range, or a location where they appear more often than a frequency threshold.
  22. The apparatus according to claim 20, wherein
    the determination module is configured to perform at least one of the following:
    determining a sixth priority matching range according to the spatial attribute, wherein at least one piece of identity identification information of the second captured subjects corresponding to the second face information included in the sixth priority matching range is associated with the spatial attribute;
    determining a seventh priority matching range according to the spatial attribute, wherein the device identification information of devices held by the second captured subjects corresponding to the second face information included in the seventh priority matching range is included in the device identification information collected in the space corresponding to the spatial attribute.
  23. The apparatus according to any one of claims 14 to 22, wherein
    the first matching module is configured to, when there are at least two priority matching ranges, take the intersection of the at least two priority matching ranges, and match the first face information against the second face information in the intersection.
  24. The apparatus according to any one of claims 14 to 23, wherein the first matching module is configured to merge at least two priority matching ranges to obtain the union of the at least two priority matching ranges, and match the first face information against the second face information in the union.
  25. The apparatus according to any one of claims 14 to 23, wherein the first matching module is configured to: merge at least two priority matching ranges to obtain the union of the at least two priority matching ranges; when there are at least two priority matching ranges, take the intersection of the at least two priority matching ranges; match the first face information against the second face information in the intersection at a first priority level; and match the first face information against the second face information in the union at a second priority level, wherein the second priority level is lower than the first priority level.
  26. The apparatus according to any one of claims 14 to 25, further comprising:
    a second acquisition module configured to, when there are at least two pieces of second face information matching the first face information, acquire a second attribute of the first face information;
    a third matching module configured to match the second attribute against the attribute tags of the second face information matching the first face information; and
    a selection module configured to select the second face information whose attribute tag has the greatest similarity to the second attribute.
  27. An electronic device, comprising:
    a memory; and
    a processor connected to the memory and configured to implement the method provided in any one of claims 1 to 13 by executing computer-executable instructions stored on the memory.
  28. A computer storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed, implement the method provided in any one of claims 1 to 13.
  29. A computer program product, comprising computer-executable instructions, wherein the computer-executable instructions, when executed, implement the method provided in any one of claims 1 to 13.
PCT/CN2018/122887 2018-06-05 2018-12-21 Face matching method and apparatus, and storage medium WO2019233086A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202007986PA SG11202007986PA (en) 2018-06-05 2018-12-21 Face matching method and apparatus, storage medium
JP2020560795A JP7136925B2 (ja) 2018-06-05 2018-12-21 Face matching method and apparatus, and storage medium
US17/004,761 US11514716B2 (en) 2018-06-05 2020-08-27 Face matching method and apparatus, storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810569921.8A CN108921034A (zh) 2018-06-05 2018-06-05 Face matching method and apparatus, and storage medium
CN201810569921.8 2018-06-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/004,761 Continuation US11514716B2 (en) 2018-06-05 2020-08-27 Face matching method and apparatus, storage medium

Publications (1)

Publication Number Publication Date
WO2019233086A1 true WO2019233086A1 (zh) 2019-12-12

Family

ID=64420258

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122887 WO2019233086A1 (zh) 2018-06-05 2018-12-21 Face matching method and apparatus, and storage medium

Country Status (5)

Country Link
US (1) US11514716B2 (zh)
JP (1) JP7136925B2 (zh)
CN (1) CN108921034A (zh)
SG (1) SG11202007986PA (zh)
WO (1) WO2019233086A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240240996A9 (en) * 2011-11-04 2024-07-18 Wello, Inc. Systems and methods for accurate detection of febrile conditions with varying baseline temperatures
CN108921034A (zh) 2018-06-05 2018-11-30 北京市商汤科技开发有限公司 Face matching method and apparatus, and storage medium
CN111291077B (zh) * 2018-12-07 2024-03-01 北京京东尚科信息技术有限公司 Information processing method and apparatus, and computer-readable storage medium
CN110032955B (zh) * 2019-03-27 2020-12-25 深圳职业技术学院 Deep-learning-based face recognition method
CN112395904A (zh) * 2019-08-12 2021-02-23 北京蜂盒科技有限公司 Biometric recognition method and system
CN110852372B (zh) * 2019-11-07 2022-05-31 北京爱笔科技有限公司 Data association method, apparatus, and device, and readable storage medium
CN110991390B (zh) * 2019-12-16 2023-04-07 腾讯云计算(北京)有限责任公司 Identity information retrieval method and apparatus, service system, and electronic device
CN111209874B (zh) * 2020-01-09 2020-11-06 北京百目科技有限公司 Method for analyzing and recognizing attributes of articles worn on a person's head
CN111563197B (zh) * 2020-04-02 2023-05-16 北京字节跳动网络技术有限公司 Data matching method, apparatus, medium, and electronic device
CN112418042A (zh) * 2020-11-16 2021-02-26 北京软通智慧城市科技有限公司 Community monitoring method, system, terminal, and medium for assisting enforcement of prevention and control measures
JP2022134435A (ja) * 2021-03-03 2022-09-15 富士通株式会社 Display control program, display control method, and display control device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140334734A1 (en) * 2013-05-09 2014-11-13 Tencent Technology (Shenzhen) Company Limited Systems and Methods for Facial Age Identification
CN108009521A (zh) * 2017-12-21 2018-05-08 广东欧珀移动通信有限公司 Face image matching method, apparatus, terminal, and storage medium
CN108009465A (zh) * 2016-10-31 2018-05-08 杭州海康威视数字技术股份有限公司 Face recognition method and apparatus
CN108921034A (zh) * 2018-06-05 2018-11-30 北京市商汤科技开发有限公司 Face matching method and apparatus, and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7634662B2 (en) * 2002-11-21 2009-12-15 Monroe David A Method for incorporating facial recognition technology in a multimedia surveillance system
US7760917B2 (en) * 2005-05-09 2010-07-20 Like.Com Computer-implemented method for performing similarity searches
KR100679049B1 (ko) * 2005-09-21 2007-02-05 삼성전자주식회사 Photo search method and apparatus using thumbnails that provide person and place information
US9342855B1 * 2011-04-18 2016-05-17 Christina Bloom Dating website using face matching technology
JP5740210B2 (ja) 2011-06-06 2015-06-24 株式会社東芝 Face image retrieval system and face image retrieval method
CN105243060B (zh) * 2014-05-30 2019-11-08 小米科技有限责任公司 Method and apparatus for retrieving pictures
CN104331509A (zh) * 2014-11-21 2015-02-04 深圳市中兴移动通信有限公司 Photo management method and apparatus
JP5876920B1 (ja) 2014-12-04 2016-03-02 Lykaon株式会社 Wandering prevention system
US10810252B2 (en) 2015-10-02 2020-10-20 Adobe Inc. Searching using specific attributes found in images
CN106776619B (zh) * 2015-11-20 2020-09-04 百度在线网络技术(北京)有限公司 Method and apparatus for determining attribute information of a target object
EP4158351A1 (en) * 2020-05-25 2023-04-05 Universiteit Maastricht Method for the diagnosis and treatment of essential primary hypertension

Also Published As

Publication number Publication date
CN108921034A (zh) 2018-11-30
US20200394391A1 (en) 2020-12-17
US11514716B2 (en) 2022-11-29
JP2021520567A (ja) 2021-08-19
SG11202007986PA (en) 2020-09-29
JP7136925B2 (ja) 2022-09-13

Similar Documents

Publication Publication Date Title
WO2019233086A1 (zh) Face matching method and apparatus, and storage medium
CN106776619B (zh) Method and apparatus for determining attribute information of a target object
WO2015180385A1 (zh) Multimedia resource recommendation method and apparatus
CN109614556B (zh) Access path prediction and information pushing method and apparatus
WO2017167060A1 (zh) Information display method, apparatus, and system
US9342855B1 (en) Dating website using face matching technology
US20110238503A1 (en) System and method for personalized dynamic web content based on photographic data
KR20160012902A (ko) Method and apparatus for playing an advertisement based on association information between viewers
US20100131335A1 (en) User interest mining method based on user behavior sensed in mobile device
CN110533440B (zh) Video-camera-based application method for delivering Internet advertising information
KR101905501B1 (ko) Method and apparatus for recommending content
KR102455966B1 (ko) Mediation apparatus, method, and computer-readable recording medium
CN104243546A (zh) Information processing device, communication system, and information processing method
CN108960892B (zh) Information processing method and apparatus, electronic device, and storage medium
WO2017166472A1 (zh) Advertisement data matching method, apparatus, and system
CN109947510A (zh) Interface recommendation method and apparatus, and computer device
CN104240113A (zh) Coupon issuing apparatus and system
WO2017185462A1 (zh) Location recommendation method and system
CN113870133A (zh) Multimedia display and matching method, apparatus, device, and medium
CN109146605A (zh) Information pushing method and related product
CN107798567B (zh) Brand information pushing method and apparatus, and electronic device
WO2018072335A1 (zh) Method and apparatus for recommending friend candidates
CN108596241A (zh) Rapid user gender classification method based on multi-dimensional sensing data
US9691093B2 (en) System and method of matching a consumer with a sales representative
WO2021103727A1 (zh) Data processing method, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921619

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020560795

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921619

Country of ref document: EP

Kind code of ref document: A1