CN113128278A - Image identification method and device

Info

Publication number
CN113128278A
Authority
CN
China
Prior art keywords: sample, image, face, age, features
Prior art date
Legal status
Pending
Application number
CN201911413426.9A
Other languages
Chinese (zh)
Inventor
邓旸旸
申皓全
王铭学
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911413426.9A priority Critical patent/CN113128278A/en
Priority to PCT/CN2020/134675 priority patent/WO2021135863A1/en
Publication of CN113128278A publication Critical patent/CN113128278A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions; estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The embodiment of the application discloses an image recognition method and device, relates to the field of image processing, and improves the recognition accuracy of cross-age face images. The specific scheme is as follows: acquiring the identity feature of a target face image; selecting a first sample face image in a sample library as a recognition result of the target face image according to the identity feature of the target face image and the sample library, wherein the sample library comprises face features and age features of one or more sample face images; the first sample face image is a sample face image in the sample library whose face feature and the first feature of the target face image relative to the first sample face image satisfy a first condition; and the first feature of the target face image relative to the first sample face image is the sum of the identity feature of the target face image and the age feature of the first sample face image.

Description

Image identification method and device
Technical Field
The embodiment of the application relates to the field of image processing, in particular to an image identification method and device.
Background
With the popularization of computer technology and video image technology, face recognition has been widely applied in many scenes such as identity authentication, portrait comparison, and the like.
Cross-age face recognition is a sub-direction of face recognition, and can be used for searching for lost elderly people and children, catching criminals who have been at large for a long time, and the like. For example, a camera captures an image of a teenager aged about 15, and the image is searched in a lost-person image library by a cross-age face recognition method to determine that the teenager is the same person as a 5-year-old child in the lost-person image library.
As people age, their facial features change significantly, so how to overcome the influence of age change on face recognition has become the key to cross-age face recognition.
At present, the cross-age face recognition is mainly realized by erasing age information in face information, or by means of an age reference dictionary.
However, the accuracy of existing cross-age face recognition methods is not high, so improving the accuracy of cross-age face recognition is particularly important.
Disclosure of Invention
The application provides an image recognition method and device, which improve the recognition accuracy of cross-age face images.
In order to achieve the purpose, the following technical scheme is adopted in the application:
in a first aspect, an image recognition method may include: acquiring an identity feature of a target face image, wherein the identity feature is a feature other than the age feature among the recognizable features in the face image; and selecting a first sample face image in a sample library as a recognition result of the target face image according to the identity feature of the target face image and the sample library. The sample library comprises face features and age features of one or more sample face images; the face features are recognizable features in a face image, and the age features are used for indicating the photographing age of the person in the face image; the first sample face image is a sample face image in the sample library whose face feature and the first feature of the target face image relative to the first sample face image satisfy a first condition; and the first feature of the target face image relative to the first sample face image is the sum of the identity feature of the target face image and the age feature of the first sample face image.
According to the image recognition method provided by the application, the identity feature of the target face image is added to the age feature of a sample face image in the sample library to obtain the first feature of the target face image, and the first feature of the target face image is compared with the face features in the sample library to obtain the recognition result. Because the first feature is aligned in age with the face feature of the sample face image in the sample library, the recognition process of the application is equivalent to converting a cross-age comparison into a same-age comparison; this compensates for the influence of the age feature on the recognition precision and improves the accuracy of cross-age face recognition.
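For illustration only (this is not part of the claimed solution), the selection described in the first aspect can be sketched roughly as follows, assuming the identity feature, age features and face features are already available as equal-length vectors and the first condition is a Euclidean-distance threshold; all names (`recognize`, `sample_library`, the dictionary keys) are hypothetical.
```python
import numpy as np

def recognize(target_identity, sample_library, threshold):
    """Return the id of a sample face image whose face feature is close to
    (identity feature of the target + age feature of that sample), i.e. whose
    first feature satisfies a distance-threshold first condition."""
    best_id, best_dist = None, None
    for sample in sample_library:  # each sample: {"id", "face_feature", "age_feature"}
        # First feature of the target face image relative to this sample image.
        first_feature = target_identity + sample["age_feature"]
        dist = np.linalg.norm(first_feature - sample["face_feature"])
        if dist <= threshold and (best_dist is None or dist < best_dist):
            best_id, best_dist = sample["id"], dist
    return best_id  # None means no sample satisfies the first condition
```
Because the target's identity feature is combined with each sample's own age feature before the comparison, the distance is effectively computed between two same-age representations.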
The first condition may be configured according to a requirement of a user, which is not specifically limited in the present application.
With reference to the first aspect, in one possible implementation manner, the identity feature of the target image may be obtained by inputting the target facial image to a first neural network. The first neural network may be a converged convolutional neural network, or a converged fully-connected neural network, or the like.
With reference to the first aspect or any one of the foregoing possible implementation manners, in a possible implementation manner, the obtained identity feature of the target face image is compared in parallel with the features of each sample face image in the sample library, that is, compared with the features of each sample face image respectively, so as to select the first sample face image in the sample library.
With reference to the first aspect or any one of the foregoing possible implementation manners, in one possible implementation manner, the first sample face image in the sample library is selected by comparing the obtained identity feature of the target face image with the features of the sample face images in the sample library in a preset sequence. The preset sequence may be configured according to actual requirements, and is not limited in the present application.
For example, the preset sequence may be a sequence of codes from small to large, or a sequence of photographing time from the most recent to the earliest, or another sequence, which is not limited in the present application.
With reference to the first aspect or any one of the foregoing possible implementation manners, in one possible implementation manner, the method may further include: acquiring the photographing age range of the target face image; the sample library then includes the face features and age features of the sample face images whose current age is within the photographing age range. In this possible implementation, the sample library contains only the face features and age features of the screened sample face images whose current age is within the photographing age range, which narrows the selection range, improves the processing efficiency of image recognition, and also improves the accuracy of image recognition.
With reference to the first aspect or any one of the foregoing possible implementation manners, in one possible implementation manner, the photographing age range of the target image may be obtained by inputting the target face image to a fifth neural network. Wherein the fifth neural network may be a converged convolutional neural network, or a converged fully-connected neural network, or the like.
With reference to the first aspect or any one of the foregoing possible implementations, in one possible implementation, the image recognition device subtracts the birth date of each sample face image in the base library from the current date to obtain the current age of each sample face image.
With reference to the first aspect or any one of the foregoing possible implementations, in one possible implementation, the image recognition device obtains the current age of each sample face image by adding, to the photographing age of each sample face image in the base library, the difference between the current date and the photographing time of that sample face image.
With reference to the first aspect or any one of the foregoing possible implementations, in one possible implementation, the current date may be obtained according to attribute information of the target face image.
With reference to the first aspect or any one of the foregoing possible implementation manners, in one possible implementation manner, the first condition may include: the distance is less than or equal to a threshold value or the similarity is greater than or equal to a limit value or the like. The threshold and the limit may be set according to experience of a user, which is not limited in the present application.
With reference to the first aspect or any one of the foregoing possible implementations, in one possible implementation, the similarity may include a cosine similarity.
With reference to the first aspect or any one of the foregoing possible implementations, in one possible implementation, the distance may include any one of: euclidean distance, mahalanobis distance.
With reference to the first aspect and the foregoing possible implementation manner, in another possible implementation manner, if a plurality of candidate sample face images exist in the sample library, where a candidate sample face image is a sample face image in the sample library whose face feature and the first feature of the target face image relative to that candidate sample face image satisfy the first condition, then selecting the first sample face image in the sample library as the recognition result of the target face image comprises: selecting the candidate sample face image satisfying a second condition as the first sample face image. In this possible implementation manner, the candidate sample face images are further judged, so that the selected first sample face image has a higher similarity to the target face image, which further improves the accuracy of the image recognition method.
The second condition may be configured according to the requirement of the user, which is not limited in the present application.
With reference to the first aspect and one possible implementation manner described above, in another possible implementation manner, the second condition may include that the distance is minimum.
With reference to the first aspect and the one possible implementation manner described above, in another possible implementation manner, the second condition may include that the similarity degree is maximum.
With reference to the first aspect and the foregoing one possible implementation manner, in another possible implementation manner, the second condition may include that a difference between a current age of a person in the candidate sample face image and a reference value of the photographing age range of the target face image is minimum. Where the reference value may be the middle of a range or other value.
In a second aspect, an image recognition apparatus is provided, which may be a server in an image recognition system, an apparatus or a chip system in the server, or an apparatus capable of being used in cooperation with the server. The image recognition apparatus may implement the functions performed in the above aspects or possible designs, and the functions may be implemented by hardware or by hardware executing corresponding software. The hardware or software comprises one or more modules corresponding to the functions. For example, the image recognition apparatus may include a first acquisition unit and a processing unit.
The first acquisition unit is used for acquiring the identity characteristic of the target face image, wherein the identity characteristic is a characteristic except an age characteristic in recognizable characteristics in the face image.
And the processing unit is used for selecting the first sample facial image in the sample library as the identification result of the target facial image according to the identity characteristic of the target facial image and the sample library.
The sample library comprises human face features and age features of one or more sample facial images; the human face features are recognizable features in the face image, and the age features are used for indicating the photographing age of people in the face image; the first sample facial image is a sample facial image in which the face features and the first features of the target facial image relative to the first sample facial image in the sample library satisfy a first condition; a first feature of the target facial image relative to the first sample facial image is a sum of an identity feature of the target facial image and an age feature of the first sample facial image.
Through the image recognition device provided by the application, the identity feature of the target face image is added to the age feature of a sample face image in the sample library to obtain the first feature of the target face image, and the first feature of the target face image is compared with the face features in the sample library to obtain the recognition result. Because the first feature is aligned in age with the face feature of the sample face image in the sample library, the recognition process of the application is equivalent to converting a cross-age comparison into a same-age comparison; this compensates for the influence of the age feature on the recognition precision and improves the accuracy of cross-age face recognition.
It should be noted that, for the image recognition apparatus provided in the second aspect, reference may be made to the specific implementation of the first aspect for performing the image recognition method provided in the first aspect.
In a third aspect, an embodiment of the present application provides an image recognition apparatus, which may include a processor and a memory coupled to the processor; the memory may be used to store computer program code, the computer program code comprising computer instructions which, when executed by the image recognition apparatus, cause the image recognition apparatus to perform the image recognition method as described in the first aspect or any one of its possible implementations.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which may include: computer software instructions; the computer software instructions, when executed in a computer, cause the computer to perform a method of image recognition as set forth in the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to execute the image recognition method according to the first aspect or any one of its possible implementations.
In a sixth aspect, an embodiment of the present application provides a chip system applied to an image recognition apparatus; the chip system comprises an interface circuit and a processor which are interconnected through a line; the interface circuit is used for receiving signals from a memory in the image recognition apparatus and sending the signals to the processor, the signals comprising computer instructions stored in the memory; when the computer instructions are executed by the processor, the chip system performs the image recognition method as described in the first aspect or any one of its possible implementations.
It should be appreciated that the description of technical features, solutions, benefits, or similar language in this application does not imply that all of the features and advantages may be realized in any single embodiment. Rather, it is to be understood that the description of a feature or advantage is intended to include the specific features, aspects or advantages in at least one embodiment. Therefore, the descriptions of technical features, technical solutions or advantages in the present specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantages described in the present embodiments may also be combined in any suitable manner. One skilled in the relevant art will recognize that an embodiment may be practiced without one or more of the specific features, aspects, or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic diagram of an image recognition system according to an embodiment of the present application;
fig. 2 is a schematic diagram of an image recognition apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of an image recognition method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a process for training a first neural network with the assistance of an adversarial network according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another image recognition method according to an embodiment of the present application;
fig. 6 is a schematic diagram of an image recognition process of an image recognition apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 8 is a schematic diagram of another image recognition apparatus according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and "third," etc. in the description and claims of this application and the above-described drawings are used for distinguishing between different objects and not for limiting a particular order.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion for ease of understanding.
In the description of the present application, a "/" indicates a relationship in which the objects associated before and after are an "or", for example, a/B may indicate a or B; in the present application, "and/or" is only an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. Also, in the description of the present application, "a plurality" means two or more than two unless otherwise specified. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the embodiments of the present application, at least one may also be described as one or more, and a plurality may be two, three, four or more, which is not limited in the present application.
For ease of understanding, the terms referred to in this application are explained first.
The face image may refer to an image including a face of a person. The face image may also be referred to as a face image of a person, a face image, or a face picture.
The target face image may refer to a face image as a recognition target in an image recognition method. For example, the target face image may be captured in real time by an image capturing device, or may be a face image input by an administrator.
A feature may refer to a mathematical expression used to quantify a facial image. Illustratively, the feature may be a vector or a matrix.
The face features may refer to features recognizable in the face image. Specifically, the face feature may be the sum of an age feature and an identity feature. The face features can be extracted through a neural network.
The age characteristic may be a characteristic for indicating an age. The age features may be extracted by a neural network, or may be obtained by age conversion.
Identity features may refer to features other than age features among the recognizable features in a facial image. The identity features can be extracted through a neural network.
The base library may be a database including one or more sample facial images or features of one or more sample facial images. The features of a sample facial image may include facial features and age features of the sample facial image. The base library may also include age information for each sample facial image, which may include one or more of the following: photographing age, photographing time, birth date.
A sample repository may be used to store one or more sample facial images or features of one or more sample facial images for comparison in the image recognition method. The features of a sample facial image may include facial features and age features of the sample facial image.
The photographing age may refer to the age of a person at the time of photographing the face image.
The current age may refer to the age of the person in the face image at the current date. Specifically, the current age of a sample face image in the base library may be the current date minus the date of birth of the person in the sample face image.
The recognition result of the target face image may refer to a sample face image similar to the target face image.
Currently, there are two main methods for implementing cross-age face recognition.
The method 1 realizes the cross-age face recognition by erasing the age information in the face information.
The implementation process of method 1 can be as follows: firstly, acquiring a general face database and a cross-age face database, wherein the cross-age face database comprises a plurality of face images classified according to age; then training a neural network model by using the general face database and the cross-age face database, so that the neural network can erase the age features in a face image; and then inputting the face images to be compared (the target face image and the sample face image) into the neural network model, and determining the cross-age face recognition result of the target face image by judging the similarity between the identity features, with the age features removed, output by the neural network.
And 2, realizing cross-age face recognition by means of an age reference dictionary.
The implementation process of method 2 can be as follows: firstly, constructing a cross-age reference dictionary according to an external face database, wherein the cross-age reference dictionary comprises high-level features of different local blocks for different age groups; then acquiring the high-level features of each local block of the face image to be recognized; encoding and pooling the high-level features of each local block of the face images to be compared (the target face image and the sample face image) with the high-level features of the different local blocks of the different age groups in the cross-age reference dictionary, to obtain face features with blurred age information; and finally determining the cross-age face recognition result of the target face image according to the similarity between the face features with blurred age information.
However, method 1 erases, along with the age information, other information highly coupled to age, such as wrinkles, causing a loss of information; method 2 introduces noise along with the age reference dictionary, and its cross-age recognition depends on the establishment of the age reference dictionary, so differences in the age reference dictionary cause uncertainty in the final recognition result and affect the stability of the system. Moreover, since the face changes greatly with age, neither method adequately takes age factors into account when realizing cross-age face recognition; that is, both neglect the influence of the age features on the recognition accuracy, and thus the recognition accuracy is not high.
Based on this, an embodiment of the present application provides an image recognition method, in which the identity feature of a target face image is added to the age feature of a sample face image in a sample library to obtain a first feature of the target face image, and the first feature of the target face image is compared with the face features in the sample library to obtain a recognition result. Because the first feature is aligned in age with the face feature of the sample face image in the sample library, the recognition process of the application is equivalent to converting a cross-age comparison into a same-age comparison; this compensates for the influence of the age feature on the recognition precision and improves the accuracy of cross-age face recognition.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The image recognition method provided by the embodiment of the application can be applied to the image recognition system shown in fig. 1. As shown in fig. 1, the image recognition system may include a server 101 and an administrator 102.
The administrator 102 is used to manage the server 101. Optionally, the administrator 102 may manage the server 101 through a terminal device, or the administrator 102 may directly manage the server 101.
The server 101 is used for image recognition according to the scheme provided by the application. Wherein the server 101 has stored therein a sample library. The target face image for image recognition by the server 101 may be input by the administrator 102.
Further, the server 101 may also display the result of image recognition to the administrator 102; alternatively, the server 101 may also display the word "no recognition result" to the administrator 102.
Alternatively, the server 101 may include a display screen, and the server 101 displays the result of the image recognition to the administrator 102 through the display screen. Alternatively, the server 101 may display the result of image recognition to the administrator 102 through a screen of the terminal device.
The server 101 may be a physical server, or a cloud server, or other devices with data processing capability and storage capability, which is not limited in this application.
Optionally, as shown in fig. 1, the image recognition system may further include an image collector 103, configured to collect a facial image, and upload the collected facial image to the server 101. Accordingly, the target face image for image recognition by the server 101 may be uploaded by the image collector 103.
The image collector 103 may be an independent camera, a mobile phone camera, a computer camera, a dome camera, a camera of an Augmented Reality (AR)/Virtual Reality (VR) device, or the like, which is not limited in this application.
For example, the image recognition system illustrated in fig. 1 may be used to find lost elderly people and children, or to catch a long-term escaped prisoner, etc.
The embodiments of the present application will be described in detail with reference to the accompanying drawings.
In one aspect, an embodiment of the present application provides an image recognition apparatus, configured to execute the image recognition method provided by the present application. The image recognition apparatus may be deployed in the server 101 shown in fig. 1, and the image recognition apparatus may be a part or all of the server 101. Alternatively, the image recognition device may be deployed separately, for example, the image recognition device is an electronic device or a chip system with related data processing and storage capabilities.
Fig. 2 illustrates an image recognition apparatus 20 according to an embodiment of the present application. As shown in fig. 2, the image recognition apparatus 20 may include a processor 201, a memory 202, and a transceiver 203.
The following describes each component of the image recognition apparatus 20 in detail with reference to fig. 2:
the memory 202 may be a volatile memory (volatile memory), such as a random-access memory (RAM); or a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); or a combination of the above types of memories, for storing program code, configuration files, data information, image information, or other content for implementing the methods of the present application.
The processor 201 is the control center of the image recognition apparatus 20. For example, the processor 201 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as: one or more microprocessors (digital signal processors, DSPs), or one or more Field Programmable Gate Arrays (FPGAs).
The transceiver 203 is used for information interaction between the image recognition apparatus 20 and other devices. For example, the transceiver 203 is used for information interaction between the image recognition apparatus 20 and the image acquirer.
Optionally, as shown in fig. 2, the image recognition apparatus 20 may further include an image collector 204. The image collector 204 is used for collecting the face image of the person.
Specifically, the processor 201 runs or executes the software programs and/or modules stored in the memory 202 to perform the following functions:
acquiring the identity feature of a target face image, wherein the identity feature is a feature other than the age feature among the recognizable features in the face image; and selecting a first sample face image in a sample library as a recognition result of the target face image according to the identity feature of the target face image and the sample library, wherein the sample library comprises face features and age features of one or more sample face images, the face features are recognizable features in a face image, the age features are used for indicating the photographing age of the person in the face image, the first sample face image is a sample face image in the sample library whose face feature and the first feature of the target face image relative to the first sample face image satisfy a first condition, and the first feature of the target face image relative to the first sample face image is the sum of the identity feature of the target face image and the age feature of the first sample face image.
On the other hand, an embodiment of the present application provides an image recognition method, as shown in fig. 3, the method may include:
s301, the image recognition device acquires the identity characteristics of the target face image.
Specifically, S301 may include, but is not limited to, step a and step B.
Step A, the image recognition device acquires a target face image.
In one possible implementation, the image recognition apparatus may use a face image collected and uploaded by the image collector as the target face image.
In another possible implementation, the image recognition apparatus may take a face image input by the administrator as the target face image.
It should be noted that the target face image may be one or more face images; in the embodiment of the present application, one face image is taken as an example of the target face image, and the case of multiple face images is not described again.
And step B, the image recognition device acquires the identity characteristics of the target face image.
Specifically, the image recognition apparatus may input the target facial image into a first neural network for extracting the identity feature, and an output of the first neural network is the identity feature of the target facial image.
The first neural network may be a convolutional neural network or a fully-connected neural network or the like. The first neural network is trained to converge, and the method for training the first neural network is not limited in the present application.
Alternatively, the first neural network may be trained by the image recognition apparatus, or trained by other devices to converge and then configured in the image recognition apparatus.
By way of example, fig. 4 illustrates a process for training the first neural network with the assistance of an adversarial network, which may be: a face image is input into the first neural network, and the first neural network outputs a face feature corresponding to the face image; the face feature is input into the adversarial network, which outputs the age range corresponding to the face image and at the same time obtains age gradient information according to the age range, where the age gradient information does not belong to the age range corresponding to the face image; the adversarial network then passes the age gradient information back to the first neural network, interfering with the first neural network's generation of age-related features from the face image. The first neural network can be trained in the same way on multiple sets of face images until the first neural network converges.
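The text does not specify how the age gradient information is passed back; a gradient-reversal layer (as used in domain-adversarial training) is one common way to realize this kind of interference. The following PyTorch sketch is an assumption-laden illustration only: the network sizes, the 112×112 RGB input, the eight age ranges and all names are hypothetical, and the identity-recognition loss that would normally be trained jointly is omitted.
```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates the gradient in the backward pass,
    so the age gradient from the adversarial head discourages the first neural
    network from encoding age-related information."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Stand-in for the first neural network (in practice a convolutional network).
identity_net = nn.Sequential(nn.Flatten(), nn.Linear(112 * 112 * 3, 256))
age_head = nn.Linear(256, 8)  # adversarial head predicting one of 8 age ranges
optimizer = torch.optim.Adam(
    list(identity_net.parameters()) + list(age_head.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def adversarial_step(images, age_range_labels):
    features = identity_net(images)                      # candidate identity features
    age_logits = age_head(GradReverse.apply(features))   # adversarial age-range prediction
    loss = criterion(age_logits, age_range_labels)       # its gradient reaches identity_net reversed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```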
S302, the image recognition device selects a first sample facial image in the sample library as a recognition result of the target facial image according to the identity characteristic of the target facial image and the sample library.
The sample library may include, among other things, facial features and age features of one or more sample facial images.
Specifically, the sample library described herein may be determined according to actual needs, and the manner of determining the sample library is not limited in the present application. Alternatively, the manner of determining the sample library may include, but is not limited to, the following two manners.
The first realization is as follows: the base library is converted to a sample library.
In a possible implementation manner, the base library includes a plurality of sample face images, each sample face image in the base library may be input to a neural network to extract a face feature and an age feature thereof, and the face feature and the age feature of each sample face image in the base library are recorded as a sample library.
In another possible implementation manner, the base library includes a plurality of sample face images and the photographing age of each sample face image, each sample face image in the base library may be input to the neural network to extract the face feature thereof, the photographing age of each sample face image in the base library is input to the neural network to obtain the age feature, and the face feature and the age feature of each sample face image in the base library are recorded as the sample library.
In another possible implementation manner, the base library includes a plurality of sample face images and the birth date and the photographing time of each sample face image, the photographing age can be obtained by subtracting the birth date from the photographing time of each sample face image, each sample face image in the base library is input to the neural network to extract the face feature of each sample face image, the photographing age of each sample face image in the base library is input to the neural network to obtain the age feature, and the face feature and the age feature of each sample face image in the base library are recorded as the sample library.
In yet another possible implementation manner, the base library may also include face features and age features of a plurality of sample face images, and then in the first implementation, the base library may be directly used as the sample library.
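As a small illustrative sketch of this first implementation (converting a base library into a sample library), assuming two already-trained extraction functions standing in for the second and fourth neural networks described later; the dictionary keys and function names are hypothetical.
```python
def build_sample_library(base_library, extract_face_feature, extract_age_feature):
    """base_library: iterable of records, each holding a sample face image, its
    photographing age and an identifier. Returns the face feature and age
    feature of every sample face image, recorded as the sample library."""
    sample_library = []
    for record in base_library:
        sample_library.append({
            "id": record["id"],
            "face_feature": extract_face_feature(record["image"]),            # second neural network
            "age_feature": extract_age_feature(record["photographing_age"]),  # fourth neural network
        })
    return sample_library
```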
The second realization: and obtaining a sample library from the basic library through age screening.
In the second implementation, the face features and the age features of the sample face images whose current ages are within the photographing age range can be screened from the basic library as the sample library according to the photographing age range of the target face image. The specific procedure of the second implementation may refer to the procedures of S303 and S304 described below.
Specifically, in S302 the image recognition device may compare the identity feature of the target face image obtained in S301 with the features of each sample face image in the sample library, so as to select the first sample face image in the sample library. Optionally, the comparison may be performed in, but is not limited to, the following two ways:
Comparison method a: the image recognition apparatus may compare the identity feature of the target face image acquired in S301 in parallel with the features of each sample face image in the sample library, that is, compare it with the features of each sample face image respectively, so as to select the first sample face image in the sample library in S302.
Comparison method b: the image recognition device may compare the identity feature of the target face image obtained in S301 with the features of the sample face images in the sample library in a preset sequence, so as to select the first sample face image in the sample library in S302. The preset sequence may be configured according to actual requirements, and is not limited in the present application. For example, the preset sequence may be a sequence of codes from small to large, or a sequence of photographing time from the most recent to the earliest, or another sequence.
In either case, the image recognition apparatus performs the same comparison process between the identity feature of the target face image acquired in S301 and the features of each sample face image in the sample library. The comparison process is described here by taking as an example the comparison, based on the identity feature of the target face image acquired in S301, against the features of one sample face image (a second sample face image) in the sample library, and may include S3021 to S3023.
S3021, the image recognition apparatus acquires a first feature of the target face image with respect to the second sample face image.
Wherein the first feature of the target face image relative to the second sample face image is a sum of an identity feature of the target face image and an age feature of the second sample face image.
It should be noted that a user may adjust, according to experience, the parameters of the neural network used for obtaining the identity features, the neural network used for obtaining the age features, and the neural network used for obtaining the face features, so that the vector dimensions of the identity features, age features, and face features output by the respective neural networks are kept the same and these features can directly participate in the related calculations.
S3022, the image recognition apparatus determines whether the first feature of the target face image with respect to the second sample face image and the face feature of the second sample face image satisfy a first condition.
Wherein, the first condition can be configured according to actual requirements.
For example, the first condition may include the distance being less than or equal to a threshold or the degree of similarity being greater than or equal to a limit, or otherwise. The threshold and the limit may be set according to experience of a user, which is not limited in the present application.
Wherein the distance may comprise any one of: euclidean distance, mahalanobis distance. The similarity may include a cosine similarity.
If it is determined in S3022 that the first feature of the target face image with respect to the second sample face image and the face feature of the second sample face image satisfy the first condition, the second sample face image is taken as a candidate sample face image.
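A minimal sketch of S3021 and S3022 for one second sample face image, covering both first-condition variants mentioned above (a Euclidean-distance threshold or a cosine-similarity limit); all names are hypothetical and the features are assumed to be equal-length NumPy vectors.
```python
import numpy as np

def is_candidate(target_identity, sample_face_feature, sample_age_feature,
                 threshold=None, limit=None):
    """Exactly one of threshold (distance) or limit (similarity) is expected."""
    # S3021: first feature of the target face image relative to this sample image.
    first_feature = target_identity + sample_age_feature
    # S3022: check the first condition, either as a distance or as a similarity.
    if threshold is not None:
        distance = np.linalg.norm(first_feature - sample_face_feature)  # Euclidean distance
        return distance <= threshold
    cosine = np.dot(first_feature, sample_face_feature) / (
        np.linalg.norm(first_feature) * np.linalg.norm(sample_face_feature))
    return cosine >= limit  # similarity greater than or equal to the limit
```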
S3023, the image recognition apparatus determines the first sample face image.
In one possible implementation of S3023, corresponding to comparison method a, if only one candidate sample face image exists in the sample library, the candidate sample face image may be regarded as the first sample face image.
In another possible implementation of S3023, corresponding to comparison method a, if there are a plurality of candidate sample face images in the sample library, a candidate sample face image satisfying the second condition may be selected as the first sample face image.
Wherein, the second condition can be configured according to actual requirements.
For example, the second condition may include a distance minimum or a similarity maximum; alternatively, the second condition may include that the difference between the current age of the person in the candidate sample face image and the reference value of the photographing age range of the target face image is minimum. Where the reference value may be the middle of a range or other value.
In another possible implementation of S3023, corresponding to comparison method b, as soon as one candidate sample face image is obtained, it is taken as the first sample face image.
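The second-condition alternatives of S3023 described above could be sketched as follows; the candidate records and their fields are hypothetical.
```python
def select_first_sample(candidates, photographing_age_range):
    """candidates: list of dicts with keys 'id', 'distance', 'similarity' and
    'current_age' collected in S3022. Returns the id of the first sample face image."""
    if len(candidates) == 1:
        return candidates[0]["id"]
    # Second-condition alternative 1: minimum distance.
    # return min(candidates, key=lambda c: c["distance"])["id"]
    # Second-condition alternative 2: maximum similarity.
    # return max(candidates, key=lambda c: c["similarity"])["id"]
    # Second-condition alternative 3: current age closest to a reference value
    # of the target image's photographing age range (here the midpoint).
    reference = sum(photographing_age_range) / 2
    return min(candidates, key=lambda c: abs(c["current_age"] - reference))["id"]
```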
The embodiment of the application provides an image recognition method, in which the identity feature of a target face image is added to the age feature of a sample face image in a sample library to obtain a first feature of the target face image, and the first feature of the target face image is compared with the face features in the sample library to obtain a recognition result. Because the first feature is aligned in age with the face feature of the sample face image in the sample library, the recognition process of the application is equivalent to converting a cross-age comparison into a same-age comparison; this compensates for the influence of the age feature on the recognition precision and improves the accuracy of cross-age face recognition.
Specifically, the first implementation in S302 is described herein as converting a base library into a sample library.
A face image can be input into a second neural network for extracting facial features, and the output of the second neural network is the facial features of the face image. A face image may be input to a third neural network for extracting age features from the face image, the output of the third neural network being the age features of the face image. The age of photographing of a face image may be input to a fourth neural network for extracting age features according to age, and the output of the fourth neural network is the age features of the face image.
The second neural network, the third neural network and the fourth neural network can be convolutional neural networks or fully-connected neural networks or other networks. The second, third and fourth neural networks are neural networks trained to converge, and the method for training the second, third and fourth neural networks is not limited in the present application.
Optionally, the second neural network, the third neural network, and the fourth neural network may be trained by the image recognition apparatus, or trained by other devices to converge and then configured in the image recognition apparatus.
As an example, the training process of the second neural network may be: inputting a plurality of face images (different face images of the same person and face images of different persons) into the second neural network, which outputs the face features of the face images; calculating the similarity between the face features of different face images of the same person and the similarity between the face features of face images of different persons; and then adjusting the weights and biases in the second neural network through reverse transmission until the second neural network converges.
As an example, the training process of the third neural network may be: inputting a plurality of facial images into a third neural network, outputting age characteristics (characteristics corresponding to age ranges) of the facial images by the third neural network, and adjusting the weight and the bias in the third neural network through reverse transmission according to the error between the age characteristics output by the third neural network and the actual age characteristics (the difference between the facial characteristics and the identity characteristics) of the facial images until the third neural network converges.
As an example, the training process of the fourth neural network may be: and the photographing ages of the plurality of facial images are input into a fourth neural network, the fourth neural network outputs age characteristics of the facial images, and the weights and the offsets in the fourth neural network are adjusted through reverse transmission according to the error between the age characteristics corresponding to the facial images output by the fourth neural network and the actual age characteristics (the difference between the facial characteristics and the identity characteristics) of the facial images until the fourth neural network converges.
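As a hedged illustration of the fourth neural network's training described above (the actual architecture and feature dimension are not given in the text, so the 256-dimensional features, the layer sizes and all names here are assumptions), a PyTorch sketch might look like this:
```python
import torch
import torch.nn as nn

# Hypothetical fourth neural network: maps a scalar photographing age to an
# age feature with the same dimension as the face and identity features.
age_to_feature_net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 256))
optimizer = torch.optim.Adam(age_to_feature_net.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_fourth_network_step(photographing_ages, face_features, identity_features):
    """photographing_ages: (N, 1) tensor; face_features / identity_features: (N, 256) tensors."""
    # The "actual age feature" is taken as the difference between the face
    # feature and the identity feature, as stated in the example above.
    target_age_features = face_features - identity_features
    predicted = age_to_feature_net(photographing_ages)
    loss = mse(predicted, target_age_features)
    optimizer.zero_grad()
    loss.backward()   # reverse transmission adjusts the weights and offsets
    optimizer.step()
    return loss.item()
```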
Further, when the sample library is obtained in the second implementation in S302, as shown in fig. 5, before S302, the image recognition method provided in the embodiment of the present application may further include S303 and S304.
And S303, the image recognition device acquires the photographing age range of the target face image.
The target face image may be input to a fifth neural network for extracting a photographing age range, and an output of the fifth neural network is the photographing age range of the target face image.
The fifth neural network may be a convolutional neural network or a fully-connected neural network or the like. The fifth neural network is trained to converge, and the method for training the fifth neural network is not limited in the present application.
Optionally, the fifth neural network may be trained by the image recognition apparatus, or trained by other devices to converge and then configured in the image recognition apparatus.
As an example, the training process of the fifth neural network may be: and inputting a plurality of face images into a fifth neural network, outputting a probability value of an age range corresponding to the face images by the fifth neural network, and reversely transmitting and adjusting the weight and the bias in the fifth neural network according to an error between the probability value of the age range corresponding to the face images output by the fifth neural network and the actual age range probability (the probability is 1) of the face images until the fifth neural network converges.
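The fifth neural network can be read as a classifier over discrete age ranges. A compact sketch under the same assumptions as before (112×112 RGB input, eight hypothetical age bins, hypothetical names):
```python
import torch
import torch.nn as nn

AGE_BINS = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 60), (60, 70), (70, 120)]

# Hypothetical fifth neural network: outputs one score per photographing age range.
age_range_net = nn.Sequential(
    nn.Flatten(), nn.Linear(112 * 112 * 3, 128), nn.ReLU(), nn.Linear(128, len(AGE_BINS)))
# Training would minimise nn.CrossEntropyLoss() between these outputs and the
# actual age range (whose probability is 1), as described above.

def predict_photographing_age_range(image):
    """image: (3, 112, 112) tensor. Returns the most probable age range."""
    with torch.no_grad():
        probs = torch.softmax(age_range_net(image.unsqueeze(0)), dim=1)
    return AGE_BINS[int(probs.argmax(dim=1))]
```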
S304, the image recognition device screens the face characteristics and the age characteristics of the sample face images with the current ages within the photographing age range from the basic library to serve as a sample library.
S304 may be implemented as: the image recognition device obtains the current age of each sample facial image in the basic library, and the image recognition device screens the facial features and the age features of the sample facial images with the current age within the photographing age range of the target facial image from the basic library according to the current age of each sample facial image in the basic library to serve as the sample library.
In one possible implementation, the image recognition device obtains the current age of each sample face image according to the birth date and the current date of each sample face image in the basic library. The current date may be obtained from the attribute information of the target face image, or may be input by an administrator or the like, which is not limited in the present application.
In another possible implementation manner, the image recognition device obtains the current age of each sample face image according to the photographing age of each sample face image in the basic library, the photographing time of each sample face image, and the current date. The current date may be obtained from the attribute information of the target face image, or may be input by an administrator or the like, which is not limited in the present application.
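Both ways of obtaining the current age can be written down directly; since the tables below record only years, this sketch uses year arithmetic (an assumption — a real implementation might work with full dates).
```python
def current_age_from_birth_date(birth_year, current_year):
    # Possible implementation 1: current date minus date of birth.
    return current_year - birth_year

def current_age_from_photographing_info(photographing_age, photographing_year, current_year):
    # Possible implementation 2: photographing age plus the time elapsed
    # between the photographing time and the current date.
    return photographing_age + (current_year - photographing_year)

print(current_age_from_birth_date(1998, 2019))               # 21, as in Table 2
print(current_age_from_photographing_info(20, 2018, 2019))   # hypothetical values -> 21
```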
Optionally, the different sample facial images in the base library and the sample library may be distinguished by information such as image identification, ID number, or person name.
If the basic library includes the sample face images themselves (rather than their features), S304 may be performed after the images are converted into face features and age features according to the first implementation in S302.
For example, assuming that the photographing age range of the target face image acquired by the image recognition apparatus is 20 to 30 years and the current date is 2019, the basic library is as shown in table 1. Where one row in table 1 represents the relevant content of one sample face image.
TABLE 1
Sample facial image identification | Human face features | Age characteristics | Date of birth (year)
Sample face image 1 | R1 | N1 | 1980
Sample face image 2 | R2 | N2 | 1998
Sample face image 3 | R3 | N3 | 2003
Sample face image 4 | R4 | N4 | 1992
…… | …… | …… | ……
Table 1 is only an example of the basic library, and is not particularly limited.
According to the current date and the date of birth of each sample face image in table 1, the image recognition device may subtract the date of birth of each sample face image from the current date to obtain the current age of each sample face image in the basic library; at this point, the basic library shown in table 1 is converted into the basic library shown in table 2.
TABLE 2
Sample facial image identification | Human face features | Age characteristics | Date of birth (year) | Current age (years)
Sample face image 1 | R1 | N1 | 1980 | 39
Sample face image 2 | R2 | N2 | 1998 | 21
Sample face image 3 | R3 | N3 | 2003 | 15
Sample face image 4 | R4 | N4 | 1992 | 27
…… | …… | …… | …… | ……
Since the photographing age range of the target face image is 20 to 30 years, and the sample face images whose current ages fall within the photographing age range in the basic library illustrated in table 2 are the sample face image 2 and the sample face image 4, the feature contents of the sample face image 2 and the sample face image 4 can be selected from the basic library as a sample library, which can be shown in table 3.
TABLE 3
Sample facial image identification | Human face features | Age characteristics
Sample face image 2 | R2 | N2
Sample face image 4 | R4 | N4
It should be noted that table 3 is only an example of the sample library and is not particularly limited.
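The screening from table 1/table 2 to table 3 can be reproduced with a few lines ("R1" … "N4" are placeholders standing in for the actual feature vectors):
```python
base_library = [
    # (identification, face feature, age feature, year of birth) as in table 1
    ("sample face image 1", "R1", "N1", 1980),
    ("sample face image 2", "R2", "N2", 1998),
    ("sample face image 3", "R3", "N3", 2003),
    ("sample face image 4", "R4", "N4", 1992),
]
current_year = 2019
photographing_age_range = (20, 30)  # of the target face image

sample_library = [
    (ident, face_feature, age_feature)
    for ident, face_feature, age_feature, birth_year in base_library
    if photographing_age_range[0] <= current_year - birth_year <= photographing_age_range[1]
]
print(sample_library)  # keeps sample face images 2 and 4, as in table 3
```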
Further optionally, although the first to fifth neural networks described above for extracting features are independent neural networks, a multi-feature extraction neural network may also be established and trained to convergence to implement the functions of some or all of the first to fifth neural networks and to extract some or all of the identity features, age features, and face features. The training process of the multi-feature extraction neural network may refer to the training processes of the first to fifth neural networks, and is not described in detail in this application.
The following describes an image recognition method provided by the present application, taking a scene of finding a lost child as an example.
Specifically, a public security bureau in a certain city finds a lost child and needs to confirm the child's specific identity. A police officer obtains a face image of the lost child by photographing, and inputs the face image of the lost child into the image recognition device as the target face image. The image recognition device uses the image recognition method provided by the application to compare images in the lost-children information base and select a face image similar to the face image of the lost child.
As shown in fig. 6, the image recognition process in which the image recognition apparatus compares images in the lost-children information base and selects a face image similar to the lost child's face image may include:
the image recognition device acquires the identity characteristic and the photographing age range of the target face image.
The image recognition apparatus acquires the information base (basic library) of lost children, in which the face features, age features, and photographing ages of a plurality of sample face images, such as sample face image 1, sample face image 2, and sample face image 3, are stored.
The image recognition device screens the basic library for the sample face images whose current ages are within the photographing age range of the target face image (sample face image 1 and sample face image 3), and records the face features and age features of sample face image 1 and sample face image 3 as the sample library.
The image recognition device adds the identity feature of the target face image to the age features of the sample face image 1 and the sample face image 3, respectively, to obtain a first feature of the target face image relative to the sample face image 1 and a first feature of the target face image relative to the sample face image 3.
The image recognition device calculates the Euclidean distance between the first feature of the target face image relative to sample face image 1 and the face feature of sample face image 1, and the Euclidean distance between the first feature of the target face image relative to sample face image 3 and the face feature of sample face image 3. The former distance is smaller than a preset threshold and therefore satisfies the first condition (distance less than or equal to the preset threshold), whereas the latter distance is larger than the preset threshold and does not satisfy the first condition. The image recognition device therefore outputs sample face image 1 to the user as the recognition result, that is, the lost child is considered to be the person corresponding to sample face image 1.
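The comparison in this example can be sketched numerically as follows. The feature vectors, the threshold, and the function name are hypothetical values chosen so that sample face image 1 satisfies the first condition and sample face image 3 does not; they are not taken from the application.

```python
import numpy as np

def recognize(target_identity_feat, sample_library, threshold):
    """For each sample face image, form the first feature of the target face
    image relative to that sample (identity feature of the target plus the
    age feature of the sample), compare it with the sample's face feature by
    Euclidean distance, and keep the samples satisfying the first condition."""
    candidates = []
    for entry in sample_library:
        first_feat = np.asarray(target_identity_feat) + np.asarray(entry["age_feat"])
        distance = np.linalg.norm(first_feat - np.asarray(entry["face_feat"]))
        if distance <= threshold:  # first condition
            candidates.append((entry["id"], distance))
    # If several candidates satisfy the first condition, the one with the
    # smallest distance (second condition) can be returned as the result.
    return min(candidates, key=lambda c: c[1]) if candidates else None

# Hypothetical sample library and target identity feature.
sample_library = [
    {"id": "sample_1", "face_feat": [0.9, 1.1], "age_feat": [0.4, 0.5]},
    {"id": "sample_3", "face_feat": [2.5, 0.2], "age_feat": [0.1, 0.3]},
]
print(recognize([0.5, 0.6], sample_library, threshold=0.2))
# -> ('sample_1', 0.0): sample_1 meets the first condition, sample_3 exceeds the threshold
```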
The above description mainly introduces the solutions provided in the embodiments of the present application from the perspective of the operating principle of the image recognition apparatus. It can be understood that, to implement the above functions, the image recognition apparatus includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that, in combination with the examples described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
The embodiments of the present application may divide the image recognition apparatus into functional modules according to the above method. For example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware, or may be implemented in the form of a software functional module. It should be noted that, in the embodiments of the present application, the division of modules is schematic and is merely a logical function division; there may be another division manner in actual implementation.
In the case where each functional module corresponds to one function, as shown in fig. 7, an image recognition apparatus 70 provided in the embodiment of the present application is configured to implement the functions of the image recognition apparatus in the above method. The image recognition apparatus 70 may be a server, may be a device in a server, or may be a device that can be used in cooperation with a server. The image recognition apparatus 70 may alternatively be a chip system. In the embodiment of the present application, the chip system may be composed of a chip, or may include a chip and other discrete devices. As shown in fig. 7, the image recognition apparatus 70 may include: a first acquisition unit 701 and a processing unit 702. The first acquisition unit 701 is configured to perform S301 in fig. 3 or fig. 5; the processing unit 702 is configured to perform S302 in fig. 3 or fig. 5. For all relevant contents of the steps in the above method embodiments, refer to the functional descriptions of the corresponding functional modules; details are not described herein again.
Further, as shown in fig. 7, the image recognition apparatus 70 may further include a second acquisition unit 703. The second acquisition unit 703 is configured to perform S303 in fig. 5.
As shown in fig. 8, an image recognition apparatus 80 is provided in the embodiment of the present application and is configured to implement the functions of the image recognition apparatus in the above method. The image recognition apparatus 80 may be a server, may be a device in a server, or may be a device that can be used in cooperation with a server. The image recognition apparatus 80 may alternatively be a chip system. The image recognition apparatus 80 includes at least one processing module 801, configured to implement the functions of the image recognition apparatus in the method provided by the embodiment of the present application. For example, the processing module 801 may be configured to perform S301 and S302 in fig. 3, or S301, S302, S303, and S304 in fig. 5. For details, refer to the detailed description in the method example; details are not repeated herein.
The image recognition device 80 may also include at least one memory module 802 for storing program instructions and/or data. The memory module 802 is coupled with the processing module 801. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, and may be an electrical, mechanical or other form for information interaction between the devices, units or modules. The processing module 801 may cooperate with the memory module 802. The processing module 801 may execute program instructions stored in the memory module 802. At least one of the at least one memory module may be included in the processing module.
The image recognition apparatus 80 may further include a communication module 803, configured to communicate with other devices through a transmission medium, so that the image recognition apparatus 80 can exchange information with other devices.
When the processing module 801 is a processor, the storage module 802 is a memory, and the communication module 803 is a transceiver, the image recognition apparatus 80 according to the embodiment of the present application, which is shown in fig. 8, may be the image recognition apparatus shown in fig. 2.
As described above, the image recognition apparatus 70 or the image recognition apparatus 80 provided in the embodiments of the present application can be used to implement the functions of the image recognition apparatus in the methods of the above embodiments. For ease of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, refer to the method embodiments of the present application.
Further embodiments of the present application also provide a computer-readable storage medium. The computer-readable storage medium may include computer software instructions that, when run on a computer, cause the computer to perform the steps performed by the image recognition apparatus in the embodiments shown in fig. 3 or fig. 5.
Further embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the steps performed by the image recognition apparatus in the embodiments shown in fig. 3 or fig. 5.
Other embodiments of the present application further provide a chip system. The chip system comprises an interface circuit and a processor; the interface circuit and the processor are interconnected through a line; the interface circuit is used for receiving signals from a memory of the image recognition device and sending the signals to the processor, and the signals comprise computer instructions stored in the memory; when the processor executes the computer instructions, the system-on-chip performs the steps performed by the image recognition apparatus as described in the embodiments of fig. 3 or fig. 5 above.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for instructing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. An image recognition method, comprising:
acquiring identity features of a target face image, wherein the identity features are the features except age features in the recognizable features in the face image;
selecting a first sample facial image in a sample library as an identification result of the target facial image according to the identity characteristics of the target facial image and the sample library;
wherein the sample library comprises facial features and age features of one or more sample facial images; the human face features are recognizable features in the face image, and the age features are used for indicating the photographing age of people in the face image; the first sample facial image is a sample facial image in the sample library in which a first condition is satisfied by a face feature and a first feature of the target facial image relative to the first sample facial image; a first feature of the target facial image relative to the first sample facial image is a sum of an identity feature of the target facial image and an age feature of the first sample facial image.
2. The method of claim 1, further comprising:
acquiring a photographing age range of the target face image;
the sample library includes face features and age features of sample face images of which the current age is within the photographing age range.
3. The method of claim 1 or 2, wherein the first condition comprises:
the distance is less than or equal to a threshold; the distance includes any one of: euclidean distance, mahalanobis distance.
4. The method according to claim 3, wherein if a plurality of candidate sample face images exist in the sample library, the candidate sample face images are sample face images in the sample library in which the face features and the first features of the target face image relative to the candidate sample face images satisfy the first condition; the selecting a first sample facial image in the sample library as the recognition result of the target facial image includes:
selecting the candidate sample face image satisfying a second condition as the first sample face image.
5. The method of claim 4, wherein the second condition comprises a distance minimum.
6. An image recognition apparatus, comprising:
a first acquisition unit configured to acquire an identity feature of a target face image, the identity feature being a feature other than an age feature among recognizable features in the face image;
the processing unit is used for selecting a first sample facial image in the sample library as the identification result of the target facial image according to the identity characteristic of the target facial image and the sample library;
wherein the sample library comprises facial features and age features of one or more sample facial images; the human face features are recognizable features in the face image, and the age features are used for indicating the photographing age of people in the face image; the first sample facial image is a sample facial image in the sample library in which a first condition is satisfied by a face feature and a first feature of the target facial image relative to the first sample facial image; a first feature of the target facial image relative to the first sample facial image is a sum of an identity feature of the target facial image and an age feature of the first sample facial image.
7. The apparatus of claim 6, further comprising:
a second acquisition unit configured to acquire a photographing age range of the target face image;
the sample library includes face features and age features of sample face images of which the current age is within the photographing age range.
8. The apparatus of claim 6 or 7, wherein the first condition comprises:
the distance is less than or equal to a threshold; the distance includes any one of: euclidean distance, mahalanobis distance.
9. The apparatus according to claim 8, wherein if a plurality of candidate sample face images exist in the sample library, the candidate sample face images are sample face images in the sample library in which the face features and the first features of the target face image relative to the candidate sample face images satisfy the first condition; the processing unit is specifically configured to:
selecting the candidate sample face image satisfying a second condition as the first sample face image.
10. The apparatus of claim 9, wherein the second condition comprises a distance minimum.
11. An image recognition apparatus, characterized in that the apparatus comprises: a processor, a memory; the processor is coupled with the memory for storing computer program code comprising computer instructions which, when executed by the apparatus, cause the apparatus to perform the method of image recognition according to any one of claims 1-5.
12. A computer-readable storage medium, comprising: computer software instructions;
the computer software instructions, when executed in a computer, cause the computer to perform the method of image recognition according to any one of claims 1-5.
13. A computer program product, characterized in that it causes a computer to carry out the method of image recognition according to any one of claims 1-5, when the computer program product is run on the computer.
CN201911413426.9A 2019-12-31 2019-12-31 Image identification method and device Pending CN113128278A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911413426.9A CN113128278A (en) 2019-12-31 2019-12-31 Image identification method and device
PCT/CN2020/134675 WO2021135863A1 (en) 2019-12-31 2020-12-08 Image recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911413426.9A CN113128278A (en) 2019-12-31 2019-12-31 Image identification method and device

Publications (1)

Publication Number Publication Date
CN113128278A true CN113128278A (en) 2021-07-16

Family

ID=76686558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911413426.9A Pending CN113128278A (en) 2019-12-31 2019-12-31 Image identification method and device

Country Status (2)

Country Link
CN (1) CN113128278A (en)
WO (1) WO2021135863A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070098225A1 (en) * 2005-10-28 2007-05-03 Piccionelli Gregory A Age verification method for website access
CN101819628B (en) * 2010-04-02 2011-12-28 清华大学 Method for performing face recognition by combining rarefaction of shape characteristic

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680119A (en) * 2013-11-29 2015-06-03 华为技术有限公司 Image identity recognition method, related device and identity recognition system
CN107480575A (en) * 2016-06-07 2017-12-15 深圳市商汤科技有限公司 The training method of model, across age face identification method and corresponding device
US20190138787A1 (en) * 2017-08-11 2019-05-09 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for facial age identification, and electronic device
CN110197099A (en) * 2018-02-26 2019-09-03 腾讯科技(深圳)有限公司 The method and apparatus of across age recognition of face and its model training
CN110008926A (en) * 2019-04-15 2019-07-12 北京字节跳动网络技术有限公司 The method and apparatus at age for identification

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114093008A (en) * 2021-12-01 2022-02-25 支付宝(杭州)信息技术有限公司 Method and device for face recognition
CN114582006A (en) * 2022-05-06 2022-06-03 广东红橙云大数据有限公司 Child age-crossing face recognition method and device, electronic equipment and medium
CN114582006B (en) * 2022-05-06 2022-07-08 广东红橙云大数据有限公司 Child age-crossing face recognition method and device, electronic equipment and medium

Also Published As

Publication number Publication date
WO2021135863A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN110147710B (en) Method and device for processing human face features and storage medium
CN110362677B (en) Text data category identification method and device, storage medium and computer equipment
CN110996123B (en) Video processing method, device, equipment and medium
JP7089045B2 (en) Media processing methods, related equipment and computer programs
CN112801054B (en) Face recognition model processing method, face recognition method and device
CN105303449B (en) The recognition methods and system of social network user based on camera fingerprint characteristic
CN109871762B (en) Face recognition model evaluation method and device
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN112883980B (en) Data processing method and system
CN113642639B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN110245573A (en) A kind of register method, apparatus and terminal device based on recognition of face
JP2019153092A (en) Position identifying device, position identifying method, and computer program
CN114139013A (en) Image searching method and device, electronic equipment and computer readable storage medium
CN113128278A (en) Image identification method and device
CN111177436A (en) Face feature retrieval method, device and equipment
CN110135428B (en) Image segmentation processing method and device
CN112052251B (en) Target data updating method and related device, equipment and storage medium
WO2021027555A1 (en) Face retrieval method and apparatus
CN112257689A (en) Training and recognition method of face recognition model, storage medium and related equipment
KR20220058915A (en) Image detection and related model training methods, apparatus, apparatus, media and programs
CN113743533B (en) Picture clustering method and device and storage medium
CN112487082A (en) Biological feature recognition method and related equipment
CN112633369B (en) Image matching method and device, electronic equipment and computer-readable storage medium
CN115082999A (en) Group photo image person analysis method and device, computer equipment and storage medium
CN114463799A (en) Living body detection method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination