CN112613480A - Face recognition method, face recognition system, electronic equipment and storage medium - Google Patents

Face recognition method, face recognition system, electronic equipment and storage medium Download PDF

Info

Publication number
CN112613480A
Authority
CN
China
Prior art keywords
face recognition
face
sample pair
learning
recognition method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110003723.7A
Other languages
Chinese (zh)
Inventor
吴康乐
赵晨旭
苏安炀
刘向阳
唐大闰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Minglue Artificial Intelligence Group Co Ltd
Original Assignee
Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Minglue Artificial Intelligence Group Co Ltd filed Critical Shanghai Minglue Artificial Intelligence Group Co Ltd
Priority to CN202110003723.7A priority Critical patent/CN112613480A/en
Publication of CN112613480A publication Critical patent/CN112613480A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face recognition method, a face recognition system, electronic equipment and a storage medium. The face recognition method comprises the following steps: a construction step of constructing at least two sample pairs from face images; and a removing step of removing information irrelevant to face recognition from the sample pairs by means of contrastive learning. By supervising identity-irrelevant information in face images through contrastive learning and removing it from the feature space, the invention improves the performance of the face recognition model.

Description

Face recognition method, face recognition system, electronic equipment and storage medium
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a face recognition method, a face recognition system, an electronic device, and a storage medium.
Background
Face recognition refers to a biometric technology that performs identity recognition based on facial feature information of a person. The related series of technologies, also commonly called portrait recognition or facial recognition, uses a camera or video camera to collect images or video streams containing faces, automatically detects and tracks the faces in the images, and then performs recognition on the detected faces. In the Siamese neural network provided in Caffe, the loss function adopted is the contrastive loss, which can effectively handle the relationship between paired data in a Siamese network. The expression of the contrastive loss is as follows:
$$L = \frac{1}{2N}\sum_{n=1}^{N}\left[\, y_n\, d_n^2 + (1 - y_n)\,\max(\mathrm{margin} - d_n,\ 0)^2 \,\right]$$
where d_n = ||a_n - b_n||_2 denotes the Euclidean distance between the features of the two samples, y is the label indicating whether the two samples match (y = 1 means the two samples are similar or matched, y = 0 means they do not match), and margin is a preset threshold. If samples that are actually similar have a large Euclidean distance in feature space, the current model is poor, so the loss increases; conversely, when the samples are dissimilar but their Euclidean distance in feature space is small, the loss value also becomes large. The paper "Momentum Contrast for Unsupervised Visual Representation Learning" proposes an unsupervised contrastive learning method for visual representations: the data set is organized into queries and keys, two encoders extract features from the query images and the key images respectively for contrastive learning, contrastive learning is treated as a dictionary look-up process, the dictionary is maintained as a queue, and momentum is introduced to update the two encoders. The encoder obtained by training can then be applied to downstream tasks. The above methods improve the performance of face recognition through the design of loss functions, but they do not remove identity-irrelevant features (such as face angle information) from the feature space of face images. For example, the paper "Momentum Contrast for Unsupervised Visual Representation Learning" proposes an unsupervised contrastive learning method, but it uses different images produced by data augmentation of the same image as a positive sample pair and images of different pictures as a negative sample pair. In the face recognition task, different images with the same ID belong to the same category, and without supervision it is difficult for model training to converge, so the unsupervised contrastive learning method of that paper cannot be effectively applied to the face recognition problem.
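For concreteness, a minimal PyTorch sketch of this contrastive loss might look as follows; the function name, tensor shapes and default margin value are illustrative assumptions and are not prescribed by the patent.

```python
import torch

def contrastive_loss(a: torch.Tensor, b: torch.Tensor, y: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """Contrastive loss over a batch of N feature pairs (a_n, b_n)."""
    # Euclidean distance d_n = ||a_n - b_n||_2 for each pair, shape (N,)
    d = torch.norm(a - b, p=2, dim=1)
    # y = 1: matching pair, pulled together; y = 0: non-matching pair,
    # pushed apart until the distance exceeds the margin.
    per_pair = y * d.pow(2) + (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return per_pair.mean() / 2  # corresponds to the 1/(2N) factor
```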
Therefore, in view of this situation, the invention provides a face recognition method, a face recognition system, electronic equipment and a storage medium. By using contrastive learning, information irrelevant to identity characteristics in a face image is supervised more effectively from another angle and removed from the feature space of the face image, thereby improving the performance of the face recognition model and providing a brand-new idea for contrastive learning.
Disclosure of Invention
The embodiment of the application provides a face recognition method, a face recognition system, electronic equipment and a storage medium, and aims at least to solve the problem of the influence of subjective factors in the related art.
The invention provides a face recognition method, which comprises the following steps:
a construction step: constructing at least two sample pairs from face images;
a removing step: removing information irrelevant to face recognition from the sample pairs by means of contrastive learning.
In the above face recognition method, the at least two sample pairs include at least a positive sample pair and a negative sample pair.
In the above face recognition method, the construction step includes:
a positive sample pair construction step: organizing face images of the same ID at different angles into the positive sample pairs;
a negative sample pair construction step: organizing face images of different IDs at the same angle into the negative sample pairs.
In the above face recognition method, the removing step includes mutually supervising and training the positive sample pairs and the negative sample pairs using the contrastive learning manner, and removing identity-irrelevant information such as the face angle from the positive sample pairs and the negative sample pairs.
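As an illustration of the construction step, the following Python sketch organizes a labelled image list into positive pairs (same ID, different angles) and negative pairs (different IDs, same angle); the FaceSample structure and the angle field are hypothetical, since the patent does not fix a data format.

```python
import itertools
import random
from dataclasses import dataclass

@dataclass
class FaceSample:
    image_path: str   # path to the aligned face image
    person_id: str    # identity (ID) label
    angle: int        # estimated yaw-angle bucket, e.g. -60, -30, 0, 30, 60

def build_pairs(samples):
    """Return (positive_pairs, negative_pairs) as lists of sample tuples."""
    positives, negatives = [], []
    for s1, s2 in itertools.combinations(samples, 2):
        if s1.person_id == s2.person_id and s1.angle != s2.angle:
            # same identity, different angles -> positive pair
            positives.append((s1, s2))
        elif s1.person_id != s2.person_id and s1.angle == s2.angle:
            # different identities, same angle -> negative pair
            negatives.append((s1, s2))
    random.shuffle(negatives)
    # keep the two sets roughly balanced
    return positives, negatives[:len(positives)]
```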
The present invention further provides a face recognition system, which is adapted to the above face recognition method and comprises:
a construction unit for constructing at least two sample pairs from face images;
a removing unit for removing information irrelevant to face recognition from the sample pairs by means of contrastive learning.
In the above face recognition system, the at least two sample pairs include at least a positive sample pair and a negative sample pair.
In the above face recognition system, the construction unit includes:
a positive sample pair construction module for organizing face images of the same ID at different angles into the positive sample pairs;
a negative sample pair construction module for organizing face images of different IDs at the same angle into the negative sample pairs.
In the above face recognition system, the removing unit mutually supervises and trains the positive sample pairs and the negative sample pairs using the contrastive learning manner, and removes identity-irrelevant information such as the face angle from the positive sample pairs and the negative sample pairs.
Further, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the face recognition method according to any one of the above aspects.
The invention further provides an electronic-device-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement any of the above-described face recognition methods.
Compared with the prior art, the face recognition method, face recognition system, electronic equipment and storage medium provided by the invention use contrastive learning to supervise information irrelevant to identity characteristics in face images more effectively from another angle and remove that information from the feature space of the face image, thereby improving the performance of the face recognition model and providing a brand-new idea for contrastive learning.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a face recognition method according to an embodiment of the present application;
FIG. 2 is a flow chart of positive sample pair contrastive learning according to an embodiment of the present application;
FIG. 3 is a flow chart of negative sample pair contrastive learning according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a face recognition system according to the present invention;
fig. 5 is a frame diagram of an electronic device according to an embodiment of the present application.
Wherein the reference numerals are:
a construction unit: 31;
a rejection unit: 32, a first step of removing the first layer;
positive sample pair construction module: 311;
a negative sample pair construction module: 312;
81: a processor;
82: a memory;
83: a communication interface;
80: a bus.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The present invention is based on face recognition with alternately supervised contrastive learning, which is briefly introduced below.
Face recognition refers to a biometric technology that performs identity recognition based on facial feature information of a person. The related series of technologies, also commonly called portrait recognition or facial recognition, uses a camera or video camera to collect images or video streams containing faces, automatically detects and tracks the faces in the images, and then performs recognition on the detected faces. Research on face recognition systems began in the 1960s, advanced after the 1980s with the development of computer technology and optical imaging technology, and truly entered the primary application stage in the late 1990s, with the technology realized mainly in the United States, Germany and Japan. The key to the success of a face recognition system is whether it possesses a cutting-edge core algorithm and whether its recognition results reach a practical recognition rate and recognition speed. A face recognition system integrates various specialized technologies such as artificial intelligence, machine recognition, machine learning, model theory, expert systems and video image processing, and also needs to combine the theory and implementation of intermediate value processing; it is the latest application of biometric recognition, and the implementation of its core technology demonstrates the transition from weak artificial intelligence to strong artificial intelligence.
The traditional face recognition technology is mainly based on face recognition from visible-light images, which is the familiar recognition mode and has been developed for over 30 years. However, this approach has defects that are difficult to overcome: in particular, when the ambient light changes, the recognition effect drops sharply and cannot meet the needs of a practical system. Schemes for solving the illumination problem include three-dimensional face recognition and thermal-imaging face recognition, but these two technologies are still far from mature and their recognition effect is not satisfactory. One rapidly developing solution is multi-light-source face recognition based on active near-infrared images. It can overcome the influence of light changes and has excellent recognition performance, with overall system performance exceeding three-dimensional face recognition in terms of accuracy, stability and speed. This technology has developed rapidly in the past two or three years, and face recognition technology has gradually become practical.
Like other biological characteristics of the human body (fingerprints, irises, etc.), the human face is innate, and its uniqueness and the good property of not being easily duplicated provide the necessary premise for identity recognition. Compared with other types of biometric recognition, face recognition has the following characteristics. Non-mandatory: the user does not need to cooperate specially with the face acquisition equipment, and face images can be acquired almost without the user being aware of it; the sampling mode is not compulsory. Non-contact: the face image can be acquired without the user directly touching the equipment. Concurrency: multiple faces can be sorted, judged and recognized in a practical application scenario. In addition, it conforms to the visual characteristic of "recognizing people by their appearance", and has the advantages of simple operation, intuitive results and good concealment. A face recognition system mainly consists of four parts: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition. Face image acquisition: different face images can be collected through the camera lens; for example, static images, dynamic images, different positions and different expressions can all be well acquired. When the user is within the shooting range of the acquisition equipment, the equipment automatically searches for and captures the user's face image. Face detection: in practice, face detection is mainly used for the preprocessing of face recognition, that is, accurately calibrating the position and size of the face in the image. A face image contains abundant pattern features, such as histogram features, color features, template features, structural features and Haar features. Face detection extracts the useful information among these and uses such features to detect faces. The mainstream face detection method adopts the Adaboost learning algorithm based on these features; Adaboost is a classification method that combines weak classifiers into a new strong classifier. In the face detection process, the Adaboost algorithm is used to select the rectangular features (weak classifiers) that best represent the face, the weak classifiers are combined into a strong classifier by weighted voting, and several strong classifiers obtained by training are then connected in series to form a cascade-structured classifier, which effectively improves the detection speed (see the sketch following this paragraph). Face image preprocessing: image preprocessing for the face is the process of processing the image based on the face detection result so that it can ultimately serve feature extraction. The original image acquired by the system, limited by various conditions and subject to random interference, usually cannot be used directly and must first undergo image preprocessing such as gray-scale correction and noise filtering.
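As a concrete illustration of the cascade detector described above, the following sketch uses OpenCV's pretrained Haar cascade; the image file name and the detection parameters are illustrative assumptions, not values prescribed by the patent.

```python
import cv2

# Load OpenCV's pretrained Haar cascade (Adaboost-selected Haar features
# arranged as a cascade of boosted classifiers).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, w, h) rectangle calibrating position and size.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```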
For the face image, the preprocessing process mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening of the face image. Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel statistical features, face-image transform-coefficient features, face-image algebraic features and the like. Face feature extraction is carried out on certain features of the face; it is also called face characterization and is the process of modeling the features of a face. Methods for extracting face features fall into two main categories: one is knowledge-based characterization, and the other is characterization based on algebraic features or statistical learning. Knowledge-based characterization mainly derives feature data helpful for face classification from the shape description of facial organs and the distance characteristics between them; its feature components usually include the Euclidean distance, curvature and angle between feature points. The face is composed of parts such as the eyes, nose, mouth and chin; geometric descriptions of these parts and of their structural relationships can serve as important features for recognizing faces, and these features are called geometric features. Knowledge-based face characterization mainly includes geometric-feature-based methods and template-matching methods. Face image matching and recognition: the extracted feature data of the face image is searched against and matched with the feature templates stored in the database; a threshold is set, and when the similarity exceeds this threshold, the matching result is output. Face recognition compares the face features to be recognized with the stored face feature templates and judges the identity of the face according to the degree of similarity. This process falls into two categories: one is verification, a one-to-one image comparison process; the other is identification, a one-to-many image matching and comparison process.
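To make the matching step concrete, here is a minimal sketch of threshold-based matching against a stored template gallery using cosine similarity; the threshold value, gallery format and function names are illustrative assumptions rather than elements of the patent.

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def match_face(query_feature: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Compare a query feature against stored templates (identity -> feature).

    Returns (best identity, score) for 1:N identification, or (None, score)
    if no similarity exceeds the threshold; 1:1 verification is the special
    case of a single comparison against one template.
    """
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(query_feature, template)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```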
Face recognition requires the accumulation of large amounts of collected face image data, such as neural-network face recognition datasets, the ORL face database, the face recognition database of the MIT Center for Biological and Computational Learning, and the face recognition data of the Department of Computer Science and Electronic Engineering at the University of Essex, for verifying algorithms and continuously improving recognition accuracy. Existing face recognition systems can obtain satisfactory results when the user is cooperative and the acquisition conditions are ideal. However, when the user is uncooperative or the acquisition conditions are not ideal, the recognition rate of existing systems drops sharply. For example, during face comparison, the presented face may differ from the face stored in the system, for instance after shaving a beard, changing a hairstyle, putting on glasses or changing expression, any of which may cause the comparison to fail.
Face recognition is considered one of the most difficult research topics in the field of biometric recognition and even in the field of artificial intelligence. The difficulty of face recognition stems mainly from the characteristics of the face as a biological feature. Faces of different individuals do not differ greatly: all faces are similar in structure, and even the shapes of facial organs are similar. Such characteristics are advantageous for localizing faces but disadvantageous for distinguishing individuals by their faces. Moreover, the shape of the face is unstable: a person can produce many expressions by changing the face, and the visual appearance of a face differs greatly at different observation angles; in addition, face recognition is affected by many factors such as illumination conditions (for example day and night, indoor and outdoor), occlusions of the face (for example masks, sunglasses, hair and beards), and age. In face recognition, the first category of variation should be magnified as the criterion for distinguishing individuals, while the second category should be eliminated, since it may represent the same individual. The first category is commonly referred to as inter-class variation (inter-class difference) and the second as intra-class variation (intra-class difference). For faces, intra-class variation is often larger than inter-class variation, which makes it extremely difficult to distinguish individuals by inter-class variation when interference from intra-class variation is present. Many other elements in a face image affect the identity characteristics of the face, such as angle information and lighting in the image, and this information is irrelevant to the identity characteristics of the face. If such information can be stripped from the identity characteristics of the face by means of contrast, the performance of face recognition can be improved significantly. Some recent contrastive learning methods explore unsupervised, semi-supervised or self-supervised scenarios. For example, patent CN111339988A relates to a video face recognition method based on a dynamic-margin loss function and probabilistic features, which includes the following steps: step S1: training a recognition network on a face recognition training set; step S2: using the trained recognition network as the feature extraction module and training an uncertainty module on the same training set; step S3: aggregating the input video feature set, using the learned uncertainty as the importance of the features, to obtain aggregated features; step S4: comparing the aggregated features using the mutual likelihood score to complete the final recognition. That method can effectively recognize faces in video.
As another example, patent CN111274947A discloses a multi-task, multi-thread face recognition method, system and storage medium. The method includes: performing face detection on an acquired picture to obtain a face-frame image; obtaining face key-point coordinates through feature-point detection and localization; cropping to obtain a set of facial feature regions; inputting the facial feature regions into their corresponding feature neural networks for feature extraction; fusing the extracted feature vectors to obtain an overall face feature vector; and computing the face recognition result using cosine similarity. That invention distinguishes different feature regions of the face, inputs them into respective face recognition networks for feature extraction, fuses the features and performs face recognition. Because the method uses partial regions of the face and does not rely only on the whole face, it still achieves a good recognition effect when the face is occluded, but it does not remove information other than the identity characteristics of the face from the image. For the face recognition task, different images with the same ID belong to the same category, so contrastive learning can improve the performance of face recognition more simply and effectively.
The invention provides a face recognition method, a face recognition system, electronic equipment and a storage medium, which use contrastive learning to supervise information irrelevant to identity characteristics in face images more effectively from another angle and remove that information from the feature space of the face image, thereby improving the performance of the face recognition model and providing a brand-new idea for contrastive learning. Embodiments of the present application are described below taking face recognition as an example.
Example one
This embodiment provides a face recognition method. Referring to fig. 1 to fig. 3, fig. 1 is a flowchart of a face recognition method according to an embodiment of the present application, fig. 2 is a flowchart of positive sample pair contrastive learning according to an embodiment of the present application, and fig. 3 is a flowchart of negative sample pair contrastive learning according to an embodiment of the present application. As shown in the figures, the face recognition method includes the following steps:
a construction step S1: constructing at least two sample pairs from face images;
a removing step S2: removing information irrelevant to face recognition from the sample pairs by means of contrastive learning.
In an embodiment, the at least two sample pairs include at least a positive sample pair and a negative sample pair.
In an embodiment, the construction step S1 includes:
a positive sample pair construction step S11: organizing face images of the same ID at different angles into the positive sample pairs;
a negative sample pair construction step S12: organizing face images of different IDs at the same angle into the negative sample pairs.
In a specific implementation, in the construction step S1, face images of the same ID may have very different angles; using this characteristic, positive sample pairs can be organized. Likewise, face images of different IDs at the same angle can form negative sample pairs, in which the identity information of the faces differs while the face angle and other information are kept as consistent as possible.
In an embodiment, the removing step S2 includes mutually supervising and training the positive sample pairs and the negative sample pairs using the contrastive learning manner, and removing identity-irrelevant information such as the face angle from the positive sample pairs and the negative sample pairs.
In a specific implementation, the contrastive learning flow for positive and negative sample pairs is shown in fig. 2 and fig. 3. In positive sample pair contrastive learning, unlike common self-supervised and unsupervised contrastive learning schemes, the two images of a positive pair are not data augmentations of a single image but are taken from face images of the same ID at different angles. Although the angles differ, the images still belong to the same ID; the different angles supervise each other through contrastive learning so that the similarity of the final feature matrices of the two networks is as high as possible, and irrelevant information such as the angle is thereby removed. In negative sample pair contrastive learning, likewise unlike common self-supervised and unsupervised schemes, the two images of a negative pair are not data augmentations of a single image but are taken from face images of different IDs at the same angle. Even though the face angle information is consistent, the identity information of the faces is inconsistent; the two images supervise each other through contrastive learning so that the similarity of the final feature matrices of the two networks is as low as possible, and identity-irrelevant information such as the face angle is thereby removed. A sketch of one such training step is given below.
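A minimal sketch of one training step under the scheme of figs. 2 and 3, assuming two PyTorch encoder networks whose output features are compared with cosine similarity; the encoder architectures, the optimizer, the margin value and the loss form are assumptions for illustration, not a definitive implementation of the patent.

```python
import torch
import torch.nn.functional as F

def contrastive_step(encoder_a, encoder_b, pos_pair, neg_pair, optimizer, margin=0.2):
    """One mutual-supervision step: pull positive-pair features together
    (same ID, different angles) and push negative-pair features apart
    (different IDs, same angle), so angle information is suppressed."""
    (xp1, xp2), (xn1, xn2) = pos_pair, neg_pair

    # The two branches supervise each other: each image of a pair goes
    # through its own encoder (both encoders' parameters are in `optimizer`).
    fp1, fp2 = encoder_a(xp1), encoder_b(xp2)
    fn1, fn2 = encoder_a(xn1), encoder_b(xn2)

    # Maximize similarity for positive pairs, minimize it for negative pairs.
    pos_sim = F.cosine_similarity(fp1, fp2, dim=1)
    neg_sim = F.cosine_similarity(fn1, fn2, dim=1)
    loss = (1 - pos_sim).mean() + torch.clamp(neg_sim - margin, min=0).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```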
In this way, the invention provides a face recognition method, a face recognition system, electronic equipment and a storage medium that use contrastive learning to supervise information irrelevant to identity characteristics in face images more effectively from another angle and remove that information from the feature space of the face image, thereby improving the performance of the face recognition model and providing a brand-new idea for contrastive learning.
Example two
Referring to fig. 4, fig. 4 is a schematic structural diagram of a face recognition system according to the present invention. As shown in fig. 4, the face recognition system of the present invention is suitable for the face recognition method, and includes:
the construction unit 31, configured to construct at least two sample pairs from face images;
the removing unit 32, configured to remove information irrelevant to face recognition from the sample pairs by means of contrastive learning.
In this embodiment, the at least two sample pairs include at least a positive sample pair and a negative sample pair.
In this embodiment, the building unit 31 includes:
the positive sample pair construction module 311, configured to organize face images of the same ID at different angles into the positive sample pairs;
the negative sample pair construction module 312, configured to organize face images of different IDs at the same angle into the negative sample pairs.
In this embodiment, the removing unit 32 mutually supervises and trains the positive sample pairs and the negative sample pairs using the contrastive learning manner, and removes identity-irrelevant information such as the face angle from the positive sample pairs and the negative sample pairs.
EXAMPLE III
Referring to fig. 5, this embodiment discloses a specific implementation of an electronic device. The electronic device may include a processor 81 and a memory 82 storing computer program instructions.
Specifically, the processor 81 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 82 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 82 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 82 may include removable or non-removable (or fixed) media, where appropriate. The memory 82 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 82 is a Non-Volatile memory. In particular embodiments, memory 82 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), an Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPM DRAM), an Extended Data Output DRAM (EDO DRAM), a Synchronous DRAM (SDRAM), and the like.
The memory 82 may be used to store or cache various data files for processing and/or communication use, as well as possible computer program instructions executed by the processor 81.
The processor 81 implements any of the face recognition methods in the above embodiments by reading and executing computer program instructions stored in the memory 82.
In some of these embodiments, the electronic device may also include a communication interface 83 and a bus 80. As shown in fig. 5, the processor 81, the memory 82, and the communication interface 83 are connected via the bus 80 to complete communication therebetween.
The communication interface 83 is used for implementing communication between modules, devices, units and/or equipment in the embodiment of the present application. The communication port 83 may also be implemented with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
The bus 80 includes hardware, software, or both, coupling the components of the electronic device to one another. Bus 80 includes, but is not limited to, at least one of the following: a Data Bus, an Address Bus, a Control Bus, an Expansion Bus, and a Local Bus. By way of example, and not limitation, Bus 80 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) Bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) Bus, an InfiniBand interconnect, a Low Pin Count (LPC) Bus, a memory bus, a Micro Channel Architecture (MCA) Bus, a Peripheral Component Interconnect (PCI) Bus, a PCI-Express Bus, a Serial Advanced Technology Attachment (SATA) Bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 80 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The electronic device may be connected to a face recognition system to implement the methods described in connection with fig. 1-3.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A face recognition method, comprising:
a construction step: constructing at least two sample pairs from face images;
a removing step: removing information irrelevant to face recognition from the sample pairs by means of contrastive learning.
2. The face recognition method of claim 1, wherein the at least two sample pairs comprise at least a positive sample pair and a negative sample pair.
3. The face recognition method of claim 2, wherein the construction step comprises:
a positive sample pair construction step: organizing face images of the same ID at different angles into the positive sample pairs;
a negative sample pair construction step: organizing face images of different IDs at the same angle into the negative sample pairs.
4. The face recognition method according to claim 3, wherein the removing step comprises mutually supervising and training the positive sample pair and the negative sample pair using the contrastive learning manner, and removing the information irrelevant to face recognition, such as face angle information, from the positive sample pair and the negative sample pair.
5. A face recognition system adapted to the face recognition method of any one of claims 1 to 4, the face recognition system comprising:
a construction unit for constructing at least two sample pairs from face images;
a removing unit for removing information irrelevant to face recognition from the sample pairs by means of contrastive learning.
6. The face recognition system of claim 5, wherein the at least two sample pairs comprise at least a positive sample pair and a negative sample pair.
7. The face recognition system of claim 6, wherein the construction unit comprises:
a positive sample pair construction module for organizing face images of the same ID at different angles into the positive sample pairs;
a negative sample pair construction module for organizing face images of different IDs at the same angle into the negative sample pairs.
8. The face recognition system according to claim 7, wherein the removing unit mutually supervises and trains the positive sample pair and the negative sample pair using the contrastive learning manner, and removes the information irrelevant to face recognition, such as face angle information, from the positive sample pair and the negative sample pair.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the face recognition method of any one of claims 1 to 4 when executing the computer program.
10. An electronic-device-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the face recognition method of any one of claims 1 to 4.
CN202110003723.7A 2021-01-04 2021-01-04 Face recognition method, face recognition system, electronic equipment and storage medium Pending CN112613480A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110003723.7A CN112613480A (en) 2021-01-04 2021-01-04 Face recognition method, face recognition system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110003723.7A CN112613480A (en) 2021-01-04 2021-01-04 Face recognition method, face recognition system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112613480A true CN112613480A (en) 2021-04-06

Family

ID=75253990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110003723.7A Pending CN112613480A (en) 2021-01-04 2021-01-04 Face recognition method, face recognition system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112613480A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113210264A (en) * 2021-05-19 2021-08-06 江苏鑫源烟草薄片有限公司 Method and device for removing tobacco impurities
CN113239798A (en) * 2021-05-12 2021-08-10 成都珊瑚鱼科技有限公司 Three-dimensional head posture estimation method based on twin neural network, storage medium and terminal
CN114267038A (en) * 2022-03-03 2022-04-01 南京甄视智能科技有限公司 Nameplate type identification method and device, storage medium and equipment
CN114565970A (en) * 2022-01-27 2022-05-31 内蒙古工业大学 High-precision multi-angle behavior recognition method based on deep learning

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503687A (en) * 2016-11-09 2017-03-15 合肥工业大学 The monitor video system for identifying figures of fusion face multi-angle feature and its method
CN109002790A (en) * 2018-07-11 2018-12-14 广州视源电子科技股份有限公司 Face recognition method, device, equipment and storage medium
CN109389030A (en) * 2018-08-23 2019-02-26 平安科技(深圳)有限公司 Facial feature points detection method, apparatus, computer equipment and storage medium
KR20190142553A (en) * 2018-06-18 2019-12-27 주식회사 쓰임기술 Tracking method and system using a database of a person's faces
KR102063492B1 (en) * 2018-11-30 2020-01-08 아주대학교산학협력단 The Method and System for Filtering the Obstacle Data in Machine Learning of Medical Images
CN110717481A (en) * 2019-12-12 2020-01-21 浙江鹏信信息科技股份有限公司 Method for realizing face detection by using cascaded convolutional neural network
CN111062308A (en) * 2019-12-12 2020-04-24 国网新疆电力有限公司信息通信公司 Face recognition method based on sparse expression and neural network
CN111368764A (en) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 False video detection method based on computer vision and deep learning algorithm
CN112101150A (en) * 2020-09-01 2020-12-18 北京航空航天大学 Multi-feature fusion pedestrian re-identification method based on orientation constraint

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503687A (en) * 2016-11-09 2017-03-15 合肥工业大学 The monitor video system for identifying figures of fusion face multi-angle feature and its method
KR20190142553A (en) * 2018-06-18 2019-12-27 주식회사 쓰임기술 Tracking method and system using a database of a person's faces
CN109002790A (en) * 2018-07-11 2018-12-14 广州视源电子科技股份有限公司 Face recognition method, device, equipment and storage medium
CN109389030A (en) * 2018-08-23 2019-02-26 平安科技(深圳)有限公司 Facial feature points detection method, apparatus, computer equipment and storage medium
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium
KR102063492B1 (en) * 2018-11-30 2020-01-08 아주대학교산학협력단 The Method and System for Filtering the Obstacle Data in Machine Learning of Medical Images
CN110717481A (en) * 2019-12-12 2020-01-21 浙江鹏信信息科技股份有限公司 Method for realizing face detection by using cascaded convolutional neural network
CN111062308A (en) * 2019-12-12 2020-04-24 国网新疆电力有限公司信息通信公司 Face recognition method based on sparse expression and neural network
CN111368764A (en) * 2020-03-09 2020-07-03 零秩科技(深圳)有限公司 False video detection method based on computer vision and deep learning algorithm
CN112101150A (en) * 2020-09-01 2020-12-18 北京航空航天大学 Multi-feature fusion pedestrian re-identification method based on orientation constraint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kaiming He, et al., "Momentum Contrast for Unsupervised Visual Representation Learning", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23 March 2020 (2020-03-23), pages 1-12 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239798A (en) * 2021-05-12 2021-08-10 成都珊瑚鱼科技有限公司 Three-dimensional head posture estimation method based on twin neural network, storage medium and terminal
CN113210264A (en) * 2021-05-19 2021-08-06 江苏鑫源烟草薄片有限公司 Method and device for removing tobacco impurities
CN113210264B (en) * 2021-05-19 2023-09-05 江苏鑫源烟草薄片有限公司 Tobacco sundry removing method and device
CN114565970A (en) * 2022-01-27 2022-05-31 内蒙古工业大学 High-precision multi-angle behavior recognition method based on deep learning
CN114267038A (en) * 2022-03-03 2022-04-01 南京甄视智能科技有限公司 Nameplate type identification method and device, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN111325115B (en) Cross-modal countervailing pedestrian re-identification method and system with triple constraint loss
CN109508663B (en) Pedestrian re-identification method based on multi-level supervision network
CN112613480A (en) Face recognition method, face recognition system, electronic equipment and storage medium
CN110598535B (en) Face recognition analysis method used in monitoring video data
Shao et al. Genealogical face recognition based on ub kinface database
CN109284675B (en) User identification method, device and equipment
CN110263774A (en) A kind of method for detecting human face
CN111523462A (en) Video sequence list situation recognition system and method based on self-attention enhanced CNN
CN110348416A (en) Multi-task face recognition method based on multi-scale feature fusion convolutional neural network
CN106650574A (en) Face identification method based on PCANet
Mady et al. Efficient real time attendance system based on face detection case study “MEDIU staff”
Xia et al. Face occlusion detection using deep convolutional neural networks
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
CN112749605A (en) Identity recognition method, system and equipment
CN110443577A (en) A kind of campus attendance checking system based on recognition of face
Shaker et al. Human Gender and Age Detection Based on Attributes of Face.
CN110969101A (en) Face detection and tracking method based on HOG and feature descriptor
CN111523461A (en) Expression recognition system and method based on enhanced CNN and cross-layer LSTM
Geetha et al. 3D face recognition using Hadoop
Mukherjee et al. Iris recognition using wavelet features and various distance based classification
Mau et al. Video face matching using subset selection and clustering of probabilistic multi-region histograms
CN113920573B (en) Face change decoupling relativity relationship verification method based on counterstudy
Zuobin et al. Effective feature fusion for pattern classification based on intra-class and extra-class discriminative correlation analysis
Delna et al. Sclera vein identification in real time using single board computer

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination