CN111931628B - Training method and device of face recognition model and related equipment


Info

Publication number
CN111931628B
Authority
CN
China
Prior art keywords
model
target
face recognition
occluded
mask
Prior art date
Legal status
Active
Application number
CN202010773454.8A
Other languages
Chinese (zh)
Other versions
CN111931628A (en)
Inventor
骆雄辉
滕一帆
孙傲冰
高小宏
姜晓萌
胡静
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010773454.8A
Publication of CN111931628A
Application granted
Publication of CN111931628B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The present disclosure provides a training method and apparatus for a face recognition model, an electronic device, and a computer-readable storage medium, where the method includes: acquiring an unobstructed facial image of a first object; training a feature extraction model using the unobstructed facial image of the first object; acquiring an occluded facial image of a second object, the first object comprising the second object; extracting features of the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object; and training a target face recognition model according to the occluded facial features, wherein the target face recognition model is used for recognizing the occluded face image of the second object. The target face recognition model provided by the embodiments of the present disclosure can recognize the second object from its occluded face image.

Description

Training method and device of face recognition model and related equipment
Technical Field
The present disclosure relates to the technical field of artificial intelligence, and in particular to a training method and apparatus for a face recognition model, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of society, face recognition has found wide application in many fields such as security and traffic inspection.
In the related art, face recognition technology generally recognizes unobstructed facial images of a target crowd to determine an object to be recognized in that crowd. However, with the outbreak of epidemics (e.g., the novel coronavirus), people wear masks in an increasing number of situations. Because the mask occludes the face, face recognition in the related art cannot be performed.
Therefore, a method for recognizing the facial image of a mask wearer is of great importance to the technical field of face recognition.
It should be noted that the information disclosed in the foregoing background section is only for enhancing understanding of the background of the present disclosure.
Disclosure of Invention
The embodiments of the present disclosure provide a training method and apparatus for a face recognition model, an electronic device, and a computer-readable storage medium, and can provide a target face recognition model capable of recognizing an occluded face image.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
The embodiment of the disclosure provides a training method of a face recognition model, which comprises the following steps: acquiring an unobstructed facial image of a first object; training a feature extraction model using the unobstructed facial image of the first object; acquiring an occluded facial image of a second object, the first object comprising the second object; extracting features of the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object; and training a target face recognition model according to the occluded facial features, wherein the target face recognition model is used for recognizing the occluded face image of the second object.
The embodiment of the disclosure provides a training device of a face recognition model, which may include: the device comprises a non-occlusion facial image acquisition module, a feature extraction model training module, an occlusion facial image acquisition module, an occlusion facial feature extraction module and a target face recognition model determining module.
Wherein the unobstructed facial image acquisition module may be configured to acquire an unobstructed facial image of the first object. The feature extraction model training module may be configured to train a feature extraction model using an unobstructed facial image of the first object. The occluded facial image acquisition module may be configured to acquire an occluded facial image of a second object, the first object comprising the second object. The occluded facial feature extraction module may be configured to perform feature extraction on the occluded facial image of the second object by the feature extraction model to obtain occluded facial features of the second object. The target face recognition model determination module may be configured to train a target face recognition model based on the occluded facial features, the target face recognition model being configured to identify an occluded face image of the second object.
In some embodiments, the target face recognition model determination module may include: the face recognition system comprises a plurality of model training units, a face recognition unit, a recognition accuracy rate determining unit and a target face recognition model determining unit.
Wherein the plurality of model training units may be configured to train at least one target machine learning model according to the occluded facial features of the second object. The face recognition unit may be configured to perform face recognition on the occluded face image of the second object by the trained at least one target machine learning model. The recognition accuracy determination unit may be configured to obtain a recognition accuracy of the at least one target machine learning model for the occluded face image of the second object. The target face recognition model determination unit may be configured to determine the target face recognition model in the at least one target machine learning model according to the recognition accuracy.
In some embodiments, the target face recognition model determination module may include: a first display unit, a file importing unit, a second display unit and a target machine learning model determining unit.
The first display unit may be configured to display a file import interface. The file importing unit may be configured to import the occluded facial feature of the second object through the file importing interface. The second display unit may be configured to display a model selection interface. The target machine learning model determination unit may be configured to determine a target machine learning model at the model selection interface based on the second object, so as to train the target machine learning model based on the occluded facial feature to obtain the target face recognition model.
In some embodiments, the model selection interface includes a two-class model and a multi-class model.
In some embodiments, the target machine learning model determination unit may include: the classification model determination subunit and the multi-classification model determination subunit.
Wherein the classification model determination subunit may be configured to select the two-class model at the model selection interface to determine the target machine learning model if the second object comprises one object. The multi-classification model determination subunit may be configured to select the multi-class model at the model selection interface to determine the target machine learning model if the second object includes at least two objects.
In some embodiments, the feature extraction model is trained in a first device and the target face recognition model is trained in a second device.
In some embodiments, the occluded facial feature extraction module may include: a storage unit and a transmission unit.
Wherein the storage unit may be configured to store the occluded facial feature of the second object in a target file. The transmitting unit may be configured to transmit the target file to the second device so as to complete training of the target face recognition model.
In some embodiments, the occluded facial image is a mask-worn facial image.
In some embodiments, the training device of the face recognition model may further include: a mask-wearing facial image acquisition module and a mask-wearing face recognition module.
The mask-wearing facial image acquisition module may be configured to acquire the mask-wearing facial image of each mask-wearing subject in the target scene through an image acquisition device. The mask-wearing face recognition module may be configured to process the mask-wearing facial images of the respective mask-wearing subjects by the target face recognition model to determine the mask-wearing facial image of the second subject from among them.
In some embodiments, the target scene is a card-punching (clock-in) scene.
In some embodiments, the training device of the face recognition model may further include a card-punching recording module.
The card-punching recording module may be configured to record the card-punching information of the second subject after the mask-wearing facial image of the second subject is determined, so as to complete the clock-in.
In some embodiments, the mask-wearing facial image acquisition module may comprise: an image acquisition unit and a mask-wearing facial image obtaining unit.
The image acquisition unit may be configured to perform image acquisition on each mask-wearing subject in the target scene through the image acquisition device, so as to obtain a mask-wearing image of each mask-wearing subject. The mask-wearing facial image obtaining unit may be configured to perform face alignment processing on the mask-wearing images of the respective mask-wearing subjects to determine their mask-wearing facial images.
The embodiment of the disclosure provides an electronic device, which comprises: one or more processors; and a storage device for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of training a face recognition model as described in any of the preceding claims.
The disclosed embodiments provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of training a face recognition model as described in any of the above.
Embodiments of the present disclosure propose a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the training method of the face recognition model provided above.
According to the training method and apparatus for the face recognition model, the electronic device, and the computer-readable storage medium provided by the embodiments of the present disclosure, on one hand, a feature extraction model capable of extracting the facial features of a first object is trained on the unobstructed facial images of the first object; on the other hand, the occluded facial features of a second object (included in the first object) are extracted from the occluded facial image of the second object by the feature extraction model; finally, a target face recognition model that can recognize the occluded face image of the second object is trained on those occluded facial features. The target face recognition model provided by the embodiments of the present disclosure can thus recognize the occluded face image of the second object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely examples of the present disclosure and other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a training method of a face recognition model or a training apparatus of a face recognition model applied to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a computer system of a training apparatus applied to a face recognition model according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a training method of a face recognition model according to an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating unobstructed facial image acquisition according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating an occluded facial image acquisition according to an exemplary embodiment.
Fig. 6 is a diagram illustrating a method of training a target face recognition model according to an example embodiment.
Fig. 7-9 are diagrams illustrating a target machine learning model configuration interface, according to an example embodiment.
Fig. 10 is a flowchart of step S5 of fig. 3 in an exemplary embodiment.
Fig. 11 is a flowchart illustrating a face recognition method according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating a training apparatus of a face recognition model according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In the present specification, the terms "a," "an," "the," "said" and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc., in addition to the listed elements/components/etc.; the terms "first," "second," and "third," etc. are used merely as labels, and do not limit the number of their objects.
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a training method of a face recognition model or a training apparatus of a face recognition model, which may be applied to embodiments of the present disclosure.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart homes, etc.
The server 105 may be a server providing various services, such as a background management server providing support for devices operated by users with the terminal devices 101, 102, 103. The background management server can analyze and process the received data such as the request and the like, and feed back the processing result to the terminal equipment.
The server 105 may, for example, obtain an unobstructed facial image of the first object; the server 105 may, for example, train a feature extraction model using the unobstructed facial image of the first object; the server 105 may, for example, obtain an occluded facial image of a second object, the first object comprising the second object; the server 105 may, for example, perform feature extraction on the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object; and the server 105 may, for example, train a target face recognition model according to the occluded facial features, the target face recognition model being used to recognize the occluded face image of the second object.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative, and that the server 105 may be a server of one entity, or may be composed of a plurality of servers, and may have any number of terminal devices, networks and servers according to actual needs.
Referring now to FIG. 2, a schematic diagram of a computer system 200 suitable for use in implementing an embodiment of the present application is shown. The terminal device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data required for the operation of the system 200 are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output portion 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 208 including a hard disk or the like; and a communication section 209 including a network interface card such as a LAN card, a modem, and the like. The communication section 209 performs communication processing via a network such as the internet. The drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 210 as needed, so that a computer program read therefrom is installed into the storage section 208 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 209, and/or installed from the removable medium 211. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 201.
The computer readable storage medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units and/or sub-units involved in the embodiments of the present application may be implemented in software or in hardware. The described modules and/or units and/or sub-units may also be provided in a processor, e.g. may be described as: a processor includes a transmitting unit, an acquiring unit, a determining unit, and a first processing unit. Wherein the names of the modules and/or units and/or sub-units do not in some cases constitute a limitation of the modules and/or units and/or sub-units themselves.
As another aspect, the present application also provides a computer-readable storage medium that may be contained in the apparatus described in the above embodiments, or may exist alone without being fitted into the device. The computer-readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: acquiring an unobstructed facial image of a first object; training a feature extraction model using the unobstructed facial image of the first object; acquiring an occluded facial image of a second object, the first object comprising the second object; extracting features of the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object; and training a target face recognition model according to the occluded facial features, wherein the target face recognition model is used for recognizing the occluded face image of the second object.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence, i.e., research on the design principles and implementation methods of various intelligent machines, enables machines to have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, involving both hardware-level and software-level technologies. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Among them, machine Learning (ML) is a multi-domain interdisciplinary, and involves multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, etc. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like.
The technical scheme provided by the embodiment of the application relates to the technology of artificial intelligence such as machine learning, and is specifically described by the following embodiment.
Fig. 3 is a flowchart illustrating a training method of a face recognition model according to an exemplary embodiment. The method provided in the embodiments of the present disclosure may be processed by any electronic device having computing processing capability, for example, the server 105 and/or the terminal devices 102 and 103 in the embodiment of fig. 1, and in the following embodiments, the server 105 is taken as an example to illustrate the execution subject, but the present disclosure is not limited thereto.
In step S1, an unobstructed facial image of a first object is acquired.
In some embodiments, an unobstructed facial image may refer to a facial image in which the facial information is not occluded (e.g., no occlusion at key facial locations such as the eyes, nose, and mouth).
In some embodiments, a first target image including a first object, as shown in the left image of fig. 4, may be acquired, and an unobstructed facial image of the first object, as shown in the right image of fig. 4, may be cropped from the first target image by a face alignment technique.
The face alignment technique may refer to a technique of cropping the facial image of the first object out of the target image. In some embodiments, the first target image shown in the left diagram of fig. 4 may be subjected to a face alignment operation, for example, by FaceNet (an open-source face recognition model), to obtain the unobstructed facial image of the first object shown in the right diagram of fig. 4.
In some embodiments, the unobstructed facial image of the first object may be processed into a picture that retains the face and has a size of 184 × 184 pixels.
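For illustration only, a minimal sketch of this detect-crop-resize step follows. It uses OpenCV's bundled Haar cascade as a stand-in detector (an assumption; the embodiment above describes FaceNet-based alignment instead) and the 184 × 184 output size from this example:

```python
# Illustrative sketch: detect a face, crop it, and resize to 184 x 184.
# The Haar cascade detector is a stand-in for the FaceNet-based alignment above.
import cv2

def crop_aligned_face(image_path, out_size=184):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in the target image
    x, y, w, h = faces[0]              # take the first detected face
    face = image[y:y + h, x:x + w]     # cut the face region out of the frame
    return cv2.resize(face, (out_size, out_size))
```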
In step S2, a feature extraction model is trained using the unobstructed facial image of the first object.
In some embodiments, the feature extraction machine learning model may be trained with the unobstructed facial image of the first object to obtain a feature extraction model.
The feature extraction machine learning model may refer to a machine learning model that can implement classification, such as FaceNet, SVC (Support Vector Classification, a support vector machine used for classification), GBDT (Gradient Boosting Decision Tree), or XGBoost (eXtreme Gradient Boosting). It is understood that any machine learning model that can implement the classification function is within the scope of the present disclosure; the disclosure is not limited in this regard.
The FaceNet network model can directly map a face image into a Euclidean embedding space, where the distance between two embeddings represents the similarity of the corresponding face images. Once this embedding space is generated, tasks such as face recognition, verification, and clustering can be completed easily.
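As a hedged illustration of this Euclidean-space property, the sketch below compares two faces by the L2 distance between their embeddings; `embed` is a hypothetical wrapper around a trained FaceNet-style model, and the threshold value is illustrative rather than prescribed by the disclosure:

```python
# Minimal sketch: two faces match if their embeddings are close in Euclidean space.
import numpy as np

def same_person(image_a, image_b, embed, threshold=1.1):
    # `embed` maps an image to a fixed-length feature vector (assumed trained).
    dist = np.linalg.norm(embed(image_a) - embed(image_b))
    return dist < threshold  # smaller distance means more similar faces
```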
In some embodiments, the feature extraction model may perform feature extraction on the facial image of the first object to obtain facial features of the first object (e.g., key point features with distinguishing features at the eyes, nose, mouth, etc.).
In step S3, an occluded facial image of a second object is acquired, the first object comprising the second object.
In some embodiments, an occluded facial image may refer to a facial image in which the facial information is partially occluded (e.g., some key facial locations, such as the eyes, nose, or mouth, are covered).
It will be appreciated that the facial image of a mask wearer is an occluded facial image.
In some embodiments, a second target image including a second object, as shown in the left diagram of fig. 5, may be acquired, and an occluded facial image of the second object, as shown in the right diagram of fig. 5, may be cropped from the second target image by the face alignment technique.
In some embodiments, the occluded facial image of the second object may be processed into a picture that retains the face and has a size of 164 × 164 pixels.
In step S4, feature extraction is performed on the occluded facial image of the second object through the feature extraction model, so as to obtain occluded facial features of the second object.
In some embodiments, the trained feature extraction model can extract features at the key point locations (e.g., nose, eyes, mouth) of the facial image of the second object (which is included in the first object), so that features at the unoccluded key points can be extracted more accurately from the occluded facial image.
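A minimal sketch of this extraction step follows, combined with the target-file export described in the two-device workflow later in this description. The `embed` function (standing in for the trained feature extraction model), the image list, and the numeric object labels are assumptions for illustration:

```python
# Sketch: compute occluded facial features and write them to a target file.
# The comma separator mirrors the "separators" setting of the import interface
# discussed below (FIG. 7); labels are assumed to be numeric object ids.
import numpy as np

def export_occluded_features(masked_images, labels, embed, path="features.csv"):
    features = np.stack([embed(img) for img in masked_images])  # one row per image
    ids = np.array(labels, dtype=float).reshape(-1, 1)          # one id per row
    np.savetxt(path, np.hstack([ids, features]), delimiter=",")
```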
In step S5, a target face recognition model is trained according to the occluded facial feature, where the target face recognition model is used to recognize the occluded face image of the second object.
In some embodiments, a selected target machine learning model may be trained on the occluded facial features of the second object to determine the target face recognition model.
In some embodiments, the target machine learning model may be FaceNet, SVC, GBDT, XGBoost, or any of various other machine learning models that can implement classification, which is not limited by the present disclosure.
It should be noted that, in order to extract the key point features more accurately from the occluded facial image of the second object, the second object to be recognized needs to be included in the first object used for feature extraction model training; that is, the second object may be identical to the first object or may be a subset of it, which is not limited in this disclosure.
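As a hedged sketch of step S5, the following trains an SVC (one of the candidate target machine learning models named above) on the occluded facial features; the kernel and other parameters are illustrative choices, not values fixed by this disclosure:

```python
# Sketch of step S5: train a candidate target face recognition model.
# X holds occluded facial features (one row per image); y holds object identities.
from sklearn.svm import SVC

def train_target_model(X, y):
    model = SVC(kernel="rbf", probability=True)  # kernel choice is illustrative
    model.fit(X, y)
    return model  # the trained target face recognition model
```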
According to the technical scheme provided by the embodiments of the present disclosure, on one hand, a feature extraction model capable of extracting facial features is trained on the unobstructed facial images of the first object; on the other hand, highly discriminative occluded facial features are accurately extracted from the occluded facial image of the second object through this feature extraction model. Finally, a target face recognition model is trained according to the occluded facial features of the second object, and this model can accurately recognize the occluded face image of the second object.
In some embodiments, face recognition may be applied in different scenarios. For example, a management authority (e.g., traffic management) may use it to search for a target person, and a company may use it to help its employees clock in.
It will be appreciated that management authorities and corporate units alike may lack technicians specialized in work such as image processing and machine learning model training.
If a professional had to be invited to train the target face recognition model in advance every time the second object to be recognized changes, the timeliness of model training could not be guaranteed, and considerable resources would be wasted.
Accordingly, embodiments of the present disclosure provide a training method for a target face recognition model to help a target user (e.g., a management authority lacking a technician, a company unit, etc.) quickly, conveniently, and at low cost construct the target face recognition model.
In some embodiments, the technical solution provided by the embodiment of fig. 3 may be completed in two parts. The first half (e.g., training the feature extraction model and extracting the occluded facial features of the second object) may be completed in a first device by a professional technician, and the occluded facial features of the second object may be stored in a target file and sent to the target user. The second half, training the target face recognition model, can be completed by the target user on a second device equipped with an automatic machine-learning model training function. Since the first half is simple and understandable to those skilled in the art, it will not be repeated here. The training of the target face recognition model in the second half is described in detail with reference to fig. 6.
Fig. 6 is a diagram illustrating a method of training a target face recognition model according to an example embodiment.
In step S51, a file import interface is displayed.
In some embodiments, when training of the target face recognition model is desired, a file import interface as shown in fig. 7 may be entered.
In step S52, the occluded facial feature of the second object is imported through the file import interface.
In some embodiments, when the target user receives a target file storing the occluded facial features of the second object, the target file may be imported through the file import interface shown in FIG. 7 to obtain the occluded facial features of the second object.
In some embodiments, if the second object includes more than one object, the separators between the data of the respective objects in the second object may also be determined by the "separators" line shown in fig. 7. In fig. 7, the "data structure configuration", "primary key field" and "data table name" may be configured according to actual needs, which is not limited by the present disclosure.
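A minimal sketch of this import side follows, assuming the target file was written in the comma-separated layout of the export sketch above; the separator argument mirrors the "separators" setting configured in FIG. 7:

```python
# Sketch: parse the imported target file back into features and object labels,
# honoring the separator configured in the file import interface.
import numpy as np

def import_occluded_features(path="features.csv", separator=","):
    data = np.loadtxt(path, delimiter=separator)
    return data[:, 1:], data[:, 0].astype(int)  # features, object ids
```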
In step S53, a model selection interface is displayed.
In some embodiments, after the occluded facial data of the second object is imported, the target user also needs to click the "Add model" button in the model selection interface shown in FIG. 8 to enter the "input model information" interface shown in FIG. 8.
In step S54, a target machine learning model is determined at the model selection interface according to the second object, so that the target machine learning model is trained according to the occluded facial features to obtain the target face recognition model.
In some embodiments, the target user may also select an appropriate target model class for training based on the second object to be identified. For example, if the second object includes only one object, a two-classification model may be selected in the model selection interface as shown in fig. 8 to determine the target machine learning model, and if the second object includes at least two objects, a multi-classification model may be selected in the model selection interface as shown in fig. 8 to determine the target machine learning model.
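The sketch below illustrates this selection logic under stated assumptions: the scikit-learn estimators are stand-ins for the two-class and multi-class models, since the disclosure does not prescribe concrete implementations:

```python
# Illustrative selection mirroring the interface logic: one object to recognize
# -> two-class model; at least two objects -> multi-class model.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def choose_target_model(num_target_objects):
    if num_target_objects == 1:
        # two-class problem: the single second object vs. all other faces
        # (training data must still contain non-target samples as negatives)
        return LogisticRegression()
    # multi-class problem: one class per object in the second object set
    return SVC(probability=True)  # sklearn's SVC handles multi-class one-vs-one
```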
In some embodiments, the training strategy of the target machine learning model may also need to be determined after the target machine learning model selection is completed, e.g., the training strategy of the target machine learning model may be determined through an interface as shown in fig. 9. For example, stopping strategies, exploration algorithms, job scheduling, data range dataset proportions, etc. of the target machine learning model may be determined.
The job scheduling may include "one-time job" and "periodic job", where the "periodic" job may represent that the training target face recognition model is updated and used at regular intervals (e.g., one week).
In addition, the proportions of the training set, the verification set, and the prediction set in the occluded facial feature dataset of the second object may also be adjusted through an interface as shown in fig. 9. The training set can be used to train the target face recognition model, the verification set can be used to verify it, and the prediction set can be used to estimate its recognition accuracy.
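As an illustration of these proportions, the following sketch splits the occluded-feature dataset with scikit-learn; the 70/15/15 ratio is an assumption, not a value fixed by the disclosure:

```python
# Sketch of the dataset-proportion setting in FIG. 9: split into training,
# verification, and prediction sets (ratios illustrative).
from sklearn.model_selection import train_test_split

def split_dataset(X, y, val_ratio=0.15, test_ratio=0.15):
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=val_ratio + test_ratio, stratify=y, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=test_ratio / (val_ratio + test_ratio),
        stratify=y_rest, random_state=0)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```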
According to the technical scheme provided by the embodiment, the configuration of the target machine learning model to be trained can be realized conveniently and simply through the display interface, the training difficulty of the target face recognition model is reduced, and the training speed of the target face recognition model is improved.
In some embodiments, different machine learning models may yield face recognition models with different recognition accuracies. Therefore, the embodiments of the present disclosure provide a technical scheme for obtaining a target face recognition model with higher face recognition accuracy.
In step S55, at least one target machine learning model is trained based on the occluded facial features of the second object.
In some embodiments, the at least one target machine learning model may be trained based on the occluded facial features of the second object, where the at least one target machine learning model may include, without limitation, a FaceNet model, an SVC model, a GBDT model, an XGBoost model, and similar machine learning models.
In step S56, the face recognition is performed on the occluded face image of the second object through the trained at least one target machine learning model.
In some embodiments, after the training of the at least one target machine learning model is completed, the at least one target machine learning model may be used to identify the occluded face image of the second object, and the identification accuracy of the at least one target machine learning model may be determined according to the identification result.
In step S57, the accuracy of the at least one target machine learning model in recognizing the occluded face image of the second object is obtained.
In some embodiments, the recognition result of the trained at least one target machine learning model on the occluded face image may be compared with a second object corresponding to the occluded face image to determine the recognition accuracy of the at least one target machine learning model.
In step S58, the target face recognition model is determined from the at least one target machine learning model according to the recognition accuracy.
In some embodiments, a target machine learning model with the highest recognition accuracy can be selected from the at least one trained target machine learning model as the target face recognition model.
For example, suppose the at least one target machine learning model includes an SVC machine learning model and a GBDT machine learning model, and the trained SVC model recognizes the occluded face images with an accuracy of 0.867 while the trained GBDT model only reaches 0.200. The SVC machine learning model with the higher recognition accuracy may then be selected as the final target face recognition model.
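A hedged sketch of steps S55 through S58 follows, using scikit-learn's SVC and gradient boosting classifiers as the candidate models; the held-out feature matrices are assumed to come from the dataset split described earlier:

```python
# Sketch of steps S55-S58: train candidate models, score each on held-out
# occluded facial features, and keep the most accurate one (e.g., SVC at 0.867
# would win over GBDT at 0.200 in the example above).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def select_target_model(X_train, y_train, X_test, y_test):
    candidates = [SVC(), GradientBoostingClassifier()]
    scored = [(accuracy_score(y_test, m.fit(X_train, y_train).predict(X_test)), m)
              for m in candidates]
    return max(scored, key=lambda pair: pair[0])[1]  # highest-accuracy model
```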
According to the technical scheme provided by the embodiment of the disclosure, the target machine learning model with the highest recognition accuracy can be used as the target face recognition model, so that the recognition accuracy of the target face recognition model is improved.
In some embodiments, the occluded facial image of the second subject may be a mask-wearing facial image of the second subject. Fig. 11 provides a technical solution in which a target face recognition model capable of recognizing the mask-wearing facial image of the second subject is trained and then applied in a target scene.
Fig. 11 is a flowchart illustrating a face recognition method according to an exemplary embodiment.
Referring to fig. 11, the above-described face recognition method may include the following steps.
In step S6, image acquisition is performed on each mask wearing object in the target scene by the image acquisition device, so as to obtain a mask wearing image of each mask wearing object.
In some embodiments, the image capturing device may refer to a device that may perform image capturing, such as a tablet computer, a mobile phone, a camera, and the like, which is not limited by the present disclosure.
In some embodiments, the target face recognition model may be applied in the target scene after its training is completed. The target application scene may be, for example, identifying persons evading inspection at traffic checkpoints during an epidemic; it may be, for example, identifying high-risk persons from epidemic areas at a mall entrance during an epidemic; or it may be, for example, employees of a company clocking in through face recognition during an epidemic. The present disclosure is not limited in this regard.
In some embodiments, image acquisition may be performed on each mask-wearing object in the target scene by the image acquisition device to obtain a mask-wearing image of each mask-wearing object.
In step S7, face alignment processing is performed on the mask-wearing image of each mask-wearing object to determine a mask-wearing face image of each mask-wearing object.
In some embodiments, the mask-wearing image of each mask-wearing subject in the target scene may include not only the facial information of that subject, but also body information and some background information. Therefore, face alignment processing can be performed on the mask-wearing images of the respective mask-wearing subjects to determine their mask-wearing facial images.
In some embodiments, if the mask-wearing facial image of the second subject is determined among the plurality of mask-wearing facial images, target prompt information may be displayed (or issued) while the mask-wearing facial image of the second subject is displayed.
In some embodiments, if the target scene is an employee card-punching scene, then after the mask-wearing facial image of the second subject is determined, the card-punching information of the second subject (such as identity information and card-punching time) needs to be recorded to complete the clock-in.
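For illustration, a minimal sketch of such a card-punching record follows; the field names and the employee identifier are hypothetical, since the disclosure does not fix a record format:

```python
# Hypothetical sketch: once the second subject's mask-wearing facial image is
# recognized, log identity information and the card-punching time.
from datetime import datetime

def record_clock_in(subject_id, log):
    log.append({"subject_id": subject_id,
                "clock_in_time": datetime.now().isoformat()})

clock_in_log = []
record_clock_in("employee_042", clock_in_log)  # identifier is illustrative
```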
According to the technical scheme provided by the embodiments of the present disclosure, mask-wearing facial images are recognized through the target face recognition model, and the second subject can be accurately identified from among a plurality of mask-wearing subjects.
Fig. 12 is a block diagram illustrating a training apparatus of a face recognition model according to an exemplary embodiment. Referring to fig. 12, a training apparatus 1200 of a face recognition model provided by an embodiment of the present disclosure may include: an unobstructed facial image acquisition module 1201, a feature extraction model training module 1202, an obstructed facial image acquisition module 1203, an obstructed facial feature extraction module 1204, and a target face recognition model determination module 1205.
Wherein the unobstructed facial image acquisition module 1201 may be configured to acquire an unobstructed facial image of the first object. The feature extraction model training module 1202 may be configured to train a feature extraction model using the unobstructed facial image of the first object. The occluded facial image acquisition module 1203 may be configured to acquire an occluded facial image of a second object, the first object comprising the second object. The occluded facial feature extraction module 1204 may be configured to perform feature extraction on the occluded facial image of the second object by the feature extraction model to obtain occluded facial features of the second object. The target face recognition model determination module 1205 may be configured to train a target face recognition model according to the occluded facial features, the target face recognition model being used to recognize the occluded face image of the second object.
In some embodiments, the target face recognition model determination module 1205 may include: the face recognition system comprises a plurality of model training units, a face recognition unit, a recognition accuracy rate determining unit and a target face recognition model determining unit.
Wherein the plurality of model training units may be configured to train at least one target machine learning model according to the occluded facial features of the second object. The face recognition unit may be configured to perform face recognition on the occluded face image of the second object by the trained at least one target machine learning model. The recognition accuracy determination unit may be configured to obtain a recognition accuracy of the at least one target machine learning model for the occluded face image of the second object. The target face recognition model determination unit may be configured to determine the target face recognition model in the at least one target machine learning model according to the recognition accuracy.
In some embodiments, the target face recognition model determination module 1205 may include: a first display unit, a file importing unit, a second display unit and a target machine learning model determining unit.
The first display unit may be configured to display a file import interface. The file importing unit may be configured to import the occluded facial feature of the second object through the file importing interface. The second display unit may be configured to display a model selection interface. The target machine learning model determination unit may be configured to determine a target machine learning model at the model selection interface based on the second object, so as to train the target machine learning model based on the occluded facial feature to obtain the target face recognition model.
In some embodiments, the model selection interface includes a two-class model and a multi-class model.
In some embodiments, the target machine learning model determination unit may include: the classification model determination subunit and the multi-classification model determination subunit.
Wherein the classification model determination subunit may be configured to select the two-class model at the model selection interface to determine the target machine learning model if the second object comprises one object. The multi-classification model determination subunit may be configured to select the multi-class model at the model selection interface to determine the target machine learning model if the second object includes at least two objects.
In some embodiments, the feature extraction model is trained in a first device and the target face recognition model is trained in a second device.
In some embodiments, the occluded facial feature extraction module 1204 may include: a storage unit and a transmission unit.
Wherein the storage unit may be configured to store the occluded facial feature of the second object in a target file. The transmitting unit may be configured to transmit the target file to the second device so as to complete training of the target face recognition model.
In some embodiments, the occluded facial image is a mask-worn facial image.
In some embodiments, the training apparatus 1200 of the face recognition model may further include: a mask-wearing facial image acquisition module and a mask-wearing face recognition module.
The mask-wearing facial image acquisition module may be configured to acquire the mask-wearing facial image of each mask-wearing subject in the target scene through an image acquisition device. The mask-wearing face recognition module may be configured to process the mask-wearing facial images of the respective mask-wearing subjects by the target face recognition model to determine the mask-wearing facial image of the second subject from among them.
In some embodiments, the target scene is a card-punching (clock-in) scene.
In some embodiments, the training apparatus 1200 of the face recognition model may further include a card-punching recording module.
The card-punching recording module may be configured to record the card-punching information of the second subject after the mask-wearing facial image of the second subject is determined, so as to complete the clock-in.
In some embodiments, the mask-wearing facial image acquisition module may comprise: an image acquisition unit and a mask-wearing facial image obtaining unit.
The image acquisition unit may be configured to perform image acquisition on each mask-wearing subject in the target scene through the image acquisition device, so as to obtain a mask-wearing image of each mask-wearing subject. The mask-wearing facial image obtaining unit may be configured to perform face alignment processing on the mask-wearing images of the respective mask-wearing subjects to determine their mask-wearing facial images.
Since each functional module of the training apparatus 1200 for a face recognition model according to the example embodiment of the present disclosure corresponds to the steps of the example embodiment of the training method for a face recognition model described above, a detailed description thereof will be omitted.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, aspects of embodiments of the present disclosure may be embodied in a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a U-disk, a mobile hard disk, etc.), comprising instructions for causing a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to perform a method in accordance with embodiments of the present disclosure, such as one or more of the steps shown in fig. 3.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the precise constructions described above and illustrated in the accompanying drawings, and that various modifications and equivalent arrangements may be made without departing from the scope of the appended claims.

Claims (16)

1. A method for training a face recognition model, comprising:
acquiring an unoccluded facial image of a first object;
training a feature extraction model using the unoccluded facial image of the first object;
acquiring an occluded facial image of a second object, the first object comprising the second object;
extracting features of the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object;
training a target face recognition model according to the occluded facial features, wherein the target face recognition model is used for recognizing an occluded facial image of the second object;
wherein training the target face recognition model according to the occluded facial features comprises:
training at least one target machine learning model according to the occluded facial features of the second object;
performing face recognition on the occluded facial image of the second object through the at least one trained target machine learning model;
acquiring the recognition accuracy of the at least one target machine learning model on the occluded facial image of the second object;
and determining the target face recognition model from among the at least one target machine learning model according to the recognition accuracy.
2. The method of claim 1, wherein training the target face recognition model according to the occluded facial features comprises:
displaying a file import interface;
importing the occluded facial features of the second object through the file importing interface;
displaying a model selection interface;
and determining a target machine learning model at the model selection interface according to the second object so as to train the target machine learning model according to the occluded facial features and obtain the target face recognition model.
3. The method of claim 2, wherein the model selection interface comprises a two-class model and a multi-class model; and determining the target machine learning model at the model selection interface according to the second object comprises:
if the second object comprises one object, selecting the two-class model at the model selection interface to determine the target machine learning model;
and if the second object comprises at least two objects, selecting the multi-class model at the model selection interface to determine the target machine learning model.
4. The method of claim 2, wherein the feature extraction model is trained in a first device and the target face recognition model is trained in a second device; and performing feature extraction on the occluded facial image of the second object through the feature extraction model to obtain the occluded facial features of the second object comprises:
storing the occluded facial features of the second object in a target file;
and sending the target file to the second device so as to complete training of the target face recognition model.
5. The method of claim 1, wherein the occluded facial image is a mask-wearing facial image; and the method further comprises:
acquiring, through an image acquisition device, images of each mask-wearing object in a target scene to obtain a mask-wearing facial image of each mask-wearing object;
and processing the mask-wearing facial images of the respective mask-wearing objects through the target face recognition model to determine the mask-wearing facial image of the second object from among them.
6. The method of claim 5, wherein the target scene is a clock-in scene; and the method further comprises:
after the mask-wearing facial image of the second object is determined, recording clock-in information of the second object to complete the clock-in.
7. The method of claim 5, wherein acquiring, through the image acquisition device, images of each mask-wearing object in the target scene to obtain the mask-wearing facial image of each mask-wearing object comprises:
acquiring images of each mask-wearing object in the target scene through the image acquisition device to obtain a mask-wearing image of each mask-wearing object;
and performing face alignment processing on the mask-wearing images of the respective mask-wearing objects to determine the mask-wearing facial images of the respective mask-wearing objects.
8. A training apparatus for a face recognition model, comprising:
an unoccluded facial image acquisition module configured to acquire an unoccluded facial image of a first object;
a feature extraction model training module configured to train a feature extraction model using the unoccluded facial image of the first object;
an occluded facial image acquisition module configured to acquire an occluded facial image of a second object, the first object comprising the second object;
an occluded facial feature extraction module configured to perform feature extraction on the occluded facial image of the second object through the feature extraction model to obtain occluded facial features of the second object;
and a target face recognition model determination module configured to train a target face recognition model according to the occluded facial features, the target face recognition model being used for recognizing an occluded facial image of the second object;
wherein training the target face recognition model according to the occluded facial features comprises:
training at least one target machine learning model according to the occluded facial features of the second object;
performing face recognition on the occluded facial image of the second object through the at least one trained target machine learning model;
acquiring the recognition accuracy of the at least one target machine learning model on the occluded facial image of the second object;
and determining the target face recognition model from among the at least one target machine learning model according to the recognition accuracy.
9. The apparatus of claim 8, wherein the target face recognition model determination module comprises:
a first display unit configured to display a file import interface;
a file import unit configured to import the occluded facial features of the second object through the file import interface;
a second display unit configured to display a model selection interface;
and a target machine learning model determination unit configured to determine a target machine learning model at the model selection interface according to the second object, so as to train the target machine learning model according to the occluded facial features to obtain the target face recognition model.
10. The apparatus of claim 9, wherein the model selection interface comprises a two-class model and a multi-class model; and the target machine learning model determination unit comprises:
a two-class model determination subunit configured to select the two-class model at the model selection interface to determine the target machine learning model if the second object comprises one object;
and a multi-class model determination subunit configured to select the multi-class model at the model selection interface to determine the target machine learning model if the second object comprises at least two objects.
11. The apparatus of claim 9, wherein the feature extraction model is trained in a first device and the target face recognition model is trained in a second device; and the occluded facial feature extraction module comprises:
a storage unit configured to store the occluded facial features of the second object in a target file;
and a sending unit configured to send the target file to the second device so as to complete training of the target face recognition model.
12. The apparatus of claim 8, wherein the occluded facial image is a mask-wearing facial image; and the training apparatus of the face recognition model further comprises:
a mask-wearing facial image acquisition module configured to acquire, through an image acquisition device, images of each mask-wearing object in a target scene to obtain a mask-wearing facial image of each mask-wearing object;
and a mask-wearing face recognition module configured to process the mask-wearing facial images of the respective mask-wearing objects through the target face recognition model to determine the mask-wearing facial image of the second object from among them.
13. The apparatus of claim 12, wherein the target scene is a clock-in scene; and the training apparatus of the face recognition model further comprises:
a clock-in recording module configured to record clock-in information of the second object after the mask-wearing facial image of the second object is determined, so as to complete the clock-in.
14. The apparatus of claim 12, wherein the mask-wearing facial image acquisition module comprises:
an image acquisition unit configured to acquire images of each mask-wearing object in the target scene through the image acquisition device to obtain a mask-wearing image of each mask-wearing object;
and a mask-wearing facial image acquisition unit configured to perform face alignment processing on the mask-wearing images of the respective mask-wearing objects to determine the mask-wearing facial images of the respective mask-wearing objects.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-7.
CN202010773454.8A 2020-08-04 2020-08-04 Training method and device of face recognition model and related equipment Active CN111931628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010773454.8A CN111931628B (en) 2020-08-04 2020-08-04 Training method and device of face recognition model and related equipment

Publications (2)

Publication Number Publication Date
CN111931628A CN111931628A (en) 2020-11-13
CN111931628B true CN111931628B (en) 2023-10-24

Family

ID=73307646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010773454.8A Active CN111931628B (en) 2020-08-04 2020-08-04 Training method and device of face recognition model and related equipment

Country Status (1)

Country Link
CN (1) CN111931628B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418190B (en) * 2021-01-21 2021-04-02 成都点泽智能科技有限公司 Mobile terminal medical protective shielding face recognition method, device, system and server
CN113255617B (en) * 2021-07-07 2021-09-21 腾讯科技(深圳)有限公司 Image recognition method and device, electronic equipment and computer-readable storage medium
CN116092166B (en) * 2023-03-06 2023-06-20 深圳市慧为智能科技股份有限公司 Mask face recognition method and device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855496A (en) * 2012-08-24 2013-01-02 苏州大学 Method and system for authenticating shielded face
WO2015165365A1 (en) * 2014-04-29 2015-11-05 华为技术有限公司 Facial recognition method and system
CN106372603A (en) * 2016-08-31 2017-02-01 重庆大学 Shielding face identification method and shielding face identification device
CN107909065A (en) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 The method and device blocked for detecting face
CN108509915A (en) * 2018-04-03 2018-09-07 百度在线网络技术(北京)有限公司 The generation method and device of human face recognition model
EP3428843A1 (en) * 2017-07-14 2019-01-16 GB Group plc Improvements relating to face recognition
CN111274916A (en) * 2020-01-16 2020-06-12 华为技术有限公司 Face recognition method and face recognition device
WO2020134478A1 (en) * 2018-12-29 2020-07-02 北京灵汐科技有限公司 Face recognition method, feature extraction model training method and device thereof
CN111460962A (en) * 2020-03-27 2020-07-28 武汉大学 Mask face recognition method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Partially Occluded Face Recognition Based on Deep Learning; Wang Zhenhua, Su Jinshan, Tong Qionglin; Electronic Technology & Software Engineering, No. 02, 2020, pp. 151-153 *

Also Published As

Publication number Publication date
CN111931628A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
US11487995B2 (en) Method and apparatus for determining image quality
CN108427939B (en) Model generation method and device
CN111931628B (en) Training method and device of face recognition model and related equipment
CN108197618B (en) Method and device for generating human face detection model
CN109034069B (en) Method and apparatus for generating information
US20220172518A1 (en) Image recognition method and apparatus, computer-readable storage medium, and electronic device
CN109325148A (en) The method and apparatus for generating information
CN109993150B (en) Method and device for identifying age
CN108509915A (en) The generation method and device of human face recognition model
CN111914812B (en) Image processing model training method, device, equipment and storage medium
US20190087683A1 (en) Method and apparatus for outputting information
CN110796089B (en) Method and apparatus for training face model
CN110163096B (en) Person identification method, person identification device, electronic equipment and computer readable medium
CN108280477A (en) Method and apparatus for clustering image
CN108549848B (en) Method and apparatus for outputting information
CN108229375B (en) Method and device for detecting face image
CN111626126A (en) Face emotion recognition method, device, medium and electronic equipment
CN108491812B (en) Method and device for generating face recognition model
CN108509921B (en) Method and apparatus for generating information
CN110866469A (en) Human face facial features recognition method, device, equipment and medium
CN109871791A (en) Image processing method and device
CN109285181A (en) The method and apparatus of image for identification
CN116824278A (en) Image content analysis method, device, equipment and medium
CN110298850A (en) The dividing method and device of eye fundus image
CN110427915A (en) Method and apparatus for output information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant