CN117409470B - Face recognition feature data dynamic matching method, system, device and medium - Google Patents


Info

Publication number
CN117409470B
CN117409470B (application number CN202311728701.2A)
Authority
CN
China
Prior art keywords
face recognition
face
label
algorithm
recognition algorithm
Prior art date
Legal status
Active
Application number
CN202311728701.2A
Other languages
Chinese (zh)
Other versions
CN117409470A (en)
Inventor
粟玉雄 (Su Yuxiong)
黄育新 (Huang Yuxin)
Current Assignee
Kilo X Robotics Co ltd
Original Assignee
Kilo X Robotics Co ltd
Priority date
Filing date
Publication date
Application filed by Kilo X Robotics Co ltd
Priority to CN202311728701.2A
Publication of CN117409470A
Application granted
Publication of CN117409470B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866: Retrieval using manually generated information, e.g. tags, keywords, comments, manually generated location and time information
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a method, system, device and medium for dynamically matching face recognition feature data. The method comprises the following steps: extracting the facial image feature data of each person picture and binding the feature data to the corresponding picture; binding the facial image feature data of each person picture to the corresponding label, binding each label to a corresponding face recognition algorithm, and combining all facial image feature data bound to each face recognition algorithm into a corresponding face feature library; selecting a face recognition algorithm and acquiring a face recognition request; acquiring the recognition identifier of the selected algorithm; and, only when the selected algorithm has recognition capability and is currently enabled, searching and matching the corresponding face feature library according to the face recognition request to obtain a matching result. By extracting facial image feature data outside the recognition algorithm and dynamically matching it to the algorithm, recognition accuracy and usability are improved.

Description

Face recognition feature data dynamic matching method, system, device and medium
Technical Field
The invention relates to face recognition methods, and in particular to a method, system, device and medium for dynamically matching face recognition feature data, belonging to the technical field of artificial intelligence.
Background
In the prior art, common authentication modes on intelligent terminals include physical media (bank cards, identity cards and the like), SMS verification codes, two-dimensional codes and face recognition. Face recognition is a biometric technology that identifies a person based on facial feature information; compared with other authentication modes, it can complete identity verification from the user's biological features alone, without relying on any other medium.
Currently, face recognition technology mainly provides recognition services by extracting face feature data and binding it to a face recognition algorithm. Starting without any face data, the procedure is as follows: first, an administrator creates persons on a face library page, uploads person pictures, and extracts facial features from those pictures by invoking a facial feature extraction algorithm; then a batch of persons is divided into labels, the persons are bound to the labels, and the labels are bound to a face recognition algorithm, so that the algorithm holds all face feature data under those labels and can recognize the bound persons. In addition, the extracted face feature data can be used across platforms: it is extracted once and used in many places.
Accordingly, conventional face recognition methods have the following technical defects. First, face feature extraction and algorithm binding can only be operated by highly specialized staff, so the threshold for development and use is very high. Second, face feature sample data is scarce, so recognition results are unsatisfactory and a reliable recognition service cannot be provided. Third, facial features must be extracted first and built into the face recognition algorithm; if the persons or their feature data change, the algorithm must be rebuilt, which undoubtedly increases developers' algorithm maintenance workload, maintenance difficulty and usage cost.
Therefore, a new face recognition method needs to be developed to solve the problems that the recognition accuracy of face recognition algorithms is low and that different algorithms cannot share the same face feature library.
Disclosure of Invention
In view of the above technical problems, the invention provides a method, system, device and medium for dynamically matching face recognition feature data, which extract the facial image feature data out of the face recognition algorithm and dynamically match the data to the corresponding algorithm, so as to improve the recognition accuracy and usability of the algorithm.
To this end, the invention provides a method for dynamically matching face recognition feature data, comprising the following steps:
acquiring all person pictures, and extracting the facial image feature data of each person picture through a facial image feature extraction algorithm; then establishing a binding relationship between the facial image feature data and the corresponding person picture;
classifying all persons under labels, and establishing a binding relationship between the facial image feature data of each person picture and the corresponding label; then establishing a binding relationship between each label and a corresponding face recognition algorithm; and combining all facial image feature data bound to each face recognition algorithm into a corresponding face feature library;
selecting a face recognition algorithm and acquiring a face recognition request;
acquiring the recognition identifier of the selected face recognition algorithm, and judging through the identifier whether the algorithm has recognition capability;
and, only when the selected face recognition algorithm has recognition capability and is currently enabled, searching and matching the corresponding face feature library through the algorithm according to the face recognition request to obtain a matching result.
The method of the invention further comprises the following steps:
acquiring authentication information and performing authentication verification;
if the verification passes, acquiring person information and creating the person;
importing a person picture and detecting whether a facial image feature extraction algorithm is available; if so, proceeding to the next step.
The facial image feature extraction algorithm adopts an ArcFace model.
In the method, when all persons are classified under labels, one person may be bound to multiple labels and one label to multiple persons, i.e. the person-label binding is many-to-many.
Likewise, in the binding between the facial image feature data of each person picture and the corresponding labels, one set of feature data may be bound to multiple labels and one label to multiple sets of feature data.
Similarly, in the binding between labels and face recognition algorithms, one label may be bound to multiple face recognition algorithms and one algorithm to multiple labels.
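The many-to-many bindings above can be captured with junction tables. A minimal relational sketch follows; the table and column names are illustrative assumptions, not taken from the patent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person          (id TEXT PRIMARY KEY);
CREATE TABLE label           (id TEXT PRIMARY KEY);
CREATE TABLE algorithm       (id TEXT PRIMARY KEY);
-- many-to-many: person <-> label
CREATE TABLE person_label    (person_id TEXT, label_id TEXT,
                              PRIMARY KEY (person_id, label_id));
-- many-to-many: label <-> face recognition algorithm
CREATE TABLE label_algorithm (label_id TEXT, algorithm_id TEXT,
                              PRIMARY KEY (label_id, algorithm_id));
""")
conn.executemany("INSERT INTO person VALUES (?)", [("p1",), ("p2",)])
conn.executemany("INSERT INTO label VALUES (?)", [("rnd",), ("sales",)])
conn.execute("INSERT INTO algorithm VALUES ('algoA')")
conn.executemany("INSERT INTO person_label VALUES (?, ?)",
                 [("p1", "rnd"), ("p1", "sales"), ("p2", "rnd")])
conn.executemany("INSERT INTO label_algorithm VALUES (?, ?)",
                 [("rnd", "algoA"), ("sales", "algoA")])
# Persons reachable from algoA through its bound labels (deduplicated):
rows = conn.execute("""
    SELECT DISTINCT pl.person_id
    FROM label_algorithm la
    JOIN person_label pl ON pl.label_id = la.label_id
    WHERE la.algorithm_id = 'algoA'
    ORDER BY pl.person_id
""").fetchall()
```

Because the bindings live in junction tables rather than inside the algorithm, adding or removing a row re-scopes the algorithm's feature library without rebuilding the algorithm itself.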
The method of the invention further comprises the following steps:
acquiring a face recognition request and storing it in a database;
acquiring the authentication information of the face recognition request and performing authentication verification; if the verification passes, proceeding to the next step.
According to the above technical scheme, the working principle and technical effects of the method are as follows:
(1) By maintaining a face feature library, the invention can conveniently extract facial features from person pictures and thus rapidly form a face recognition service with a high recognition rate. In addition, the invention supports many-to-many binding between persons and labels, and many-to-many binding between labels and face recognition algorithms, so that as person pictures accumulate, the face feature library grows richer and recognition accuracy keeps improving.
(2) By binding labels to face recognition algorithms, the invention supports personnel changes: whenever the persons change, the face feature library bound to the algorithm is updated accordingly, so the algorithm does not need to be re-released. Moreover, dynamically binding facial image feature data greatly improves the usability of face recognition algorithms and reduces developers' maintenance cost.
The invention also provides a face recognition feature data dynamic matching system, comprising a feature data module, a face feature library module, a recognition request module, a recognition capability module and a face recognition module.
The feature data module is used for acquiring all person pictures and extracting the facial image feature data of each picture through a facial image feature extraction algorithm, then establishing a binding relationship between the feature data and the corresponding person picture.
The face feature library module is used for classifying all persons under labels and binding the facial image feature data of each person picture to the corresponding label; then binding each label to a corresponding face recognition algorithm; and combining all facial image feature data bound to each algorithm into a corresponding face feature library.
The recognition request module is used for selecting a face recognition algorithm and acquiring a face recognition request.
The recognition capability module is used for acquiring the recognition identifier of the selected face recognition algorithm and judging through the identifier whether the algorithm has recognition capability.
The face recognition module is used for searching and matching the corresponding face feature library according to the face recognition request, only when the selected algorithm has recognition capability and is currently enabled, to obtain a matching result.
The invention further provides a device for dynamically matching face recognition feature data, comprising a memory and a processor; the memory stores a computer program, and the processor implements the above dynamic matching method when executing the program.
In addition, the invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the above dynamic matching method.
In summary, the invention extracts facial image features from person pictures through a facial image feature extraction algorithm, decouples the feature data from the face recognition algorithm, and dynamically matches the data to the corresponding algorithm by constructing a face feature library. This not only improves the usability of the algorithm, but also greatly enriches the face feature library through the accumulation of daily recognition services, improving the algorithm's accuracy.
Compared with the prior art, the invention has the following technical advantages:
1. Taking the face recognition algorithm as the dimension, all facial image feature data under each algorithm is combined into a dedicated face feature library, which is dynamically provided to the corresponding algorithm.
2. Facial image feature data and face recognition algorithms are decoupled and freely combined through relation binding; whether persons change, labels change or feature data expands, the algorithm never needs to be rebuilt, which reduces developers' algorithm maintenance work as well as usage and maintenance costs.
3. Because the same facial image feature data can be bound to multiple face recognition algorithms, daily recognition services enrich the sample data of the face feature library and thereby improve the recognition accuracy of every bound algorithm.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a facial image feature extraction and feature data binding phase in the method of the present invention;
FIG. 2 is a flow chart of the face recognition stage in the method of the present invention;
FIG. 3 is a schematic block diagram of the system of the present invention;
FIG. 4 is a schematic block diagram of the device of the present invention.
Detailed Description
The invention is further illustrated below with reference to the examples and figures. The described examples are only a part of the invention; they explain the invention and in no way limit its scope. The flow diagrams in the figures are illustrative only: they need not include all content and operations/steps, nor must the steps be performed in the order described. For example, some operations/steps may be further divided, combined or partially merged, so the actual execution order may change according to the actual situation.
Example 1: a method for dynamically matching face recognition feature data.
As shown in FIGS. 1 and 2, this embodiment provides a method for dynamically matching face recognition feature data, which comprises a facial image feature extraction and feature data binding stage and a face recognition stage, described in turn below.
As shown in FIG. 1, the facial image feature extraction and feature data binding stage includes the following steps:
s1, acquiring all person pictures, and extracting face image feature data of each person picture through a face image feature extraction algorithm; and then, binding the facial image characteristic data and the corresponding personnel pictures, and storing the binding relation into a database.
In this embodiment, the step S1 includes the following specific steps:
s1-1, acquiring authentication information such as a user token and performing authentication verification so as to prevent invalid requests and attacks; if the verification is not passed, returning authentication failure and ending the flow; if the verification is passed, the next step is entered.
S1-2, acquiring personnel information, including the name, sex, departments of the personnel and the like, and creating the personnel.
S1-3, importing a personnel picture, and detecting whether a face image feature extraction algorithm is installed; if not, the face image feature extraction algorithm is not installed, the face image feature extraction cannot be performed, and the process is ended; if yes, the face image feature extraction algorithm is installed, and the next step is carried out.
S1-4, extracting face image feature data of each person picture through a face image feature extraction algorithm, and storing the face image feature data in a database.
In particular, the face image feature extraction algorithm uses an AI service container. First, an AI service container for face image feature extraction algorithm model (such as ARCFace model) is loaded by an interface of Kubernetes (K8S for short), which is an open source system for automatically deploying, expanding and contracting and managing containerized applications. Then, the AI service container transmits the imported person picture to an ARCFace model, and extracts face image feature data of the person picture through the ARCFace model. And finally, the extracted face image characteristic data is recalled into an API interface of the AI service container, and a face image characteristic data set of each personnel picture can be obtained.
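The hand-off in S1-4/S1-5 can be modeled as below. This is a minimal sketch: the 512-dimension embedding size is a typical ArcFace choice rather than a claim of the patent, and a deterministic stub stands in for the real model inside the AI service container.

```python
import hashlib
import math

EMBED_DIM = 512  # typical ArcFace embedding size (assumption, not from the patent)

def stub_arcface(picture_bytes: bytes) -> list:
    """Stand-in for the ArcFace model in the AI service container: derives a
    deterministic pseudo-embedding from the picture bytes, L2-normalized as
    ArcFace-style features usually are."""
    digest = hashlib.sha256(picture_bytes).digest()
    raw = [(digest[i % len(digest)] - 128) / 128.0 for i in range(EMBED_DIM)]
    norm = math.sqrt(sum(v * v for v in raw))
    return [v / norm for v in raw]

def extract_and_bind(person_id: str, picture_bytes: bytes, db: dict) -> dict:
    """S1-4/S1-5: extract the feature data and store the picture<->feature binding."""
    record = {"person_id": person_id,
              "features": stub_arcface(picture_bytes)}
    db.setdefault("bindings", []).append(record)
    return record

db = {}
rec = extract_and_bind("p001", b"fake-jpeg-bytes", db)
```

In a real deployment the back-end service would call the container's HTTP API instead of `stub_arcface`; the shape of the stored binding record stays the same.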
S1-5, binding the extracted facial image feature data to the corresponding person picture and storing both in the database.
S2, classifying all persons under labels, and establishing a binding relationship between the facial image feature data of each person picture and the corresponding label; then establishing a binding relationship between each label and a corresponding face recognition algorithm; and combining all facial image feature data bound to each algorithm into a corresponding face feature library, which is stored in the AI service container of that algorithm.
In this embodiment, step S2 includes the following sub-steps:
S2-1, classifying all persons under labels, establishing a binding relationship between the facial image feature data of each person picture and the corresponding label, and storing the relationship in a database.
In practice, a label (group) is set up first; it may represent an area or a department, for example a research and development department. Persons are classified under the labels (groups) so that each person is bound to the corresponding labels; one person may belong to multiple labels and one label may contain multiple persons, and these bindings are stored in a database. Then the facial image feature data of each person is bound to the corresponding labels.
S2-2, taking the label as the dimension, establishing a binding relationship between each label and the corresponding face recognition algorithms, and storing the relationship in a database.
In a specific implementation, one face recognition algorithm may bind multiple labels, and one label may bind multiple face recognition algorithms. Correspondingly, the same algorithm may hold multiple sets of facial image feature data, and the same feature data may be bound to multiple algorithms.
S2-3, gathering all facial image feature data under each face recognition algorithm and combining it into a feature matrix data file, thereby constructing a face feature library dedicated to each algorithm, which is stored in the AI service container of that algorithm.
In a specific implementation, each face recognition algorithm runs in its own AI service container; the feature matrix data file is stored alongside the container that deploys the algorithm and is provided to it by mapping a data volume into the container.
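A minimal sketch of S2-3 follows: resolve the person-to-label and label-to-algorithm bindings, stack every bound person's feature vector into one matrix, and write it to the file a data volume would map into the algorithm's container. The file name, JSON layout and two-dimensional toy vectors are assumptions for illustration.

```python
import json
from pathlib import Path

def build_feature_library(algorithm_id, algo_labels, label_persons,
                          person_features, out_dir):
    """S2-3: combine all feature data reachable from one face recognition
    algorithm into a feature matrix data file, and return its path."""
    rows, person_ids = [], []
    for label in algo_labels.get(algorithm_id, []):
        for pid in label_persons.get(label, []):
            if pid not in person_ids:      # same person may appear under several labels
                person_ids.append(pid)
                rows.append(person_features[pid])
    path = Path(out_dir) / f"{algorithm_id}_features.json"  # hypothetical file name
    path.write_text(json.dumps({"person_ids": person_ids, "matrix": rows}))
    return path

# Toy bindings: label<->algorithm and person<->label are both many-to-many.
algo_labels = {"algoA": ["rnd", "sales"]}
label_persons = {"rnd": ["p1", "p2"], "sales": ["p2", "p3"]}
person_features = {"p1": [0.1, 0.2], "p2": [0.3, 0.4], "p3": [0.5, 0.6]}
lib = build_feature_library("algoA", algo_labels, label_persons,
                            person_features, ".")
```

When persons or labels change, re-running this build and remounting the file updates the algorithm's library without touching the algorithm image itself.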
After steps S1-S2, each face recognition algorithm has its own face feature library. Because the library is kept outside the algorithm, when persons or labels change, developers need not re-release the algorithm; repeating steps S1-S2 is sufficient. Daily face recognition services also keep enriching the sample data of the library. In addition, the same facial image feature data can be bound to multiple face recognition algorithms, which improves their recognition accuracy.
As shown in FIG. 2, the face recognition stage includes the following steps:
S3, selecting a face recognition algorithm, acquiring a face recognition request, and storing the request data in a database.
In a specific implementation, step S3 includes the following sub-steps:
S3-1, selecting a specific face recognition algorithm before recognition.
S3-2, acquiring a face recognition request; when it reaches the back-end service of the application system, the request data (including the request parameters, the picture to be recognized, the requesting user's information, the requester's IP and the like) is first stored in a database to facilitate subsequent log auditing.
S3-3, acquiring the authentication information of the face recognition request and performing authentication verification to block invalid requests and attacks; if the verification fails, returning an authentication failure and ending the flow; if it passes, proceeding to the next step.
S4, acquiring the recognition identifier of the selected face recognition algorithm from a database, and judging through the identifier whether the algorithm has recognition capability; if not, directly reporting that recognition is unavailable and ending the flow; if so, proceeding to the next step.
It should be noted that all face recognition algorithms are managed in the database, including each algorithm's unique ID and whether it is currently enabled, so the unique identifier of the selected algorithm (i.e. its unique ID stored in the database) and its enabled state can be read from the database. Each algorithm also records whether it has recognition capability, so the recognition identifier indicates whether the corresponding algorithm exists, i.e. whether it can recognize. In particular, the flow proceeds to the next step only when the selected algorithm exists and is currently enabled.
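The S4 check can be sketched as below against an in-memory registry; the table and column names are assumptions, not from the patent.

```python
import sqlite3

def check_recognition_capability(conn, algorithm_id):
    """S4: look up the selected algorithm by its unique ID; return True only
    when it exists in the registry (has recognition capability) and its
    enabled flag is set."""
    row = conn.execute(
        "SELECT enabled FROM face_algorithms WHERE id = ?", (algorithm_id,)
    ).fetchone()
    return row is not None and bool(row[0])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE face_algorithms (id TEXT PRIMARY KEY, enabled INTEGER)")
conn.executemany("INSERT INTO face_algorithms VALUES (?, ?)",
                 [("algoA", 1), ("algoB", 0)])
```

A missing row and a disabled algorithm both fail the check, matching the flow's "report unavailable and end" branch.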
S5, according to the request parameters in the face recognition request, searching the face feature library through the selected face recognition algorithm, matching the picture to be recognized against the library data to obtain a matching result, and storing the result in a database.
In a specific implementation, when the selected face recognition algorithm is judged to be normal and to have recognition capability, step S5 includes the following sub-steps:
S5-1, first starting the AI service container of the selected face recognition algorithm through the K8S interface; after the container starts successfully, loading the corresponding recognition model and face feature library.
S5-2, the back-end service of the application system sends the request parameters (including annotated regions that assist recognition, and the like) and the picture to be recognized to the AI service container of the algorithm through its API interface; the recognition model searches the corresponding face feature library and matches the picture against the library data; after matching completes, the result is fed back to the AI service container.
S5-3, finally, the AI service container stores the matching result in a database as a recognition record and returns it to the upper-layer caller service.
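The patent does not spell out the search-and-match inside S5-2. As a hedged sketch, assuming L2-normalized embeddings and a cosine-similarity threshold (common choices for ArcFace-style features, not claims of the patent), nearest-neighbour matching against the feature matrix could look like this:

```python
def match_face(query, person_ids, matrix, threshold=0.5):
    """Return (person_id, score) of the best cosine match above the threshold,
    or (None, best_score) if nothing qualifies. Vectors are assumed
    L2-normalized, so the dot product equals cosine similarity."""
    best_id, best_score = None, -1.0
    for pid, row in zip(person_ids, matrix):
        score = sum(q * r for q, r in zip(query, row))
        if score > best_score:
            best_id, best_score = pid, score
    if best_score < threshold:
        return None, best_score
    return best_id, best_score

# Toy 2-D library (real embeddings would be e.g. 512-dimensional):
person_ids = ["p1", "p2"]
matrix = [[1.0, 0.0], [0.0, 1.0]]
hit = match_face([0.96, 0.28], person_ids, matrix)    # close to p1's vector
miss = match_face([0.3, -0.95], person_ids, matrix)   # nothing above threshold
```

The `(person_id, score)` pair corresponds to the matching result that S5-3 stores in the database and returns to the caller.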
In summary, the conventional face recognition method embeds the face feature library inside the face recognition algorithm, so the two are tightly coupled; whenever persons or labels change, the algorithm must be rebuilt, which increases developers' workload, and re-releasing the algorithm makes it hard to improve recognition accuracy. By contrast, the invention extracts the facial features out of the face recognition algorithm and dynamically matches them to it, so the features can be freely combined with algorithms; whatever changes occur, and however the accuracy requirements grow, developers never need to re-release the algorithm. Moreover, the same facial feature data can serve multiple face recognition algorithms, saving memory.
Example 2: the invention relates to a face recognition characteristic data dynamic matching system.
As shown in fig. 3, the embodiment provides a face recognition feature data dynamic matching system, which comprises a feature data module, a face feature library module, a recognition request module, a recognition capability module and a face recognition module.
The feature data module is used for acquiring all the personnel pictures and extracting the face image feature data of each personnel picture through a face image feature extraction algorithm; and then, binding the facial image characteristic data and the corresponding personnel picture, and storing the binding relation into a database.
The face feature library module is used for classifying all people into the labels and establishing a binding relationship between face image feature data of each person picture and the corresponding label; then, each label and a corresponding face recognition algorithm are established in a binding relation; and combining all face image feature data bound by each face recognition algorithm into a corresponding face feature library, and storing the face feature library into a corresponding face recognition algorithm AI service container.
The recognition request module is used for selecting a face recognition algorithm, acquiring a face recognition request and storing the face recognition request into a database.
The recognition capability module is used for acquiring, from the database, the recognition identifier of the selected face recognition algorithm, and judging through the recognition identifier whether the face recognition algorithm has recognition capability.
The face recognition module is used for, only when the selected face recognition algorithm has recognition capability and is currently in the started state, searching the corresponding face feature library through the face recognition algorithm according to the request parameters in the face recognition request, matching the picture to be recognized against the face feature library data to obtain a matching result, and storing the matching result in the database.
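The guarded recognition flow can be sketched as below: the capability and running-state checks gate the search, and matching against the feature library is shown here as cosine similarity with a threshold. The similarity metric, threshold value, and all names are assumptions for illustration, not the patent's specified matching method.

```python
# Sketch of the recognition flow: check capability and running state, then
# match the query feature vector against the algorithm's feature library.
import math
from typing import Dict, List, Optional, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(query: List[float],
              library: Dict[str, List[float]],
              has_capability: bool,
              is_running: bool,
              threshold: float = 0.9) -> Optional[Tuple[str, float]]:
    """Return (best_match_id, score), or None when the algorithm cannot run
    or no library entry clears the threshold."""
    if not (has_capability and is_running):
        return None
    name, feat = max(library.items(), key=lambda kv: cosine(query, kv[1]))
    score = cosine(query, feat)
    return (name, score) if score >= threshold else None

library = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
result = recognize([0.98, 0.05, 0.0], library, True, True)
```

The `None` returns correspond to the cases where the selected algorithm lacks recognition capability, is not started, or finds no sufficiently close match.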
In summary, the system of the invention decouples the face feature data from the face recognition algorithm and realizes free combination through relationship binding, so that the algorithm does not need to be rebuilt for personnel changes or feature data expansion, reducing the algorithm maintenance work of developers.
Example 3: The invention relates to a face recognition feature data dynamic matching device.
As shown in fig. 4, this embodiment provides a face recognition feature data dynamic matching device comprising a memory and a processor connected to each other; the memory stores a computer program, and the processor, when executing the computer program, implements the described face recognition feature data dynamic matching method.
The memory serves as the data storage component of the device, storing input data and processing results; a database or another distributed storage system can be used to meet the requirements of data persistence and high availability.
In a specific implementation, the processor includes a load balancer, an application system back-end service connected to the load balancer, a K8S interface connected to the back-end service, and a plurality of AI service containers each connected to the K8S interface.
The application system back-end service generally refers to the component or module in a software application that is responsible for processing data, logic, and service functions; it receives the face recognition request and sends the request parameters (including the labeling area for assisting recognition, etc.) and the picture to be recognized to the AI service container corresponding to the face recognition algorithm through an API interface.
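How the back-end might package such a request can be sketched as follows. The endpoint path, JSON field names, and container URL are illustrative assumptions, not the patent's actual API.

```python
# Sketch: the application back-end assembles the URL and payload it would
# POST to the AI service container of the selected face recognition algorithm.
import base64
from typing import Any, Dict

CONTAINER_ENDPOINTS = {     # algorithm -> AI service container base URL
    "algo_gate": "http://ai-gate.svc.cluster.local:8080",
}

def build_dispatch(algorithm: str, picture: bytes,
                   label_area: Dict[str, int]) -> Dict[str, Any]:
    """Assemble the request the back-end sends through the API interface:
    the picture to be recognized plus the labeling area assisting recognition."""
    return {
        "url": CONTAINER_ENDPOINTS[algorithm] + "/api/v1/recognize",
        "payload": {
            "image_b64": base64.b64encode(picture).decode("ascii"),
            "label_area": label_area,
        },
    }

dispatch = build_dispatch("algo_gate", b"jpeg-bytes",
                          {"x": 10, "y": 20, "w": 100, "h": 100})
```

An HTTP client would then POST `dispatch["payload"]` to `dispatch["url"]` and await the matching result.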
The load balancer is used for evenly distributing requests across the plurality of AI service containers to realize concurrent processing and horizontal expansion. To improve the performance and scalability of the device of the invention, the load balancer is introduced to distribute the requests.
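A minimal round-robin sketch of this request distribution, purely illustrative (the patent does not specify the balancing policy):

```python
# Sketch: spread incoming requests evenly across AI service container replicas.
import itertools
from typing import List

class RoundRobinBalancer:
    def __init__(self, replicas: List[str]):
        self._cycle = itertools.cycle(replicas)

    def pick(self) -> str:
        """Return the next replica, rotating through all of them so the load
        is spread for concurrent processing and horizontal expansion."""
        return next(self._cycle)

lb = RoundRobinBalancer(["container-a", "container-b", "container-c"])
picks = [lb.pick() for _ in range(6)]
```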
The K8S interface is used for communicating with the Kubernetes cluster to manage and schedule the AI service containers. Kubernetes (K8S for short) is a container orchestration and management tool that provides functionality for managing, scheduling, and deploying containerized applications.
A plurality of AI service containers are provided for different artificial intelligence services. These AI service containers may run a variety of algorithm models, such as face image feature extraction algorithms, face recognition algorithms, and the like. In this embodiment, the face image feature extraction algorithm uses one AI service container, and each face recognition algorithm uses one AI service container. Each AI service container provides an API interface for receiving input data and returning the corresponding processing result.
Moreover, the components of the device of the invention need network communication to transfer data and control information; common network protocols and technologies such as HTTP and TCP/IP can be used.
Furthermore, deploying the device of the invention requires the following aspects to be considered:
a. Deploying a Kubernetes cluster: a Kubernetes cluster comprising a Master node and a plurality of Worker nodes needs to be configured and deployed. A Kubernetes service provided by a public cloud can be chosen, or a cluster can be built in a private cloud or local environment.
b. Deploying an AI service container: each AI service container needs to be configured with the corresponding environment and dependencies, including algorithmic models, runtime environments, network interfaces, etc.
c. Configuration data storage: the appropriate data storage components are selected and configured to meet the storage and access requirements of the system for data.
d. Configuring network communication: ensuring normal communication among all components and setting correct network configuration and security policy.
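Step b above can be illustrated with a Kubernetes Deployment manifest for one AI service container, expressed here as a Python dict. The image name, port, and replica count are assumptions for the sketch, not values from the patent.

```python
# Sketch: build an apps/v1 Deployment manifest for one AI service container,
# suitable for submission to the K8S interface.
def ai_service_deployment(name: str, image: str, replicas: int = 2) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,  # horizontal expansion across Worker nodes
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": 8080}],  # the container's API interface
                }]},
            },
        },
    }

manifest = ai_service_deployment("face-recognition", "registry.example/face:1.0")
```

The same dict could be serialized to YAML and applied with `kubectl apply -f`, or submitted through a Kubernetes client library.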
Example 4: the invention provides a computer readable storage medium.
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the described method for dynamic matching of face recognition feature data.
Moreover, those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus Dynamic RAM (DRDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises that element.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (7)

1. A face recognition feature data dynamic matching method, characterized by comprising the following steps:
acquiring all person pictures, and extracting the face image feature data of each person picture through a face image feature extraction algorithm; then, establishing a binding relationship between the face image feature data and the corresponding person picture;
classifying all persons under labels, and establishing a binding relationship between the face image feature data of each person picture and the corresponding label; then, establishing a binding relationship between each label and the corresponding face recognition algorithm; and combining all face image feature data bound to each face recognition algorithm into the corresponding face feature library;
wherein, in classifying all persons under labels, the person-to-label binding relationship is one-to-many, and the label-to-person binding relationship is also one-to-many;
in the binding relationship between the face image feature data of each person picture and the corresponding label, the feature-data-to-label binding relationship is one-to-many, and the label-to-feature-data binding relationship is also one-to-many;
in the binding relationship between each label and the corresponding face recognition algorithm, the label-to-algorithm binding relationship is one-to-many, and the algorithm-to-label binding relationship is also one-to-many;
selecting a face recognition algorithm and acquiring a face recognition request;
acquiring the recognition identifier of the selected face recognition algorithm, and judging through the recognition identifier whether the face recognition algorithm has recognition capability; and
only when the selected face recognition algorithm has recognition capability and is currently in the started state, searching and matching the corresponding face feature library through the face recognition algorithm according to the face recognition request to obtain a matching result.
2. The method for dynamically matching face recognition feature data according to claim 1, wherein the step of obtaining all person pictures comprises:
acquiring authentication information and performing authentication verification;
if the verification is passed, acquiring person information and creating the person;
importing a person picture, and detecting whether a face image feature extraction algorithm exists; if yes, proceeding to the next step.
3. The method for dynamically matching face recognition feature data according to claim 1 or 2, wherein the face image feature extraction algorithm adopts an ARCFace model.
4. The method for dynamically matching face recognition feature data according to claim 1, wherein the obtaining the face recognition request comprises:
acquiring a face recognition request and storing the face recognition request in a database;
acquiring authentication information of a face recognition request, and performing authentication verification; if the verification is passed, the next step is entered.
5. The face recognition feature data dynamic matching system is characterized by comprising a feature data module, a face feature library module, a recognition request module, a recognition capability module and a face recognition module;
the feature data module is used for acquiring all the personnel pictures and extracting the face image feature data of each personnel picture through a face image feature extraction algorithm; then, binding relation is established between the face image characteristic data and the corresponding personnel picture;
the face feature library module is used for classifying all persons under labels and establishing a binding relationship between the face image feature data of each person picture and the corresponding label; then, establishing a binding relationship between each label and the corresponding face recognition algorithm; and combining all face image feature data bound to each face recognition algorithm into the corresponding face feature library;
wherein, in classifying all persons under labels, the person-to-label binding relationship is one-to-many, and the label-to-person binding relationship is also one-to-many;
in the binding relationship between the face image feature data of each person picture and the corresponding label, the feature-data-to-label binding relationship is one-to-many, and the label-to-feature-data binding relationship is also one-to-many;
in the binding relationship between each label and the corresponding face recognition algorithm, the label-to-algorithm binding relationship is one-to-many, and the algorithm-to-label binding relationship is also one-to-many;
the recognition request module is used for selecting a face recognition algorithm and acquiring a face recognition request;
the recognition capability module is used for acquiring the recognition identifier of the selected face recognition algorithm and judging through the recognition identifier whether the face recognition algorithm has recognition capability; and
the face recognition module is used for, only when the selected face recognition algorithm has recognition capability and is currently in the started state, searching and matching the corresponding face feature library according to the face recognition request to obtain a matching result.
6. A face recognition feature data dynamic matching device, comprising a memory and a processor; wherein the memory stores a computer program, and the processor implements the face recognition feature data dynamic matching method according to any one of claims 1 to 4 when executing the computer program.
7. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements a face recognition feature data dynamic matching method according to any one of claims 1 to 4.
CN202311728701.2A 2023-12-15 2023-12-15 Face recognition feature data dynamic matching method, system, device and medium Active CN117409470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311728701.2A CN117409470B (en) 2023-12-15 2023-12-15 Face recognition feature data dynamic matching method, system, device and medium


Publications (2)

Publication Number Publication Date
CN117409470A CN117409470A (en) 2024-01-16
CN117409470B true CN117409470B (en) 2024-03-15

Family

ID=89487547


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309744A (en) * 2019-06-21 2019-10-08 武汉市公安局视频侦查支队 A kind of suspect's recognition methods and device
CN111221803A (en) * 2019-12-27 2020-06-02 深圳云天励飞技术有限公司 Characteristic library management method and coprocessor
CN112052733A (en) * 2020-07-31 2020-12-08 中国建设银行股份有限公司 Database construction method, face recognition device and electronic equipment
CN112308031A (en) * 2020-11-25 2021-02-02 浙江大华系统工程有限公司 Universal face recognition and face feature information base generation method, device and equipment
CN112446317A (en) * 2020-11-23 2021-03-05 四川大学 Heterogeneous face recognition method and device based on feature decoupling
CN113971831A (en) * 2021-11-22 2022-01-25 武汉虹信技术服务有限责任公司 Dynamically updated face recognition method and device and electronic equipment
CN114694226A (en) * 2022-03-31 2022-07-01 北京瑞莱智慧科技有限公司 Face recognition method, system and storage medium
CN114764939A (en) * 2022-03-29 2022-07-19 中国科学院信息工程研究所 Heterogeneous face recognition method and system based on identity-attribute decoupling
WO2022179046A1 (en) * 2021-02-26 2022-09-01 深圳壹账通智能科技有限公司 Facial recognition method and apparatus, computer device, and storage medium
CN116311515A (en) * 2023-03-08 2023-06-23 虹软科技股份有限公司 Gesture recognition method, device, system and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107679451A (en) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 Establish the method, apparatus, equipment and computer-readable storage medium of human face recognition model


Non-Patent Citations (1)

Title
Ji Guohui. A Review of Key Technologies of Face Recognition. Science and Technology Innovation and Brand, 2019, (12): 73-74. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant