CN109271869B - Face feature value extraction method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN109271869B (application CN201810953164.4A)
- Authority
- CN
- China
- Prior art keywords
- extraction
- feature
- state
- face picture
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a face feature value extraction method and device, computer equipment and a storage medium. The method comprises the following steps: receiving an extraction request from a client, and modifying, in a sample database, the extraction state of each face picture whose feature value is to be extracted together with a feature extraction master switch; querying the extraction state of the face pictures and the feature extraction master switch in the sample database at preset time intervals; distributing the feature value extraction tasks to processing threads; transmitting the extraction-related information from the processing threads to a virtual computing server through middleware; and, after the virtual computing server finishes extracting the feature values, storing the feature values on a network storage platform while updating the extraction state of the face pictures in the sample database. The technical scheme of the invention makes full use of the servers' hardware resources and greatly increases the speed at which feature values are extracted from face pictures in batches.
Description
Technical Field
The present invention relates to the field of information processing, and in particular to a face feature value extraction method and device, a computer device, and a storage medium.
Background
One performance bottleneck of existing face recognition systems lies in the feature value extraction stage for face pictures. When a model is upgraded, or during peak request periods, feature values may need to be re-extracted for a large number of samples, or even all of them, on the order of millions.
At present, when a request for batch feature value extraction arrives, the common practice is to fall back to synchronous background processing or single-threaded asynchronous processing, and the batch extraction can only be carried out on one machine. Consequently, hardware resources are under-utilized, the extraction process is inefficient, and the extraction speed is low.
Disclosure of Invention
The embodiments of the invention provide a face feature value extraction method and device, computer equipment and a storage medium, to solve the problems of low server hardware utilization and low extraction efficiency when face feature values are extracted in batches.
A face feature value extraction method comprises the following steps:
receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, wherein the feature extraction task comprises identification information of each face picture of a feature value to be extracted;
setting a feature extraction main switch in a sample database to an on state, and setting the extraction state of each face picture whose feature value is to be extracted in the sample database to a ready extraction state according to the identification information;
inquiring the feature extraction master switch at intervals of preset time;
if the feature extraction main switch is in the on state, a processing thread is allocated to the feature extraction task, wherein the processing thread is used for transmitting a storage path of the face picture of the feature value to be extracted on a network storage platform to a virtual computing server through a middleware, and the virtual computing server is used for acquiring the face picture according to the storage path and extracting the feature value of the face picture;
receiving identification information, characteristic values and extraction states of the face pictures returned by the virtual computing server;
storing the characteristic value to a corresponding position of the face picture identified by the identification information in the network storage platform, and updating the extraction state of the face picture identified by the identification information in the sample database to be an extraction completion state;
and if the extraction state of each face picture of the feature value to be extracted is the extraction completion state, setting the feature extraction main switch to be in a closed state.
A face feature value extraction device includes:
a receiving extraction request module, configured to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, wherein the feature extraction task comprises identification information of each face picture whose feature value is to be extracted;
the setting state module is used for setting a feature extraction main switch in a sample database to be in an on state, and setting the extraction state of each face picture of a feature value to be extracted in the sample database to be in a ready extraction state according to the identification information;
the inquiry module is used for inquiring the feature extraction main switch at preset time intervals;
the distribution thread module is used for distributing a processing thread to the feature extraction task if the feature extraction main switch is in the on state, wherein the processing thread is used for transmitting a storage path of the face picture of the feature value to be extracted on a network storage platform to a virtual computing server through a middleware, and the virtual computing server is used for acquiring the face picture according to the storage path and extracting the feature value of the face picture;
the receiving return value module is used for receiving the identification information, the characteristic value and the extraction state of the face picture returned by the virtual computing server;
The first updating module is used for storing the characteristic value to the corresponding position of the face picture identified by the identification information in the network storage platform, and updating the extraction state of the face picture identified by the identification information in the sample database into an extraction completion state;
and the second updating module is used for setting the feature extraction master switch to a closed state if the extraction state of each face picture whose feature value is to be extracted is the extraction completion state.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the face feature value extraction method described above when the computer program is executed.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the face feature value extraction method described above.
According to the above face feature value extraction method and device, computer equipment and storage medium, a feature extraction task is determined from the extraction request sent by the client, the feature extraction master switch in the sample database is set to the on state, and the extraction state of each face picture whose feature value is to be extracted is set to the ready extraction state according to the identification information in the feature extraction task. A timed task queries the state of the feature extraction master switch at preset intervals, and if the master switch is in the on state, processing threads are allocated to the feature extraction task, so that the virtual computing servers can compute feature values for many face pictures simultaneously. Because the extraction state of each face picture is recorded in the sample database, no processing thread re-extracts a picture that has already been processed, and no unprocessed picture is missed while the virtual computing servers work in parallel. At the same time, the hardware resources of the servers are fully utilized, which greatly increases the speed of extracting feature values from face pictures in batches and thus the efficiency of the whole extraction process.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a face feature value extraction method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a face feature value extraction method according to an embodiment of the present invention;
FIG. 3 is a flowchart of step S4 in a face feature value extraction method according to an embodiment of the present invention;
fig. 4 is a flowchart of updating the extraction state of the face picture identified by the identification information in the sample database to the extraction completion state in step S6 of the face feature value extraction method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a face feature value extraction system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a face feature value extraction device according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The face feature value extraction method provided by the application can be applied to the application environment shown in FIG. 1, in which a plurality of physical servers form a server cluster through a network, and each server in the cluster is simultaneously connected to a sample database and a network storage platform through the network. Each server in the cluster is provided with a virtual communication server, a virtual computing server and middleware, and the virtual communication server communicates with the virtual computing server through the middleware. The network may be wired or wireless. After a client initiates a face feature value extraction request, the virtual communication server and the virtual computing server on each physical server cooperate to complete the face feature value extraction task jointly. The face feature value extraction method provided by the embodiments of the application runs on the virtual communication server.
In an embodiment, as shown in fig. 2, a face feature value extraction method is provided, and the implementation flow includes the following steps:
s1: and receiving an extraction request sent by the client, and determining a feature extraction task according to the extraction request, wherein the feature extraction task comprises identification information of each face picture of the feature value to be extracted.
The clients that initiate extraction requests belong to different registered users: enterprises or organizations that have registered with the face recognition system to use its face recognition services, collectively referred to as business parties. When a business party registers with the face recognition system, it must submit the legal face pictures that need to be recognized within the enterprise or organization to the network storage platform of the face recognition system, and store the relevant attribute information of those pictures in the sample database. For example, if a company wants to use the face recognition system for employee attendance check-in, the company must submit the legal face pictures of all its employees as samples. These legal face pictures are submitted to the network storage platform of the face recognition system, and their relevant attribute information is stored in the sample database, where the relevant attribute information includes but is not limited to: business party identification information, the identification information of each face picture, the storage path of each face picture on the network storage platform, the face picture file size, and so on.
The business party identification information may be the business party's id (identification) number, and the identification information of a face picture may be the picture's id number.
When a client initiates an extraction request, the virtual communication server can tell which business party the request comes from according to the preset business party identification information, and thereby locate the identification information of each face picture whose feature value is to be extracted in the sample database.
For example, the client submits the extraction request as a form through a web page that carries the business party's id number. The virtual communication server then queries the sample database, by that id number, for the identification information of each of that party's face pictures whose feature values are to be extracted.
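The patent gives no code for this lookup; as a minimal illustrative sketch only, the query in step S1 could be issued over JDBC roughly as follows, where the table name face_sample and the columns party_id and pic_id are assumptions, not taken from the patent:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ExtractionRequestHandler {

    private final Connection conn; // connection to the sample database

    public ExtractionRequestHandler(Connection conn) {
        this.conn = conn;
    }

    /** Resolve a business party id to the identification information of
     *  every face picture whose feature value is to be extracted. */
    public List<String> determineFeatureExtractionTask(String partyId) throws SQLException {
        List<String> picIds = new ArrayList<>();
        String sql = "SELECT pic_id FROM face_sample WHERE party_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, partyId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    picIds.add(rs.getString("pic_id"));
                }
            }
        }
        return picIds;
    }
}
```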
S2: and setting a feature extraction main switch in the sample database as an on state, and setting the extraction state of each face picture of the feature values to be extracted in the sample database as a ready extraction state according to the identification information of each face picture.
A feature extraction master switch table is stored in the sample database, and it records the state of each business party's feature extraction master switch. Once an extraction request reaches the virtual communication server, the virtual communication server sets that business party's feature extraction master switch to the on state, indicating that the business party's face feature values are being extracted; when the feature values of all the face pictures have been extracted, the feature extraction master switch is set to the off state.
Meanwhile, a face picture extraction state table is also stored in the sample database; it records the extraction state of each face picture in an extraction state field. When an extraction request reaches the virtual communication server, the virtual communication server finds the corresponding face picture in the face picture extraction state table according to the picture's identification information and sets the extraction state in that record to "ready extraction"; when the virtual communication server receives the identification information and extraction state of a face picture returned by the virtual computing server, it finds the corresponding face picture in the table in the same way and sets the extraction state in that record to "extraction completed".
Specifically, after receiving the extraction request in step S1, the feature extraction master switch belonging to the service party in the feature extraction master switch table is set to an on state according to the service party identification information, and the feature value extraction state of the face picture of each feature value to be extracted in the face picture extraction state table is set to a ready extraction state according to the identification information of the face picture of each feature value to be extracted.
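A minimal sketch of step S2 under the same assumed schema (a feature_switch table, a face_sample table, and literal state strings are illustrative assumptions):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ExtractionStateWriter {

    private final Connection conn;

    public ExtractionStateWriter(Connection conn) {
        this.conn = conn;
    }

    /** Set the party's master switch to the on state and mark every
     *  picture of the task as ready for extraction. */
    public void markTaskReady(String partyId, List<String> picIds) throws SQLException {
        try (PreparedStatement sw = conn.prepareStatement(
                "UPDATE feature_switch SET master_switch = 'ON' WHERE party_id = ?")) {
            sw.setString(1, partyId);
            sw.executeUpdate();
        }
        try (PreparedStatement st = conn.prepareStatement(
                "UPDATE face_sample SET extract_state = 'READY' WHERE pic_id = ?")) {
            for (String picId : picIds) {
                st.setString(1, picId);
                st.addBatch();
            }
            st.executeBatch(); // one round trip for the whole task
        }
    }
}
```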
S3: and inquiring the feature extraction master switch at preset time intervals.
Specifically, the virtual communication server uses a Java server with the Spring framework deployed on it. The Spring framework is an open-source, lightweight Java development framework designed as a layered architecture, which keeps the business logic layer loosely coupled from the other layers. For a large Web application system, Spring is usually deployed on the server first and then extended through secondary development according to the requirements of the actual application. Making full use of the existing interfaces of the Spring framework avoids simple, repetitive development work and allows an application suited to one's own business needs to be developed rapidly.
A timer task is started on top of the Spring framework; it periodically queries the state of the feature extraction master switch in the feature extraction master switch table of the sample database at preset time intervals.
Preferably, the timer task may query the status of the feature extraction master switch every 1 second.
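On a Spring deployment, this 1-second polling maps naturally onto a scheduled task. A minimal sketch only (the SwitchDao interface is hypothetical, and scheduling must be enabled with @EnableScheduling in the application configuration):

```java
import java.util.List;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MasterSwitchPoller {

    /** Hypothetical data-access interface: ids of business parties whose
     *  feature extraction master switch is currently in the on state. */
    public interface SwitchDao {
        List<String> partiesWithSwitchOn();
    }

    private final SwitchDao switchDao;

    public MasterSwitchPoller(SwitchDao switchDao) {
        this.switchDao = switchDao;
    }

    /** Runs every 1000 ms, matching the preferred 1-second interval. */
    @Scheduled(fixedRate = 1000)
    public void pollMasterSwitch() {
        for (String partyId : switchDao.partiesWithSwitchOn()) {
            // a switch in the on state means a pending extraction task;
            // step S4 would allocate processing threads for it here
            System.out.println("feature extraction pending for party " + partyId);
        }
    }
}
```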
S4: if the feature extraction main switch is in an on state, a processing thread is allocated to the feature extraction task, wherein the processing thread is used for transmitting the storage path, on the network storage platform, of a face picture whose feature value is to be extracted to a virtual computing server through middleware, and the virtual computing server is used for acquiring the face picture according to the storage path and extracting the feature value of the face picture.
Specifically, when the timer task finds that the feature extraction master switch is in the on state, this indicates that a client has initiated an extraction task, and the callback function of the timer task starts multiple processing threads to execute it. Each processing thread transmits the storage path, on the network storage platform, of a face picture whose feature value is to be extracted to the virtual computing server through the middleware.
The virtual computing server is responsible for extracting the feature values of the face pictures: according to the storage path transmitted by the processing thread, it fetches the face picture from the network storage platform and performs the feature value extraction.
Owing to the difference in run-time efficiency between the C++ language and the Java language, a C++ server running the same extraction algorithm is much faster than a Java server, so the virtual computing server is preferably a C++ server.
Middleware is independent system software or a service program that connects two otherwise independent applications or systems. Connected systems, even with different interfaces, can exchange information with each other through middleware, so one key use of middleware is information transfer between heterogeneous systems.
The virtual communication server and the virtual computing server are deployed on the same physical host, and the middleware is used for communication between the virtual communication server and the virtual computing server, namely the virtual communication server transmits a storage path required by feature value extraction to the virtual computing server through the middleware, and the virtual computing server returns an extraction result to the virtual communication server through the middleware.
Preferably, ZeroMQ is used as the middleware. ZeroMQ is a message-queue-based, multi-threaded network library that abstracts away the underlying details of socket types, connection handling, framing and even routing, providing sockets across multiple transport protocols. ZeroMQ is a new layer in network communication that sits between the application layer and the transport layer of the TCP/IP (Transmission Control Protocol/Internet Protocol) stack; it is a scalable layer that can run in parallel and be dispersed across distributed systems. ZeroMQ provides a framework-style socket library that makes socket programming simple, concise and more powerful, and it is widely applied to network communication.
In general, it is troublesome for a JAVA server and a C++ server deployed on the same physical host to carry out multithreaded communication: because the two servers are heterogeneous, communication between them would normally require a dedicated API interface, and because of the complexity of multithreading, developing such a dedicated API interface is time-consuming and error-prone. Therefore, the networking strengths of ZeroMQ are exploited by using ZeroMQ as the middleware between the JAVA server and the C++ server in place of a traditional API-based approach, which improves development efficiency and reduces the risk of errors while conceding nothing in performance to the traditional scheme.
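A sketch of the middleware hop, assuming a ZeroMQ REQ/REP socket pair over loopback TCP with the JeroMQ binding on the Java side; the endpoint, path and message layout are illustrative assumptions:

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class FeaturePathSender {

    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket socket = ctx.createSocket(SocketType.REQ);
            // the communication server and the compute server share one
            // physical host, so a loopback endpoint suffices
            socket.connect("tcp://127.0.0.1:5555");

            // storage path of one face picture on the network storage platform
            String storagePath = "/nas/party01/pic123.jpg"; // illustrative
            socket.send(storagePath.getBytes(ZMQ.CHARSET), 0);

            // the reply is assumed to carry the picture id, the feature
            // value and the extraction state produced by the C++ server
            byte[] reply = socket.recv(0);
            System.out.println("compute server replied " + reply.length + " bytes");
        }
    }
}
```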
S5: and receiving the identification information, the characteristic value and the extraction state of the face picture returned by the virtual computing server.
Specifically, each processing thread triggers an extraction program on the virtual computing server. The extraction program takes the received storage path of the face picture on the network storage platform as its input parameter and returns the resulting data to the virtual communication server through the middleware. The result data comprise the identification information, feature value and extraction state of the face picture, and the virtual communication server receives these result data.
S6: and storing the characteristic value to a corresponding position of the face picture identified by the identification information in the network storage platform, and updating the extraction state of the face picture identified by the identification information in the sample database to be an extraction completion state.
Besides the face picture files, the network storage platform also stores the feature values obtained after extraction. The feature values are stored in the form of feature value files.
The virtual communication server uses the identification information of a face picture as the directory name and stores the received feature value on the network storage platform by writing it to a disk file. Meanwhile, the virtual communication server updates the extraction state of the face picture in the sample database.
Specifically, the virtual communication server updates the face picture extraction state table in the sample database through JDBC, setting the extraction state of the face picture corresponding to the identification information to "extraction completed". JDBC (Java DataBase Connectivity) is a Java API for executing SQL statements that provides unified access to a variety of relational databases; it consists of a set of classes and interfaces written in the Java language. JDBC provides a standard on which higher-level tools and interfaces can be built, enabling database developers to write database applications. An interface program written through JDBC works with different databases, so no database-specific interface program needs to be written, which greatly improves development efficiency.
S7: if the extraction state of each face picture of the feature value to be extracted is the extraction completion state, setting the feature extraction main switch to be in a closed state.
In one extraction task, the number of face pictures whose feature values are to be extracted is determined in step S1. Each time the virtual communication server modifies the extraction state of a face picture, it increments the count of extracted face pictures by one and stores the accumulated value in a global variable. When the value of the global variable equals the number of face pictures whose feature values are to be extracted, it is determined that the extraction task has been fully executed, and the feature extraction main switch is set to the off state.
Specifically, a timer task in the virtual communication server that performs the update checks every 0.5 seconds whether the value of the global variable equals the number of face pictures whose feature values are to be extracted; if it does, the feature extraction master switch in the feature extraction master switch table of the sample database is set to the closed state through JDBC.
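The "global variable" of step S7 can be held in an AtomicInteger so that concurrent processing threads may increment it safely; a sketch under that assumption, with all database access omitted:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CompletionTracker {

    private final int totalToExtract;                  // fixed in step S1
    private final AtomicInteger extracted = new AtomicInteger();

    public CompletionTracker(int totalToExtract) {
        this.totalToExtract = totalToExtract;
    }

    /** Called once each time a face picture's state is set to completed. */
    public void onPictureExtracted() {
        extracted.incrementAndGet();
    }

    /** Polled by the 0.5-second update timer; when this returns true the
     *  master switch may be set to the closed state through JDBC. */
    public boolean allDone() {
        return extracted.get() >= totalToExtract;
    }
}
```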
In this embodiment, a feature extraction task is determined according to the extraction request received from the client, the feature extraction main switch in the sample database is set to the on state, and the extraction state of each face picture whose feature value is to be extracted is set to the ready extraction state according to the identification information in the feature extraction task. A timed task queries the state of the feature extraction master switch at preset intervals, and if the switch is on, processing threads are allocated to the feature extraction task, so that the virtual computing servers can compute feature values for many face pictures simultaneously. Because the extraction state of every face picture is recorded in the sample database, no processing thread re-extracts a picture that has already been processed, and no unprocessed picture is missed while the virtual computing servers work in parallel. At the same time, the hardware resources of the servers are fully utilized, which greatly increases the speed of extracting feature values from face pictures in batches and thus the efficiency of the whole extraction process.
Further, in an embodiment, as shown in fig. 3, in step S4, if the feature extraction master switch is in an on state, a processing thread is allocated to the feature extraction task, which specifically includes the following steps:
S41: if the feature extraction main switch is in the on state, CPU configuration information is detected.
The CPU configuration information mainly comprises the number of threads of the local CPU. When the feature extraction master switch is in an on state, if a CPU with 4 cores and 8 threads is detected, the number of threads of the CPU is 8. Similarly, each virtual communication server detects the number of CPU threads of the local host.
Specifically, the virtual communication server may detect the configuration information of the CPU by calling a system interface. For example, on a Linux system the command grep 'processor' /proc/cpuinfo | sort -u | wc -l can be used to detect the CPU configuration information.
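On the Java side the same number can be read without shelling out; a one-line sketch of step S41:

```java
public class CpuProbe {

    public static void main(String[] args) {
        // logical processor count: 8 on the 4-core/8-thread CPU above
        int threads = Runtime.getRuntime().availableProcessors();
        System.out.println("local CPU thread count: " + threads);
    }
}
```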
S42: and determining the number of threads according to the CPU configuration information, and starting M processing threads, wherein M is the number of threads.
The number of threads of the CPU is consistent with the number of processing threads to be started by the virtual communication server. If the number of CPU threads detected in step S41 is 8, the virtual communication server starts 8 processing threads. Similarly, each virtual communication server starts a corresponding number of processing threads according to the detected CPU threads of the local host.
Specifically, the virtual communication server starts the processing threads with the number of CPU threads detected in step S41 as an input parameter. If the number of detected CPU threads is 8, the virtual communication server calls the new method to create and start instances of 8 thread objects.
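Rather than raw Thread instances, an ExecutorService sized to the detected thread count is the idiomatic way to start the M processing threads; a sketch of step S42 with a placeholder worker body:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProcessingThreadStarter {

    public static void main(String[] args) {
        int m = Runtime.getRuntime().availableProcessors(); // from step S41
        ExecutorService pool = Executors.newFixedThreadPool(m);
        for (int i = 0; i < m; i++) {
            final int workerId = i;
            pool.submit(() -> {
                // a real worker would lock a batch of records (step S43) and
                // push their storage paths through the middleware (step S4)
                System.out.println("processing thread " + workerId + " started");
            });
        }
        pool.shutdown();
    }
}
```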
S43: and distributing target extraction task numbers capable of locking processing for each processing thread according to the preset average extraction time of the characteristic values of the single face picture.
The processing threads on each server in the cluster acquire face pictures whose feature values are to be extracted from the network storage platform, and read and update the corresponding records in the sample database. To prevent the threads from contending for resources in a disorderly way and from extracting the same picture repeatedly, each processing thread must be allocated a number of target extraction tasks that it can lock for processing. This number is determined from the preset average extraction time of a single face picture's feature value.
For example, suppose a thread takes on average about 500 milliseconds to extract the feature value of one face picture. If each thread locks only one record at a time, it will access the database very frequently; for 1 million records this means 1 million database accesses, which wastes resources and causes excessive wear on the hardware. If the number of records each thread locks is too large, e.g. 100,000, then, since each thread's working capacity is limited, processing is not timely, waiting times are long, and efficiency is low.
If the time a server in the cluster needs to extract one face picture feature value is less than 500 milliseconds and the performance of that server's local CPU is comparatively good, 1500 lockable target extraction tasks are allocated to each of its processing threads; if the time is greater than or equal to 500 milliseconds and the local CPU performance is comparatively poor, 1000 lockable target extraction tasks are allocated to each of its processing threads.
The locking process may specifically be as follows: suppose each processing thread is set to lock 1500 records. For 1 million records to be extracted, each processing thread locks 1500 records in the sample database and marks those records as belonging to its own processing, for example by setting the lock state of the face pictures concerned to "locked". The same record can be locked by only one processing thread. When a processing thread finishes processing its 1500 records, it locks a new, unprocessed batch of 1500 records from the sample database, until all records to be extracted have been processed.
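One way to realize such locking is a conditional UPDATE that stamps unclaimed rows with the thread's id, so a record can only ever be claimed once. This sketch assumes a MySQL-style LIMIT on UPDATE and the hypothetical columns lock_state and locker:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchLocker {

    private final Connection conn;

    public BatchLocker(Connection conn) {
        this.conn = conn;
    }

    /** Claim up to batchSize ready, unlocked records for one thread. The
     *  UPDATE only matches rows that are still unlocked, so two threads
     *  can never lock the same record. Returns the number claimed. */
    public int lockBatch(String threadId, int batchSize) throws SQLException {
        String sql = "UPDATE face_sample SET lock_state = 'LOCKED', locker = ? "
                   + "WHERE extract_state = 'READY' AND lock_state IS NULL "
                   + "LIMIT ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, threadId);
            ps.setInt(2, batchSize); // e.g. 1500 or 1000 as described above
            return ps.executeUpdate();
        }
    }
}
```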
In this embodiment, a number of processing threads matching the physical thread count of the local CPU is allocated to the extraction task, and the number of target extraction tasks each processing thread can lock for processing is determined from the average time each thread takes to execute an extraction task. In this way the feature value of a face picture is never extracted twice when multiple processing threads execute the extraction task, and confusion in the extraction process caused by competition for resources is avoided.
Further, in an embodiment, as shown in fig. 4, in step S6, the extraction state of the face picture identified by the identification information in the sample database is updated to the extraction completion state, which specifically includes the following steps:
S61: correspondingly storing the identification information and extraction state of the face picture returned by the virtual computing server in a cache queue.
The identification information and extraction states of the face pictures returned by the virtual computing server are cached: the identification information of each face picture and its corresponding extraction state are stored as a pair of data in the cache queue.
Preferably, the identification information and the corresponding extraction state of each face picture are stored in the form of key value pairs, in particular in JSON format. Among them, JSON (JavaScript Object Notation, JS object profile) is a lightweight data exchange format.
S62: if the length of the cache queue reaches a preset length threshold, updating the extraction state of the face picture identified by the identification information in the sample database to the extraction state corresponding to the identification information in the cache queue according to the identification information stored in the cache queue.
Specifically, the length of the cache queue may be determined according to the allocated array size. Preferably, the preset length threshold is set to 100, that is, when the number of face picture records after extraction reaches 100, the virtual communication server updates the extraction state of the corresponding record in the sample database to the extraction completion state.
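A sketch of the buffered write-back in steps S61 and S62, under the same assumed face_sample schema: results are queued as (picture id, state) pairs and flushed as one JDBC batch when the queue reaches the 100-record threshold:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

public class ResultBuffer {

    private static final int FLUSH_THRESHOLD = 100; // preferred length threshold

    private final Deque<String[]> queue = new ArrayDeque<>();
    private final Connection conn;

    public ResultBuffer(Connection conn) {
        this.conn = conn;
    }

    /** Buffer one (picId, state) result; flush when the threshold is hit. */
    public synchronized void offerResult(String picId, String state) throws SQLException {
        queue.addLast(new String[] { picId, state });
        if (queue.size() >= FLUSH_THRESHOLD) {
            flush();
        }
    }

    /** Write all buffered states to the sample database in one batch. */
    private void flush() throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE face_sample SET extract_state = ? WHERE pic_id = ?")) {
            String[] pair;
            while ((pair = queue.pollFirst()) != null) {
                ps.setString(1, pair[1]);
                ps.setString(2, pair[0]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```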
In this embodiment, after the feature value is extracted, the virtual computing server returns the identification information and the extraction state of the face picture to the virtual communication server through the middleware. If the virtual communication server updates the data results into the sample database in real time, frequent access to the sample database causes resource waste and excessive hardware loss, so that the data returned by the virtual computing server is updated into the sample database in batches by using a caching method, thereby saving network resources and avoiding excessive hardware loss.
Further, in an embodiment, according to a preset average extraction time of a single face picture feature value, the target extraction task number capable of locking processing is allocated to each processing thread, including allocating the target extraction task number capable of locking processing to each processing thread according to the following formula:
t*N<T
wherein t is the preset average extraction time of a single face picture's feature value, N is the target extraction task number, and T is the preset total extraction time for the batch of face picture feature values. That is, t is the average extraction time of a single face picture's feature value preset in step S43.
Preferably, t is 500 milliseconds and T is 10 minutes, and the target extraction task number N that each processing thread can be allocated to lock for processing is 1200. That is, every 10 minutes the virtual communication server starts processing threads that lock a batch of records of face pictures whose feature values are to be extracted in the sample database and extract their feature values, each thread being able to lock 1200 target extraction tasks.
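A quick arithmetic check of the bound with the preferred values (t = 500 ms, T = 10 minutes):

```java
public class TaskNumberCheck {

    public static void main(String[] args) {
        long t = 500;                 // ms per face picture (preset average)
        long T = 10L * 60 * 1000;     // 10-minute batch budget in ms
        long n = T / t;               // largest N satisfying t * N <= T
        System.out.println("target extraction task number N = " + n); // 1200
    }
}
```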
It should be noted that the preset total extraction time T for the batch of face picture feature values can be freely configured according to project requirements. For example, during peak request periods, the response speed of batch sample feature extraction needs to be as fast as possible, so 10 minutes or less can be chosen; conversely, when response timeliness is less demanding, more than 10 minutes can be chosen. If, after 10 minutes, there are still face pictures whose feature values have not been extracted, they are treated as missed pictures, and the virtual communication server restarts processing threads to lock their data records until the feature values of all face pictures have been extracted.
In this embodiment, the number of target extraction tasks that can be locked and processed by each thread is calculated according to the above formula, so that the number of target extraction tasks that can be locked and processed by each processing thread is within a reasonable range, and the efficiency and speed are both considered in the whole process of extracting the feature value.
Further, in an embodiment, after updating the extraction state of the face picture identified by the identification information in the sample database to the extraction completion state, the face feature value extraction method further includes:
s8: when the actual extraction time of the feature values of the face pictures exceeds the preset total extraction time of the feature values of the face pictures in batches, if the extraction state of any face picture of the feature values to be extracted is not the extraction completion state, determining that the feature extraction task corresponding to the face picture is not completed, and reassigning a processing thread for the feature extraction task.
Multithreaded processing of the extraction task may encounter various unexpected problems, such as network delays or an unresponsive server, so an error correction mechanism is required to ensure that the extraction task is eventually executed successfully.
Specifically, the virtual communication server starts timing the actual extraction of the face picture feature values from the moment the processing threads are allocated, with the preset total extraction time of the batch set to 15 minutes. When the actual extraction time reaches 15 minutes, the virtual communication server queries the face picture extraction state table in the sample database through JDBC; for every face picture whose extraction state in the table is not the completion state, it determines that the corresponding feature extraction task has not been completed. For these unfinished extraction tasks, the virtual communication server reallocates processing threads and restarts the timing, until all extraction tasks have been executed.
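A sketch of this error-correction pass: after the time budget elapses, every picture still short of the completion state is handed to a fresh processing thread. The SampleDao interface and the thread body are hypothetical:

```java
import java.util.List;

public class ErrorCorrectionPass {

    /** Hypothetical data-access interface: ids of pictures whose extraction
     *  state is still not the completion state when the budget elapses. */
    public interface SampleDao {
        List<String> unfinishedPictures(String partyId);
    }

    private final SampleDao dao;

    public ErrorCorrectionPass(SampleDao dao) {
        this.dao = dao;
    }

    /** Reallocate a processing thread for every unfinished extraction
     *  task; the caller restarts the timing afterwards. */
    public void correct(String partyId) {
        for (String picId : dao.unfinishedPictures(partyId)) {
            new Thread(() -> System.out.println("re-extracting " + picId)).start();
        }
    }
}
```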
In this embodiment, in order to prevent the occurrence of unexpected situations in the execution process of the multithreading, an error correction mechanism is introduced to ensure that each extraction task can be executed.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not in any way limit the implementation of the embodiments of the present invention.
In one embodiment, as shown in fig. 5, a face feature value extraction system is provided, where the system includes a network storage platform, a sample database, and a server cluster, where the server cluster is composed of a plurality of physical servers, and each physical server includes a virtual computing server, a virtual communication server, and middleware;
each physical server in the server cluster is connected with the network storage platform through a network; each physical server in the server cluster is connected with the sample database through a network; the virtual computing server is connected with the virtual communication server through middleware;
the virtual communication server is used for realizing the steps in the face feature value extraction method embodiment. Preferably, the virtual communication server consists of a Java server, and a Spring framework is deployed on the Java server;
The virtual computing server is used for acquiring the face picture according to the storage path of the face picture of the feature value to be extracted on the network storage platform, extracting the feature value of the face picture and returning the feature value of the face picture to the virtual communication server. Preferably, the virtual computing server consists of a C++ server;
the network storage platform is used for storing the face pictures whose feature values are to be extracted and the feature values of those face pictures. Preferably, the network storage platform is a NAS (Network Attached Storage) system; a NAS system transfers data over standard network protocols and provides file sharing and data backup for computers running various operating systems such as Windows, Linux and Mac OS;
and the sample database is used for storing the feature extraction master switch and the extraction state of each face picture. The sample database uses a relational database, including but not limited to MS-SQL, Oracle, MySQL, Sybase, DB2, etc.;
and the middleware is used for communication between the virtual computing server and the virtual communication server. Preferably, the middleware uses ZeroMQ.
In an embodiment, a face feature value extraction device is provided, where the face feature value extraction device corresponds to the face feature value extraction method in the above embodiment one by one. As shown in fig. 6, the face feature value extracting apparatus includes a receiving extraction request module 61, a setting status module 62, a query module 63, an allocation thread module 64, a receiving return value module 65, a first update module 66, and a second update module 67. The functional modules are described in detail as follows:
The receiving and extracting request module 61 is configured to receive an extracting request sent by a client, and determine a feature extracting task according to the extracting request, where the feature extracting task includes identification information of each face picture of a feature value to be extracted;
a setting state module 62, configured to set a feature extraction master switch in the sample database to an on state, and set an extraction state of each face picture of a feature value to be extracted in the sample database to a ready extraction state according to the identification information;
the query module 63 is configured to query the feature extraction master switch at preset time intervals;
the distribution thread module 64 is configured to distribute a processing thread to the feature extraction task if the feature extraction main switch is in an on state, where the processing thread is configured to transmit a storage path of a face picture of a feature value to be extracted on the network storage platform to the virtual computing server through the middleware, and the virtual computing server is configured to obtain the face picture according to the storage path and extract the feature value of the face picture;
the receiving return value module 65 is configured to receive identification information, a feature value and an extraction state of a face picture returned by the virtual computing server;
the first updating module 66 is configured to store the feature value in a corresponding position of the face picture identified by the identification information in the network storage platform, and update an extraction state of the face picture identified by the identification information in the sample database to an extraction completion state;
The second updating module 67 is configured to set the feature extraction main switch to an off state if the extraction state of each face picture of the feature value to be extracted is an extraction completion state.
Further, the allocation thread module 64 also includes:
a detection submodule 641, configured to detect CPU configuration information if the feature extraction main switch is in an on state;
the thread start sub-module 642 is configured to determine the number of threads according to the CPU configuration information, and start M processing threads, where M is the number of threads;
the assignment locking sub-module 643 is configured to assign, to each processing thread, a target extraction task number capable of locking processing according to a preset average extraction time of a single face picture feature value.
Further, the first update module 66 includes:
the buffer submodule 661 is configured to correspondingly store the identification information and extraction state of the face picture returned by the virtual computing server in a cache queue;
the synchronization submodule 662 is configured to, if the length of the cache queue reaches the preset length threshold, update the extraction state of the face picture identified by the identification information in the sample database to the extraction state corresponding to that identification information in the cache queue, according to the identification information stored in the cache queue.
Further, the allocation lock sub-module 643 includes:
allocation lock subunit 6431: for assigning each processing thread a target fetch task number that can lock processing according to the following formula:
t*N<T
wherein T is the average extraction time of the preset single face picture characteristic values, N is the target extraction task number, and T is the total extraction time of the preset batch face picture characteristic values.
Further, the face feature value extraction device further includes:
thread allocation reset module 68: when the actual extraction time of the feature values of the face pictures exceeds the preset total extraction time of the feature values of the face pictures in batches, if the extraction state of any face picture of the feature values to be extracted is not the extraction completion state, determining that the feature extraction task corresponding to the face picture is not completed, and reassigning a processing thread for the feature extraction task.
For specific limitations of the face feature value extraction device, reference may be made to the above limitations of the face feature value extraction method, and details thereof will not be repeated here. The modules in the face feature value extraction device can be all or partially realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a face feature value extraction method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor executes the computer program to implement steps of the face feature value extraction method in the foregoing embodiment, such as steps S1 to S7 shown in fig. 2. Alternatively, the processor may implement the functions of the modules/units of the face feature value extraction apparatus in the above embodiment, such as the functions of the modules 61 to 67 shown in fig. 6, when executing the computer program. In order to avoid repetition, a description thereof is omitted.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program when executed by a processor implements the face feature value extraction method in the foregoing method embodiment, or where the computer program when executed by the processor implements the functions of each module/unit in the face feature value extraction device in the foregoing device embodiment. In order to avoid repetition, a description thereof is omitted.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (10)
1. A face feature value extraction method, characterized by comprising the following steps:
receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, wherein the feature extraction task comprises identification information of each face picture of a feature value to be extracted;
setting a feature extraction main switch in a sample database to an on state, and setting the extraction state of each face picture whose feature value is to be extracted in the sample database to a ready extraction state according to the identification information;
inquiring the feature extraction master switch at intervals of preset time;
if the feature extraction main switch is in the on state, a processing thread is allocated to the feature extraction task, wherein the processing thread is used for transmitting a storage path of the face picture of the feature value to be extracted on a network storage platform to a virtual computing server through a middleware, and the virtual computing server is used for acquiring the face picture according to the storage path and extracting the feature value of the face picture;
receiving identification information, characteristic values and extraction states of the face pictures returned by the virtual computing server;
storing the characteristic value to a corresponding position of the face picture identified by the identification information in the network storage platform, and updating the extraction state of the face picture identified by the identification information in the sample database to be an extraction completion state;
and if the extraction state of each face picture of the feature value to be extracted is the extraction completion state, setting the feature extraction main switch to be in a closed state.
2. The face feature value extraction method as claimed in claim 1, wherein if the feature extraction master switch is in the on state, allocating a processing thread to the feature extraction task comprises:
if the feature extraction main switch is in the on state, detecting CPU configuration information;
determining the number of threads according to the CPU configuration information, and starting M processing threads, wherein M is the number of threads;
and distributing target extraction task numbers capable of locking processing for each processing thread according to the preset average extraction time of the characteristic values of the single face picture.
3. The face feature value extraction method according to claim 1, wherein the updating of the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state comprises:
correspondingly storing the identification information and extraction state of each face picture returned by the virtual computing server into a cache queue;
and if the length of the cache queue reaches a preset length threshold, updating, according to the identification information stored in the cache queue, the extraction state of each face picture identified by that identification information in the sample database to the extraction state corresponding to the identification information in the cache queue.
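A minimal sketch of claim 3's cache queue, assuming a hypothetical bulk_update_states database helper; batching the state writes is what spares the sample database a separate round trip per completed picture.

```python
from collections import deque

FLUSH_THRESHOLD = 100  # assumed value for the "preset length threshold"

class StateCache:
    """Buffer (id, state) pairs and flush them to the database in one batch."""

    def __init__(self, db):
        self.db = db
        self.queue = deque()

    def push(self, picture_id, state):
        self.queue.append((picture_id, state))
        if len(self.queue) >= FLUSH_THRESHOLD:
            self.flush()

    def flush(self):
        batch = list(self.queue)
        self.queue.clear()
        self.db.bulk_update_states(batch)  # hypothetical bulk-update helper
```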
4. The face feature value extraction method according to claim 2, wherein the allocating to each processing thread a target number of extraction tasks that it can lock for processing, according to the preset average extraction time of the feature values of a single face picture, comprises:
allocating to each processing thread a target number of extraction tasks that it can lock for processing according to the following formula:
t*N < T
wherein t is the preset average extraction time of the feature values of a single face picture, N is the target number of extraction tasks, and T is the preset total extraction time of the feature values of the batch of face pictures.
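As a worked example with assumed numbers (not taken from the patent): if t = 0.5 s per picture and T = 60 s for the batch, each thread may lock at most N = 119 tasks, since 0.5 × 119 = 59.5 < 60. A small helper that computes this bound:

```python
import math

def max_lockable_tasks(t, T):
    """Largest integer N satisfying t * N < T (strict inequality)."""
    n = math.floor(T / t)
    return n - 1 if n * t >= T else n  # step back when t*N equals T exactly

assert max_lockable_tasks(0.5, 60) == 119
```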
5. The face feature value extraction method according to claim 4, wherein after the updating of the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, the face feature value extraction method further comprises:
when the actual extraction time of the face picture feature values exceeds the preset total extraction time of the feature values of the batch of face pictures, if the extraction state of any face picture whose feature value is to be extracted is not the extraction-completed state, determining that the feature extraction task corresponding to that face picture is not completed, and reallocating a processing thread to the feature extraction task.
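One possible shape for claim 5's timeout check, with db and allocate_thread as hypothetical stand-ins for the sample database and the thread allocation step:

```python
import time

def reassign_overdue(db, start_time, T, allocate_thread):
    """After the batch budget T has elapsed, re-dispatch any picture whose
    extraction state is still not 'done'. allocate_thread is a hypothetical
    callback that hands the unfinished task to a fresh processing thread."""
    if time.monotonic() - start_time <= T:
        return
    for pic in db.pictures_not_in_state("done"):
        allocate_thread(pic)  # the task is treated as not completed
```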
6. A face feature value extraction device, characterized in that the face feature value extraction device comprises:
an extraction request receiving module, configured to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, wherein the feature extraction task comprises identification information of each face picture whose feature value is to be extracted;
a state setting module, configured to set a feature extraction master switch in a sample database to an on state, and set the extraction state of each face picture whose feature value is to be extracted in the sample database to a ready-to-extract state according to the identification information;
a query module, configured to query the feature extraction master switch at preset time intervals;
a thread allocation module, configured to allocate a processing thread to the feature extraction task if the feature extraction master switch is in the on state, wherein the processing thread is used for transmitting the storage path, on a network storage platform, of the face picture whose feature value is to be extracted to a virtual computing server through middleware, and the virtual computing server is used for acquiring the face picture according to the storage path and extracting the feature value of the face picture;
a return value receiving module, configured to receive the identification information, feature value and extraction state of each face picture returned by the virtual computing server;
a first updating module, configured to store the feature value to the position, on the network storage platform, corresponding to the face picture identified by the identification information, and update the extraction state of the face picture identified by the identification information in the sample database to an extraction-completed state;
and a second updating module, configured to set the feature extraction master switch to an off state if the extraction state of each face picture whose feature value is to be extracted is the extraction-completed state.
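To relate the device claims to the method claims, the modules of claim 6 can be mapped onto a class skeleton as below; method bodies are elided and all collaborator names are assumptions, not structures defined by the patent.

```python
class FaceFeatureExtractionDevice:
    """Skeleton mapping claim 6's modules onto methods (bodies elided)."""

    def __init__(self, db, mq, storage):
        self.db, self.mq, self.storage = db, mq, storage

    def receive_extraction_request(self, request): ...  # extraction request receiving module
    def set_states(self, task): ...                     # state setting module
    def query_master_switch(self): ...                  # query module
    def allocate_thread(self, task): ...                # thread allocation module
    def receive_return_values(self): ...                # return value receiving module
    def update_storage_and_state(self, result): ...     # first updating module
    def close_master_switch(self): ...                  # second updating module
```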
7. The face feature value extraction device according to claim 6, wherein the thread allocation module comprises:
a detection sub-module, configured to detect CPU configuration information if the feature extraction master switch is in the on state;
a thread starting sub-module, configured to determine a thread count according to the CPU configuration information and start M processing threads, wherein M is the thread count;
and an allocation locking sub-module, configured to allocate to each processing thread a target number of extraction tasks that it can lock for processing, according to the preset average extraction time of the feature values of a single face picture.
8. The face feature value extraction device according to claim 7, wherein the allocation locking sub-module comprises:
an allocation locking subunit, configured to allocate to each processing thread a target number of extraction tasks that it can lock for processing according to the following formula:
t*N < T
wherein t is the preset average extraction time of the feature values of a single face picture, N is the target number of extraction tasks, and T is the preset total extraction time of the feature values of the batch of face pictures.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the face feature value extraction method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the face feature value extraction method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810953164.4A CN109271869B (en) | 2018-08-21 | 2018-08-21 | Face feature value extraction method and device, computer equipment and storage medium |
PCT/CN2018/120825 WO2020037896A1 (en) | 2018-08-21 | 2018-12-13 | Facial feature value extraction method and device, computer apparatus, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109271869A CN109271869A (en) | 2019-01-25 |
CN109271869B (en) | 2023-09-05 |
Family
ID=65154069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810953164.4A Active CN109271869B (en) | 2018-08-21 | 2018-08-21 | Face feature value extraction method and device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109271869B (en) |
WO (1) | WO2020037896A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111598043A (en) * | 2020-05-25 | 2020-08-28 | 山东超越数控电子股份有限公司 | Face recognition method, system, device and medium |
CN113760487B (en) * | 2020-08-05 | 2024-04-12 | 北京京东振世信息技术有限公司 | Service processing method and device |
CN112149087B (en) * | 2020-08-24 | 2024-07-12 | 深圳达实物联网技术有限公司 | Face authority quick authorization method |
CN113032140A (en) * | 2021-01-29 | 2021-06-25 | 浙江易云物联科技有限公司 | Matching method based on face library |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105975948A (en) * | 2016-05-23 | 2016-09-28 | 南京甄视智能科技有限公司 | Cloud service platform architecture for face identification |
CN108197318A (en) * | 2018-02-01 | 2018-06-22 | 广州市君望机器人自动化有限公司 | Face identification method, device, robot and storage medium |
WO2018133666A1 (en) * | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8078674B2 (en) * | 2007-05-10 | 2011-12-13 | International Business Machines Corporation | Server device operating in response to received request |
CN108197608A (en) * | 2018-02-01 | 2018-06-22 | 广州市君望机器人自动化有限公司 | Face identification method, device, robot and storage medium |
CN108108499B (en) * | 2018-02-07 | 2023-05-26 | 腾讯科技(深圳)有限公司 | Face retrieval method, device, storage medium and equipment |
- 2018-08-21: CN application CN201810953164.4A (patent CN109271869B, active)
- 2018-12-13: WO application PCT/CN2018/120825 (publication WO2020037896A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
CN109271869A (en) | 2019-01-25 |
WO2020037896A1 (en) | 2020-02-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |