WO2020037896A1 - Facial feature value extraction method, apparatus, computer device, and storage medium - Google Patents

Facial feature value extraction method, apparatus, computer device, and storage medium

Info

Publication number
WO2020037896A1
WO2020037896A1 (PCT/CN2018/120825)
Authority
WO
WIPO (PCT)
Prior art keywords
extraction
feature
face picture
identification information
state
Prior art date
Application number
PCT/CN2018/120825
Other languages
English (en)
French (fr)
Inventor
陈林 (Chen Lin)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2020037896A1 publication Critical patent/WO2020037896A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of information processing, and in particular, to a method, a device, a computer device, and a storage medium for extracting facial feature values.
  • A performance bottleneck of existing face recognition systems lies in the phase of extracting feature values from face pictures.
  • When the model is upgraded, or during peak request periods, feature values must be re-extracted for a large number of samples, or even all of them, on the order of millions.
  • the embodiments of the present application provide a method, a device, a computer device, and a storage medium for extracting facial feature values to solve the problems of low utilization of hardware server resources and low efficiency of the extraction process when extracting facial feature values in batches.
  • a face feature value extraction method includes:
  • if the feature extraction master switch is in the on state, a processing thread is allocated for the feature extraction task, wherein the processing thread is used to pass the storage path, on a network storage platform, of the face picture whose feature values are to be extracted to the virtual computing server through middleware, and the virtual computing server is configured to obtain the face picture according to the storage path and extract the feature values of the face picture;
  • if the extraction state of each face picture whose feature values are to be extracted is the extraction completion state, the feature extraction master switch is set to the off state.
  • a face feature value extraction device includes:
  • a receiving extraction request module configured to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, wherein the feature extraction task includes the identification information of each face picture whose feature values are to be extracted;
  • a setting state module configured to set the feature extraction master switch in a sample database to the on state, and to set, according to the identification information, the extraction state of each face picture whose feature values are to be extracted in the sample database to the ready-to-extract state;
  • a query module configured to query the feature extraction master switch at every preset time interval;
  • an allocating thread module configured to allocate a processing thread for the feature extraction task if the feature extraction master switch is in the on state, wherein the processing thread is used to pass the storage path, on the network storage platform, of the face picture whose feature values are to be extracted to the virtual computing server through middleware, the virtual computing server being configured to obtain the face picture according to the storage path and extract the feature values of the face picture;
  • a return value module configured to receive the identification information, feature values, and extraction status of the face picture returned by the virtual computing server;
  • a first update module configured to save the feature values to the corresponding position of the face picture identified by the identification information in the network storage platform, and to update the extraction status, in the sample database, of the face picture identified by the identification information to the extraction completion status;
  • a second update module configured to set the feature extraction master switch to the off state if the extraction state of each face picture whose feature values are to be extracted is the extraction completion state.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the steps of the above facial feature value extraction method.
  • one or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the steps of the above facial feature value extraction method.
  • FIG. 1 is a schematic diagram of an application environment of a method for extracting facial feature values according to an embodiment of the present application
  • FIG. 2 is a flowchart of a facial feature value extraction method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of step S4 in the facial feature value extraction method according to an embodiment of the present application;
  • FIG. 4 is a flowchart of updating, in step S6 of the facial feature value extraction method according to an embodiment of the present application, the extraction state of the face picture identified by the identification information in the sample database to the extraction completion state;
  • FIG. 5 is a schematic diagram of a facial feature value extraction system according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a facial feature value extraction device according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a computer device according to an embodiment of the present application.
  • The facial feature value extraction method provided in this application can be applied in the application environment shown in FIG. 1, in which several physical servers form a server cluster through a network, and each server in the server cluster is connected to both the sample database and the network storage platform through the network; a virtual communication server, a virtual computing server, and middleware are deployed on each server in the server cluster.
  • the virtual communication server communicates with the virtual computing server through the middleware.
  • the network can be a wired network or a wireless network.
  • a method for extracting facial feature values is provided, and the implementation process includes the following steps:
  • S1 Receive an extraction request sent by a client, and determine a feature extraction task according to the extraction request, where the feature extraction task includes identification information of each face picture of a feature value to be extracted.
  • The clients that initiate extraction requests come from different registered users.
  • These registered users are enterprises or organizations registered with the face recognition system to use its face recognition service, collectively referred to as business parties.
  • When a business party registers with the face recognition system, it needs to submit the legal face pictures that the enterprise or organization needs to recognize to the network storage platform of the face recognition system, and store the relevant attribute information of those face pictures in the sample database. For example, if a company needs to use the face recognition system to check the attendance of its employees, the company needs to submit the legal face pictures of all of its employees as samples.
  • The relevant attribute information includes, but is not limited to: the identification information of the business party, the identification information of the face picture, the storage path of the face picture on the network storage platform, the file size of the face picture, and so on.
  • the identification information of the business party may specifically be an id (identification) number of the business party, and the identification information of the face picture may specifically be the id number of the face picture.
  • the virtual communication server can confirm which business party the extraction request comes from according to the pre-agreed business party identification information, so as to locate the identification information of each face picture to be extracted in the sample database.
  • Specifically, the client submits the extraction request in the form of a web page, with the business party's id number attached.
  • the virtual communication server inquires the identification information of each face picture to be extracted corresponding to the business party in the sample database according to the id number of the business party.
  • S2 Set the feature extraction master switch in the sample database to the on state, and set, according to the identification information of each face picture, the extraction state of each face picture whose feature values are to be extracted in the sample database to the ready-to-extract state.
  • a feature extraction master switch table is stored in the sample database, and the feature extraction master switch table records the feature extraction master switch status of each business party.
  • A face picture extraction status table is also stored in the sample database.
  • The face picture extraction status table records the extraction status of each face picture.
  • the face picture extraction status table has a field "extraction status".
  • the virtual communication server finds the corresponding face picture in the face picture extraction status table according to the identification information of the face picture, and changes the extraction status in that record to "ready to extract";
  • after the virtual communication server receives the identification information and extraction status of the face picture returned by the virtual computing server, it finds the corresponding face picture in the face picture extraction status table according to the identification information, and modifies the extraction status in that record to "extraction completed".
  • Specifically, the feature extraction master switch in the feature extraction master switch table is set to the on state according to the identification information of the business party, and the extraction state of each face picture whose feature values are to be extracted is set to the ready-to-extract state in the face picture extraction status table according to the identification information of those face pictures.
  • the virtual communication server uses a Java server, and a Spring framework is deployed thereon.
  • The Spring framework is an open-source, design-level, lightweight Java development framework that enables loose coupling between the business logic layer and other layers.
  • Spring is usually deployed on the server first, and secondary development is then performed on top of Spring according to the needs of the actual application. This makes full use of the existing interfaces of the Spring framework, avoids simple and repetitive development work, and enables quick development of applications suited to the business's own needs.
  • the timer task may query the state of the feature extraction master switch every 1 second.
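The timer polling described here can be sketched as follows. This is an illustrative Python sketch, not the Java/Spring timer task the application describes; `read_switch` and `dispatch` are hypothetical placeholders for reading the master-switch table and starting the extraction threads.

```python
import time

def poll_master_switch(read_switch, dispatch, interval=0.01, max_polls=200):
    """Poll the feature-extraction master switch at a fixed interval;
    when it reads "on", dispatch the extraction task once and stop.
    Returns True if the task was dispatched, False if polling gave up."""
    for _ in range(max_polls):
        if read_switch() == "on":
            dispatch()
            return True
        time.sleep(interval)
    return False

# A dict stands in for the master-switch table in the sample database.
switch_table = {"business_party_1": "on"}
dispatched = []
started = poll_master_switch(lambda: switch_table["business_party_1"],
                             lambda: dispatched.append("extraction started"))
```

In the described system the interval would be the preset query period (for example 1 second) and the dispatch callback would be the timer task's callback function.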
  • S4 If the feature extraction master switch is in the on state, a processing thread is allocated for the feature extraction task, where the processing thread is used to pass the storage path, on the network storage platform, of the face picture whose feature values are to be extracted through the middleware to the virtual computing server. The virtual computing server is used to obtain the face picture according to the storage path and extract the feature values of the face picture.
  • When the timer task finds that the feature extraction master switch is on, this indicates that a client has initiated an extraction task, and the callback function of the timer task starts multiple processing threads to execute the extraction task.
  • Each processing thread passes the storage path, on the network storage platform, of a face picture whose feature values are to be extracted to the virtual computing server through the middleware.
  • the virtual computing server is an execution terminal for extracting the feature values of the face pictures.
  • The virtual computing server obtains the face pictures from the network storage platform according to the storage paths passed by the processing threads, and performs the feature value extraction operation.
  • For the same extraction algorithm, a C++ server is much faster than a Java server; therefore, a C++ server is preferably used as the virtual computing server.
  • Middleware is an independent system software or service program, which is software that connects two independent applications or independent systems. Connected systems, even if they have different interfaces, can still exchange information with each other through middleware. Therefore, a key use of middleware is to transfer information between heterogeneous systems.
  • the virtual communication server and the virtual computing server are deployed on the same physical host.
  • the middleware is used for communication between the virtual communication server and the virtual computing server: the virtual communication server passes the storage path required for feature extraction to the virtual computing server through the middleware, and the virtual computing server returns the extraction result to the virtual communication server through the middleware.
  • ZeroMQ is used as the middleware.
  • ZeroMQ is a multi-threaded network library based on message queues. It abstracts the underlying details of socket types, connection processing, frames, and even routing, and provides sockets that span multiple transport protocols.
  • ZeroMQ is a new layer in network communication, sitting between the application layer and the transport layer of the TCP/IP (Transmission Control Protocol/Internet Protocol) stack.
  • ZeroMQ is a scalable layer whose operations can be parallelized and dispersed across distributed systems.
  • ZeroMQ provides a framework-based socket library, which makes socket programming simple, concise, and more performant. It is often applied to network communications.
  • Multi-threaded communication between a JAVA server and a C++ server deployed on the same physical host is troublesome: because the two are heterogeneous, communication between them normally requires a dedicated API interface, and given the complexity of multi-threaded processing, developing such an interface is time-consuming and error-prone. Therefore, using ZeroMQ as middleware for communication between the JAVA server and the C++ server, in place of the traditional API approach, improves development efficiency and reduces the risk of errors, while performance is not inferior to the traditional scheme.
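The request/reply hand-off between the communication server and the computing server can be illustrated with a minimal in-process sketch. This is not actual ZeroMQ socket code and not the Java/C++ implementation the application describes; two stdlib queues stand in for the middleware channels, and the dummy "feature extraction" is a placeholder.

```python
import queue
import threading

# Two in-process queues stand in for the middleware request/reply channels.
req_q = queue.Queue()   # communication server -> computing server
rep_q = queue.Queue()   # computing server -> communication server

def computing_server():
    """Stand-in for the virtual computing server: receives a storage path,
    "extracts" feature values, and returns (id, feature values, status)."""
    while True:
        pic_id, storage_path = req_q.get()
        if pic_id is None:                      # shutdown sentinel
            break
        features = [float(len(storage_path))]   # dummy feature vector
        rep_q.put((pic_id, features, "extraction complete"))

worker = threading.Thread(target=computing_server, daemon=True)
worker.start()

# Communication-server side: pass the storage path through the "middleware".
req_q.put(("face_001", "/nas/faces/face_001.jpg"))
pic_id, features, status = rep_q.get(timeout=5)
req_q.put((None, None))                         # stop the worker
worker.join()
```

In the described system the two sides live in heterogeneous processes (Java and C++) and the queues are replaced by ZeroMQ sockets, but the message shape — path in, (id, feature values, status) back — is the same.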
  • S5 Receive identification information, feature values, and extraction status of the face picture returned by the virtual computing server.
  • each processing thread will execute an extraction program on the virtual computing server.
  • the extraction program takes the storage path of the face image to be extracted on the network storage platform as an input parameter, and returns the extracted result data to the virtual communication server through the middleware.
  • the result data includes identification information, feature values, and extraction status of the face picture, and the result data is received by the virtual communication server.
  • S6 Save the feature value to the corresponding position of the face picture identified by the identification information in the network storage platform, and update the extraction status of the face picture identified by the identification information in the sample database to the extraction completion state.
  • the network storage platform not only saves the face picture file, but also saves the feature values after the feature value extraction.
  • the feature values are saved as a feature value file.
  • Specifically, the virtual communication server uses the identification information of the face picture as the directory name and saves the received feature values to the network storage platform in the form of a disk file. At the same time, the virtual communication server updates the extraction status of the face pictures in the sample database.
  • the virtual communication server updates the face picture extraction status table in the sample database through JDBC, and changes the extraction status of the face picture corresponding to the identification information of the face picture to "extraction completed".
  • JDBC (Java DataBase Connectivity) provides a baseline on which higher-level tools and interfaces can be built, enabling database developers to write database applications.
  • the interface programs written by database developers through JDBC can be applied to different databases, and it is no longer necessary to write interface programs for different databases, which greatly improves the development efficiency.
  • The number of face pictures whose feature values are to be extracted is determined in step S1. Each time the virtual communication server modifies the extraction status of a face picture, the count of extracted face pictures is incremented by one, and the accumulated value is stored in a global variable. When the value of this global variable equals the number of face pictures whose feature values are to be extracted, it is determined that the extraction task has been completely performed, and the feature extraction master switch is set to the off state.
  • the timer task that executes the update task in the virtual communication server queries every 0.5 seconds whether the value of the global variable is equal to the number of face pictures of the feature values to be extracted.
  • the feature extraction master switch in the feature extraction master switch table is set to the off state.
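The completion bookkeeping described above — increment a shared counter per extracted picture, and flip the master switch off once it reaches the batch total — can be sketched as follows. This is an illustrative Python sketch; the `ExtractionTracker` class and its names are hypothetical, not from the application.

```python
import threading

class ExtractionTracker:
    """Count completed extractions; flip the master switch off when done."""
    def __init__(self, total, switch_table, party_id):
        self.total = total                # number of pictures in the batch
        self.count = 0                    # the "global variable"
        self.lock = threading.Lock()      # guards count across threads
        self.switch_table = switch_table
        self.party_id = party_id

    def mark_extracted(self):
        with self.lock:
            self.count += 1
            if self.count == self.total:
                self.switch_table[self.party_id] = "off"

switch_table = {"party_1": "on"}
tracker = ExtractionTracker(total=3, switch_table=switch_table,
                            party_id="party_1")
for _ in range(3):
    tracker.mark_extracted()
```

The lock mirrors the need for the counter to be updated safely when many processing threads finish pictures concurrently; in the described system a timer task also polls this comparison every 0.5 seconds.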
  • In the embodiment of the present application, the feature extraction task is determined according to the extraction request received from the client, the feature extraction master switch in the sample database is set to the on state, and, according to the identification information of the face pictures in the feature extraction task, the extraction state of each face picture whose feature values are to be extracted is set to the ready-to-extract state in the sample database; a timer queries the state of the feature extraction master switch at preset intervals, and if the master switch is in the on state, processing threads are allocated for the feature extraction task, so that the virtual computing server can perform feature value extraction on the face pictures concurrently; because the extraction status of each face picture is maintained in the sample database, no processing thread repeatedly extracts face pictures that have already been extracted or omits face pictures that have not yet been extracted; at the same time, the hardware resources of the server are fully utilized, which greatly improves the speed of extracting feature values of batches of face pictures and thereby the overall extraction efficiency.
  • Further, in step S4, if the feature extraction master switch is on, allocating a processing thread for the feature extraction task specifically includes the following steps:
  • S41 Detect the CPU configuration information of the local host, including the number of threads of the local CPU.
  • For example, if a CPU with 4 cores and 8 threads is detected, the number of CPU threads is 8.
  • each virtual communication server detects the number of CPU threads of the local host.
  • the virtual communication server can detect the configuration information of the CPU by calling a system interface. For example, on a Linux system, you can use the command: grep 'processor' /proc/cpuinfo
  • S42 Determine the number of threads according to the CPU configuration information, and start M processing threads, where M is the number of threads.
  • the number of CPU threads is consistent with the number of processing threads to be started by the virtual communication server. If the number of CPU threads detected in step S41 is 8, the virtual communication server starts 8 processing threads. Similarly, each virtual communication server starts a corresponding number of processing threads according to the number of CPU threads detected by the local host.
  • Specifically, the virtual communication server uses the number of CPU threads detected in step S41 as an input parameter to start the processing threads. If the number of detected CPU threads is 8, the virtual communication server instantiates 8 thread objects via the new method.
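Sizing the worker pool to the detected CPU thread count can be sketched as follows. This is an illustrative Python sketch (using `os.cpu_count` and a thread pool) rather than the Java `new`-based thread instantiation the application describes; `start_processing_pool` is a hypothetical name.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def start_processing_pool():
    """Start one processing thread per logical CPU thread, as in step S42."""
    m = os.cpu_count() or 1      # number of CPU threads detected on this host
    return ThreadPoolExecutor(max_workers=m), m

pool, m = start_processing_pool()
# e.g. on a 4-core/8-thread CPU, m == 8 and the pool holds 8 workers
results = list(pool.map(lambda task_id: f"task {task_id} done", range(m)))
pool.shutdown()
```

Matching the pool size to the CPU thread count is what lets the extraction saturate the host's hardware without oversubscribing it.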
  • S43 Allocate to each processing thread the number of target extraction tasks it can lock.
  • Each processing thread will obtain face pictures whose feature values are to be extracted from the network storage platform, and read and update the corresponding records in the sample database. Therefore, to prevent disorderly contention for resources and repeated extraction between threads, each processing thread needs to be assigned a number of target extraction tasks that it can lock for processing.
  • the basis for assigning the number of target extraction tasks that can be locked is based on a preset average extraction time of the feature value of a single face picture.
  • When a thread extracts the feature values of one face picture, it takes about 500 milliseconds on average. If each thread locks only one record at a time, it will access the database frequently; for one million records this means one million database accesses, which wastes resources and over-consumes hardware. If the number of records locked by each thread is too large, say 100,000, each thread's limited capacity means processing is not timely and waiting times become very long, so efficiency is low.
  • For example, each processing thread of the server is assigned 1000 lockable target extraction tasks.
  • The locking process can specifically be as follows: suppose each processing thread is set to lock at most 1500 records; then, for 1 million records to be extracted, each processing thread locks 1500 records in the sample database and marks those data records as belonging to its own processing, for example by setting the locked state of the face pictures to be extracted to "locked". The same record can only be locked by one processing thread. After a processing thread finishes processing its 1500 records, it locks a new batch of 1500 unprocessed records from the sample database, until all the records to be extracted are processed.
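The batch-locking scheme can be illustrated with an in-memory sketch: a thread atomically claims up to 1500 unclaimed records, marking each so that no other thread can claim the same record. This is a hypothetical Python stand-in for the database-level locking the application describes; `claim_batch` and the record fields are illustrative names.

```python
import threading

_claim_lock = threading.Lock()   # stands in for the database's atomicity

def claim_batch(records, thread_id, batch_size=1500):
    """Atomically lock up to batch_size unclaimed records for one thread.
    Each record is a dict whose "locked_by" field is None when unclaimed."""
    claimed = []
    with _claim_lock:
        for rec in records:
            if rec["locked_by"] is None:
                rec["locked_by"] = thread_id
                claimed.append(rec)
                if len(claimed) == batch_size:
                    break
    return claimed

records = [{"pic_id": i, "locked_by": None} for i in range(4000)]
a = claim_batch(records, "thread-A")   # claims 1500 records
b = claim_batch(records, "thread-B")   # claims a disjoint 1500
```

Because claims happen under one lock and only unclaimed records are taken, two threads can never lock the same record, which is exactly the property the batch size and "locked" flag are meant to guarantee.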
  • In this way, a corresponding number of processing threads is allocated for the extraction task according to the number of physical threads of the local CPU, and the number of lockable target extraction tasks allocated to each processing thread is determined according to the average time taken to execute an extraction task, so that when multiple processing threads perform the extraction task, the feature values of a face picture are not repeatedly extracted and the extraction process does not become chaotic due to resource contention.
  • Further, in step S6, updating the extraction status of the face picture identified by the identification information in the sample database to the extraction completion status specifically includes the following steps:
  • S61 Save the identification information and extraction status of the face picture returned by the virtual computing server to the cache queue.
  • the identification information and extraction status of the face picture returned by the virtual computing server are cached, and the identification information and corresponding extraction status of each face picture are stored in the cache queue as a pair of data.
  • the identification information and the corresponding extraction status of each face picture are stored in the form of key-value pairs, especially in a JSON format.
  • JSON: JavaScript Object Notation.
  • JSON is a lightweight data exchange format.
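A cached (identification, extraction status) pair serialized as JSON might look like the following sketch; the field names `id` and `status` are illustrative, not specified by the application.

```python
import json

def to_cache_entry(pic_id, status):
    """Serialize one (identification, extraction status) pair as JSON."""
    return json.dumps({"id": pic_id, "status": status})

entry = to_cache_entry("face_001", "extraction complete")
# entry == '{"id": "face_001", "status": "extraction complete"}'
```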
  • the length of the cache queue may be determined according to the size of the allocated array.
  • the preset length threshold is set to 100, that is, when the total number of extracted face picture records reaches 100, the virtual communication server updates the extraction status of the corresponding records in the sample database to the extraction completion status.
  • After extracting the feature values, the virtual computing server returns the identification information and extraction status of the face picture to the virtual communication server through the middleware. If the virtual communication server updated these results to the sample database in real time, frequent database access would waste resources and over-consume hardware. Therefore, the data returned by the virtual computing server is cached and then updated to the sample database in batches, saving network resources and avoiding excessive hardware consumption.
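The cache-queue batching can be sketched as a buffer that flushes to the database update once it reaches the preset length threshold. This is an illustrative Python sketch; `BatchedStatusUpdater` and its `flush` callback are hypothetical names, with a list standing in for the sample-database write.

```python
class BatchedStatusUpdater:
    """Buffer (id, status) results and flush them in batches, so the
    sample database is not hit once per extracted picture."""
    def __init__(self, flush, threshold=100):
        self.buffer = []
        self.flush = flush          # callable that writes one batch
        self.threshold = threshold  # preset length threshold

    def add(self, pic_id, status):
        self.buffer.append((pic_id, status))
        if len(self.buffer) >= self.threshold:
            self.flush(self.buffer)
            self.buffer = []

batches = []                        # stands in for batch database updates
updater = BatchedStatusUpdater(flush=batches.append, threshold=100)
for i in range(250):
    updater.add(f"face_{i}", "extraction complete")
# two full batches of 100 flushed; 50 entries remain buffered
```

With the threshold of 100 used in the example above, 250 results trigger two database updates instead of 250.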
  • Further, allocating to each processing thread the number of lockable target extraction tasks according to the preset average extraction time of the feature values of a single face picture includes allocating according to the following formula:
  • N = T / t
  • where t is the preset average extraction time of the feature values of a single face picture, N is the number of target extraction tasks, and T is the preset total time for extracting the feature values of a batch of face pictures. That is, t is the average extraction time of the feature values of a single face picture preset in step S43.
  • For example, with t = 500 milliseconds and T = 10 minutes, each processing thread is allocated 1200 lockable target extraction tasks. That is, every 10 minutes, the virtual communication server starts processing threads to lock, in the sample database, the data records of a group of face pictures whose feature values are to be extracted.
  • The number of target extraction tasks that each thread can lock is 1200.
  • The preset total time T for extracting the feature values of a batch of face pictures can be freely configured according to the needs of the project. For example, during peak periods of user requests, when the response time for extracting the feature values of a batch of samples needs to be as short as possible, 10 minutes or less can be chosen; conversely, when response timeliness is less critical, more than 10 minutes can be chosen. If, after 10 minutes, there are still face pictures whose feature values have not been extracted, those face pictures are considered extraction failures, and the virtual communication server restarts processing threads to lock the data records of the remaining face pictures until all extractions are completed.
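Under the relationship described here (the number of lockable tasks is the total batch time divided by the per-picture extraction time), the worked numbers follow directly; this small sketch just evaluates that quotient, with `lockable_task_count` as an illustrative name.

```python
def lockable_task_count(total_time_ms, per_picture_ms):
    """N = T / t: tasks one thread can lock within the batch time window."""
    return total_time_ms // per_picture_ms

# t = 500 ms per picture, T = 10 minutes
n = lockable_task_count(total_time_ms=10 * 60 * 1000, per_picture_ms=500)
# n == 1200, matching the example in the text
```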
  • the number of target extraction tasks that can be locked and processed by each thread is calculated according to the above formula, so that the number of target extraction tasks that can be locked and processed by each processing thread is within a reasonable range.
  • the process balances efficiency and speed.
  • the method for extracting facial feature values further includes:
  • Multithreading may encounter various unexpected problems, such as network delays and server non-response, when processing extraction tasks. This requires an error correction mechanism to ensure that the extraction tasks can be successfully completed.
  • For example, the virtual communication server counts the actual extraction time of the feature values of a face picture from the moment the processing thread is allocated, with the preset total time for extracting the feature values set to 15 minutes.
  • the virtual communication server queries the facial image extraction status table in the sample database through JDBC.
  • If the extraction status of a face picture in the table is not yet the extraction completion status, it is determined that the feature extraction task corresponding to that face picture has not been completed.
  • the virtual communication server reallocates processing threads for this feature extraction task and restarts timing until all extraction tasks are executed and completed.
  • an error correction mechanism is introduced to ensure that each extraction task can be executed and completed.
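The error-correction loop — after the preset total time, re-dispatch any picture still not in the completed state until all succeed — can be sketched as follows. This is an illustrative Python sketch; `run_with_retries` and the flaky extractor are hypothetical stand-ins for the timed thread reallocation the application describes.

```python
def run_with_retries(pending, extract, max_rounds=5):
    """Re-dispatch incomplete extraction tasks until all succeed.
    `extract` returns True on success, False on failure (e.g. a timeout)."""
    status = {pic_id: "ready" for pic_id in pending}
    for _ in range(max_rounds):
        remaining = [p for p, s in status.items() if s != "complete"]
        if not remaining:
            break
        for pic_id in remaining:          # reallocate threads for these tasks
            if extract(pic_id):
                status[pic_id] = "complete"
    return status

# Flaky extractor: fails the first attempt for face_2, succeeds afterwards.
attempts = {}
def flaky_extract(pic_id):
    attempts[pic_id] = attempts.get(pic_id, 0) + 1
    return not (pic_id == "face_2" and attempts[pic_id] == 1)

result = run_with_retries(["face_1", "face_2"], flaky_extract)
# face_1 completes on the first round; face_2 on the second
```

Checking only the not-yet-complete set each round mirrors how the sample database's extraction status table lets the retry pass skip pictures that already succeeded.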
  • a facial feature value extraction system includes a network storage platform, a sample database, and a server cluster, where the server cluster is composed of multiple physical servers, each of which includes a virtual computing server, a virtual communication server, and middleware;
  • each physical server in the server cluster is connected to the network storage platform through a network; each physical server in the server cluster is connected to the sample database through a network; and the virtual computing server and the virtual communication server are connected through the middleware;
  • the virtual communication server is configured to implement the steps in the embodiment of the facial feature value extraction method described above.
  • the virtual communication server is composed of a Java server, and a Spring framework is deployed on the Java server;
  • the virtual computing server is used to obtain a face picture according to the storage path, on the network storage platform, of the face picture to be extracted, extract the feature values of the face picture, and return the feature values of the face picture to the virtual communication server.
  • the virtual computing server is composed of a C ++ server;
  • the network storage platform is used to store the face pictures whose feature values are to be extracted and the feature values of those face pictures.
  • the network storage platform is a NAS (Network Attached Storage) system.
  • the NAS system implements data transmission based on standard network protocols, and provides file sharing and data backup for computers running various operating systems in the network, such as Windows, Linux, and Mac OS;
  • the sample database is used to store the feature extraction master switch and the extraction status of each face picture.
  • the relational databases usable for the sample database include, but are not limited to, MS-SQL, Oracle, MySQL, Sybase, DB2, etc.;
  • the middleware is used for communication between the virtual computing server and the virtual communication server.
  • the middleware uses ZeroMQ.
  • a facial feature value extraction device is provided, corresponding one-to-one to the facial feature value extraction method in the above embodiments.
  • the facial feature value extraction device includes a receiving extraction request module 61, a setting status module 62, a query module 63, an allocation thread module 64, a return value module 65, a first update module 66, and a second update module 67.
  • the detailed description of each function module is as follows:
  • the receiving extraction request module 61 is configured to receive an extraction request sent by a client, and determine a feature extraction task according to the extraction request, where the feature extraction task includes identification information of each face picture of a feature value to be extracted;
  • a setting status module 62 configured to set the feature extraction master switch in the sample database to the on state, and set, according to the identification information, the extraction state of each face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
  • a query module 63 configured to query the feature extraction master switch every preset time interval
  • an allocation thread module 64 configured to allocate processing threads for the feature extraction task if the feature extraction master switch is on, where the processing threads are used to pass the storage paths, on the network storage platform, of the face pictures whose feature values are to be extracted to the virtual computing server through the middleware.
  • the virtual computing server is used to obtain a face picture according to the storage path and extract the feature values of the face picture.
  • a return value module 65 configured to receive identification information, feature values, and extraction status of a face picture returned by the virtual computing server;
  • a first updating module 66 configured to save the feature value to a corresponding position of a face picture identified by the identification information in the network storage platform, and update the extraction status of the face picture identified by the identification information in the sample database to an extraction completion state;
  • the second updating module 67 is configured to set the feature extraction master switch to the off state if the extraction state of each face picture to be extracted with the feature value is the extraction completion state.
  • the thread allocation module 64 further includes:
  • a detection sub-module 641 configured to detect the CPU configuration information if the feature extraction master switch is on;
  • a thread starting sub-module 642 configured to determine the number of threads according to the CPU configuration information, and start M processing threads, where M is the number of threads;
  • an allocation lock sub-module 643 configured to allocate, to each processing thread, the number of target extraction tasks it can lock for processing, according to a preset average extraction time of a single face picture's feature value.
  • the first update module 66 includes:
  • the cache submodule 661 is used to save, correspondingly, the identification information and extraction statuses of the face pictures returned by the virtual computing server to a cache queue;
  • the synchronization submodule 662 is used to update, if the length of the cache queue reaches a preset length threshold and according to the identification information saved in the cache queue, the extraction status of the face picture identified by that identification information in the sample database to the extraction status corresponding to that identification information in the cache queue.
  • allocation lock submodule 643 includes:
  • the allocation lock subunit 6431 is used to allocate to each processing thread the number of target extraction tasks it can lock for processing according to the formula t*N<T, where:
  • t is a preset average extraction time of a single face picture's feature value;
  • N is the number of target extraction tasks;
  • T is a preset total time for extracting the feature values of a batch of face pictures.
  • the device for extracting facial feature values further includes:
  • a thread allocation reset module 68 used, when the actual extraction time of the face picture feature values exceeds the preset total extraction time, to determine that the feature extraction task corresponding to any face picture whose extraction status is not the extraction-completed status is unfinished, and to reallocate a processing thread for that feature extraction task.
  • Each module in the above-mentioned facial feature value extraction device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded, in hardware form, in or independently of the processor in the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke them and perform the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 7.
  • the computer device includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by a processor to implement a method for extracting facial feature values.
  • a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions, the steps of the facial feature value extraction method in the foregoing embodiment are implemented, for example, steps S1 to S7 shown in FIG. 2.
  • alternatively, when the processor executes the computer-readable instructions, the functions of the modules/units of the facial feature value extraction device in the foregoing embodiment are implemented, for example, the functions of modules 61 to 67 shown in FIG. 6. To avoid repetition, details are not repeated here.
  • one or more non-volatile readable storage media storing computer-readable instructions are provided, and when the computer-readable instructions are executed by one or more processors, the facial feature value extraction method in the foregoing method embodiments is implemented.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a facial feature value extraction method, device, computer equipment, and storage medium. The method includes: receiving an extraction request from a client and modifying, in a sample database, the extraction state of the face pictures whose feature values are to be extracted as well as the feature extraction master switch; querying, at preset time intervals, the extraction state of those face pictures and the feature extraction master switch in the sample database; distributing the feature value extraction task to processing threads; passing the extraction-related information in the processing threads to the virtual computing server through middleware; and, after the virtual computing server finishes extracting the feature values, saving the feature values to the network storage platform while updating the extraction state of the face pictures in the sample database. The technical solution of this application makes full use of the server's hardware resources and greatly increases the speed of batch extraction of facial feature values.

Description

Facial feature value extraction method, device, computer equipment, and storage medium
This application is based on, and claims priority from, Chinese invention patent application No. 201810953164.4, filed on August 21, 2018 and entitled "Facial feature value extraction method, device, computer equipment, and storage medium".
Technical Field
This application relates to the field of information processing, and in particular to a facial feature value extraction method, device, computer equipment, and storage medium.
Background
A performance bottleneck of existing face recognition systems lies in the stage of extracting feature values from face pictures. When a model is upgraded or during peak request periods, feature values may need to be re-extracted for a large number of samples, or even all samples, on the order of millions.
At present, the common approach to extracting facial feature values is to process a batch extraction request synchronously in the background, or to convert it to single-threaded asynchronous processing, while batch extraction can only be performed on one machine. As a result, hardware resource utilization is low, the extraction process is inefficient, and extraction is slow.
Summary
Embodiments of this application provide a facial feature value extraction method, device, computer equipment, and storage medium, to solve the problems of low utilization of hardware server resources and low efficiency of the extraction process when extracting facial feature values in batches.
A facial feature value extraction method includes:
receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
setting a feature extraction master switch in a sample database to the on state, and setting, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
querying the feature extraction master switch at preset time intervals;
if the feature extraction master switch is in the on state, allocating processing threads for the feature extraction task, where the processing threads are used to pass the storage paths, on a network storage platform, of the face pictures whose feature values are to be extracted to a virtual computing server through middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
receiving the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
saving the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and updating the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state, setting the feature extraction master switch to the off state.
A facial feature value extraction device includes:
a receiving extraction request module, used to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
a setting status module, used to set the feature extraction master switch in the sample database to the on state, and set, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
a query module, used to query the feature extraction master switch at preset time intervals;
an allocation thread module, used to allocate processing threads for the feature extraction task if the feature extraction master switch is in the on state, where the processing threads are used to pass the storage paths, on the network storage platform, of the face pictures whose feature values are to be extracted to the virtual computing server through the middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
a return value module, used to receive the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
a first update module, used to save the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and update the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
a second update module, used to set the feature extraction master switch to the off state if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the steps of the above facial feature value extraction method when executing the computer-readable instructions.
One or more non-volatile readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the above facial feature value extraction method.
Details of one or more embodiments of this application are set forth in the drawings and description below; other features and advantages of this application will become apparent from the specification, drawings, and claims.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of the facial feature value extraction method in an embodiment of this application;
FIG. 2 is a flowchart of the facial feature value extraction method in an embodiment of this application;
FIG. 3 is a flowchart of step S4 of the facial feature value extraction method in an embodiment of this application;
FIG. 4 is a flowchart, within step S6 of the facial feature value extraction method in an embodiment of this application, of updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state;
FIG. 5 is a schematic diagram of a facial feature value extraction system in an embodiment of this application;
FIG. 6 is a schematic diagram of a facial feature value extraction device in an embodiment of this application;
FIG. 7 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only part of the embodiments of this application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The facial feature value extraction method provided by this application can be applied in the environment shown in FIG. 1, in which several physical servers form a server cluster over a network, and every server in the cluster is connected through the network to both the sample database and the network storage platform. Each server in the cluster is deployed with a virtual communication server, a virtual computing server, and middleware, and the virtual communication server communicates with the virtual computing server through the middleware. The network may be wired or wireless. After a client initiates a facial feature value extraction request, the virtual communication server and the virtual computing server on each machine cooperate to complete the task of extracting the facial feature values. The facial feature value extraction method provided by the embodiments of this application is applied to the virtual communication server.
In one embodiment, as shown in FIG. 2, a facial feature value extraction method is provided, whose implementation flow includes the following steps:
S1: receive an extraction request sent by a client, and determine a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted.
The clients initiating extraction requests belong to different registered users. These registered users are enterprises or organizations that register with the face recognition system in order to use its face recognition service, and are collectively referred to as business parties. When registering with the face recognition system, a business party must submit the legitimate face pictures to be recognized within its enterprise or organization to the system's network storage platform, and store the related attribute information of those legitimate face pictures in the sample database. For example, if a company wants to use the face recognition system to track its employees' attendance, it must submit legitimate face pictures of all its employees as samples. These legitimate face pictures are submitted to the network storage platform of the face recognition system, and their related attribute information is stored in the sample database; the related attribute information includes, but is not limited to, the business party's identification information, the face picture's identification information, the picture's storage path on the network storage platform, and the picture's file size.
The business party's identification information may specifically be a business party id (identification) number, and a face picture's identification information may specifically be the picture's id number.
When a client initiates an extraction request, the virtual communication server can determine which business party the request comes from according to pre-agreed business party identification information, and thereby locate, in the sample database, the identification information of each face picture whose feature value is to be extracted.
For example, the client submits the extraction request form through a web page that carries the business party's id number. The virtual communication server queries the sample database by that id number for the identification information of each face picture to be extracted under that business party.
S2: set the feature extraction master switch in the sample database to the on state, and set, according to each face picture's identification information, the extraction state of each face picture whose feature value is to be extracted in the sample database to the ready-to-extract state.
The sample database holds a feature extraction master switch table recording the master switch state for each business party. Once an extraction request reaches the virtual communication server, the master switch belonging to that business party is set by the virtual communication server to the on state, indicating that facial feature value extraction is in progress for that business party; after the feature values of all face pictures have been extracted, the master switch is set to the off state.
The sample database also holds a face picture extraction state table recording each picture's extraction state, with an "extraction state" field. When an extraction request reaches the virtual communication server, the virtual communication server finds the corresponding face picture in this table by the picture's identification information and changes the extraction state of that record to "ready to extract"; when the virtual communication server receives the identification information and extraction state returned by the virtual computing server, it again finds the corresponding face picture in the table by the identification information and changes the extraction state of that record to "extraction completed".
Specifically, after receiving the extraction request of step S1, the master switch belonging to the business party in the master switch table is set to the on state according to the business party's identification information, and the extraction state of each face picture to be extracted in the extraction state table is set to the ready-to-extract state according to each picture's identification information.
S3: query the feature extraction master switch at preset time intervals.
Specifically, the virtual communication server is a Java server with the Spring framework deployed on it. Spring is an open-source, design-layer, lightweight Java development framework that solves the problem of loosely coupling the business-logic layer and the other layers. For large web application systems, Spring is usually deployed on the server first, and secondary development is then done on top of it as the application requires. This makes full use of Spring's existing interfaces, avoids simple repetitive development work, and allows rapid development of applications suited to the business.
A timer task is started on top of the Spring framework to periodically query, at the preset time interval, the state of the feature extraction master switch in the master switch table of the sample database.
Preferably, the timer task may query the state of the master switch once every second.
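The polling step above — a timer that checks the master switch at a fixed interval and dispatches work when it is on — can be sketched as follows. This is a minimal illustration in Python, not the patent's Spring implementation: the `MasterSwitchPoller` class, its callbacks, and the in-memory switch are stand-ins for the Spring timer task and the sample database.

```python
import threading

class MasterSwitchPoller:
    """Re-arming timer that polls a feature-extraction master switch at a
    fixed interval and fires a callback when the switch is on (a sketch of
    the 1-second Spring timer task described above)."""

    def __init__(self, read_switch, on_open, interval=1.0):
        self.read_switch = read_switch  # callable returning True when the switch is on
        self.on_open = on_open          # callback that dispatches the extraction task
        self.interval = interval
        self._timer = None

    def _tick(self):
        if self.read_switch():
            self.on_open()
        self.start()  # re-arm the timer for the next poll

    def start(self):
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

# Drive one poll manually for demonstration (long interval, then cancel).
switch = {"on": True}
fired = []
poller = MasterSwitchPoller(lambda: switch["on"], lambda: fired.append(1), interval=9999)
poller._tick()
poller.stop()
```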
S4: if the feature extraction master switch is in the on state, allocate processing threads for the feature extraction task, where the processing threads pass the storage paths, on the network storage platform, of the face pictures whose feature values are to be extracted to the virtual computing server through the middleware, and the virtual computing server obtains the face pictures according to the storage paths and extracts their feature values.
Specifically, when the timer task finds the master switch in the on state, a client has initiated an extraction task, and the timer task's callback function starts multiple processing threads to execute it. Each processing thread passes the storage paths, on the network storage platform, of the face pictures whose feature values are to be extracted to the virtual computing server through the middleware.
The virtual computing server is the execution end that extracts the feature values of the face pictures: it fetches each face picture from the network storage platform according to the storage path passed in by a processing thread and performs the feature value extraction.
Because of the difference in runtime efficiency between C++ and Java, a C++ server running the same extraction algorithm is much faster than a Java server; therefore, preferably, the virtual computing server is a C++ server.
Middleware is independent system software or a service program that connects two independent applications or independent systems. Even connected systems with different interfaces can still exchange information with each other through middleware; a key use of middleware is therefore message passing between heterogeneous systems.
The virtual communication server and the virtual computing server are deployed on the same physical host, and the middleware is used for communication between them: the virtual communication server passes the storage paths needed for feature value extraction to the virtual computing server through the middleware, and the virtual computing server returns the extraction results to the virtual communication server through the middleware.
Preferably, ZeroMQ is used as the middleware. ZeroMQ is a multi-threaded networking library based on message queues; it abstracts the low-level details of socket types, connection handling, framing, and even routing, and provides sockets that span multiple transport protocols. ZeroMQ is a new layer in network communication, sitting between the application layer and the transport layer of the TCP/IP (Transmission Control Protocol/Internet Protocol) stack; it is a scalable layer that can run in parallel, dispersed across distributed systems. ZeroMQ provides a frame-based socket library that makes socket programming simpler, more concise, and higher-performance, and it is commonly used for network communication.
Normally it is cumbersome for a Java server and a C++ server deployed on the same physical host to communicate across multiple threads: because the two are heterogeneous, their communication requires a dedicated API, and given the complexity of multi-threaded processing, developing such a dedicated API is time-consuming and error-prone. Using ZeroMQ's network communication strengths and taking it as the middleware for Java/C++ communication, in place of the traditional API approach, therefore improves development efficiency and reduces the risk of errors, while giving up nothing in performance compared with the traditional solution.
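The request/reply flow through the middleware — the communication server sends a storage path, the computing server replies with the picture's id, feature value, and extraction status — can be illustrated with a toy hand-off. This sketch uses in-process queues purely to show the message shape; a real deployment would use ZeroMQ sockets between the Java and C++ processes, and the placeholder feature vector is an assumption for demonstration.

```python
import queue
import threading

# Toy stand-in for the middleware hand-off between the two server roles.
request_q, reply_q = queue.Queue(), queue.Queue()

def computing_server():
    """Pretend virtual computing server: receives (id, path), replies with
    id, a placeholder feature value, and an extraction status."""
    pic_id, path = request_q.get()
    feature = [0.1, 0.2, 0.3]  # placeholder for the extracted feature vector
    reply_q.put({"id": pic_id, "feature": feature, "status": "done"})

def communication_server(pic_id, path):
    """Pretend virtual communication server: sends the storage path and
    waits for the extraction result."""
    request_q.put((pic_id, path))
    return reply_q.get(timeout=5)

worker = threading.Thread(target=computing_server)
worker.start()
result = communication_server(42, "/nas/faces/42.jpg")  # hypothetical path
worker.join()
```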
S5: receive the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server.
Specifically, each processing thread executes the extraction program on the virtual computing server. The extraction program takes the received storage path, on the network storage platform, of the face picture to be extracted as its input parameter, and returns the completed result data to the virtual communication server through the middleware; the result data includes the face picture's identification information, feature value, and extraction state, and is received by the virtual communication server.
S6: save the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and update the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state.
Besides the face picture files, the network storage platform also stores the feature values produced by extraction, saved in the form of feature value files.
The virtual communication server uses the face picture's identification information as a directory name and writes the received feature value to the network storage platform in the form of a disk file. At the same time, the virtual communication server updates the face picture's extraction state in the sample database.
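Persisting a feature value as a file under a directory named after the picture's id, as described above, can be sketched as follows. The directory layout and the JSON file format here are assumptions for illustration; the patent only specifies that the identification information names the directory and that the feature value is written as a disk file.

```python
import json
import os
import tempfile

def save_feature(root, pic_id, feature):
    """Write a feature vector to <root>/<pic_id>/feature.json; the file
    name and JSON encoding are illustrative choices, not the platform's
    documented format."""
    directory = os.path.join(root, str(pic_id))
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, "feature.json")
    with open(path, "w") as f:
        json.dump(feature, f)
    return path

root = tempfile.mkdtemp()          # stand-in for the network storage platform
saved = save_feature(root, 1001, [0.5, 0.25])
```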
Specifically, the virtual communication server updates the face picture extraction state table in the sample database through JDBC, changing the extraction state of the face picture corresponding to the identification information to "extraction completed". JDBC (Java DataBase Connectivity) is a Java API for executing SQL statements that provides uniform access to many relational databases; it consists of a set of classes and interfaces written in Java. JDBC provides a baseline on which higher-level tools and interfaces can be built, enabling database developers to write database applications. Interface programs written with JDBC work with different databases, so developers no longer need to write a separate interface program for each database, which greatly improves development efficiency.
S7: if the extraction state of every face picture whose feature value is to be extracted is the extraction-completed state, set the feature extraction master switch to the off state.
Within one extraction task, the number of face pictures whose feature values are to be extracted is fixed in step S1. Each time the virtual communication server updates a face picture's extraction state, it increments the count of completed face pictures by one and stores the accumulated value in a global variable. When this global variable equals the number of face pictures to be extracted, the extraction task is determined to be fully completed, and the feature extraction master switch is set to the off state.
Specifically, the timer task in the virtual communication server that performs the update checks every 0.5 seconds whether the value of the global variable equals the number of face pictures to be extracted; if the two are equal, it sets the feature extraction master switch in the master switch table of the sample database to the off state through JDBC.
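The completion bookkeeping in step S7 — a counter of finished pictures compared against the batch total, flipping the master switch off on the last one — can be sketched as a small class. This is a simplified single-process illustration; the patent's version keeps the counter in a global variable and the switch in the sample database.

```python
class ExtractionTracker:
    """Counts completed pictures against the batch total and turns the
    master switch off once every picture in the batch is done."""

    def __init__(self, total):
        self.total = total
        self.completed = 0
        self.master_switch_on = True

    def mark_done(self):
        self.completed += 1
        if self.completed == self.total:
            self.master_switch_on = False

tracker = ExtractionTracker(total=3)
for _ in range(3):
    tracker.mark_done()
```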
In this embodiment, the feature extraction task is determined from the extraction request received from the client, the feature extraction master switch in the sample database is set to the on state, and the extraction states of the face pictures whose feature values are to be extracted are set to the ready-to-extract state in the sample database according to the identification information in the feature extraction task; a timer task queries the state of the master switch at preset time intervals, and if the master switch is in the on state, processing threads are allocated for the feature extraction task, so that the virtual computing servers can perform feature value extraction on the face pictures concurrently. Because the extraction state of every face picture to be extracted is recorded in the sample database, the processing threads running in parallel on the virtual computing servers neither re-extract completed face pictures nor miss unfinished ones; at the same time, the server's hardware resources are fully utilized, greatly increasing the speed of batch extraction of face picture feature values and thereby improving the efficiency of the whole extraction process.
Further, in one embodiment, as shown in FIG. 3, step S4, i.e. allocating processing threads for the feature extraction task if the feature extraction master switch is in the on state, specifically includes the following steps:
S41: if the feature extraction master switch is in the on state, detect the CPU configuration information.
The CPU configuration information mainly consists of the local machine's CPU thread count. When the master switch is on, if a 4-core, 8-thread CPU is detected, the CPU thread count is 8. Likewise, each virtual communication server detects the CPU thread count of its local host.
Specifically, the virtual communication server can detect the CPU configuration information by calling system interfaces. For example, under Linux the command grep 'processor' /proc/cpuinfo | sort -u | wc -l can be used to detect the CPU configuration information.
S42: determine the thread count according to the CPU configuration information, and start M processing threads, where M is the thread count.
The CPU thread count equals the number of processing threads the virtual communication server will start. If 8 CPU threads were detected in step S41, the virtual communication server starts 8 processing threads; likewise, each virtual communication server starts a number of processing threads matching the CPU thread count detected on its local host.
Specifically, the virtual communication server starts the processing threads with the CPU thread count detected in step S41 as an input parameter; if 8 CPU threads were detected, the virtual communication server calls the new method to start instances of 8 thread objects.
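Sizing the worker pool from the detected CPU thread count, as in steps S41 and S42, looks roughly like this. The sketch uses Python's `os.cpu_count()` in place of parsing `/proc/cpuinfo`, and a trivial counting job in place of the extraction work.

```python
import os
import threading

def start_workers(work):
    """Start one worker per hardware thread, mirroring the idea of sizing
    the processing-thread pool from the CPU configuration information."""
    m = os.cpu_count() or 1          # detected thread count (fallback to 1)
    threads = [threading.Thread(target=work) for _ in range(m)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return m

counter = []
lock = threading.Lock()

def work():
    with lock:                        # each worker records that it ran
        counter.append(1)

m = start_workers(work)
```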
S43: allocate to each processing thread, according to the preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
Every processing thread on a virtual computing server fetches the face pictures whose feature values are to be extracted from the network storage platform, and reads and updates the corresponding records in the sample database; therefore, to prevent disorderly resource contention and duplicate extraction between threads, each processing thread must be allocated a number of target extraction tasks it can lock for processing. The basis for allocating this number is the preset average extraction time of a single face picture's feature value.
For example, suppose one thread needs on average about 500 milliseconds to extract one face picture's feature value. If each thread locks only one record, it accesses the database very frequently: for 1 million records, the database would be accessed 1 million times, wasting resources and over-wearing the hardware. If each thread locks too many records, say 100,000, each thread's limited processing capacity means records cannot be handled promptly, waiting times are long, and efficiency is low.
For a server in the cluster that needs less than 500 milliseconds to extract one face picture's feature value, indicating superior local CPU performance, each of its processing threads is allocated 1500 lockable target extraction tasks; for a server that needs 500 milliseconds or more, indicating comparatively weaker local CPU performance, each of its processing threads is allocated 1000 lockable target extraction tasks.
The locking process may specifically be as follows: if each processing thread may lock at most 1500 records, then for 1 million records to be extracted, each processing thread locks 1500 records in the sample database and marks the records belonging to it, e.g. by setting the locked state of those face pictures to "locked". A given record can only be locked by one processing thread. After a processing thread finishes its 1500 records, it locks another 1500 unprocessed records from the sample database, until all records to be extracted have been processed.
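The record-locking scheme above — each thread claims a batch of unlocked records, and a record once locked is never handed to another thread — can be sketched with an in-memory table in place of the SQL table. This is only an illustration of the claiming rule; in the real system the marking happens in the sample database.

```python
def lock_batch(records, owner, batch_size):
    """Claim up to `batch_size` unlocked records for one worker and mark
    them; a record, once locked, is skipped by every other worker."""
    claimed = []
    for rec in records:
        if len(claimed) == batch_size:
            break
        if rec["locked_by"] is None:
            rec["locked_by"] = owner
            claimed.append(rec["id"])
    return claimed

# Five records, two workers claiming batches of three: no overlap occurs.
table = [{"id": i, "locked_by": None} for i in range(5)]
first = lock_batch(table, "thread-1", 3)
second = lock_batch(table, "thread-2", 3)
```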
In this embodiment, processing threads are allocated to the extraction task according to the local CPU's physical thread count, and the number of lockable target extraction tasks per processing thread is determined from the average time each processing thread takes to execute an extraction task; as a result, when multiple processing threads execute the extraction task, no face picture's feature value is extracted twice, and the extraction process is not thrown into disorder by resource contention.
Further, in one embodiment, as shown in FIG. 4, step S6, i.e. updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, specifically includes the following steps:
S61: save the identification information and extraction states of the face pictures returned by the virtual computing server, correspondingly, to a cache queue.
The identification information and extraction states of the face pictures returned by the virtual computing server are cached: each face picture's identification information and its corresponding extraction state are saved to the cache queue as one pair of data.
Preferably, each face picture's identification information and corresponding extraction state are saved in key-value form, in particular in JSON format; JSON (JavaScript Object Notation) is a lightweight data-interchange format.
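Encoding each returned (identification, status) pair as a small JSON key-value document, as the preferred form suggests, might look like the following; the key names `id` and `status` are illustrative assumptions, not a format the patent defines.

```python
import json

def to_cache_entry(pic_id, status):
    """Serialize one (identification, extraction status) pair as JSON
    before appending it to the cache queue."""
    return json.dumps({"id": pic_id, "status": status})

entry = to_cache_entry(7, "extraction_completed")
restored = json.loads(entry)  # round-trips back to the original pair
```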
S62: if the length of the cache queue reaches a preset length threshold, update, according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
Specifically, the length of the cache queue can be determined by the size of the allocated array. Preferably, the preset length threshold is set to 100, i.e. when the number of completed face picture records accumulates to 100, the virtual communication server updates the extraction states of the corresponding records in the sample database to the extraction-completed state.
In this embodiment, after extracting the feature values, the virtual computing server returns the face pictures' identification information and extraction states to the virtual communication server through the middleware. If the virtual communication server wrote these result data to the sample database in real time, the frequent accesses to the sample database would waste resources and over-wear the hardware; caching the data returned by the virtual computing server and updating the sample database in batches therefore saves network resources while avoiding excessive hardware wear.
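The batched-update idea of step S62 — buffer results and flush them to the database only once the buffer reaches the threshold — can be sketched as follows, with a dict standing in for the sample database and a small threshold in place of the preferred value of 100.

```python
class BufferedStatusWriter:
    """Buffers (id, status) results and flushes them to the database in
    one batch once the buffer reaches a threshold, trading per-record
    round trips for batched updates."""

    def __init__(self, db, threshold=100):
        self.db = db              # stand-in for the sample database: id -> status
        self.threshold = threshold
        self.buffer = []

    def add(self, pic_id, status):
        self.buffer.append((pic_id, status))
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        for pic_id, status in self.buffer:
            self.db[pic_id] = status
        self.buffer.clear()

db = {}
writer = BufferedStatusWriter(db, threshold=3)
for i in range(3):                 # third add triggers the batch flush
    writer.add(i, "done")
```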
Further, in one embodiment, allocating to each processing thread the number of target extraction tasks it can lock for processing, according to the preset average extraction time of a single face picture's feature value, includes allocating that number to each processing thread according to the following formula:
t*N<T
where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is the preset total time for batch extraction of face picture feature values; that is, t is the preset average extraction time of a single face picture's feature value in step S43.
Preferably, with t of 500 milliseconds and T of 10 minutes, each processing thread is allocated N = 1200 lockable target extraction tasks; that is, every 10 minutes the virtual communication server starts processing threads that lock, in the sample database, a batch of data records of face pictures whose feature values are to be extracted and perform feature value extraction, with each thread able to lock 1200 target extraction tasks.
It should be noted that the preset total batch extraction time T can be configured freely according to project requirements: during peak request periods, when batch extraction of sample feature values needs to respond as quickly as possible, 10 minutes or less can be chosen; conversely, when responsiveness requirements are low, more than 10 minutes can be chosen. If face pictures remain whose feature values have not been fully extracted after 10 minutes, they are treated as face pictures whose extraction failed, and the virtual communication server restarts processing threads to lock the data records of the remaining unextracted face pictures, until all are extracted.
In this embodiment, computing the number of target extraction tasks each thread can lock with the above formula keeps that number within a reasonable range, so the whole feature value extraction process balances efficiency and speed.
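The bound t*N<T can be checked with simple arithmetic: with t = 500 ms and T = 10 minutes, the largest whole batch satisfying the constraint is 1200 tasks per thread, matching the preferred value above (note that the document's own N = 1200 makes t*N equal to T, so the sketch below treats the bound as t*N ≤ T).

```python
def max_lockable_tasks(t_ms, total_ms):
    """Largest integer N with t * N <= T, i.e. the biggest batch one
    thread can lock and still finish within the preset total time."""
    return total_ms // t_ms

# t = 500 ms per picture, T = 10 minutes = 600,000 ms.
n = max_lockable_tasks(500, 10 * 60 * 1000)
```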
Further, in one embodiment, after updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, the facial feature value extraction method further includes:
S8: when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, if the extraction state of any face picture whose feature value is to be extracted is not the extraction-completed state, determine that the feature extraction task corresponding to that face picture is unfinished, and reallocate a processing thread for that feature extraction task.
When processing extraction tasks, multiple threads may run into various unexpected problems such as network delay or an unresponsive server, so an error-correction mechanism is needed to guarantee that the extraction tasks can ultimately be executed to completion.
Specifically, the virtual communication server times the actual extraction of the face picture feature values from the moment it begins allocating processing threads, with the preset total extraction time set to 15 minutes. When the actual extraction time reaches 15 minutes, the virtual communication server queries the face picture extraction state table in the sample database through JDBC; for any face picture whose extraction state in the table is not yet the extraction-completed state, the feature extraction task corresponding to that face picture is determined to be unfinished. For each unfinished extraction task, the virtual communication server reallocates processing threads and restarts the timer, until all extraction tasks have been executed to completion.
In this embodiment, to guard against unexpected situations arising during multi-threaded execution, an error-correction mechanism is introduced to guarantee that every extraction task can be executed to completion.
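The error-correction check of step S8 — once the deadline has passed, sweep the status table and collect every picture that is not yet completed for re-queueing — can be sketched as a pure function over an in-memory status table; the status strings and millisecond timing are illustrative stand-ins for the JDBC query against the sample database.

```python
def find_unfinished(status_table, deadline_ms, elapsed_ms):
    """After the preset total time (15 minutes in the text) has elapsed,
    return the ids of pictures whose status is not 'completed'; these are
    the tasks to hand to freshly allocated processing threads."""
    if elapsed_ms <= deadline_ms:
        return []                     # deadline not reached: nothing to retry yet
    return [pid for pid, status in status_table.items() if status != "completed"]

table = {1: "completed", 2: "ready", 3: "completed", 4: "ready"}
retry = find_unfinished(table, deadline_ms=15 * 60 * 1000, elapsed_ms=16 * 60 * 1000)
```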
It should be understood that the sequence numbers of the steps in the above embodiments do not imply any order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
In one embodiment, as shown in FIG. 5, a facial feature value extraction system is provided, including a network storage platform, a sample database, and a server cluster, where the server cluster is composed of multiple physical servers, each including a virtual computing server, a virtual communication server, and middleware;
each physical server in the server cluster is connected to the network storage platform through a network; each physical server in the server cluster is connected to the sample database through a network; the virtual computing server and the virtual communication server are connected through the middleware;
the virtual communication server is used to implement the steps of the above facial feature value extraction method embodiments; preferably, the virtual communication server is a Java server with the Spring framework deployed on it;
the virtual computing server is used to obtain a face picture according to the storage path, on the network storage platform, of the face picture whose feature value is to be extracted, extract the feature value of the face picture, and return the feature value of the face picture to the virtual communication server; preferably, the virtual computing server is a C++ server;
the network storage platform is used to store the face pictures whose feature values are to be extracted and the feature values of those face pictures; preferably, the network storage platform is a NAS (Network Attached Storage) system, which implements data transmission based on standard network protocols and provides file sharing and data backup for computers running various operating systems in the network, such as Windows, Linux, and Mac OS;
the sample database is used to store the feature extraction master switch and the extraction state of each face picture; the relational databases usable for the sample database include, but are not limited to, MS-SQL, Oracle, MySQL, Sybase, DB2, etc.;
the middleware is used for communication between the virtual computing server and the virtual communication server; preferably, ZeroMQ is used as the middleware.
In one embodiment, a facial feature value extraction device is provided, corresponding one-to-one to the facial feature value extraction method in the above embodiments. As shown in FIG. 6, the facial feature value extraction device includes a receiving extraction request module 61, a setting status module 62, a query module 63, an allocation thread module 64, a return value module 65, a first update module 66, and a second update module 67. The function modules are described in detail as follows:
the receiving extraction request module 61 is used to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
the setting status module 62 is used to set the feature extraction master switch in the sample database to the on state, and set, according to the identification information, the extraction state of each face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
the query module 63 is used to query the feature extraction master switch at preset time intervals;
the allocation thread module 64 is used to allocate processing threads for the feature extraction task if the feature extraction master switch is in the on state, where the processing threads are used to pass the storage paths, on the network storage platform, of the face pictures whose feature values are to be extracted to the virtual computing server through the middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
the return value module 65 is used to receive the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
the first update module 66 is used to save the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and update the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
the second update module 67 is used to set the feature extraction master switch to the off state if the extraction state of every face picture whose feature value is to be extracted is the extraction-completed state.
Further, the allocation thread module 64 also includes:
a detection sub-module 641, used to detect the CPU configuration information if the feature extraction master switch is in the on state;
a thread starting sub-module 642, used to determine the thread count according to the CPU configuration information and start M processing threads, where M is the thread count;
an allocation lock sub-module 643, used to allocate to each processing thread, according to the preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
Further, the first update module 66 includes:
a cache sub-module 661, used to save, correspondingly, the identification information and extraction states of the face pictures returned by the virtual computing server to a cache queue;
a synchronization sub-module 662, used to update, if the length of the cache queue reaches a preset length threshold and according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
Further, the allocation lock sub-module 643 includes:
an allocation lock sub-unit 6431, used to allocate to each processing thread the number of target extraction tasks it can lock for processing according to the following formula:
t*N<T
where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is the preset total time for batch extraction of face picture feature values.
Further, the facial feature value extraction device also includes:
a thread allocation reset module 68, used, when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, to determine, if the extraction state of any face picture whose feature value is to be extracted is not the extraction-completed state, that the feature extraction task corresponding to that face picture is unfinished, and to reallocate a processing thread for that feature extraction task.
For specific limitations of the facial feature value extraction device, refer to the limitations of the facial feature value extraction method above, which are not repeated here. Each module in the above facial feature value extraction device may be implemented in whole or in part by software, hardware, or a combination thereof; the above modules may be embedded, in hardware form, in or independently of the processor in the computer equipment, or stored in software form in the memory of the computer equipment, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 7. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions stored in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals through a network connection. When the computer-readable instructions are executed by the processor, a facial feature value extraction method is implemented.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the steps of the facial feature value extraction method in the above embodiments are implemented, for example, steps S1 to S7 shown in FIG. 2. Alternatively, when the processor executes the computer-readable instructions, the functions of the modules/units of the facial feature value extraction device in the above embodiments are implemented, for example, the functions of modules 61 to 67 shown in FIG. 6. To avoid repetition, details are not repeated here.
In one embodiment, one or more non-volatile readable storage media storing computer-readable instructions are provided; when the computer-readable instructions are executed by one or more processors, the facial feature value extraction method in the above method embodiments is implemented, or, when executed by one or more processors, the computer-readable instructions implement the functions of the modules/units of the facial feature value extraction device in the above device embodiments. To avoid repetition, details are not repeated here.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through computer-readable instructions, which may be stored in a non-volatile computer-readable storage medium; when executed, the computer-readable instructions may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated as an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, i.e. the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of their technical features; such modifications or substitutions do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of this application, and should all be included within the protection scope of this application.

Claims (20)

  1. A facial feature value extraction method, characterized in that the facial feature value extraction method includes:
    receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
    setting a feature extraction master switch in a sample database to the on state, and setting, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
    querying the feature extraction master switch at preset time intervals;
    if the feature extraction master switch is in the on state, allocating processing threads for the feature extraction task, where the processing threads are used to pass the storage paths, on a network storage platform, of the face pictures whose feature values are to be extracted to a virtual computing server through middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
    receiving the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
    saving the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and updating the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
    if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state, setting the feature extraction master switch to the off state.
  2. The facial feature value extraction method of claim 1, characterized in that allocating processing threads for the feature extraction task if the feature extraction master switch is in the on state includes:
    if the feature extraction master switch is in the on state, detecting CPU configuration information;
    determining a thread count according to the CPU configuration information, and starting M said processing threads, where M is the thread count;
    allocating to each said processing thread, according to a preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
  3. The facial feature value extraction method of claim 1, characterized in that updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state includes:
    saving the identification information and extraction states of the face pictures returned by the virtual computing server, correspondingly, to a cache queue;
    if the length of the cache queue reaches a preset length threshold, updating, according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
  4. The facial feature value extraction method of claim 2, characterized in that allocating to each said processing thread, according to the preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing includes:
    allocating to each said processing thread the number of target extraction tasks it can lock for processing according to the following formula: t*N<T
    where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is a preset total time for batch extraction of face picture feature values.
  5. The facial feature value extraction method of claim 4, characterized in that, after updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, the facial feature value extraction method further includes:
    when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, if the extraction state of any said face picture whose feature value is to be extracted is not the extraction-completed state, determining that the feature extraction task corresponding to that face picture is unfinished, and reallocating said processing threads for that feature extraction task.
  6. A facial feature value extraction device, characterized in that the facial feature value extraction device includes:
    a receiving extraction request module, used to receive an extraction request sent by a client and determine a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
    a setting status module, used to set a feature extraction master switch in a sample database to the on state, and set, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
    a query module, used to query the feature extraction master switch at preset time intervals;
    an allocation thread module, used to allocate processing threads for the feature extraction task if the feature extraction master switch is in the on state, where the processing threads are used to pass the storage paths, on a network storage platform, of the face pictures whose feature values are to be extracted to a virtual computing server through middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
    a return value module, used to receive the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
    a first update module, used to save the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and update the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
    a second update module, used to set the feature extraction master switch to the off state if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state.
  7. The facial feature value extraction device of claim 6, characterized in that the allocation thread module includes:
    a detection sub-module, used to detect CPU configuration information if the feature extraction master switch is in the on state;
    a thread starting sub-module, used to determine a thread count according to the CPU configuration information and start M said processing threads, where M is the thread count;
    an allocation lock sub-module, used to allocate to each said processing thread, according to a preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
  8. The facial feature value extraction device of claim 7, characterized in that the allocation lock sub-module includes:
    an allocation lock sub-unit, used to allocate to each said processing thread the number of target extraction tasks it can lock for processing according to the following formula:
    t*N<T
    where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is a preset total time for batch extraction of face picture feature values.
  9. The facial feature value extraction device of claim 6, characterized in that the first update module includes:
    a cache sub-module, used to save, correspondingly, the identification information and extraction states of the face pictures returned by the virtual computing server to a cache queue;
    a synchronization sub-module, used to update, if the length of the cache queue reaches a preset length threshold and according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
  10. The facial feature value extraction device of claim 6, characterized in that the facial feature value extraction device further includes:
    a thread allocation reset module, used, when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, to determine, if the extraction state of any said face picture whose feature value is to be extracted is not the extraction-completed state, that the feature extraction task corresponding to that face picture is unfinished, and to reallocate said processing threads for that feature extraction task.
  11. A computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer-readable instructions:
    receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
    setting a feature extraction master switch in a sample database to the on state, and setting, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
    querying the feature extraction master switch at preset time intervals;
    if the feature extraction master switch is in the on state, allocating processing threads for the feature extraction task, where the processing threads are used to pass the storage paths, on a network storage platform, of the face pictures whose feature values are to be extracted to a virtual computing server through middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
    receiving the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
    saving the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and updating the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
    if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state, setting the feature extraction master switch to the off state.
  12. The computer device of claim 11, characterized in that allocating processing threads for the feature extraction task if the feature extraction master switch is in the on state includes:
    if the feature extraction master switch is in the on state, detecting CPU configuration information;
    determining a thread count according to the CPU configuration information, and starting M said processing threads, where M is the thread count;
    allocating to each said processing thread, according to a preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
  13. The computer device of claim 11, characterized in that updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state includes:
    saving the identification information and extraction states of the face pictures returned by the virtual computing server, correspondingly, to a cache queue;
    if the length of the cache queue reaches a preset length threshold, updating, according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
  14. The computer device of claim 12, characterized in that allocating to each said processing thread, according to the preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing includes:
    allocating to each said processing thread the number of target extraction tasks it can lock for processing according to the following formula: t*N<T
    where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is a preset total time for batch extraction of face picture feature values.
  15. The computer device of claim 14, characterized in that, after updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, the processor further implements the following steps when executing the computer-readable instructions:
    when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, if the extraction state of any said face picture whose feature value is to be extracted is not the extraction-completed state, determining that the feature extraction task corresponding to that face picture is unfinished, and reallocating said processing threads for that feature extraction task.
  16. One or more non-volatile readable storage media storing computer-readable instructions, characterized in that, when the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
    receiving an extraction request sent by a client, and determining a feature extraction task according to the extraction request, where the feature extraction task contains identification information of each face picture whose feature value is to be extracted;
    setting a feature extraction master switch in a sample database to the on state, and setting, according to the identification information, the extraction state of each said face picture whose feature value is to be extracted in the sample database to the ready-to-extract state;
    querying the feature extraction master switch at preset time intervals;
    if the feature extraction master switch is in the on state, allocating processing threads for the feature extraction task, where the processing threads are used to pass the storage paths, on a network storage platform, of the face pictures whose feature values are to be extracted to a virtual computing server through middleware, and the virtual computing server is used to obtain the face pictures according to the storage paths and extract the feature values of the face pictures;
    receiving the identification information, feature values, and extraction states of the face pictures returned by the virtual computing server;
    saving the feature values to the locations, on the network storage platform, corresponding to the face pictures identified by the identification information, and updating the extraction states of the face pictures identified by the identification information in the sample database to the extraction-completed state;
    if the extraction state of every said face picture whose feature value is to be extracted is the extraction-completed state, setting the feature extraction master switch to the off state.
  17. The non-volatile readable storage media of claim 16, characterized in that allocating processing threads for the feature extraction task if the feature extraction master switch is in the on state includes:
    if the feature extraction master switch is in the on state, detecting CPU configuration information;
    determining a thread count according to the CPU configuration information, and starting M said processing threads, where M is the thread count;
    allocating to each said processing thread, according to a preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing.
  18. The non-volatile readable storage media of claim 16, characterized in that updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state includes:
    saving the identification information and extraction states of the face pictures returned by the virtual computing server, correspondingly, to a cache queue;
    if the length of the cache queue reaches a preset length threshold, updating, according to the identification information saved in the cache queue, the extraction state of the face picture identified by that identification information in the sample database to the extraction state corresponding to that identification information in the cache queue.
  19. The non-volatile readable storage media of claim 17, characterized in that allocating to each said processing thread, according to the preset average extraction time of a single face picture's feature value, the number of target extraction tasks it can lock for processing includes:
    allocating to each said processing thread the number of target extraction tasks it can lock for processing according to the following formula: t*N<T
    where t is the preset average extraction time of a single face picture's feature value, N is the number of target extraction tasks, and T is a preset total time for batch extraction of face picture feature values.
  20. The non-volatile readable storage media of claim 19, characterized in that, after updating the extraction state of the face picture identified by the identification information in the sample database to the extraction-completed state, the computer-readable instructions, when executed by one or more processors, cause the one or more processors to further perform the following steps:
    when the actual extraction time of the face picture feature values exceeds the preset total batch extraction time, if the extraction state of any said face picture whose feature value is to be extracted is not the extraction-completed state, determining that the feature extraction task corresponding to that face picture is unfinished, and reallocating said processing threads for that feature extraction task.
PCT/CN2018/120825 2018-08-21 2018-12-13 人脸特征值提取方法、装置、计算机设备及存储介质 WO2020037896A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810953164.4 2018-08-21
CN201810953164.4A CN109271869B (zh) 2018-08-21 2018-08-21 人脸特征值提取方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020037896A1 true WO2020037896A1 (zh) 2020-02-27

Family

ID=65154069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/120825 WO2020037896A1 (zh) 2018-08-21 2018-12-13 人脸特征值提取方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN109271869B (zh)
WO (1) WO2020037896A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598043A (zh) * 2020-05-25 2020-08-28 山东超越数控电子股份有限公司 一种人脸识别的方法、系统、设备及介质
CN112149087A (zh) * 2020-08-24 2020-12-29 深圳达实软件有限公司 一种人脸权限快速授权方法
CN113760487A (zh) * 2020-08-05 2021-12-07 北京京东振世信息技术有限公司 一种业务处理方法和装置

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108499A (zh) * 2018-02-07 2018-06-01 腾讯科技(深圳)有限公司 人脸检索方法、装置、存储介质及设备
CN108197608A (zh) * 2018-02-01 2018-06-22 广州市君望机器人自动化有限公司 人脸识别方法、装置、机器人及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8078674B2 (en) * 2007-05-10 2011-12-13 International Business Machines Corporation Server device operating in response to received request
CN105975948B (zh) * 2016-05-23 2019-03-29 南京甄视智能科技有限公司 用于人脸识别的云服务平台架构
CN106845385A (zh) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 视频目标跟踪的方法和装置
CN108197318A (zh) * 2018-02-01 2018-06-22 广州市君望机器人自动化有限公司 人脸识别方法、装置、机器人及存储介质

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108197608A (zh) * 2018-02-01 2018-06-22 广州市君望机器人自动化有限公司 人脸识别方法、装置、机器人及存储介质
CN108108499A (zh) * 2018-02-07 2018-06-01 腾讯科技(深圳)有限公司 人脸检索方法、装置、存储介质及设备

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598043A (zh) * 2020-05-25 2020-08-28 山东超越数控电子股份有限公司 一种人脸识别的方法、系统、设备及介质
CN113760487A (zh) * 2020-08-05 2021-12-07 北京京东振世信息技术有限公司 一种业务处理方法和装置
CN113760487B (zh) * 2020-08-05 2024-04-12 北京京东振世信息技术有限公司 一种业务处理方法和装置
CN112149087A (zh) * 2020-08-24 2020-12-29 深圳达实软件有限公司 一种人脸权限快速授权方法

Also Published As

Publication number Publication date
CN109271869B (zh) 2023-09-05
CN109271869A (zh) 2019-01-25

Similar Documents

Publication Publication Date Title
US10621005B2 (en) Systems and methods for providing zero down time and scalability in orchestration cloud services
US9342346B2 (en) Live migration of virtual machines that use externalized memory pages
Lin et al. Towards a non-2pc transaction management in distributed database systems
US8713046B2 (en) Snapshot isolation support for distributed query processing in a shared disk database cluster
US9501502B2 (en) Locking protocol for partitioned and distributed tables
US20180173745A1 (en) Systems and methods to achieve sequential consistency in replicated states without compromising performance in geo-distributed, replicated services
US9990225B2 (en) Relaxing transaction serializability with statement-based data replication
WO2020037896A1 (zh) 人脸特征值提取方法、装置、计算机设备及存储介质
US20230106118A1 (en) Distributed processing of transactions in a network using timestamps
US20210073198A1 (en) Using persistent memory and remote direct memory access to reduce write latency for database logging
US10108456B2 (en) Accelerated atomic resource allocation on a multiprocessor platform
WO2017128028A1 (zh) 一种事务处理方法及装置
US20230315721A1 (en) Snapshot isolation query transactions in distributed systems
US11138198B2 (en) Handling of unresponsive read only instances in a reader farm system
US11709809B1 (en) Tree-based approach for transactionally consistent version sets
US10515066B2 (en) Atomic updates of versioned data structures
US9342351B1 (en) Systems and methods for efficient DB2 outage operations
US10970175B2 (en) Flexible per-request data durability in databases and other data stores
US20220171756A1 (en) Method and apparatus for distributed database transactions using global timestamping
US10824640B1 (en) Framework for scheduling concurrent replication cycles
US9852172B2 (en) Facilitating handling of crashes in concurrent execution environments of server systems while processing user queries for data retrieval
US20160306666A1 (en) Selective Allocation of CPU Cache Slices to Objects
US11687496B2 (en) Synchronization of distributed data files
US10922008B2 (en) System and method for backup of virtual machines organized using logical layers
US11960510B2 (en) Data movement from data storage clusters

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18930590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18930590

Country of ref document: EP

Kind code of ref document: A1