CN110706148A - Face image processing method, device, equipment and storage medium - Google Patents

Face image processing method, device, equipment and storage medium

Info

Publication number
CN110706148A
CN110706148A (application CN201910959768.4A)
Authority
CN
China
Prior art keywords
image processing
gpu
library
processing task
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910959768.4A
Other languages
Chinese (zh)
Other versions
CN110706148B (English)
Inventor
吴孟晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp, CCB Finetech Co Ltd filed Critical China Construction Bank Corp
Priority to CN201910959768.4A priority Critical patent/CN110706148B/en
Publication of CN110706148A publication Critical patent/CN110706148A/en
Application granted granted Critical
Publication of CN110706148B publication Critical patent/CN110706148B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention discloses a face image processing method, device, equipment and storage medium. The face image processing method comprises: acquiring an image processing task instruction; determining a target library position for executing the image processing task according to running state information of the GPU devices used for image processing, wherein each library position comprises at least one GPU device; and sending the image processing task instruction to the target library position, so that each GPU device in the target library position executes the instruction in parallel based on the image library it stores. Because a suitable target library position is selected for the image processing task on the basis of the GPU devices' running state information, a fault in one library position does not affect execution of the task, which guarantees the task's success rate; and because the GPU devices in the target library position execute the task in parallel, execution efficiency is improved.

Description

Face image processing method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method, a device, equipment and a storage medium for processing a face image.
Background
With the development of the Internet of Things, users place ever higher demands on server response speed. In the field of face technology, a user can complete a series of face image processing operations through a graphics processing server on which a face algorithm is deployed; such a server integrates a Graphics Processing Unit (GPU) and may be called a GPU server for short. Uploading the face library to the GPU server allows operations on face images to be answered more quickly.
Generally, a GPU server has to process multiple face service requests sequentially. In a scenario where multiple face image requests share one face library on a GPU server, service requests are therefore processed slowly.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for processing a face image, which are used for improving the efficiency of processing the face image and the success rate of executing an image processing task.
In a first aspect, an embodiment of the present invention provides a face image processing method, including:
acquiring an image processing task instruction;
determining target library positions for executing the image processing task according to the running state information of the GPU equipment for image processing, wherein each library position comprises at least one GPU equipment;
and sending the image processing task instruction to the target library location, so that each GPU device in the target library location executes the image processing task instruction in parallel based on the respective stored image library.
In a second aspect, an embodiment of the present invention further provides a face image processing apparatus, including:
the task instruction acquisition module is used for acquiring an image processing task instruction;
the system comprises a target library position determining module, a task execution module and a task execution module, wherein the target library position determining module is used for determining a target library position for executing an image processing task according to the running state information of the GPU equipment for image processing, and each library position comprises at least one GPU equipment;
and the task instruction execution module is used for sending the image processing task instruction to the target library position so as to enable each GPU device in the target library position to execute the image processing task instruction in parallel based on the image library stored in each GPU device.
In a third aspect, an embodiment of the present invention further provides a computer device, including:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the face image processing method according to any embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the face image processing method according to any embodiment of the present invention.
Based on the judgment of the GPU devices' running state information, a suitable target library position is selected for the image processing task from the plurality of pre-configured library positions. The library positions can serve the same face logic library and hold the same content, so that a problem in one library position does not affect the execution of the image processing task, which guarantees the task's success rate; and since the GPU devices in the target library position execute the image processing task in parallel, both the execution efficiency and the success rate of the task are improved.
Drawings
FIG. 1 is a flowchart of a face image processing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a face image processing method according to a second embodiment of the present invention;
FIG. 3 is an exemplary diagram of the library bit and page bit of the face logic library in the second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a face image processing apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a flowchart of a face image processing method according to a first embodiment of the present invention; the embodiment is applicable to improving the execution efficiency and success rate of face image processing tasks. The method can be executed by a face image processing apparatus, which can be implemented in software and/or hardware and configured in computer equipment with communication and computing capabilities, such as a background server. As shown in fig. 1, the method specifically includes:
and step 101, acquiring an image processing task instruction.
An image processing task instruction is a request for an operation to be performed on the face image library. In this embodiment, one or more image processing task instructions may be obtained according to different image processing requirements; different instructions may correspond to different image processing scenes, which may be divided by image processing requester or by image usage scene. Optionally, the image processing task instruction may be sent to the computer device executing the technical solution of this embodiment by a face detection device, an identity recognition device, or an interaction device, for example a camera, a WiFi box, a mobile application (APP), or an all-in-one machine.
Optionally, the image processing task instruction includes a face image comparison processing instruction, a face information query instruction, a face image quality detection instruction, and a change operation instruction of the face image stored in each GPU device, where the change includes addition, deletion, and modification. The face image comparison processing instruction may include an image 1 to 1 comparison task instruction, and the change operation instruction of the face image stored in each GPU device may include a face warehousing operation instruction and the like.
Specifically, the face image comparison processing instruction includes searching a face library for a target face image identical to the comparison face image; the face information query instruction includes querying face information in the face library; the face image quality detection instruction includes detecting the generation quality of a target face image, mainly to screen out unqualified images such as blurred faces that cannot be recognized; the change operation instruction covers adding, deleting and modifying face images stored in the GPU devices, for example requesting deletion of a certain face image and its related information stored in a GPU device.
Step 102, determining a target library position for executing the image processing task according to the running state information of the GPU equipment for image processing, wherein each library position comprises at least one GPU equipment.
A graphics processing (GPU) device is a device that uses a GPU as its processor to process images, and includes a hardware server integrating face algorithms such as a face comparison algorithm and a face image management method. Running state information refers to parameter information that characterizes the operating condition of a GPU device, including its online state, memory size, cache information, and the like. The target library position is a library position, identified from the running state information of the GPU devices, that meets the requirements for executing the task; for example, it may be an idle library position, i.e. one not currently executing an image processing task, or a library position in which the number of concurrently executing image processing tasks has not reached its upper limit.
Before determining the target library position, a plurality of library positions may be configured in advance in this embodiment. Each library position comprises at least one GPU device, and the plurality of library positions may correspond to one face logic library. The face logic library is an image library that provides face information for different image processing scenes; it can be partitioned across the GPU devices of a library position according to their number, and the set of face images stored in each GPU device is called a target library. All library positions hold the same content.
Each library position comprises a plurality of page positions. A page position is a subset of the GPU devices in a library position obtained by grouping them; the pages of one library position hold different data, and together they make up the complete library position. Pages may be numbered sequentially, e.g. page 1, page 2, page 3, and so on. Each page thus holds part of the face logic library as a GPU target library.
Determining the running state information of the GPU devices includes determining the availability of the page positions. For example, the availability of each page may be determined from the online state or memory information of its GPU device: a GPU device that is offline or whose memory is full may be unavailable. If any page is unavailable, the library position it belongs to is unavailable; conversely, every page of the determined target library position is available. This guarantees the completeness of the target library position and thus the accuracy of executing the image processing task.
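The availability rule described above, under which a library position is usable only if every one of its pages is usable, can be sketched as follows. This is an illustrative sketch, not code from the patent; the names (`GpuDevice`, `min_free_mb`) and the memory threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class GpuDevice:
    """One page position: a GPU holding one partition (target library) of the face logic library."""
    online: bool
    memory_free_mb: int

def page_available(dev, min_free_mb=512):
    # A page is unusable if its GPU is offline or its memory is (nearly) full.
    return dev.online and dev.memory_free_mb >= min_free_mb

def pick_target_bin(bins):
    # A library position qualifies only when every page in it is usable,
    # which preserves the completeness of the partitioned face library.
    for name, pages in bins.items():
        if all(page_available(p) for p in pages):
            return name
    return None
```

A library position with even one offline or memory-full device is skipped entirely, since a partial library would miss part of the face data.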
In addition, this embodiment supports sharing the face logic library corresponding to the plurality of library positions across multiple scenes. Compared with the prior art, the face logic library no longer has to be deployed in isolation for each image processing scene; each library position can serve different scenes, which raises the utilization of GPU computing power across scenes, reduces waste of GPU computing power, and uses the GPU devices to the greatest extent.
And 103, sending the image processing task instruction to the target library location so that each GPU device in the target library location can execute the image processing task instruction in parallel based on the image library stored in the GPU device.
When the target library position has been determined, the image processing task is issued to it. The target library position comprises a plurality of page positions, each of which may comprise one GPU device storing part of the image library content of the face logic library corresponding to the target library position. When the target library position receives the image processing task, all of its pages execute the task simultaneously; as soon as one page finishes the task, execution is considered complete, the remaining pages stop, and that page returns the execution result to the computer device implementing the technical solution of this embodiment.
Illustratively, take the acquired image processing task instruction to be a search-by-image task instruction, and assume three library positions correspond to the face logic library, numbered library 1, library 2 and library 3, each comprising three pages numbered page 1, page 2 and page 3. At a certain moment library 1 is executing several tasks, and the GPU device of page 1 in library 2 is offline; after the search-by-image task instruction is issued, library 3 is therefore determined as the target library position. The three pages of library 3 execute the search instruction simultaneously, i.e. each page searches for the corresponding image in its own GPU target library; when page 1 finds the image, page 2 and page 3 stop searching and the task is complete. Executing a task in parallel on multiple pages improves execution efficiency, and adding pages to a library position enables efficient search of very large libraries. When the face logic library is small or the time cost is acceptable, the number of pages can be 1, i.e. a single-page mode, which requires fewer GPU devices and saves cost.
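The first-result-wins parallel search across the pages of a library position can be sketched with a thread pool; this is an illustrative sketch under the assumption that each page's partition is represented as a dictionary, and the function names are not from the patent.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search_page(page_images, target_id):
    # Each page scans only its own partition (its GPU target library).
    return page_images.get(target_id)

def search_bin(pages, target_id):
    # All pages run the search concurrently; the first page that finds the
    # image wins, and the results of the remaining pages are discarded.
    with ThreadPoolExecutor(max_workers=max(1, len(pages))) as pool:
        futures = [pool.submit(search_page, p, target_id) for p in pages]
        for fut in as_completed(futures):
            hit = fut.result()
            if hit is not None:
                return hit
    return None
```

Only one page can hold a given face, so whichever page reports a hit first ends the task, mirroring the "page 1 finds it, pages 2 and 3 stop" flow above.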
Illustratively, when the image processing task instruction is to read and query face information, each GPU device executing the task first queries its in-heap high-frequency cache, then its distributed cache, and only queries the database in the GPU device, i.e. the target library, if neither cache hits. The in-heap high-frequency cache holds face information with a high query frequency, selected from historical query statistics and stored in the GPU device's cache; optionally it can be refreshed according to daily query frequency. The distributed cache holds historically queried face information stored according to the query log; optionally it can be updated in real time, with newly queried information added and information that has not been queried within a time threshold deleted. The database is the storage area in the GPU device that holds the complete face information.
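The three-tier lookup order can be sketched as follows; the function and parameter names are illustrative. In line with the description above, the hot cache is assumed to be refreshed offline from query statistics, so only the distributed cache is populated on a database hit.

```python
def query_face_info(face_id, hot_cache, dist_cache, database):
    # 1) In-heap high-frequency cache, refreshed offline from query statistics.
    if face_id in hot_cache:
        return hot_cache[face_id]
    # 2) Distributed cache of recently queried entries.
    if face_id in dist_cache:
        return dist_cache[face_id]
    # 3) Full target library; cache the result for subsequent real-time queries.
    value = database.get(face_id)
    if value is not None:
        dist_cache[face_id] = value
    return value
```

Eviction of distributed-cache entries that go unqueried past the time threshold is omitted here for brevity.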
In this embodiment, based on the judgment of the GPU devices' running state information, a suitable target library position is selected for the image processing task from the plurality of pre-configured library positions. The library positions can serve the same face logic library and hold the same content, so that a problem in one library position does not affect execution of the image processing task, which guarantees the task's success rate; and since the GPU devices in the target library position execute the task in parallel, both the execution efficiency and the success rate of the task are improved.
Example two
Fig. 2 is a flowchart of a face image processing method in the second embodiment of the present invention, and the second embodiment of the present invention is further optimized based on the first embodiment of the present invention. As shown in fig. 2, the method includes:
step 201, constructing a state tree according to the face logic library to which each library position belongs and the dependency relationship between the library positions and the GPU device, wherein the face logic library is obtained by classifying face images according to a preset face image classification strategy.
The face image classification policy may include classifying face images according to their usage scenes, or according to the geographic regions to which they belong; specifically, it may be set according to the different processing requirements for face images, and this embodiment places no particular limit on it. The dependency relationship between library positions and GPU devices includes the affiliation between pages and library positions, e.g. which library position a given page belongs to. The state tree is used to monitor the working information of the face logic library, including the running state information of each GPU device, for example the states of the logic library, all library positions and pages, the maximum capacity, current capacity, GPU type, page address, and the like.
For example, after the face logic library has been assigned its library position and page numbers, all logic libraries, library positions and pages are registered in the state tree, through which their state, maximum capacity, current capacity, GPU type, page address and the like can be monitored, as well as the project state, access project address, management project address and configuration information. Because the state tree monitors the states and data of all modules, it guarantees their cooperation and serves as the core management of the whole project; the modules communicate indirectly through the state tree, which facilitates decoupling.
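A minimal registry of the kind described above might look as follows. This is a sketch under the assumption that the state tree is a nested mapping of logic library to library positions to pages; the class and field names are illustrative, not from the patent.

```python
class StateTree:
    """Tracks each face logic library -> library positions -> pages and their state."""

    def __init__(self):
        self.tree = {}

    def register_page(self, library, bin_id, page_id, **state):
        # Registration seeds default monitored fields; callers may override them.
        pages = self.tree.setdefault(library, {}).setdefault(bin_id, {})
        pages[page_id] = {"status": "online", "current": 0, "max": 0, **state}

    def update(self, library, bin_id, page_id, **changes):
        # Real-time updates (e.g. capacity after an add/delete) go through here.
        self.tree[library][bin_id][page_id].update(changes)

    def bin_states(self, library):
        # The scheduler reads this view to judge which library positions are usable.
        return self.tree.get(library, {})
```

Keeping all module state behind one registry like this is what lets the scheduler, the management interface and the update path stay decoupled from each other.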
Step 202, acquiring an image processing task instruction.
Step 203, monitoring the running state information of each GPU device by using the state tree, so as to determine a target library location for executing the image processing task according to the running state information of each GPU device monitored by the state tree.
The running state information of the GPU equipment is obtained through a state tree. For example, the running state information of the GPU device may be determined according to the state, the current capacity, and the maximum capacity of the GPU device monitored by the state tree, and then a target library bit is selected, and if the state of the GPU device is displayed as an offline state or the current capacity of the GPU device is close to the maximum capacity, the GPU device is unavailable, that is, the current page bit is unavailable, and then the library bit to which the page bit belongs is unavailable.
And step 204, calling a GPU management interface, and sending the image processing task to the target library position, so that each GPU device in the target library position executes the image processing task instruction in parallel based on the image library stored in each GPU device.
The face logic library corresponds to a plurality of library positions, each comprising at least one GPU device. The GPU management interface is an interface for managing the GPU devices uniformly; through it, the same image processing task can conveniently be sent to different GPU devices.
Optionally, each GPU device corresponds to an adapter adapted to the GPU device, and correspondingly, the sending the image processing task instruction to the target library location includes:
calling a GPU management interface, and sending the image processing task to the target library position;
and sending the image processing task to each GPU device through an adapter adaptive to each GPU device in the target library position.
An adapter is a uniformly defined interface provided for GPU devices of different vendors and models. It connects to the GPU management interface, enabling common management of heterogeneous GPU devices and improving their compatibility; the adapter realizes the connection between the management interface and the GPU device. By developing a corresponding adapter, GPU devices of different vendors and versions can be made compatible, GPU devices can conveniently be shared across scenes, development work per scene is simplified, resource investment is reduced, and efficiency is improved.
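The adapter arrangement is essentially the classic adapter pattern; a sketch follows. The vendor names and request formats are invented purely for illustration.

```python
from abc import ABC, abstractmethod

class GpuAdapter(ABC):
    """Uniform interface the management layer calls; one subclass per vendor/model."""

    @abstractmethod
    def dispatch(self, task):
        ...

class VendorAAdapter(GpuAdapter):
    def dispatch(self, task):
        # Translate the generic task into vendor A's request format (hypothetical).
        return "vendorA:" + task["op"]

class VendorBAdapter(GpuAdapter):
    def dispatch(self, task):
        return "vendorB:" + task["op"]

def send_to_bin(adapters, task):
    # The management interface never talks to devices directly; it only sees
    # adapters, so a heterogeneous library position looks uniform to it.
    return [a.dispatch(task) for a in adapters]
```

Supporting a new GPU vendor then means writing one new `GpuAdapter` subclass, with no change to the management interface or the scheduling logic.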
For example, as shown in fig. 3, one face logic library may correspond to three library positions holding the same face information: library 1 consists of 4 pages, library 2 of 3 pages, and library 3 of 2 pages. Each page may comprise one GPU device, whose target library stores part of the face image information of the face logic library, and each GPU device corresponds to an adapter adapted to it. A face logic library can correspond to different numbers of library positions according to the size of the library and the demand on it, and different numbers of pages can be provided per library position to meet the execution requirements of image processing task instructions. For example, when the face logic library is the face library of a company, it is stored in three library positions. The four pages of library 1 may correspond to the face target libraries of department A, department B, department C and the remaining departments of the company; the pages of library 2 may correspond to the company's offices, e.g. page 1 of library 2 is the face target library of the office in city x, page 2 that of the office in city y, and page 3 that of the remaining offices; the pages of library 3 may correspond to different identity information of the employees, e.g. page 1 of library 3 is the face target library of the management, and page 2 that of the other employees.
Optionally, in the target library location, sending the image processing task to each GPU device through an adapter adapted to each GPU device, includes:
and sending the image processing task and a task identifier to each GPU device through an adapter adaptive to each GPU device, wherein the task identifier is used for each GPU device to authenticate the received image processing task.
A task identifier is the means by which a GPU device authenticates an image processing task; illustratively, it includes a message token. For example, each library position issues a fixed number of message tokens, indicating that the corresponding number of image processing tasks are allowed to execute in parallel at the same time. When face comparison is performed, the comparison task instruction is first parsed; the task can connect to the GPU service only after obtaining a token, and is rejected if no token is obtained. Tokens are replenished regularly to keep the GPU service continuous. Allocating message tokens to image processing tasks guarantees the efficiency of the GPU service during task peaks and prevents a flood of tasks from crashing the GPU.
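The fixed token pool per library position behaves like a simple counting gate; a sketch follows, with names (`TokenGate`, `capacity`) chosen for illustration rather than taken from the patent.

```python
class TokenGate:
    """Fixed pool of message tokens for one library position;
    one token is held per in-flight image processing task."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = capacity

    def acquire(self):
        # A task without a token is rejected outright, not queued.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

    def release(self):
        # Returning/replenishing tokens keeps the GPU service continuous.
        self.tokens = min(self.capacity, self.tokens + 1)
```

Capping concurrency this way is what protects the GPU from being crashed by a burst of tasks at peak time.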
Optionally, task execution rejection feedback sent by the target library position is received, and the image processing task rejected by the target library position is sent to a candidate library position, so that the candidate library position executes it; the rejection feedback is generated when the number of task identifiers already held by the GPU devices of the target library position has reached the threshold.
Illustratively, after the message tokens of library 1 have all been issued, library 1 rejects the image processing task instruction and sends rejection feedback to the management interface, which forwards the instruction to a candidate library position that still has message tokens. This improves the efficiency of image processing tasks and avoids the delay caused by a task simply being rejected for lack of a token once it reaches the target library position.
Optionally, if no response feedback of the target bin to the image processing task instruction is received within a preset response time, determining a candidate bin for executing the image processing task from other bins except the target bin.
Illustratively, the state information of the target library position may show no error before the image processing task instruction is sent, yet when the instruction arrives the target library position may be unavailable because a GPU device has failed suddenly and cannot respond to the current instruction. In that case, a task instruction for which no response feedback is received within the preset time is sent to another available candidate library position. This ensures the instruction is executed, avoids task failures caused by sudden changes in GPU device state, completes the task-handling logic, prevents task instructions from piling up, and allows them to be processed in time.
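The timeout-then-failover behaviour can be sketched as below; this is an illustrative sketch in which each library position is modelled as a callable and the 0.2 s timeout is an assumed value, not one from the patent.

```python
import queue
import threading

def dispatch_with_fallback(bins, task, timeout_s=0.2):
    """Send `task` to the target library position first; if no response
    arrives within `timeout_s`, fail over to the next candidate in `bins`."""
    for name, handler in bins.items():
        box = queue.Queue(maxsize=1)
        # Run the handler in a worker thread so the wait can be bounded.
        threading.Thread(target=lambda h=handler, q=box: q.put(h(task)),
                         daemon=True).start()
        try:
            return name, box.get(timeout=timeout_s)
        except queue.Empty:
            continue  # this library position did not answer in time; try the next
    raise TimeoutError("no library position responded to the task")
```

The ordering of `bins` encodes the preference: target library position first, then the candidates determined from the state tree.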
Optionally, after sending the image processing task instruction to the target library location, the method further includes:
and if a face image stored in a GPU device in the target library location is added or deleted, updating the image storage capacity of that GPU device, as monitored in the state tree.
Illustratively, when the image processing task instruction adds or deletes a face image stored in a GPU device, the state information on the state tree, including the current image storage capacity of that GPU device, is updated for the library location corresponding to the GPU device whose face image information changed. Because the state information on the state tree is updated in real time, the accuracy of the state tree is guaranteed, image processing tasks can be handled according to it, and data disorder is prevented.
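A state tree of this kind can be sketched as a nested mapping from face logic library to library location to GPU device, with the image storage capacity updated in place. All names and the initial counts here are illustrative assumptions.

```python
# State tree: face logic library -> library location -> GPU device -> state.
state_tree = {
    "face_lib_A": {
        "location_1": {
            "gpu_0": {"status": "ok", "image_count": 100},
        },
    },
}


def update_image_capacity(tree, logic_lib, location, gpu, delta):
    """Apply an add (+1) or delete (-1) of a stored face image to the
    monitored image storage capacity of the given GPU device."""
    node = tree[logic_lib][location][gpu]
    node["image_count"] += delta
    return node["image_count"]
```

Keeping the update inside the tree node means any module that reads the state tree sees the new capacity immediately, which is the real-time property the text relies on.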
Illustratively, when the image processing task instruction adds or deletes a face image stored in the GPU devices, the task is processed asynchronously through a message queue (MQ). For example, after an instruction for adding a face image is received, the task instruction is placed on the MQ; the application system then updates the target library locations to be processed one by one until every library location has finished executing the instruction, and a completion signal is fed back to the MQ. This asynchronous-thread approach guarantees the eventual consistency of the data across all library locations under the face logic library.
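The MQ-based asynchronous update can be sketched with an in-process queue and a worker thread standing in for the application system; the `Location` class, the sentinel shutdown, and the `("done", face_id)` acknowledgment format are assumptions of this sketch, not the patent's protocol.

```python
import queue
import threading


class Location:
    """Minimal stand-in for a library location's stored face images."""
    def __init__(self):
        self.faces = set()


def mq_worker(mq, locations, done):
    """Consume add/delete instructions from the queue, apply each one to
    every library location in turn, then signal completion back to the MQ."""
    while True:
        instr = mq.get()
        if instr is None:            # shutdown sentinel
            break
        op, face_id = instr
        for loc in locations:        # update each library location sequentially
            (loc.faces.add if op == "add" else loc.faces.discard)(face_id)
        done.put(("done", face_id))  # execution-finished signal


mq, done = queue.Queue(), queue.Queue()
locations = [Location(), Location(), Location()]
worker = threading.Thread(target=mq_worker, args=(mq, locations, done))
worker.start()
mq.put(("add", "face_001"))          # instruction to add a face image
mq.put(None)
worker.join()
```

The caller returns as soon as the instruction is enqueued; consistency across locations is reached when the worker has drained the queue, which is exactly the eventual-consistency guarantee the text describes.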
Illustratively, the services of the GPU devices are connected through a connection pool, a technique that creates and manages a buffer of ready-to-use connections available to any thread that needs one. The GPU devices in a face logic library share the same connection pool: when a GPU service request finishes, the connection is not torn down, so the next request to the same GPU service can reuse it directly. The connection pool shortens connection setup time and avoids repeated setup and teardown overhead; without it, a new connection would have to be established for every service request, wasting resources.
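A minimal connection pool of the kind described can be sketched as below; the factory callable and pool size are illustrative assumptions, and a production pool would also need health checks and timeouts.

```python
import queue


class ConnectionPool:
    """Pre-create a fixed number of connections to a GPU service; callers
    borrow a ready connection and return it instead of reconnecting."""
    def __init__(self, create_conn, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(create_conn())

    def acquire(self):
        return self._pool.get()   # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)      # keep the connection alive for reuse


created = []
pool = ConnectionPool(lambda: created.append(object()) or created[-1], size=1)
c1 = pool.acquire(); pool.release(c1)
c2 = pool.acquire(); pool.release(c2)   # reuses the same connection
```

The usage at the bottom shows the point of the text: two requests to the same GPU service share one connection, so only one connect operation ever happens.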
The embodiment of the invention monitors the state information of the face libraries and the GPU devices through the state tree, which unifies information management, simplifies the interaction between modules, and improves the processing efficiency of image processing tasks. Moreover, adapters that the management interface can access uniformly are developed for GPU devices of different types from different manufacturers, so that otherwise incompatible GPU devices with different interfaces can all be controlled through the management interface, achieving unified management and saving development resources across scenarios. Finally, face information is synchronized asynchronously through the MQ, guaranteeing the eventual consistency of face data across multiple library locations.
Example three
Fig. 4 is a schematic structural diagram of a face image processing apparatus according to a third embodiment of the present invention, which is applicable to cases where the execution efficiency of face image processing tasks needs to be improved. As shown in fig. 4, the apparatus includes:
a task instruction obtaining module 410, configured to obtain an image processing task instruction;
a target library location determining module 420, configured to determine a target library location for executing an image processing task according to the running state information of the GPU device for image processing, where each library location includes at least one GPU device;
a task instruction executing module 430, configured to send the image processing task instruction to the target library location, so that each GPU device in the target library location executes the image processing task instruction in parallel based on the respective stored image library.
The embodiment of the invention selects a suitable target library location for the image processing task based on the running state information of the GPU devices, so that the execution of the task is unaffected when some of the pre-configured library locations fail, which improves the success rate of image processing tasks. In addition, each GPU device in the target library location executes the task in parallel, which improves execution efficiency.
Optionally, the apparatus further comprises:
the state tree construction module is used for constructing a state tree according to the face logic library to which each library location belongs and the membership relation between library locations and GPU devices, before the target library location determining module 420 determines the target library location for executing the image processing task according to the running state information of the GPU devices, wherein the face logic library is obtained by classifying face images according to a preset face image classification strategy;
and the running state information detection module is configured to monitor the running state information of each GPU device by using the state tree before the target library location determining module 420 determines the target library location, so that the target library location for executing the image processing task is determined according to the running state information of each GPU device monitored by the state tree.
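The state tree construction module's job can be sketched as building the nested structure from membership triples; the triple format and the `"unknown"` initial status are assumptions of this sketch.

```python
def build_state_tree(membership):
    """Build the state tree from (face logic library, library location,
    GPU device) membership triples; leaves hold per-device running state."""
    tree = {}
    for logic_lib, location, gpu in membership:
        tree.setdefault(logic_lib, {}) \
            .setdefault(location, {})[gpu] = {"status": "unknown"}
    return tree
```

Once built, the monitoring module only has to refresh the leaf dictionaries, and the target-location decision can walk the tree top-down.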
Optionally, each GPU device corresponds to an adapter adapted thereto;
correspondingly, the task instruction execution module comprises:
the management interface calling unit is used for calling a GPU management interface and sending the image processing task to the target library location;
and the image processing task sending unit is used for sending the image processing task to each GPU device in the target library location through the adapter adapted to that GPU device.
Optionally, the image processing task sending unit is specifically configured to:
and sending the image processing task and a task identifier to each GPU device through the adapter adapted to that GPU device, wherein the task identifier is used by each GPU device to authenticate the received image processing task.
Optionally, the apparatus further includes a reject execution feedback receiving module, specifically configured to:
and receiving task-execution rejection feedback sent by the target library location, and sending the image processing tasks that the target library location refused to execute to a candidate library location, so that the candidate library location executes them, wherein the rejection feedback is generated when the number of task identifiers received by each GPU device in the target library location reaches a number threshold.
Optionally, the apparatus further includes a candidate bin position determining module, specifically configured to:
and if no response feedback to the image processing task instruction is received from the target library location within a preset response time, determining a candidate library location for executing the image processing task from the library locations other than the target library location.
Optionally, the image processing task instruction includes a face image comparison processing instruction, a face information query instruction, a face image quality detection instruction, and a change operation instruction of the face image stored in each GPU device, where the change includes addition, deletion, and modification.
Optionally, the apparatus further includes a state tree updating module, specifically configured to, after the task instruction executing module executes the operation of sending the image processing task instruction to the target library location, so that each GPU device in the target library location executes the operation of the image processing task instruction in parallel based on the respective stored image library:
and if the face image stored in the GPU equipment is added or deleted in the target library position, updating the image storage capacity of the GPU equipment in the target library position monitored in the state tree.
The face image processing device provided by the embodiment of the invention can execute the face image processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the face image processing method.
Example four
Fig. 5 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in FIG. 5 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 5, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory device 28, and a bus 18 that couples various system components including the system memory device 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system storage 28 may include computer system readable media in the form of volatile storage, such as Random Access Memory (RAM) 30 and/or cache storage 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Storage 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in storage 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system storage device 28, for example, implementing a face image processing method provided by an embodiment of the present invention, including:
acquiring an image processing task instruction;
determining target library positions for executing the image processing task according to the running state information of the GPU equipment for image processing, wherein each library position comprises at least one GPU equipment;
and sending the image processing task instruction to the target library location, so that each GPU device in the target library location executes the image processing task instruction in parallel based on the respective stored image library.
Example five
The fifth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for processing a face image, which includes:
acquiring an image processing task instruction;
determining target library positions for executing the image processing task according to the running state information of the GPU equipment for image processing, wherein each library position comprises at least one GPU equipment;
and sending the image processing task instruction to the target library location, so that each GPU device in the target library location executes the image processing task instruction in parallel based on the respective stored image library.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory device (RAM), a read-only memory device (ROM), an erasable programmable read-only memory device (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory device (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A face image processing method is characterized by comprising the following steps:
acquiring an image processing task instruction;
determining target library positions for executing the image processing task according to the running state information of the GPU equipment for image processing, wherein each library position comprises at least one GPU equipment;
and sending the image processing task instruction to the target library location, so that each GPU device in the target library location executes the image processing task instruction in parallel based on the respective stored image library.
2. The method of claim 1, wherein prior to determining the target bin for performing the image processing task based on the operating state information of the graphics processing GPU device, the method further comprises:
constructing a state tree according to the face logic library to which each library position belongs and the dependency relationship between the library positions and GPU equipment, wherein the face logic library is obtained by classifying face images according to a preset face image classification strategy;
and monitoring the running state information of each GPU device by using the state tree, so that a target library position for executing the image processing task is determined according to the running state information of each GPU device monitored by the state tree.
3. The method of claim 1, wherein each GPU device corresponds to an adapter adapted thereto;
correspondingly, sending the image processing task instruction to the target library location includes:
calling a GPU management interface, and sending the image processing task to the target library position;
and sending the image processing task to each GPU device through an adapter adaptive to each GPU device in the target library position.
4. The method of claim 3, wherein sending the image processing task to each GPU device in the target library location via an adapter adapted to each GPU device comprises:
and sending the image processing task and a task identifier to each GPU device through an adapter adaptive to each GPU device, wherein the task identifier is used for each GPU device to authenticate the received image processing task.
5. The method of claim 4, further comprising:
and receiving task execution rejection feedback sent by the target library position, and sending the image processing tasks rejected by the target library position to a candidate library position so as to enable the candidate library position to execute the image processing tasks rejected by the target library position, wherein the task execution rejection feedback is generated when the number of task identifiers received by the target library position based on each GPU device reaches a number threshold.
6. The method of claim 1, further comprising:
and if response feedback of the target library position to the image processing task instruction is not received within preset response time, determining candidate library positions for executing the image processing task in other library positions except the target library position.
7. The method according to claim 1, wherein the image processing task instructions comprise face image comparison processing instructions, face information query instructions, face image quality detection instructions, and change operation instructions of the face images stored in each GPU device, and the changes comprise addition, deletion and modification.
8. The method of claim 2, wherein after sending the image processing task instructions to the target library location, the method further comprises:
and if the face image stored in the GPU equipment is added or deleted in the target library position, updating the image storage capacity of the GPU equipment in the target library position monitored in the state tree.
9. A face image processing apparatus, comprising:
the task instruction acquisition module is used for acquiring an image processing task instruction;
the system comprises a target library position determining module, a task execution module and a task execution module, wherein the target library position determining module is used for determining a target library position for executing an image processing task according to the running state information of the GPU equipment for image processing, and each library position comprises at least one GPU equipment;
and the task instruction execution module is used for sending the image processing task instruction to the target library position so as to enable each GPU device in the target library position to execute the image processing task instruction in parallel based on the image library stored in each GPU device.
10. A computer device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of processing a face image according to any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the face image processing method according to any one of claims 1 to 8.
CN201910959768.4A 2019-10-10 2019-10-10 Face image processing method, device, equipment and storage medium Active CN110706148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910959768.4A CN110706148B (en) 2019-10-10 2019-10-10 Face image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910959768.4A CN110706148B (en) 2019-10-10 2019-10-10 Face image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110706148A true CN110706148A (en) 2020-01-17
CN110706148B CN110706148B (en) 2023-08-15

Family

ID=69199135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910959768.4A Active CN110706148B (en) 2019-10-10 2019-10-10 Face image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110706148B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112596924A (en) * 2020-12-25 2021-04-02 中标慧安信息技术股份有限公司 Internet of things middlebox server application remote procedure calling method and system
CN113420170A (en) * 2021-07-15 2021-09-21 宜宾中星技术智能系统有限公司 Multithreading storage method, device, equipment and medium for big data image
CN114168684A (en) * 2021-12-10 2022-03-11 南威软件股份有限公司 Face modeling warehousing service implementation method and device based on asynchronous mechanism

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108363623A (en) * 2018-02-27 2018-08-03 郑州云海信息技术有限公司 GPU resource dispatching method, device, equipment and computer readable storage medium
US20190065251A1 (en) * 2017-08-31 2019-02-28 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for processing a heterogeneous cluster-oriented task
CN109753848A (en) * 2017-11-03 2019-05-14 杭州海康威视数字技术股份有限公司 Execute the methods, devices and systems of face identifying processing
CN109885388A (en) * 2019-01-31 2019-06-14 上海赜睿信息科技有限公司 A kind of data processing method and device suitable for heterogeneous system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190065251A1 (en) * 2017-08-31 2019-02-28 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for processing a heterogeneous cluster-oriented task
CN109753848A (en) * 2017-11-03 2019-05-14 杭州海康威视数字技术股份有限公司 Execute the methods, devices and systems of face identifying processing
CN108363623A (en) * 2018-02-27 2018-08-03 郑州云海信息技术有限公司 GPU resource dispatching method, device, equipment and computer readable storage medium
CN109885388A (en) * 2019-01-31 2019-06-14 上海赜睿信息科技有限公司 A kind of data processing method and device suitable for heterogeneous system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gai Suli (盖素丽): "Parallel Processing Method for Digital Images Based on GPU" (基于GPU的数字图像并行处理方法), Electronic Engineering & Product World (电子产品世界) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112596924A (en) * 2020-12-25 2021-04-02 中标慧安信息技术股份有限公司 Internet of things middlebox server application remote procedure calling method and system
CN113420170A (en) * 2021-07-15 2021-09-21 宜宾中星技术智能系统有限公司 Multithreading storage method, device, equipment and medium for big data image
CN114168684A (en) * 2021-12-10 2022-03-11 南威软件股份有限公司 Face modeling warehousing service implementation method and device based on asynchronous mechanism
CN114168684B (en) * 2021-12-10 2023-08-08 清华大学 Face modeling warehouse-in service implementation method and device based on asynchronous mechanism

Also Published As

Publication number Publication date
CN110706148B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN108537543B (en) Parallel processing method, device, equipment and storage medium for blockchain data
US20200396311A1 (en) Provisioning using pre-fetched data in serverless computing environments
CN109491928B (en) Cache control method, device, terminal and storage medium
US10235047B2 (en) Memory management method, apparatus, and system
CN111897638B (en) Distributed task scheduling method and system
CN110706148B (en) Face image processing method, device, equipment and storage medium
US20190196875A1 (en) Method, system and computer program product for processing computing task
CN111679911B (en) Management method, device, equipment and medium of GPU card in cloud environment
CN107153643B (en) Data table connection method and device
CN107092686B (en) File management method and device based on cloud storage platform
CN111343262B (en) Distributed cluster login method, device, equipment and storage medium
CN111831618A (en) Data writing method, data reading method, device, equipment and storage medium
CN114244595A (en) Method and device for acquiring authority information, computer equipment and storage medium
CN112612523A (en) Embedded equipment driving system and method
CN110781159B (en) Ceph directory file information reading method and device, server and storage medium
CN108694083B (en) Data processing method and device for server
CN114356521A (en) Task scheduling method and device, electronic equipment and storage medium
CN109165078B (en) Virtual distributed server and access method thereof
US10552419B2 (en) Method and system for performing an operation using map reduce
CN114077690A (en) Vector data processing method, device, equipment and storage medium
US20230155958A1 (en) Method for optimal resource selection based on available gpu resource analysis in large-scale container platform
CN110781137A (en) Directory reading method and device for distributed system, server and storage medium
CN111552740B (en) Data processing method and device
CN112445763B (en) File operation method and device, electronic equipment and storage medium
CN111399753B (en) Method and device for writing pictures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220915

Address after: 25 Financial Street, Xicheng District, Beijing 100033

Applicant after: CHINA CONSTRUCTION BANK Corp.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.

GR01 Patent grant