CN107403173B - Face recognition system and method

Info

Publication number
CN107403173B
CN107403173B (application CN201710717471.8A)
Authority
CN
China
Prior art keywords
face recognition
target
server
information
request
Prior art date
Legal status
Active
Application number
CN201710717471.8A
Other languages
Chinese (zh)
Other versions
CN107403173A (en)
Inventor
纪风
周晓
王林
杨宁
张险峰
陈晓伍
朱才志
Current Assignee
Hefei Lintu Information Technology Co ltd
Original Assignee
Hefei Lintu Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hefei Lintu Information Technology Co., Ltd.
Priority to CN201710717471.8A
Publication of CN107403173A
Application granted
Publication of CN107403173B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 - Techniques for rebalancing the load in a distributed system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a face recognition system and method. The face recognition system is deployed on a cloud platform and comprises a load balancing server, an application service cluster and a face recognition cluster. The load balancing server is used for selecting a target application server, sending an obtained target face recognition request to the target application server, and receiving a result acquisition request sent by the target application server; and for selecting a target face recognition server and forwarding the result acquisition request to the target face recognition server. The target application server is used for receiving the target face recognition request, obtaining information to be recognized according to the information to be processed carried in the target face recognition request, and sending the result acquisition request to the load balancing server. The target face recognition server is used for receiving the result acquisition request forwarded by the load balancing server, performing face recognition on a source image, and sending the face recognition result to the initiator of the result acquisition request. By applying the embodiments of the invention, face recognition efficiency is improved.

Description

Face recognition system and method
Technical Field
The invention relates to the technical field of intelligent biometric recognition, and in particular to a face recognition system and a face recognition method.
Background
Face recognition is a biometric technology that performs identity recognition based on facial feature information. A face recognition method mainly comprises: automatically detecting and tracking a human face in a complex image and/or video scene using techniques such as computer image analysis, artificial intelligence and pattern recognition, and then performing intelligent matching and recognition analysis on the detected face.
With the rapid expansion of security and surveillance deployments in recent years, monitoring image and/or video data has grown explosively. In the conventional face recognition method deployed on a single machine, that single machine executes every step of the face recognition process, for example receiving a target face recognition request, obtaining a source image according to the face recognition request, storing the source image, and performing face recognition on the source image. Because the execution of each step occupies the resources of the single machine, waiting for resource allocation consumes a large amount of processing time, and the face recognition efficiency is therefore not high.
Disclosure of Invention
The embodiment of the invention aims to provide a face recognition system and a face recognition method so as to improve the face recognition efficiency. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a face recognition system, where the face recognition system is deployed on a cloud platform, and the face recognition system includes: load balancing server, application service cluster, face recognition cluster, wherein,
the load balancing server is used for selecting a target application server from the application service cluster and sending an obtained target face recognition request to the target application server, wherein the target face recognition request comprises information to be processed; receiving a result acquisition request which is sent by the target application server and used for acquiring a face recognition result; selecting a target face recognition server from the face recognition cluster, and forwarding the result acquisition request to the target face recognition server;
the target application server is used for receiving a target face recognition request sent by the load balancing server and acquiring information to be recognized according to information to be processed carried in the target face recognition request; sending a result acquisition request for acquiring a face recognition result to the load balancing server; receiving the face recognition result sent by a target face recognition server;
the target face recognition server is used for receiving the result acquisition request forwarded by the load balancing server; acquiring the information to be identified according to the result acquisition request; obtaining a source image to be recognized according to the information to be recognized; carrying out face recognition on the source image to obtain a face recognition result; and sending the face recognition result to the initiator of the result acquisition request.
Optionally, the load balancing server is further configured to:
before selecting a target application server from the application service cluster, detecting whether its own state meets a first processing condition; and if so, selecting a target application server from the application service cluster;
the load balancing server selects a target application server from the application service cluster, and specifically comprises the following steps:
selecting an application server with the lowest load from the application service cluster as a target application server; or,
and selecting a target application server from the application service cluster according to a preset load balancing algorithm.
Optionally, the load balancing server detects whether its own state meets the first processing condition specifically by:
detecting whether the total number of face recognition requests currently being processed has reached the maximum concurrency number;
if so, judging that its own state does not meet the first processing condition;
if not, judging that its own state meets the first processing condition.
Optionally, the load balancing server is further configured to:
after judging that its own state does not meet the first processing condition, feeding back to a target user prompt information indicating that the state does not meet the first processing condition, where the target user is the user that sent the target face recognition request to the load balancing server.
Optionally, the target application server is further configured to:
verifying whether the target face recognition request is legal or not before acquiring information to be recognized according to the information to be processed carried in the target face recognition request; and if the target face identification request is legal, acquiring information to be identified according to the information to be processed carried in the target face identification request.
Optionally, the system further includes a background management server, where a sender of the face recognition request obtained by the load balancing server is a target user, and the background management server is configured to:
receiving a registration request sent by the target user, and creating target application information for the target user;
and sending the target application information to the target user, so that the target user generates the target face recognition request based on the target application information.
The target application server is further configured to:
and after receiving the face recognition result sent by the target face recognition server, feeding back the face recognition result to the target user.
Optionally, the system further includes a distributed storage cluster, and the target application server is further configured to:
after the information to be identified is obtained, sending the information to be identified to the distributed storage cluster;
the distributed storage cluster is used for receiving the information to be identified sent by the target application server; storing the information to be identified in a distributed storage mode;
the target face recognition server obtains the information to be identified according to the result acquisition request, specifically:
and acquiring the information to be identified from the distributed storage cluster according to the result acquisition request.
Optionally, the information to be identified is one of a still image, a moving image, and a video to be processed.
Optionally, the information to be identified is a video to be processed;
the target face recognition server obtains the source image according to the information to be recognized, specifically:
and processing the video to be processed to obtain a key frame in the video to be processed, and taking the key frame in the video to be processed as a source image.
Optionally, the target face recognition server performs face recognition on the source image to obtain a face recognition result, specifically:
carrying out face detection on the source image to obtain a face recognition result of face contour coordinates in the source image; or,
comparing the faces of at least two images in the source images to obtain the similarity of the faces in the at least two images; obtaining a face recognition result of whether the faces in the at least two images are the same person according to the face similarity; or,
performing face search on the source image in a preset target face set to obtain, as a face recognition result, the images in the target face set whose similarity to the source image is ranked within the top preset number, together with the similarity between each of these images and the source image.
In a second aspect, an embodiment of the present invention provides a face recognition method, which is applied to the above system, where the method is applied to the target application server, and specifically includes:
receiving a target face recognition request sent by the load balancing server, wherein the target face recognition request comprises information to be processed;
acquiring information to be recognized according to the information to be processed carried in the target face recognition request;
sending a result obtaining request for obtaining a face recognition result to the load balancing server, so that the load balancing server selects a target face recognition server from the face recognition cluster according to the result obtaining request, and forwards the result obtaining request to the target face recognition server, so that the target face recognition server obtains the information to be recognized according to the result obtaining request; obtaining a source image to be recognized according to the information to be recognized; carrying out face recognition on the source image to obtain a face recognition result; sending the face recognition result to the initiator of the result acquisition request;
and receiving the face recognition result sent by the target face recognition server.
Optionally, before the step of obtaining information to be recognized according to the information to be processed carried in the target face recognition request, the method further includes:
verifying whether the target face identification request is legal or not; and if the target face identification request is legal, acquiring the information to be identified according to the information to be processed carried in the target face identification request.
Optionally, the face recognition system further includes a distributed storage cluster, and after the step of obtaining the information to be recognized according to the information to be processed carried in the target face recognition request, the method further includes:
and sending the information to be identified to a distributed storage cluster so that the distributed storage cluster receives the information to be identified, and storing the information to be identified by adopting a distributed storage mode.
Optionally, the target face recognition request received by the target application server is generated by a target user;
after the step of receiving, by the target application server, the face recognition result sent by the target face recognition server, the method further includes:
and feeding back the face recognition result to the target user.
By applying the technical scheme provided by the embodiments of the invention, the face recognition method is executed on a cloud platform: the target application server receives a target face recognition request sent by the load balancing server, obtains the information to be recognized according to the information to be processed carried in the target face recognition request, and then sends a result acquisition request to the load balancing server, so that the load balancing server selects a target face recognition server from the face recognition cluster according to the result acquisition request and the target face recognition server obtains the information to be recognized according to the result acquisition request; the target application server then receives the face recognition result sent by the target face recognition server.
By applying the technical scheme provided by the embodiments of the invention, the clusters cooperate with one another so that the steps of the face recognition process are executed by different servers. The resources available to the face recognition process as a whole are thereby expanded, and resource allocation on the different servers does not interfere, so the cloud platform has large-scale computing capacity, face recognition can be performed quickly, and face recognition efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a face recognition system according to an embodiment of the present invention;
FIG. 3 is a flow chart of a process for performing face recognition on a source image by applying the face recognition system shown in FIG. 2;
fig. 4 is a schematic flow chart of a face recognition method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problems in the prior art, embodiments of the present invention provide a face recognition system and method. First, a face recognition system provided in an embodiment of the present invention is described below.
Referring to fig. 1, a schematic structural diagram of a face recognition system provided in an embodiment of the present invention is shown, where the face recognition system is deployed on a cloud platform, and the face recognition system includes: load balancing server, application service cluster, face recognition cluster, wherein,
the load balancing server is used for selecting a target application server from the application service cluster and sending an obtained target face recognition request to the target application server, wherein the target face recognition request comprises information to be processed; receiving a result acquisition request which is sent by a target application server and used for acquiring a face recognition result; selecting a target face recognition server from the face recognition cluster, and forwarding a result acquisition request to the target face recognition server;
the target application server is used for receiving a target face recognition request sent by the load balancing server and acquiring information to be recognized according to information to be processed carried in the target face recognition request; sending a result acquisition request for acquiring a face recognition result to a load balancing server; receiving a face recognition result sent by a target face recognition server;
the target face recognition server is used for receiving the result acquisition request forwarded by the load balancing server; obtaining the information to be recognized according to the result acquisition request; obtaining a source image according to the information to be recognized; performing face recognition on the source image to obtain a face recognition result; and sending the face recognition result to the initiator of the result acquisition request.
In practical application, the load balancing server may be a single load balancing server, or may be a certain load balancing server in the load balancing cluster.
It should be noted that, for a certain load balancing server in the load balancing cluster, the load balancing cluster may include a main load balancing server and at least one backup load balancing server, where when the main load balancing server is in a normal operating state, the load balancing server in this text refers to the main load balancing server, and when the main load balancing server fails, the load balancing server in this text refers to the backup load balancing server that works in place of the main load balancing server.
In a specific embodiment, the load balancing server may further be configured to:
before selecting a target application server from the application service cluster, detecting whether its own state meets a first processing condition; if so, selecting a target application server from the application service cluster.
In order to improve the lateral expansion and throughput capability of the application service cluster and improve the availability of the service, the target face recognition request can be uniformly distributed to the application service cluster by using the load balancing server. The load balancing server selects a target application server from the application service cluster, and specifically may be:
selecting an application server with the lowest load from the application service cluster as a target application server; or selecting a target application server from the application service cluster according to a preset load balancing algorithm.
The application service cluster comprises a plurality of application servers. The load balancing server may directly select the application server with the lowest load as the target application server, or may use a load balancing algorithm to select the target application server from the application service cluster. Common load balancing algorithms include: DNS (Domain Name System)-based round-robin domain name resolution, client-based scheduling, scheduling based on application-layer system load, IP-address-based scheduling, and the like.
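For illustration only, the following Python sketch shows a minimal way such a selection could work: pick the application server with the lowest current load, or fall back to a simple round-robin scheme as the preset load balancing algorithm. The AppServer structure and the active_requests load metric are assumptions made for the example, not part of the patent.

```python
import itertools
from dataclasses import dataclass

@dataclass
class AppServer:
    address: str
    active_requests: int  # current load metric (an assumption for this sketch)

def pick_least_loaded(cluster: list[AppServer]) -> AppServer:
    """Select the application server with the lowest current load."""
    return min(cluster, key=lambda s: s.active_requests)

def round_robin(cluster: list[AppServer]):
    """A simple preset load-balancing algorithm: cycle through the servers."""
    for server in itertools.cycle(cluster):
        yield server

cluster = [AppServer("10.0.0.1:8080", 3), AppServer("10.0.0.2:8080", 1)]
target = pick_least_loaded(cluster)  # -> the server at 10.0.0.2:8080
```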
In practical application, the load balancing server detects whether the state of the load balancing server meets a first processing condition, and specifically may be:
detecting whether the total number of face recognition requests currently being processed has reached the maximum concurrency number; if so, judging that its own state does not meet the first processing condition; if not, judging that its own state meets the first processing condition.
The load balancing server may provide a plurality of processors for concurrent processing to improve processing speed and face recognition efficiency. In practice, a maximum concurrency number can be set so that the resources of the load balancing server are allocated reasonably, its processing burden is reduced and the risk of overload is avoided. In theory, the larger the maximum concurrency number, the more face recognition requests per second the load balancing server can handle, but the greater the performance cost to the load balancing server. The maximum concurrency number may therefore be preset according to the performance of the load balancing server, for example to 20 requests per second.
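As a minimal sketch (the class name and locking scheme are assumptions), the maximum-concurrency check described above might look like this: a request is admitted only while the number of requests currently being processed is below the configured maximum.

```python
import threading

class ConcurrencyGate:
    """Admit a request only if the current concurrency is below the maximum."""

    def __init__(self, max_concurrent: int = 20):
        self.max_concurrent = max_concurrent
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_admit(self) -> bool:
        with self.lock:
            if self.in_flight >= self.max_concurrent:
                return False   # state does not meet the first processing condition
            self.in_flight += 1
            return True        # state meets the first processing condition

    def release(self) -> None:
        with self.lock:
            self.in_flight -= 1
```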
Additionally, the load balancing server may be further configured to:
after judging that its own state does not meet the first processing condition, feeding back to the target user prompt information indicating that the state does not meet the first processing condition.
The target user is a user sending a target face recognition request to the load balancing server. In practical applications, the target user may be any terminal capable of communicating with the load balancing server, for example, a client server, a mobile terminal electronic device, an image capturing device, and the like.
In practical application, the information to be processed carried in the target face recognition request may be one of: a static image, a dynamic image, a video to be processed, a URL (Uniform Resource Locator) of a static image, a URL of a dynamic image, and a URL of a video to be processed.
The information to be processed can be from various image acquisition devices (such as cameras in places such as garages, units, hospitals, markets, parks and the like); a security monitoring video database, a standard image database, a self-built image database and the like in the client server; mobile electronic devices (such as cell phones, video cameras, in-vehicle cameras, etc.).
When the information to be processed is one of a static image, a dynamic image and a video to be processed, the target application server obtains the information to be recognized according to the information to be processed carried in the target face recognition request, and the information to be recognized may be: and extracting the information to be processed carried in the target face recognition request, and directly taking the extracted information to be processed as the information to be recognized.
When the information to be processed is one of the URL of the static image, the URL of the dynamic image, and the URL of the video to be processed, the target application server obtains the information to be recognized according to the information to be processed carried in the target face recognition request, which may be: and extracting the URL carried in the target face recognition request, sending an information acquisition request to a server corresponding to the extracted URL, acquiring target information fed back by the server aiming at the information acquisition request, and taking the target information as information to be recognized.
It can be seen that the information to be recognized may be one of a still image, a moving image, and a video to be processed. The embodiments of the invention do not limit the formats of the still image, the moving image and the video to be processed. For example, the video format may be a conventional format such as AVI (Audio Video Interleave), ASF (Advanced Streaming Format) or WMV (Windows Media Video); the still image format may be JPEG (Joint Photographic Experts Group), PNG (Portable Network Graphics), etc.; and the moving image format may be GIF (Graphics Interchange Format).
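Purely as an illustration of the two cases described above, the sketch below (with hypothetical field names, and the third-party requests library for the URL case) resolves the to-be-processed information into the information to be recognized: carried content is used directly, while a URL is fetched from the server it points to.

```python
import requests  # third-party HTTP client, used here only for illustration

def resolve_info_to_recognize(pending: dict) -> bytes:
    """Turn the to-be-processed info carried in the request into the info to be recognized."""
    if pending.get("kind") == "content":
        # Case 1: the request carries the image/video data itself.
        return pending["data"]
    if pending.get("kind") == "url":
        # Case 2: the request carries a URL; fetch the target info from that server.
        response = requests.get(pending["url"], timeout=10)
        response.raise_for_status()
        return response.content
    raise ValueError("unsupported kind of to-be-processed information")
```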
In order to ensure the validity of the obtained target face recognition request and improve the reliability of the face recognition system, in a specific embodiment, after the target application server receives the target face recognition request sent by the load balancing server, the target application server may further be configured to:
verifying whether the target face recognition request is legal or not before acquiring the information to be recognized according to the information to be processed carried in the target face recognition request; and if the target face identification request is legal, acquiring the information to be identified according to the information to be processed carried in the target face identification request.
The embodiments of the invention do not limit the specific way in which the target application server verifies whether the target face recognition request is legal. For example, the target application server may check the information to be processed carried in the target face recognition request and judge whether its data format meets a preset data format requirement; if so, the target face recognition request is judged to be legal, and if not, it is judged to be illegal.
Illustratively, if the information to be processed is the URL of a video to be processed, the target application server may check whether the data format of the URL meets the standard HTTP (HyperText Transfer Protocol) URL format requirement; if so, the target face recognition request is judged to be legal, and if not, it is judged to be illegal.
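A minimal sketch of such a format check, assuming Python's standard urllib.parse and a simple rule (the scheme must be http or https and a host must be present); the actual validation rules are not prescribed by the patent.

```python
from urllib.parse import urlparse

def is_request_legal(pending_url: str) -> bool:
    """Judge legality of a face recognition request whose pending info is a URL."""
    parsed = urlparse(pending_url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

# Example: a well-formed HTTP URL is accepted, a bare string is rejected.
assert is_request_legal("http://example.com/videos/entrance.mp4")
assert not is_request_legal("not-a-url")
```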
In a specific implementation manner, the system may further include a background management server, where a sender of the face recognition request obtained by the load balancing server is a target user, and the background management server is configured to:
receiving a registration request sent by a target user, and creating target application information for the target user;
and sending the target application information to the target user, so that the target user generates a target face recognition request based on the target application information.
The target application server is further configured to:
and after receiving the face recognition result sent by the target face recognition server, feeding back the face recognition result to the target user.
The background management server can manage the configuration of each service and application in the distributed system, including: creation, update and deletion of services and applications, the concurrency of applications, the storage capacity allocated to target users, and so on. It can also monitor the running state of every server in each cluster to ensure that each server is operating normally. The background management server may create target application information for the target user after detecting a registration request from the target user.
In practical application, the target user may be any terminal capable of communicating with the background management server, for example, the terminal may be a client server, a mobile terminal electronic device, an image acquisition device, or the like, and after the background management server detects a registration request of the target user, a system administrator may log in the background management server and create target application information for the target user; or the background management server automatically creates target application information for the target user according to the registration request.
The target application information may include: an application account, a password, an application type, the maximum concurrency number of the application type, the storage capacity allocated to the target user, and the like. The application types may include face detection, face comparison, face search, and so on. The maximum concurrency number of each application type may be the same or different; for example, the maximum concurrency number of each application type may be set to 20 requests per second.
The storage capacity allocated to the target user may be set according to the actual situation of the target user, and the specific value of the storage capacity is not limited in the embodiment of the present invention, for example, the storage capacity may be: 1G, 2G, 3G, and the like.
In order to enhance the scalability of the whole cloud platform and the lateral expansibility of a single service, the embodiment of the invention adopts a mode that the single service independently accesses the cloud platform as a design principle of the cloud platform. All services accessed to the cloud platform are used as a product set, and a distributed architecture is adopted to improve the high availability and expansibility of the services. The services are provided to users with different classes and different requirements in the form of open API (Application programming interface) and SDK (Software Development Kit). For each service, there will be a corresponding open API and SDK for that service, while the cloud platform is a combination of different services.
The open SDK encapsulates the calling functions of the underlying face recognition service APIs and can be provided to third-party developers or organizations, who can then use the face service APIs quickly and conveniently through the SDK.
The process by which the target user generates the target face recognition request based on the target application information may be: after receiving the target application information, the target user calls the API corresponding to the application type in the target application information, either through the SDK or directly, and passes request parameters in through the API, where the request parameters include the application account and password carried in the target application information, the information to be processed, and so on. The target user may take the API call with the request parameters filled in as the target face recognition request and send it to the load balancing server.
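As a purely illustrative sketch, a target user could assemble and submit such a request roughly as follows; the endpoint path, parameter names and JSON layout are assumptions, not an API defined by the patent.

```python
import requests  # illustrative HTTP client

def build_face_recognition_request(app_account: str, password: str,
                                   pending_url: str) -> dict:
    """Assemble the request parameters passed in through the open API."""
    return {
        "app_account": app_account,   # from the target application information
        "password": password,         # from the target application information
        "pending_info": {"kind": "url", "url": pending_url},
        "app_type": "face_search",    # e.g. face_detection, face_comparison, face_search
    }

payload = build_face_recognition_request("demo-account", "demo-password",
                                          "http://example.com/videos/entrance.mp4")
# Sent to the load balancing server's (hypothetical) entry point:
requests.post("http://load-balancer.example.com/api/v1/face", json=payload, timeout=10)
```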
In order to facilitate uniform management of the information to be recognized, in a specific embodiment, the face recognition system may further include a distributed storage cluster, and the target application server is further configured to:
after the information to be identified is obtained, the information to be identified is sent to a distributed storage cluster;
the distributed storage cluster is used for receiving information to be identified sent by the target application server; storing the information to be identified by adopting a distributed storage mode;
the target face recognition server obtains the request according to the result, and obtains the information to be recognized, specifically:
and obtaining the information to be identified from the distributed storage cluster according to the result obtaining request.
Specifically, the distributed storage cluster receives information to be identified sent by a target application server; and a distributed storage mode is adopted to store the information to be identified, which can be:
a server in the distributed storage cluster obtains the information to be identified sent by the target application server, generates an information identifier for the information to be identified, and stores the information to be identified together with its information identifier in a distributed storage manner.
The information identifiers correspond one to one with the pieces of information to be identified. The embodiments of the invention do not limit the specific method of generating the information identifier; for example, the time at which the information to be identified was obtained may be combined with the name of its sender. If the information was obtained at 2016.06.06 and the sender's name is A, the information identifier may be: 2016.06.06A.
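The sketch below shows one assumed way to build such an identifier from the acquisition time and the sender name, matching only the example format given above; it is not a mandated scheme and does not handle identifier collisions.

```python
from datetime import datetime

def make_info_identifier(sender_name: str, obtained_at: datetime) -> str:
    """Combine acquisition time and sender name into an information identifier."""
    return f"{obtained_at:%Y.%m.%d}{sender_name}"

# Example from the description: obtained on 2016.06.06 from sender "A".
print(make_info_identifier("A", datetime(2016, 6, 6)))  # -> 2016.06.06A
```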
In addition, since the information to be identified may be a still image, a moving image or a video to be processed, still images, moving images and videos can all be stored in the distributed storage cluster.
The system adopts a distributed storage mode, and mass data can be stored by using a distributed storage cluster in a convenient and extensible manner.
Specifically, the distributed storage manner may be: and a cloud storage mode based on Hadoop. Hadoop is a distributed system infrastructure developed by the Apache foundation, and a user can develop a distributed program without knowing details of a distributed bottom layer, so that resources of a distributed cluster can be fully utilized to carry out high-speed operation and storage. The cloud storage is a cloud computing system which takes data storage and management as a core and supports dynamic increase of storage devices, and a large number of different types of storage devices in a network can be aggregated through application software, so that the storage devices work cooperatively and provide data storage and service access functions to the outside together, so that a user can conveniently access data at any time and any place by connecting to the cloud through any network-connectable device.
Therefore, the cloud storage mode based on Hadoop is adopted, so that the image and/or video data breaks through the space limitation and is collected at the cloud, the effective face information can be mined and analyzed from the big data, model training of technologies such as deep learning is facilitated, and efficient sharing of data resources is realized. Meanwhile, the management of the image and/or video data is facilitated, and in addition, the image and/or video data are stored in a cloud storage mode, so that the cloud storage mode is more flexible, convenient and safe.
It should be noted that, the embodiment of the present invention is described by taking a cloud storage manner based on Hadoop as an example, and does not limit the present application.
The face recognition cluster includes a plurality of face recognition servers, the manner of selecting a target face recognition server from the face recognition cluster by the load balancing server is the same as the manner of selecting a target application server from the application service cluster, and for specific description, reference may be made to the portion of selecting the target application server from the application service cluster by the load balancing server, which is not described herein again.
In practical applications, the source image may be one of a still image, a moving image, and a key frame of a video to be processed, where the key frame refers to a frame in which a key action of motion or change occurs in the video.
In a specific implementation manner, the information to be recognized may be a video to be processed, and specifically, the target face recognition server obtains the source image according to the information to be recognized, and may be:
and processing the video to be processed to obtain a key frame in the video to be processed, and taking the key frame in the video to be processed as a source image.
Specifically, processing the video to be processed to obtain the key frame in the video to be processed may be: dividing a video to be processed into a first preset number of lens units by using a lens division algorithm; extracting a second preset number of image frames from each lens unit by using a key frame extraction algorithm to serve as key frames of the lens unit; and taking the key frames of all the lens units as the key frames in the video to be processed.
The general idea of the shot segmentation algorithm is to calculate a feature difference between frames, and when the feature difference is greater than a preset threshold, it indicates that a shot change has occurred, mark a shot boundary, and perform shot segmentation. Common shot segmentation algorithms include: histogram-based video segmentation algorithms, pixel-based video segmentation algorithms, edge contour rate-based video segmentation algorithms, and the like. The embodiment of the invention does not limit the specific adopted lens segmentation algorithm.
Common key frame extraction algorithms include: a shot boundary based key frame extraction algorithm, a visual content based key frame extraction algorithm, a motion analysis based key frame extraction algorithm, and the like. The embodiment of the invention does not limit the specific adopted key frame extraction algorithm.
The first preset number can be preset according to the size of the video to be processed and a specifically adopted lens segmentation algorithm, and the second preset number can be preset according to the size of the video to be processed and a specifically adopted key frame extraction algorithm. The embodiment of the present invention does not limit the specific numerical values of the first preset quantity and the second preset quantity. For example, the first predetermined number may be 10, 20, 30, 40, etc., and the second predetermined number may be 5, 6, 7, 8, 9, etc.
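As an illustration of the two-stage idea above (shot segmentation followed by per-shot key frame selection), here is a minimal histogram-difference sketch using OpenCV; the threshold value, the frame-difference metric and the keep-the-first-frames policy are assumptions rather than the patent's prescribed algorithms.

```python
import cv2
import numpy as np

def hist_diff(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Grayscale-histogram difference between two frames (a simple shot-change cue)."""
    ha = cv2.calcHist([cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    hb = cv2.calcHist([cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    return float(cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA))

def extract_key_frames(video_path: str, shot_threshold: float = 0.4,
                       frames_per_shot: int = 1) -> list[np.ndarray]:
    """Split the video into shots by histogram difference, then keep the first
    frames of each shot as its key frames (a deliberately simple policy)."""
    cap = cv2.VideoCapture(video_path)
    key_frames, shot, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if prev is not None and hist_diff(prev, frame) > shot_threshold:
            key_frames.extend(shot[:frames_per_shot])  # close the current shot
            shot = []
        shot.append(frame)
        prev = frame
    key_frames.extend(shot[:frames_per_shot])
    cap.release()
    return key_frames
```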
The result acquisition request may carry the information identifier of the information to be recognized, and the target face recognition server can obtain the information to be recognized according to the result acquisition request. The embodiments of the invention do not limit the specific way of doing so; for example, when the result acquisition request carries the information identifier, the information uniquely corresponding to that identifier, i.e. the information to be recognized, can be obtained from the distributed storage cluster in which it is stored.
The initiator of the result obtaining request refers to the party that initiates the result obtaining request first, and in the embodiment of the present invention, the initiator of the result obtaining request is the target application server. The result obtaining request can include address information of the initiator, so that the target face recognition server can obtain the address of the target application server according to the result obtaining request, and further, after face recognition is carried out on the source image to obtain a face recognition result, the face recognition result is sent to the initiator of the result obtaining request.
In practical application, the target face recognition server performs face recognition on a source image to obtain a face recognition result, specifically:
carrying out face detection on the source image to obtain a face recognition result of face contour coordinates in the source image; or,
comparing the faces of at least two images in the source images to obtain the similarity of the faces in the at least two images; obtaining a face recognition result of whether the faces in the at least two images are the same person according to the face similarity; or,
performing face search on the source image in a preset target face set to obtain, as a face recognition result, the images in the target face set whose similarity to the source image is ranked within the top preset number, together with the similarity between each of these images and the source image.
The target face recognition server performs face detection on the source image to obtain a face recognition result of face contour coordinates in the source image, which may specifically be:
a target face recognition server detects a face area in a source image; positioning face contour coordinates of a source image from a face region; and taking the face contour coordinates as a face recognition result.
In practical application, a face detection algorithm can be used to detect the face region in the image to be detected. The embodiments of the invention do not limit the specific face detection algorithm; for example, it may be the CascadeCNN ("A Convolutional Neural Network Cascade for Face Detection") algorithm, an AdaBoost detection algorithm based on Haar-like features, and so on. AdaBoost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier). Haar features, i.e. rectangular features, are applied to face representation and reflect facial characteristics well.
The face contour coordinates include: facial features coordinates, such as eyes, nose tip, corner points of the mouth, eyebrows, and facial contours.
Specifically, the face contour coordinates of the source image may be located from the face region using a face alignment algorithm. The embodiments of the invention do not limit the face alignment algorithm used; for example, it may be ASM (Active Shape Model, a point distribution model), AAM (Active Appearance Model), LBF (Local Binary Features regression), or the like.
Therefore, the embodiment of the invention realizes the face detection of the source image.
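For illustration, the detection and alignment steps described above might be chained as in the sketch below, which uses OpenCV's bundled Haar cascade detector and the LBF facemark model from the opencv-contrib package as stand-ins for whichever detection and alignment algorithms are actually chosen; the lbfmodel.yaml path is an assumption and the pretrained model must be obtained separately.

```python
import cv2

# Haar-cascade face detector shipped with OpenCV (stands in for CascadeCNN/AdaBoost here).
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# LBF facemark model from opencv-contrib; the model file path is an assumption.
facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")

def detect_and_align(source_image_path: str):
    """Detect face regions, then locate face contour coordinates (landmarks) in them."""
    image = cv2.imread(source_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    ok, landmarks = facemark.fit(image, faces)
    # Each entry is an array of (x, y) contour coordinates for one detected face.
    return landmarks if ok else []
```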
The target face recognition server compares faces of at least two images in the source images to obtain face similarity of the at least two images; obtaining a face recognition result of whether the faces in the at least two images are the same person according to the face similarity, which specifically may be:
a target face recognition server detects the face area of each image in a source image; positioning face contour coordinates from a face region in each image;
extracting a characteristic value at the face contour coordinate in each image as a face characteristic value of each image;
comparing the face characteristic values of at least two images to obtain the similarity of the compared images;
judging whether the human faces in the at least two images belong to the same person or not according to the similarity;
and taking whether the faces of the at least two images are the faces of the same person as a face recognition result.
The at least two images in the source image means that 2 or more than 2 images are included in the source image, for example, 2, 3, 4, 5, etc. images may be included in the source image.
Specifically, the target face recognition server may detect the face region of each image in the source image using a face detection algorithm. The embodiments of the invention do not limit the specific face detection algorithm; for example, it may be the CascadeCNN ("A Convolutional Neural Network Cascade for Face Detection") algorithm, an AdaBoost detection algorithm based on Haar features, and so on.
The face contour coordinates are located from the face region in each image, and specifically may be:
locating face contour coordinates from the face region in each image using a face alignment algorithm. For example, the face alignment algorithm used may be: ASM (Active Shape Model, a point distribution model); AAM (Active Appearance Model); LBF (Local Binary Features regression), and the like.
Extracting the feature value at the face contour coordinates in each image may specifically be: using a face feature extraction algorithm to extract the feature value at the face contour coordinates in each image to be compared. The embodiments of the invention do not limit the specific face feature extraction algorithm. For example, the face feature extraction algorithm used may be: a CenterLoss (center loss function) + ResNet (residual network) algorithm; a DeepID (deep face recognition) algorithm; a VGGFace (deep learning framework) algorithm; a Google FaceNet (Google face recognition network) + Triplet Loss (triplet loss function) algorithm, and the like.
The face feature values of at least two images can be compared using a face comparison algorithm, for example an LSH (Locality-Sensitive Hashing) algorithm or a KD-tree (K-Dimensional tree) algorithm, to obtain the similarity of the compared images. The similarity represents the degree of likeness between different images, and its value may range from 0 to 1.
When the source image comprises two images, comparing the face characteristic values of at least two images to obtain the similarity of the compared images, which specifically can be: comparing the face characteristic values of the two images to obtain the similarity of the two images; for example, if the source image includes two images, which are A, B respectively, the face feature values of a and B are compared to obtain the similarity between a and B.
When the source image comprises more than two images, comparing the face characteristic values of the at least two images to obtain the similarity of the compared images, which can be specifically as follows: comparing the face characteristic values of any two images to obtain the similarity of any two images; for example, when the source image includes three images, which are C, D, E respectively, the face feature values of C and D, the face feature values of D and E, and the face feature values of C and E are compared to obtain the similarity between C and D, the similarity between D and E, and the similarity between C and E.
The embodiments of the invention do not limit the specific way of judging, according to the similarity, whether the faces in the at least two images belong to the same person. For example, when the source image includes 2 images: if the similarity falls within a preset range, the faces in the two images are judged to belong to the same person; otherwise they are judged not to belong to the same person. When the source image includes more than 2 images: if the similarity of every pair of images falls within the preset range, the faces in the at least two images are judged to belong to the same person; if the similarity of any pair does not fall within the preset range, they are judged not to belong to the same person. The preset range can be set according to the target user's requirement on the accuracy of face comparison and is not limited by the embodiments of the invention. For example, when high accuracy is required, the preset range may be 0.95 to 1; when the accuracy requirement is lower, the preset range may be 0.75 to 1.
The face recognition result is: the faces in the at least two images belong to the same person, or the faces in the at least two images do not belong to the same person.
Therefore, the embodiment of the invention realizes the face comparison of at least two images.
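For illustration, the pairwise decision rule described above can be sketched as follows; the cosine-based similarity and the 128-dimensional feature vectors are assumptions standing in for whichever face feature extraction and comparison algorithms are actually used.

```python
from itertools import combinations
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face feature vectors, mapped to the range 0..1."""
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return (cos + 1.0) / 2.0

def same_person(features: list[np.ndarray], low: float = 0.95, high: float = 1.0) -> bool:
    """Faces are judged the same person only if every pairwise similarity
    falls within the preset range [low, high]."""
    return all(low <= cosine_similarity(fa, fb) <= high
               for fa, fb in combinations(features, 2))

# Example with three (random, purely illustrative) 128-d feature vectors.
feats = [np.random.rand(128) for _ in range(3)]
print(same_person(feats))
```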
In practical application, in order to obtain the face characteristic value quickly and conveniently in the subsequent processing process, the face recognition system may further include a characteristic value storage server; the target face recognition server may be further configured to:
after the face characteristic value of each image is obtained, the face characteristic value of each image is sent to the characteristic value storage server;
the characteristic value storage server is used for receiving the face characteristic value of each image sent by the target face recognition server; and storing the face characteristic value of each image in a MySQL database.
The MySQL database is an open source relational database management system, which uses the most common database management Language, such as SQL (Structured Query Language), to perform database management. Because MySQL is open source code, designers can modify MySQL according to their own needs, which is convenient for development and maintenance. And the MySQL database has rapid data processing capacity and can improve the storage speed of the face characteristic value.
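A minimal sketch of persisting per-image face feature values in MySQL, assuming the mysql-connector-python driver and a hypothetical face_features table; the actual schema and connection details are not given in the patent.

```python
import json
import mysql.connector  # mysql-connector-python driver (assumed)

def store_face_feature(image_id: str, feature: list[float]) -> None:
    """Persist one image's face feature value into a (hypothetical) face_features table."""
    conn = mysql.connector.connect(host="db.example.com", user="face",
                                   password="secret", database="face_platform")
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO face_features (image_id, feature) VALUES (%s, %s)",
            (image_id, json.dumps(feature)),  # feature vector serialized as JSON text
        )
        conn.commit()
    finally:
        conn.close()
```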
The target face recognition server performs face search on the source image in a preset target face set to obtain, as a face recognition result, the images in the target face set whose similarity to the source image is ranked within the top preset number, together with the similarity between each of these images and the source image. Specifically, this may be:
the target face recognition server obtains a face characteristic value of a source image and a face characteristic value of each image in a preset target face set;
according to the face characteristic value of the source image and the face characteristic value of each image in the target face set, obtaining the similarity between the source image and each image in the target face set;
ranking the similarities in descending order, and taking, as a face recognition result, the images in the target face set whose similarity is ranked within the top preset value, together with the similarity between each of these images and the source image;
the target application server may also be configured to create a target face set in advance, specifically, the creating of the target face set may be:
the target application server receives a creation request for creating a target face set sent by the load balancing server; detects whether the creation request is legal; and if the creation request is legal, creates the target face set and adds the obtained images to be added into the target face set. The load balancing server sends the creation request to the target application server when it judges that its own state meets a second processing condition.
The embodiment of the invention does not limit the specific method for obtaining the face characteristic value of the image to be searched by the target face recognition server and the preset mode of the face characteristic value of each image in the target face set. For example, the manner of obtaining the face feature value of the image to be searched may be:
detecting a face area in an image to be searched; positioning face contour coordinates from the face region; and extracting the characteristic value of the face contour coordinate in the image to be searched as the face characteristic value of the image to be searched.
Assuming that the face feature value of each image in the target face set is pre-stored in the distributed storage cluster, the obtaining request further includes: the identifier of the target face set, where the identifier corresponds to the face set one to one, and the manner of obtaining the face feature value of each image in the preset target face set may be:
and acquiring the target face set from the distributed storage cluster by using the identifier of the target face set, thereby acquiring the face characteristic value of each image in the target face set.
In addition, in order to improve the security of the system, the distributed storage cluster may be visible only to the application service cluster and the face recognition cluster, that is, only the server in the application service cluster and the server in the face recognition cluster may access the image stored in the distributed storage cluster.
In practical application, a face comparison algorithm, such as an LSH (Locality-Sensitive Hashing) algorithm or a KD-tree (K-Dimensional tree) algorithm, may be used to compare the face feature value of the image to be searched with the face feature value of each image in the target face set, so as to obtain the similarity between the image to be searched and each image in the target face set.
The similarity can represent the similarity degree between different images, and in the embodiment of the invention, the higher the value of the similarity is, the higher the similarity degree between different images is. The similarity can range from 0 to 1.
The preset value can be set according to the requirements of the target user, and can be any value, such as 1, 2, 3, 4, 100, and the like.
For example, suppose the preset value is 3 and the target face set includes five images a, b, c, d and e whose similarities to the image to be searched are 0.1, 0.2, 0.3, 0.4 and 0.5 respectively. The similarities are ranked in descending order, and the images corresponding to the top 3 similarities, namely c, d and e, together with their similarities to the image to be searched (0.3 for c, 0.4 for d and 0.5 for e), are taken as the face recognition result.
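The ranking step in this example can be sketched as follows; the brute-force scan over the face set and the cosine-based similarity are assumptions (the description mentions LSH and KD-tree based comparison as alternatives).

```python
import numpy as np

def face_search(query_feature: np.ndarray,
                face_set: dict[str, np.ndarray],
                top_n: int = 3) -> list[tuple[str, float]]:
    """Return the top_n images of the target face set ranked by similarity to the query."""
    def similarity(a, b):
        cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return (cos + 1.0) / 2.0  # map to 0..1 as in the description

    scored = [(name, similarity(query_feature, feat)) for name, feat in face_set.items()]
    scored.sort(key=lambda item: item[1], reverse=True)  # descending similarity
    return scored[:top_n]

# Example: five stored faces, keep the 3 most similar ones.
face_set = {name: np.random.rand(128) for name in "abcde"}
print(face_search(np.random.rand(128), face_set, top_n=3))
```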
Specifically, the creation request may be initiated by a target user: the target user sends the creation request to the load balancing server, and the load balancing server forwards it to the target application server when it determines that its own state meets the second processing condition.
Before sending a creation request to the load balancing server, the target user can send a registration request to the background management server in the cloud platform. In response to the registration request, the background management server can create a face search application for the target user, generate an application account and password, and set, for the target user, the maximum number of faces in the face set to be created, the storage capacity allocated to the target user, and the maximum concurrency of the application. For example, these may be set to 100,000 faces, 1 GB of storage, and 20 requests per second, respectively.
After receiving the application account and password, the target user can call the API corresponding to the face search application through the SDK, or call the API directly, and transfer request parameters through the API. The request parameters include the application account, the password, the name of the image to be searched, the target face set, and the like. The API call carrying these request parameters serves as the request for creating the target face set, which the target user then sends to the load balancing server.
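As an illustration only, such an API call could be issued as an HTTP request; the endpoint URL and field names below are assumptions, since the embodiment does not define a concrete interface.

```python
# Hypothetical sketch of transferring the request parameters through the API;
# the endpoint path and parameter names are assumptions for illustration.
import requests

def send_creation_request(load_balancer_url):
    payload = {
        "app_account": "demo-account",     # issued by the background management server
        "password": "demo-password",
        "image_to_search": "query.jpg",
        "face_set_name": "target_face_set_1",
    }
    # The load balancing server receives this creation request and, when its
    # own state meets the second processing condition, forwards it to the
    # selected target application server.
    return requests.post(f"{load_balancer_url}/api/face-set/create",
                         json=payload, timeout=10)

# response = send_creation_request("https://cloud-platform.example.com")  # hypothetical URL
```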
Specifically, the load balancing server may detect whether the total number of creation requests currently being processed has reached a preset maximum concurrency. If not, it determines that its own state meets the second processing condition; if so, it determines that its own state does not meet the second processing condition and feeds back prompt information to the target user indicating this.
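A minimal sketch of this second-processing-condition check, assuming the load balancing server tracks in-flight creation requests with a lock-protected counter:

```python
# Sketch: the load balancer accepts a creation request only while the total
# number of requests currently being processed is below the preset maximum
# concurrency (20 per second in the earlier example).
import threading

class ConcurrencyGate:
    def __init__(self, max_concurrent=20):
        self.max_concurrent = max_concurrent
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_accept(self):
        """True when the self-state meets the second processing condition."""
        with self.lock:
            if self.in_flight >= self.max_concurrent:
                return False   # feed back prompt information to the target user
            self.in_flight += 1
            return True

    def finish(self):
        with self.lock:
            self.in_flight -= 1

gate = ConcurrencyGate(max_concurrent=20)
print(gate.try_accept())  # True while fewer than 20 requests are in flight
```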
In a specific embodiment, the step of the target application server detecting whether the creation request is legal may be:
detecting whether the storage capacity already used is larger than the storage capacity pre-allocated to the target user; if so, the creation request is judged to be illegal; if not, the creation request is judged to be legal.
In another specific implementation, the detecting, by the target application server, whether the creation request is legal may further be:
detecting whether the number of faces contained in the target face set is larger than the preset maximum number of faces; if so, the creation request is judged to be illegal; if not, the creation request is judged to be legal.
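Combining the two alternative checks above into one function, a hedged sketch of the legality test might look as follows; the quota values mirror the earlier example (1 GB of storage, 100,000 faces) and the parameter names are assumptions.

```python
# Sketch: a creation request is judged legal only if the used storage stays
# within the pre-allocated capacity and the face set stays within the preset
# maximum face count.
def is_creation_request_legal(used_storage_bytes, face_count,
                              allocated_storage_bytes=1024**3,  # ~1 GB quota
                              max_faces=100_000):
    if used_storage_bytes > allocated_storage_bytes:
        return False   # storage quota exceeded -> request judged illegal
    if face_count > max_faces:
        return False   # face count exceeds the preset maximum -> illegal
    return True

print(is_creation_request_legal(used_storage_bytes=500 * 1024**2, face_count=42))  # True
```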
Therefore, by applying the embodiment of the invention, the target face set can be created according to the requirements of the target user, so that better service can be provided to the target user.
To enable recognition on video, in another specific implementation the source image is a key frame of the video to be processed, and the target face recognition server performs face recognition on the source image to obtain a face recognition result, specifically by:
using a target tracking technique to extract a set of face images belonging to the same person within a preset time period; and selecting the face image of best quality from that set as the face recognition result.
In practical application, the target tracking technique may be Kalman filtering, particle filtering, an optical flow method, or the like. The preset time period can be set according to the requirements of the target user, for example 9:00-10:00, 10:00-11:00, or 14:00-16:00.
To improve recognition accuracy and greatly reduce the back-end processing load, the face image of best quality can be selected from the face image set using techniques such as contrast optimization and sharpness assessment.
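One common sharpness criterion is the variance of the Laplacian; the sketch below uses it to pick the best-quality face image from the tracked set. This is an illustrative choice, not a quality metric fixed by the embodiment, and the file names are hypothetical.

```python
# Sketch: choose the face image with the highest Laplacian variance (a simple
# sharpness score) from the set of images tracked for the same person.
import cv2

def sharpness(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_quality_face(image_paths):
    return max(image_paths, key=sharpness)

# Frames collected for one person within the preset time period (hypothetical files):
# best = best_quality_face(["p1_frame001.jpg", "p1_frame017.jpg", "p1_frame033.jpg"])
```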
Therefore, in the technical scheme provided by the embodiment of the invention, the face recognition method is executed on a purpose-built cloud platform. Through cooperation among the clusters, the steps of the face recognition process are executed by different servers, so the resources available to the whole process are expanded and the resource allocation of one server does not affect another. The cloud platform as a whole therefore has large-scale computing capacity and can perform face recognition quickly.
The following presents a simplified summary of an embodiment of the invention by way of a specific example.
The face recognition system provided by the embodiment of the invention can be deployed with the architecture shown in Fig. 2, and its specific operation flow is as follows:
the background management server configures each server in the face recognition system according to the instruction received from the administrator;
the load balancing server receives a target face recognition request sent by a target user and judges whether the first processing condition is met; if so, it selects the application server with the lowest load from the application service cluster as the target application server and sends the target face recognition request to it, the request comprising the information to be processed;
the target application server receives the target face recognition request sent by the load balancing server and verifies whether it is legal; if the request is legal, it obtains the information to be recognized according to the information to be processed carried in the request and stores the information to be recognized in the distributed storage cluster; it then sends, based on the Transmission Control Protocol (TCP)/HyperText Transfer Protocol (HTTP), a result acquisition request for obtaining the face recognition result of the information to be recognized to the load balancing server;
the load balancing server receives the result acquisition request sent by the target application server, selects the face recognition server with the lowest load from the face recognition cluster as the target face recognition server, and forwards the result acquisition request to it;
the target face recognition server receives the result acquisition request forwarded by the load balancing server, acquires the information to be recognized from the distributed storage cluster according to the request, obtains the source image to be recognized from that information, performs face recognition on the source image to obtain a face recognition result, sends the face recognition result to the target application server, and stores the face feature value obtained during face recognition in a database;
the target application server receives the face recognition result sent by the target face recognition server and stores it in the database.
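For the "lowest load" selection step appearing twice in this flow, a minimal sketch is given below; the cluster records and load figures are illustrative assumptions.

```python
# Sketch: the load balancing server picks the server with the lowest current
# load from a cluster and forwards the request to it.
def select_lowest_load_server(cluster):
    """cluster: list of dicts, each with a 'name' and a current 'load'."""
    return min(cluster, key=lambda server: server["load"])

application_cluster = [
    {"name": "app-server-1", "load": 12},
    {"name": "app-server-2", "load": 4},
    {"name": "app-server-3", "load": 9},
]
target = select_lowest_load_server(application_cluster)
print(target["name"])  # app-server-2 receives the target face recognition request
```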
Therefore, by applying the embodiment of the invention, the face recognition method is executed on a purpose-built cloud platform. Through cooperation among the clusters, the steps of the face recognition process are executed by different servers, which expands the resources available to the whole process while keeping the resource allocation of the servers independent of one another, so the cloud platform as a whole has large-scale computing capacity and can perform face recognition quickly. Furthermore, the load balancing method of the load balancing server distributes target face recognition requests evenly across the application service cluster, improving the cluster's horizontal scalability and throughput and therefore the processing speed of target face recognition requests, which improves face recognition efficiency. In addition, storing the source images in the distributed storage cluster frees the image data from space limitations: the image data are gathered in the cloud, which enables efficient sharing of data resources and, at the same time, easier management of the image data.
Specifically, the process by which the target face recognition server in Fig. 2 obtains the source image from the information to be recognized may be as shown in Fig. 3. When the information to be recognized is a video to be processed, the target face recognition server obtains the source image as follows: it segments the video to be processed, extracts key frames from the segmented video, and takes the key frames as the source images to be processed. When the information to be recognized is a static image or dynamic image, the static image or dynamic image is taken directly as the source image to be processed.
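A hedged sketch of the key-frame branch, sampling roughly one frame per second with OpenCV; the sampling rate and file name are assumptions, since the embodiment does not fix how key frames are selected.

```python
# Sketch: split the video to be processed into key frames by sampling about
# one frame per second; each key frame becomes a source image to be processed.
import cv2

def extract_key_frames(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS is unavailable
    step = max(int(fps), 1)
    key_frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            key_frames.append(frame)
        index += 1
    cap.release()
    return key_frames

# source_images = extract_key_frames("video_to_be_processed.mp4")  # hypothetical file
```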
The process by which the target face recognition server in Fig. 2 performs face recognition on the source image to obtain a face recognition result may also be as shown in Fig. 3. The target face recognition server performs face detection, face alignment, and face feature extraction on the source image to obtain the face feature value of the source image, and compares/searches this face feature value against the face feature values stored in a feature database to obtain a comparison/search result, the feature database storing the face feature value of each image in the original face database. After obtaining the comparison/search result, the server compares it with base library information to obtain a query result, takes the query result as the face recognition result, and sends it to the target user so that the query result is displayed to the target user and the face recognition process is completed. Here the target user is the user who sent the target face recognition request to the load balancing server, and the base library information may be the related information of the person corresponding to each stored face feature value, maintained on the basis of a black list or white list, including name, gender, case record, identity document number, and the like.
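To illustrate the final comparison and base-library lookup in this pipeline, the sketch below matches an extracted face feature value against a feature database using cosine similarity and joins the best hit with base-library information; the threshold, data layout, and example records are assumptions for illustration.

```python
# Sketch: compare a source-image feature value against the feature database,
# then look up the matched person's base-library information as the query result.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def search_feature_database(query_feature, feature_db, base_library, threshold=0.8):
    """feature_db: {person_id: feature vector}; base_library: {person_id: info}."""
    best_id, best_score = None, -1.0
    for person_id, feature in feature_db.items():
        score = cosine_similarity(query_feature, feature)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score < threshold:
        return None   # no match found in the original face database
    # Query result: similarity plus related base-library information.
    return {"person_id": best_id, "similarity": best_score, **base_library[best_id]}

rng = np.random.default_rng(1)
feature_db = {"p001": rng.random(128), "p002": rng.random(128)}
base_library = {"p001": {"name": "Zhang San", "gender": "M"},
                "p002": {"name": "Li Si", "gender": "F"}}
query = feature_db["p001"] + rng.normal(0.0, 0.01, 128)
print(search_feature_database(query, feature_db, base_library))
```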
Therefore, with the embodiment of the invention the face recognition method is executed on a purpose-built cloud platform; through cooperation among the clusters, the steps of the face recognition process are executed by different servers, the resources used in the whole process are expanded, and the resource allocation of the servers does not interfere with one another, so the cloud platform as a whole has large-scale computing capacity, face recognition can be carried out rapidly, and face recognition efficiency is improved.
Corresponding to the embodiment of the face recognition system, an embodiment of the present invention provides a face recognition method, shown in Fig. 4 and corresponding to the structure shown in Fig. 1. The method is applied to the face recognition system, specifically to the target application server, and includes:
S101, receiving a target face recognition request sent by the load balancing server;
the target face recognition request comprises the information to be processed;
S102, obtaining the information to be recognized according to the information to be processed carried in the target face recognition request;
S103, sending a result acquisition request for obtaining a face recognition result to the load balancing server, so that the load balancing server selects a target face recognition server from the face recognition cluster according to the result acquisition request and forwards the result acquisition request to the target face recognition server; the target face recognition server then obtains the information to be recognized according to the result acquisition request, obtains the source image to be recognized from that information, performs face recognition on the source image to obtain a face recognition result, and sends the face recognition result to the initiator of the result acquisition request;
S104, receiving the face recognition result sent by the target face recognition server.
Therefore, with the technical scheme provided by the embodiment of the invention, through cooperation among the clusters the steps of the face recognition process are executed by different servers, the resources used in the whole process are expanded, and the resource allocation of the servers does not interfere with one another, so the cloud platform as a whole has large-scale computing capacity, face recognition can be carried out quickly, and face recognition efficiency is improved.
Optionally, before the step of obtaining information to be recognized according to the information to be processed carried in the target face recognition request, the method further includes:
verifying whether the target face recognition request is legal; and if the target face recognition request is legal, acquiring the information to be recognized according to the information to be processed carried in the target face recognition request.
Optionally, the face recognition system further includes a distributed storage cluster, and after the step of obtaining the information to be recognized according to the information to be processed carried in the target face recognition request, the method further includes:
sending the information to be recognized to the distributed storage cluster, so that the distributed storage cluster receives the information to be recognized and stores it in a distributed storage mode.
Optionally, the target face recognition request received by the target application server is generated by a target user;
after the step of receiving, by the target application server, the face recognition result sent by the target face recognition server, the method further includes:
and feeding back the face recognition result to the target user.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the method embodiment, since it is substantially similar to the system embodiment, the description is simple, and the relevant points can be referred to the partial description of the system embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. A face recognition system deployed on a cloud platform, the system comprising: a load balancing server, an application service cluster, and a face recognition cluster, wherein,
the load balancing server is used for selecting a target application server from the application service cluster and sending an obtained target face recognition request to the target application server, wherein the target face recognition request comprises information to be processed; receiving a result acquisition request which is sent by the target application server and used for acquiring a face recognition result; selecting a target face recognition server from the face recognition cluster, and forwarding the result acquisition request to the target face recognition server;
the target application server is used for receiving a target face recognition request sent by the load balancing server and acquiring information to be recognized according to information to be processed carried in the target face recognition request; sending a result acquisition request for acquiring a face recognition result to the load balancing server; receiving the face recognition result sent by a target face recognition server;
the target face recognition server is used for receiving the result acquisition request forwarded by the load balancing server; acquiring the information to be identified according to the result acquisition request; obtaining a source image to be recognized according to the information to be recognized; carrying out face recognition on the source image to obtain a face recognition result; and sending the face recognition result to the initiator of the result acquisition request.
2. The system of claim 1, wherein the load balancing server is further configured to:
before selecting a target application server from the application service cluster, detecting whether the self state meets a first processing condition; if so, selecting a target application server from the application service cluster;
the load balancing server selects a target application server from the application service cluster, and specifically comprises the following steps:
selecting an application server with the lowest load from the application service cluster as a target application server; or,
and selecting a target application server from the application service cluster according to a preset load balancing algorithm.
3. The system according to claim 2, wherein the load balancing server detects whether its own state meets a first processing condition, specifically:
detecting whether the total number of the face recognition requests currently processed reaches the maximum concurrent number or not;
if yes, judging that the self state does not accord with the first processing condition;
if not, the self state is judged to be in accordance with the first processing condition.
4. The system of claim 3, wherein the load balancing server is further configured to:
after the self state is judged to be not in accordance with the first processing condition, prompt information used for prompting that the self state is not in accordance with the first processing condition is fed back to a target user, wherein the target user is a user sending the target face recognition request to the load balancing server.
5. The system of claim 1, wherein the target application server is further configured to:
verifying whether the target face recognition request is legal or not before acquiring information to be recognized according to the information to be processed carried in the target face recognition request; and if the target face recognition request is legal, acquiring the information to be recognized according to the information to be processed carried in the target face recognition request.
6. The system according to any one of claims 1 to 5, wherein the system further comprises a background management server, the sender of the face recognition request obtained by the load balancing server is a target user, and the background management server is configured to:
receiving a registration request sent by the target user, and creating target application information for the target user;
sending the target application information to the target user, so that the target user generates the target face recognition request based on the target application information;
the target application server is further configured to:
and after receiving the face recognition result sent by the target face recognition server, feeding back the face recognition result to the target user.
7. The system of any of claims 1-5, wherein the system further comprises a distributed storage cluster, and wherein the target application server is further configured to:
after the information to be identified is obtained, sending the information to be identified to the distributed storage cluster;
the distributed storage cluster is used for receiving the information to be identified sent by the target application server; storing the information to be identified in a distributed storage mode;
the target face recognition server obtains the request according to the result, and obtains the information to be recognized, specifically:
and acquiring the information to be identified from the distributed storage cluster according to the result acquisition request.
8. The system of claim 1, wherein the information to be identified is one of a still image, a moving image, and a video to be processed.
9. The system according to claim 1, wherein the information to be identified is a video to be processed;
the target face recognition server obtains the source image according to the information to be recognized, specifically:
and processing the video to be processed to obtain a key frame in the video to be processed, and taking the key frame in the video to be processed as a source image.
10. The system according to claim 1 or 8, wherein the target face recognition server performs face recognition on the source image to obtain a face recognition result, specifically:
carrying out face detection on the source image to obtain a face recognition result of face contour coordinates in the source image; or,
comparing the faces of at least two images in the source images to obtain the similarity of the faces in the at least two images; obtaining a face recognition result of whether the faces in the at least two images are the same person according to the face similarity; or,
and performing a face search for the source image in a preset target face set to obtain, as the face recognition result, the images in the target face set whose similarities rank within the preset value, together with the similarity between each such image and the source image.
11. A face recognition method applied to the system of claim 1, wherein the method applied to the target application server specifically includes:
receiving a target face recognition request sent by the load balancing server, wherein the target face recognition request comprises information to be processed;
acquiring information to be recognized according to the information to be processed carried in the target face recognition request;
sending a result obtaining request for obtaining a face recognition result to the load balancing server, so that the load balancing server selects a target face recognition server from the face recognition cluster according to the result obtaining request, and forwards the result obtaining request to the target face recognition server, so that the target face recognition server obtains the information to be recognized according to the result obtaining request; obtaining a source image to be recognized according to the information to be recognized; carrying out face recognition on the source image to obtain a face recognition result; sending the face recognition result to the initiator of the result acquisition request;
and receiving the face recognition result sent by the target face recognition server.
12. The method according to claim 11, wherein before the step of obtaining information to be recognized according to information to be processed carried in the target face recognition request, the method further comprises:
verifying whether the target face recognition request is legal; and if the target face recognition request is legal, acquiring the information to be recognized according to the information to be processed carried in the target face recognition request.
13. The method according to claim 11, wherein the face recognition system further comprises a distributed storage cluster, and after the step of obtaining the information to be recognized according to the information to be processed carried in the target face recognition request, the method further comprises:
sending the information to be recognized to the distributed storage cluster, so that the distributed storage cluster receives the information to be recognized and stores it in a distributed storage mode.
14. The method of claim 11, wherein the target face recognition request received by the target application server is generated by a target user;
after the step of receiving, by the target application server, the face recognition result sent by the target face recognition server, the method further includes:
and feeding back the face recognition result to the target user.
CN201710717471.8A 2017-08-21 2017-08-21 Face recognition system and method Active CN107403173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710717471.8A CN107403173B (en) 2017-08-21 2017-08-21 Face recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710717471.8A CN107403173B (en) 2017-08-21 2017-08-21 Face recognition system and method

Publications (2)

Publication Number Publication Date
CN107403173A CN107403173A (en) 2017-11-28
CN107403173B true CN107403173B (en) 2020-10-09

Family

ID=60397526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710717471.8A Active CN107403173B (en) 2017-08-21 2017-08-21 Face recognition system and method

Country Status (1)

Country Link
CN (1) CN107403173B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944427B (en) * 2017-12-14 2020-11-06 厦门市美亚柏科信息股份有限公司 Dynamic face recognition method and computer readable storage medium
CN108156235A (en) * 2017-12-22 2018-06-12 平安养老保险股份有限公司 Online verification method, apparatus, computer equipment and storage medium
CN109993026B (en) * 2017-12-29 2021-08-20 华为技术有限公司 Training method and device for relative recognition network model
CN108242054A (en) 2018-01-09 2018-07-03 北京百度网讯科技有限公司 A kind of steel plate defect detection method, device, equipment and server
CN108564104A (en) * 2018-01-09 2018-09-21 北京百度网讯科技有限公司 Product defects detection method, device, system, server and storage medium
CN108346208A (en) * 2018-04-19 2018-07-31 深圳安邦科技有限公司 A kind of face identification system of deep learning
CN108564053A (en) * 2018-04-24 2018-09-21 南京邮电大学 Multi-cam dynamic human face recognition system based on FaceNet and method
CN108664914B (en) * 2018-05-04 2023-05-23 腾讯科技(深圳)有限公司 Face retrieval method, device and server
CN110263603B (en) * 2018-05-14 2021-08-06 桂林远望智能通信科技有限公司 Face recognition method and device based on central loss and residual error visual simulation network
CN108776808A (en) * 2018-05-25 2018-11-09 北京百度网讯科技有限公司 A kind of method and apparatus for detecting ladle corrosion defect
CN109359579A (en) * 2018-10-10 2019-02-19 红云红河烟草(集团)有限责任公司 Face recognition system based on machine deep learning algorithm
CN111241884B (en) * 2018-11-29 2024-07-23 中科天网(广东)科技有限公司 Cloud architecture sharing type face recognition system
CN109858328B (en) * 2018-12-14 2023-06-02 上海集成电路研发中心有限公司 Face recognition method and device based on video
CN109815839B (en) * 2018-12-29 2021-10-08 深圳云天励飞技术有限公司 Loitering person identification method under micro-service architecture and related product
CN109801420B (en) * 2019-01-25 2021-01-19 大匠智联(深圳)科技有限公司 Multi-concurrent face recognition access control system based on classification algorithm and recognition method thereof
CN110031697B (en) * 2019-03-07 2021-09-14 北京旷视科技有限公司 Method, device, system and computer readable medium for testing target identification equipment
CN111860069A (en) * 2019-04-30 2020-10-30 百度时代网络技术(北京)有限公司 Image processing method and system
CN110223080A (en) * 2019-06-05 2019-09-10 北京三快在线科技有限公司 The determination method and device of the target account of brush face payment platform
CN112115748B (en) * 2019-06-21 2023-08-25 腾讯科技(深圳)有限公司 Certificate image recognition method, device, terminal and storage medium
CN110489240A (en) * 2019-08-22 2019-11-22 Oppo广东移动通信有限公司 Image-recognizing method, device, cloud platform and storage medium
CN110659087A (en) * 2019-09-11 2020-01-07 旭辉卓越健康信息科技有限公司 Face recognition algorithm engineering system applied to intelligent medical treatment
CN110659616A (en) * 2019-09-26 2020-01-07 新华智云科技有限公司 Method for automatically generating gif from video
CN112905699B (en) * 2021-02-23 2024-04-09 京东科技信息技术有限公司 Full data comparison method, device, equipment and storage medium
CN113449628A (en) * 2021-06-24 2021-09-28 北京澎思科技有限公司 Image processing system, image processing method, image processing apparatus, storage medium, and computer program product
CN117726925B (en) * 2024-02-07 2024-06-04 广州思涵信息科技有限公司 Face recognition resource scheduling method, device and equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10083720B2 (en) * 2015-11-06 2018-09-25 Aupera Technologies, Inc. Method and system for video data stream storage

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306287A (en) * 2011-08-24 2012-01-04 百度在线网络技术(北京)有限公司 Method and equipment for identifying sensitive image
CN103645902A (en) * 2013-12-17 2014-03-19 江苏名通信息科技有限公司 Application access and data statistical method of mobile phone advertisement system
CN106550003A (en) * 2015-09-23 2017-03-29 腾讯科技(深圳)有限公司 The control method of load balancing, apparatus and system
CN105574506A (en) * 2015-12-16 2016-05-11 深圳市商汤科技有限公司 Intelligent face tracking system and method based on depth learning and large-scale clustering
CN105610907A (en) * 2015-12-18 2016-05-25 华夏银行股份有限公司 Server access method and device

Also Published As

Publication number Publication date
CN107403173A (en) 2017-11-28

Similar Documents

Publication Publication Date Title
CN107403173B (en) Face recognition system and method
US10900772B2 (en) Apparatus and methods for facial recognition and video analytics to identify individuals in contextual video streams
CN112232293B (en) Image processing model training method, image processing method and related equipment
JP6973876B2 (en) Face recognition methods, face recognition devices and computer programs that execute face recognition methods
JP6451246B2 (en) Method, system and program for determining social type of person
US11423265B1 (en) Content moderation using object detection and image classification
WO2019033525A1 (en) Au feature recognition method, device and storage medium
US20090076996A1 (en) Multi-Classifier Selection and Monitoring for MMR-based Image Recognition
CN106303599B (en) Information processing method, system and server
US11335127B2 (en) Media processing method, related apparatus, and storage medium
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN112101304B (en) Data processing method, device, storage medium and equipment
US11250039B1 (en) Extreme multi-label classification
US10798399B1 (en) Adaptive video compression
US12062105B2 (en) Utilizing multiple stacked machine learning models to detect deepfake content
KR102261054B1 (en) Fast Face Recognition Apparatus connected to a Camera
CN105631404A (en) Method and device for clustering pictures
CN113128526B (en) Image recognition method and device, electronic equipment and computer-readable storage medium
US9552373B2 (en) Method for performing face recognition in a radio access network
US12045331B2 (en) Device and network-based enforcement of biometric data usage
Chen et al. Agile services provisioning for learning-based applications in fog computing networks
Xu et al. Identity-aware attribute recognition via real-time distributed inference in mobile edge clouds
Mukherjee et al. Energy efficient face recognition in mobile-fog environment
WO2023000792A1 (en) Methods and apparatuses for constructing living body identification model and for living body identification, device and medium
CN112669353B (en) Data processing method, data processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221110

Address after: 201203 floor 3, building 3, No. 1690, Cailun Road, pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Shanghai lintu Intelligent Technology Co.,Ltd.

Address before: No. 305-2, Building A3, Innovation Industrial Park, No. 800, West Wangjiang Road, High tech Zone, Hefei City, Anhui Province, 230000

Patentee before: HEFEI LINTU INFORMATION TECHNOLOGY CO.,LTD.

TR01 Transfer of patent right

Effective date of registration: 20230713

Address after: No. 305-2, Building A3, Innovation Industrial Park, No. 800, West Wangjiang Road, High tech Zone, Hefei City, Anhui Province, 230000

Patentee after: HEFEI LINTU INFORMATION TECHNOLOGY CO.,LTD.

Address before: 201203 floor 3, building 3, No. 1690, Cailun Road, pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Shanghai lintu Intelligent Technology Co.,Ltd.