CN106845355B - Face recognition method, server and system - Google Patents

Face recognition method, server and system

Info

Publication number
CN106845355B
CN106845355B · Application CN201611213878.9A
Authority
CN
China
Prior art keywords
picture
server
client
area
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611213878.9A
Other languages
Chinese (zh)
Other versions
CN106845355A
Inventor
彭程
苏建钢
张立峰
钟斌
罗予晨
程冰
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201611213878.9A priority Critical patent/CN106845355B/en
Publication of CN106845355A publication Critical patent/CN106845355A/en
Application granted granted Critical
Publication of CN106845355B publication Critical patent/CN106845355B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a face recognition method, server and system. The system comprises a client and a server. The client is used for sending a picture detection request to the server, where the request includes a target picture and an acquisition area selected by the user. The server is used for determining a target picture set according to the acquisition area, and, if the set contains a matching picture whose similarity to the target picture exceeds a preset similarity, for sending the matching picture to the client. The client is used for displaying the matching picture. With this system, the face pictures uploaded from surveillance video can be searched against a picture provided by the user to determine whether any of them match it, and successfully matched pictures are presented at a preset position. Face recognition is thus completed without manual participation, greatly improving work efficiency.

Description

Face recognition method, server and system
Technical Field
The invention relates to the field of terminals, in particular to a face recognition method, a server and a system.
Background
Face recognition technology acquires face images through a camera and, using image processing techniques, compares and analyzes the acquired images to extract the required information. At present, face recognition technology is widely applied in security systems, enterprise management systems, identity authentication and other fields.
However, in practice, when a user needs to obtain further picture information from an existing face sample picture, the user often has to continuously review the footage acquired by surveillance cameras or other camera equipment and identify the required images manually. This identification work is time-consuming and inefficient, and brings inconvenience to the user.
Disclosure of Invention
The embodiment of the invention provides a face recognition method, a server and a system, which can search pictures containing faces uploaded by a monitoring video according to pictures provided by a user to determine whether the pictures uploaded by the monitoring video contain pictures matched with the pictures provided by the user, and present the pictures successfully matched at a preset position; therefore, the face recognition is completed without manual participation, and the working efficiency is greatly improved.
The first aspect of the embodiment of the invention discloses a face recognition system, which comprises a client and a server;
the client is used for prompting the user to upload the target picture when the fact that the user clicks the preset area is detected; the target picture is a face sample picture;
the client is further used for prompting a user to select an acquisition area when the target picture is successfully uploaded;
the client is also used for sending a picture detection request to the server; the picture detection request comprises the target picture and the acquisition area selected by the user;
the server is used for determining a target picture set according to the acquisition area;
the server is further used for traversing the target picture set to determine whether a matching picture whose similarity to the target picture is greater than a preset similarity exists in the target picture set;
the server is further used for sending the matching picture to the client if the matching picture with the similarity larger than the preset similarity exists in the target picture set;
and the client is used for displaying the matched picture in a search result display area according to a preset mode.
With reference to the first aspect, in a first possible implementation manner of the first aspect,
the client is also used for prompting the user to select a time period;
the client is further used for sending the time period selected by the user to the server;
the server is also used for judging whether the time period accords with a preset rule or not;
the server is specifically configured to determine a target picture set according to the acquisition region selected by the user and the selected time period when the time period meets a preset rule.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the acquisition region includes a plurality of regions;
the server is further configured to determine, if there is a matching picture in the target picture set, which has a similarity greater than a preset similarity with the target picture, an area to which each matching picture belongs;
the server is further used for counting the number of the matched pictures contained in each region;
the server is further used for sending the identification of each region and the number of the matched pictures corresponding to each region to the client;
the client is further configured to display the identifier of each region and the number of the matching pictures corresponding to the identifier of each region.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect,
the client is further configured to send an information acquisition request to the server when detecting that the user selects the identifier of the first area, where the information acquisition request includes the identifier of the first area; wherein the first region is any one of the plurality of regions;
the server is used for determining a target area corresponding to the identifier of the first area and acquiring the identifier of a camera which shoots a matched picture in the target area;
the server is further used for feeding back the identification of the camera to the client;
the client is used for displaying the identifiers of the cameras that took matching pictures in the first area; wherein the first area is any one of the plurality of areas;
the client is further used for displaying the geographic position of the first camera, the shot matching pictures and the shooting time of each matching picture when the fact that the first camera identification is selected is detected; the first camera identification is any one of the camera identifications.
With reference to the first aspect or any one of the foregoing possible implementations of the first aspect, in a fourth possible implementation of the first aspect,
the client is further used for sending a prompt for improving the preset similarity to the user when the number of the matched pictures is larger than the preset number;
the client is further configured to obtain a target similarity threshold re-input by the user, and send the target similarity threshold to the server;
the server is further used for re-traversing the target picture set to determine a matching picture in the target picture set, wherein the similarity between the matching picture and the target picture is greater than a target similarity threshold; sending the matched picture with the similarity larger than a target similarity threshold to the client;
and the client is used for receiving the matched picture with the similarity larger than the target similarity threshold value sent by the server and displaying the matched picture with the similarity larger than the target similarity threshold value.
A second aspect of the present invention discloses a server, which is characterized by comprising:
the receiving unit is used for receiving a picture detection request sent by a client; the picture detection request comprises the target picture and the acquisition area selected by the user; the target picture is a face sample picture;
the determining unit is used for determining a target picture set according to the acquisition area;
the traversal unit is used for traversing the target picture set to determine whether a matching picture whose similarity to the target picture is greater than a preset similarity exists in the target picture set;
and the sending unit is used for sending the matching picture to the client if a matching picture with a similarity greater than the preset similarity exists in the target picture set.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the server further includes a determining unit;
the receiving unit is further configured to receive a time period sent by the client;
the judging unit is also used for judging whether the time period accords with a preset rule or not;
the determining unit is specifically configured to, when the time period meets a preset rule, determine, by the server, a target picture set according to the acquisition region selected by the user and the selected time period.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the acquisition region includes a plurality of regions; the server also comprises a statistical unit;
the determining unit is further configured to determine a region to which each of the matching pictures belongs if the matching picture with which the similarity with the target picture is greater than a preset similarity exists in the target picture set;
the counting unit is used for counting the number of the matched pictures contained in each region;
the sending unit is further configured to send the identifier of each region and the number of the matching pictures corresponding to each region to the client.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect,
the receiving unit is further configured to receive an information acquisition request sent by the client, where the information acquisition request includes an identifier of the first area; wherein the first region is any one of the plurality of regions;
the determining unit is used for determining a target area corresponding to the identifier of the first area and acquiring the identifier of a camera which shoots a matched picture in the target area;
the sending unit is further configured to feed back the identifier of the camera to the client.
The third aspect of the invention discloses a method for face recognition, which comprises the following steps:
receiving a picture detection request sent by a client; the picture detection request comprises the target picture and the acquisition area selected by the user; the target picture is a face sample picture;
determining a target picture set according to the acquisition area;
traversing the target picture set to determine whether a matching picture whose similarity to the target picture is greater than a preset similarity exists in the target picture set;
and sending the matching picture to the client if a matching picture with a similarity greater than the preset similarity exists in the target picture set.
With reference to the third aspect, in a first possible implementation manner of the third aspect, before the server determines the target picture set according to the acquisition region selected by the user and the selected time period, the method further includes:
Receiving a time period sent by the client;
judging whether the time period meets a preset rule or not;
the server determines a target picture set according to the acquisition area selected by the user and the selected time period, and the method comprises the following steps:
and when the time period meets a preset rule, the server determines a target picture set according to the acquisition area selected by the user and the selected time period.
With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the acquisition region includes a plurality of regions; the method further comprises the following steps:
if matched pictures with the similarity higher than a preset similarity exist in the target picture set, determining the region to which each matched picture belongs;
counting the number of the matched pictures contained in each region;
and sending the identification of each region and the number of the matched pictures corresponding to each region to the client.
With reference to the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect,
receiving an information acquisition request sent by the client, wherein the information acquisition request comprises the identifier of the first area; wherein the first region is any one of the plurality of regions;
determining a target area corresponding to the identifier of the first area, and acquiring the identifier of a camera which shoots a matched picture in the target area;
and feeding back the identification of the camera to the client.
A fourth aspect of the present invention discloses a server, wherein the server includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the method according to the third aspect.
It can be seen that the embodiment of the invention discloses a face recognition system comprising a client and a server. The client is used for prompting the user to upload a target picture when it detects that the user clicks a preset area; it is further used for prompting the user to select an acquisition area when the target picture is successfully uploaded, and for sending a picture detection request to the server, where the picture detection request comprises the target picture and the acquisition area selected by the user. The server is used for determining a target picture set according to the acquisition area; it is further used for traversing the target picture set to determine whether a matching picture whose similarity to the target picture is greater than a preset similarity exists in the set, and for sending the matching picture to the client if such a picture exists. The client is used for displaying the matched picture in a search result display area according to a preset mode. By this system, the face pictures uploaded from the monitoring video can be searched according to the picture provided by the user to determine whether they contain a picture matching it, and successfully matched pictures are presented at a preset position; face recognition is thus completed without manual participation, greatly improving work efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic diagram of a face recognition system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another server according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a method for face recognition according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of another face recognition method according to an embodiment of the present invention;
fig. 6 is a schematic physical structure diagram of a server according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a face recognition method, a server and a system, which can search pictures containing faces uploaded by a monitoring video according to pictures provided by a user to determine whether the pictures uploaded by the monitoring video contain pictures matched with the pictures provided by the user, and present the pictures successfully matched at a preset position; therefore, the face recognition is completed without manual participation, and the working efficiency is greatly improved.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The appearances of the phrases "first," "second," and "third," or the like, in the specification, claims, and figures are not necessarily all referring to the particular order in which they are presented. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
In one embodiment of the invention, a face recognition system is disclosed, which comprises a client and a server. The client is used for prompting the user to upload a target picture when it detects that the user clicks a preset area; it is further used for prompting the user to select an acquisition area when the target picture is successfully uploaded, and for sending a picture detection request to the server, where the picture detection request comprises the target picture and the acquisition area selected by the user. The server is used for determining a target picture set according to the acquisition area; it is further used for traversing the target picture set to determine whether a matching picture whose similarity to the target picture is greater than a preset similarity exists in the set, and for sending the matching picture to the client if such a picture exists. The client is used for displaying the matched picture in a search result display area according to a preset mode.
Referring to fig. 1, fig. 1 is a system for face recognition according to an embodiment of the present invention. The system comprises a client 10 and a server 20.
It should be noted that the client 10 may operate on an electronic device such as a smart phone, a tablet computer, an intelligent wearable device, or a computer.
The server 20 may be a distributed server, or may be a cloud data center, which is not limited herein.
It can be understood that the server 20 can be connected to a plurality of cameras. Each camera records in real time the people who appear in its lens; when a face appears within the monitoring range, the camera automatically captures it and sends the face image to the server 20 as a time-ordered stream.
The client 10 is used for prompting the user to upload a target picture when the user is detected to click the preset area;
wherein the target picture comprises a face sample picture.
It should be noted that the preset area may be a picture frame, a button, or a blank area, which is not limited herein.
The target picture format includes, but is not limited to, JPEG, BMP, and the like.
The user can select a picture locally as a target picture, and can call a camera to shoot to obtain the target picture.
The user may click with the left button or the right button, or double-click, which is not limited here.
The client 10 is further configured to prompt a user to select a collection area when detecting that the target picture is successfully uploaded;
it is understood that the collection area may be divided by administrative areas. For example, the collection area packet a of the face recognition system of a police station in a certain city assigns a collected area, B assigns a collected area, C assigns a collected area, and so on.
It is understood that collection areas can also be freely defined, such as the first-floor acquisition area or second-floor acquisition area of a certain building, or the teaching building area, dormitory area and dining room area of a certain school.
The client 10 is further configured to send a picture detection request to the server 20; the picture detection request comprises the target picture and the acquisition area selected by the user;
it is understood that the purpose of the picture detection request of the client 10 is to let the server 20 detect whether a picture matching the target picture is taken in the acquisition area.
The server 20 is used for determining a target picture set according to the acquisition area;
it can be understood that, for example, the acquisition area is a certain canteen in a school, the target picture set is pictures including human faces taken by all cameras in the canteen within a preset time range. The preset time range may be a default or a manual setting, for example, within three months, within one month, or even within three days, and the like, which is not limited herein.
The server 20 is further configured to traverse the target picture set to determine whether a matching picture with a similarity greater than a preset threshold exists in the target picture set;
it should be noted that image matching refers to identifying a homonymy point between two or more images through a certain matching algorithm, for example, in two-dimensional image matching, by comparing correlation coefficients of windows of the same size in a target region and a search region, a window center point corresponding to the largest number of relationships in the search region is taken as the homonymy point. The essence is to apply the best search problem of matching criteria under the condition of primitive similarity.
Image matching can be mainly divided into gray-scale-based matching and feature-based matching.
The basic idea of gray scale matching is to consider an image as a two-dimensional signal from a statistical viewpoint and find out the correlation matching between the signals by adopting a statistical correlation method. The similarity of the two signals is evaluated using their correlation functions to determine the homonymy point. The gray matching determines the correspondence between the two images by using some similarity measure, such as correlation function, covariance function, sum of squared differences, sum of absolute differences, etc.
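As a minimal illustration of the correlation-function idea above (not an implementation prescribed by the patent), the normalized cross-correlation of two equal-size grayscale windows, flattened to 1-D lists, can be computed as:

```python
import math

def ncc(window_a, window_b):
    """Normalized cross-correlation of two equal-size grayscale windows,
    flattened to 1-D sequences; returns a value in [-1, 1]."""
    n = len(window_a)
    mean_a = sum(window_a) / n
    mean_b = sum(window_b) / n
    # Numerator: covariance-like sum of paired deviations from the mean.
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(window_a, window_b))
    # Denominator: product of the two windows' deviation magnitudes.
    den = math.sqrt(sum((a - mean_a) ** 2 for a in window_a) *
                    sum((b - mean_b) ** 2 for b in window_b))
    return num / den if den else 0.0

identical = ncc([10, 20, 30, 40], [10, 20, 30, 40])   # perfectly correlated
inverted = ncc([10, 20, 30, 40], [40, 30, 20, 10])    # perfectly anti-correlated
```

Sliding such a window over the search region and keeping the position with the largest `ncc` value yields the homonymy point described above.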
The feature matching is an algorithm that extracts features (such as points, lines, and planes) of two or more images, describes the features using parameters, and matches the features using the described parameters. The images processed based on feature matching typically contain features such as color features, texture features, shape features, spatial location features, and the like. The feature matching firstly preprocesses the images to extract the high-level features of the images, and then establishes the matching corresponding relation of the features between the two images, and commonly used feature elements comprise point features, edge features and region features. Feature matching requires the use of many mathematical operations such as matrix operations, gradient solving, and also fourier transforms and taylor expansions. The common feature extraction and matching method comprises the following steps: statistical method, geometric method, model method, signal processing method, boundary characteristic method, Fourier shape description method, geometric parameter method, shape invariant moment method, etc.
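The patent does not prescribe a specific feature-matching algorithm. As one common sketch, after features are extracted each picture is reduced to a feature vector, and matching reduces to comparing vectors; cosine similarity is used here purely as an illustrative measure, with hypothetical feature vectors:

```python
import math

def cosine_similarity(f1, f2):
    """Cosine similarity between two feature vectors (e.g. extracted face features)."""
    dot = sum(a * b for a, b in zip(f1, f2))
    norm = math.sqrt(sum(a * a for a in f1)) * math.sqrt(sum(b * b for b in f2))
    return dot / norm if norm else 0.0

def best_match(target_features, candidates):
    """Return the (name, similarity) pair with the highest similarity."""
    return max(((name, cosine_similarity(target_features, feat))
                for name, feat in candidates.items()),
               key=lambda pair: pair[1])

# Hypothetical pre-extracted feature vectors for three candidate pictures.
candidates = {"p1": [1.0, 0.0, 0.0], "p2": [0.9, 0.1, 0.0], "p3": [0.0, 1.0, 0.0]}
name, score = best_match([1.0, 0.0, 0.0], candidates)
```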
The server 20 is further configured to send a matching picture to the client 10 if the matching picture with the similarity to the target picture greater than a preset similarity exists in the target picture set;
the preset similarity may be a default of the system, or may be set manually, and is not displayed here. For example, the preset similarity may be 90%, 92%, 95%, and so on.
And the client 10 is used for displaying the matched picture in a search result display area according to a preset mode.
It can be understood that the search result display area may be a default location of the system or a location selected by the user.
Displaying the matching pictures in a preset mode may include: displaying the matched pictures in order of shooting time; or displaying them by the number of the camera to which each matching picture belongs. For example, the matching pictures taken by camera No. 1 are displayed first, followed by those taken by camera No. 2, which is not limited here.
It can be appreciated that if there are too many matching pictures, the user can turn the page to view.
Optionally, in addition to searching for a matching picture according to the target picture and the acquisition region, the system may also perform picture search according to the target picture, the time period, and the acquisition region.
The client 10 is also used for prompting the user to select a time period;
wherein the time period can be input by the user or selected by the user on a calendar provided by the system.
The time period may be different in dimension, and may be specific to a day, or may be specific to an hour or a minute, which is not limited herein.
The client 10 is further used for sending the time period selected by the user to the server 20;
the server 20 is further configured to determine whether the time period meets a preset rule;
for example, the time period should be the past time, and if the future time is involved, the time period does not conform to the preset rule.
The server 20 is specifically configured to, when the time period meets a preset rule, determine the target picture set according to the acquisition area selected by the user and the selected time period by the server 20.
Optionally, the acquisition region comprises a plurality of regions;
the server 20 is further configured to determine, if there is a matching picture in the target picture set, which has a similarity greater than a preset similarity with the target picture, an area to which each matching picture belongs;
for example, if the user selects a canteen area and a dormitory area of a school, the matching pictures are sorted to determine the source of each match. That is, to which region each matching picture belongs. The camera shooting of the dining room belongs to the dining room area, and the camera shooting of the dormitory of the same college belongs to the dormitory area.
For example, if the user selects a sent area and B sent area, it is determined which matching pictures are taken by the camera of the sent area, and the matching picture taken by the camera of the sent area belongs to the sent area.
The server 20 is further configured to count the number of the matching pictures included in each region;
for example, the matching pictures are 100 pieces in total. Then a may have 60 contained matching pictures and B may have 40 contained matching pictures.
The server 20 is further configured to send the identifier of each region and the number of the matching pictures corresponding to each region to the client;
the client 10 is further configured to display the identifier of each region and the number of the matching pictures corresponding to the identifier of each region.
Wherein the client 10 may be presented in the form of a list.
Optionally, there are multiple cameras in each acquisition region, and the user may select the camera in each acquisition region to query the matching picture taken by the selected camera. The method comprises the following specific steps:
the client 10 is further configured to send an information acquisition request to the server 20 when detecting that the user selects the identifier of the first area, where the information acquisition request includes the identifier of the first area; wherein the first region is any one of the plurality of regions;
for example, a list of region identifiers, such as area A and area B, is displayed on the interface of the client 10.
The server 20 is configured to determine a target area corresponding to the identifier of the first area, and obtain an identifier of a camera in the target area, where the camera takes a matching picture;
for example, if the user selects the identifier of the a-party, the identifier of the camera in the area of the a-party that captured the matching picture is displayed. For example, camera a (3) means that camera a has taken 3 matching pictures.
The server 20 is further used for feeding back the identification of the camera to the client;
the client 10 is used for displaying the identifiers of the cameras that took matching pictures in the first area; wherein the first area is any one of the plurality of regions;
the client 10 is further configured to display the geographic location of the first camera, the captured matching pictures, and the capturing time of each matching picture when it is detected that the first camera identifier is selected; the first camera identification is any one of the camera identifications.
It can be understood that when the user clicks the identifier of camera a, the installation position of camera a, the 3 matching pictures it took, and the taking time of each matching picture are displayed; other information is not listed here.
When the user clicks the identifier of camera a, the installation position of camera a may be displayed directly on a map.
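The camera drill-down described above can be sketched as follows. The field names (`camera_id`, `region`, `taken_at`) and the in-memory picture list are illustrative assumptions:

```python
from collections import defaultdict

def camera_summary(matching_pictures, region_id):
    # Per camera in the selected region: match count and taking times,
    # mirroring the "camera a (3)" display described above.
    by_camera = defaultdict(list)
    for pic in matching_pictures:
        if pic["region"] == region_id:
            by_camera[pic["camera_id"]].append(pic["taken_at"])
    return {cam: {"count": len(times), "times": sorted(times)}
            for cam, times in by_camera.items()}

pics = [
    {"camera_id": "a", "region": "A", "taken_at": "2016-12-24 09:00"},
    {"camera_id": "a", "region": "A", "taken_at": "2016-12-24 09:05"},
    {"camera_id": "a", "region": "A", "taken_at": "2016-12-24 09:10"},
    {"camera_id": "b", "region": "B", "taken_at": "2016-12-24 09:02"},
]
summary = camera_summary(pics, "A")
print(summary["a"]["count"])  # 3
```

Clicking a camera identifier would then display the entries in `summary[camera_id]["times"]` alongside the camera's installation position.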
As shown in fig. 2, fig. 2 illustrates a specific structure of the server 20, and the server 20 includes:
a receiving unit 210, configured to receive a picture detection request sent by a client; the picture detection request comprises the target picture and the acquisition area selected by the user; the target picture is a face sample picture;
a determining unit 220, configured to determine a target picture set according to the acquisition region;
a traversing unit 230, configured to traverse the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold;
a sending unit 240, configured to send the matching picture to the client 10 if a matching picture whose similarity to the target picture is greater than a preset similarity exists in the target picture set.
Based on fig. 2, as shown in fig. 3, the server 20 further includes a judging unit 250;
a receiving unit 210, further configured to receive a time period sent by the client 10;
the judging unit 250 is further configured to judge whether the time period meets a preset rule;
the determining unit 220 is specifically configured to determine the target picture set according to the acquisition area selected by the user and the selected time period when the time period meets the preset rule.
Optionally, the acquisition region comprises a plurality of regions; server 20 further comprises a statistics unit 260;
the determining unit 220 is further configured to determine, if there are matching pictures in the target picture set, which have a similarity greater than a preset similarity with the target picture, a region to which each of the matching pictures belongs;
a counting unit 260, configured to count the number of matching pictures included in each region;
the sending unit 240 is further configured to send the identifier of each region and the number of the matching pictures corresponding to each region to the client 10.
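The unit structure of figs. 2-3 can be sketched as one method per unit. The cosine-similarity comparison and the in-memory gallery are assumptions for illustration; the patent does not specify how similarity between face pictures is computed:

```python
def similarity(u, v):
    # Placeholder similarity: cosine similarity of face-feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

class FaceSearchServer:
    # One method per unit of figs. 2-3; names are illustrative.
    def __init__(self, gallery, threshold=0.8):
        self.gallery = gallery          # region id -> {picture id: feature vector}
        self.threshold = threshold

    def handle_request(self, request):  # receiving unit 210
        target, regions = request["target_vec"], request["regions"]
        picture_set = self.determine_set(regions)
        # The sending unit 240 would return this result to the client.
        return self.traverse(target, picture_set)

    def determine_set(self, regions):   # determining unit 220
        merged = {}
        for region in regions:
            merged.update(self.gallery.get(region, {}))
        return merged

    def traverse(self, target, pictures):  # traversing unit 230
        return [pid for pid, vec in pictures.items()
                if similarity(target, vec) > self.threshold]

server = FaceSearchServer({"A": {"p1": [1.0, 0.0]}, "B": {"p2": [0.0, 1.0]}})
matches = server.handle_request({"target_vec": [1.0, 0.0], "regions": ["A", "B"]})
print(matches)  # ['p1']
```

A production system would hold the gallery in a database and extract feature vectors with a face-recognition model; only the unit wiring is taken from the figures.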
As shown in fig. 4, fig. 4 discloses a face recognition method, executed by a server, which includes:
s301, receiving a picture detection request sent by a client; the picture detection request comprises the target picture and the acquisition area selected by the user; the target picture is a face sample picture;
s302, determining a target picture set according to the acquisition region;
s303, traversing the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold;
s304, if the target picture set contains a matching picture whose similarity is higher than the preset similarity, sending the matching picture to the client.
From the above, the invention provides a face recognition method: the server receives a picture detection request sent by a client, where the request comprises the target picture and the acquisition area selected by the user; determines a target picture set according to the acquisition area; traverses the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold; and, if such a matching picture exists, sends it to the client. In this way, the face pictures uploaded from surveillance video can be searched against the picture provided by the user to determine whether any of them matches it, and successfully matched pictures are presented at a preset position; face recognition is thus completed without manual participation, which greatly improves working efficiency.
As shown in fig. 5, fig. 5 discloses a face recognition method, executed by a server, which includes:
s401, receiving a picture detection request sent by a client; the picture detection request comprises a target picture, a time period selected by a user and an acquisition area; the target picture is a face sample picture;
s402, judging whether the time period accords with a preset rule or not;
and S403, when the time period accords with a preset rule, determining a target picture set according to the acquisition region selected by the user and the selected time period.
S404, traversing the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold;
s405, if a matched picture with the similarity higher than a preset similarity exists in the target picture set, sending the matched picture to a client;
s406, if matched pictures with the similarity higher than a preset similarity exist in the target picture set, determining the region to which each matched picture belongs;
s407, counting the number of the matched pictures contained in each region;
s408, sending the identification of each region and the number of the matched pictures corresponding to each region to a client;
The method thus adds a step of counting the matching pictures contained in each area, so that the client can display the number of matching pictures taken in each area.
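Step S402's check of the time period against a "preset rule" can be sketched as below. The patent leaves the rule unspecified, so the rule shown (start precedes end, and the period does not lie entirely in the future) is one plausible interpretation, not the patented one:

```python
from datetime import datetime

def time_period_is_valid(start, end, now=None):
    # One plausible "preset rule": the start must precede the end and the
    # period must not lie entirely in the future. Assumed for illustration.
    now = now or datetime.now()
    return start < end and start <= now

valid = time_period_is_valid(datetime(2016, 12, 1), datetime(2016, 12, 24),
                             now=datetime(2016, 12, 25))
print(valid)  # True
```

If the rule fails, the server would reject the request instead of proceeding to S403's determination of the target picture set.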
Referring to fig. 6, in another embodiment of the present invention, a server is provided. The server 500 includes a CPU 501, a memory 502, and a bus 503.
The CPU501 executes a program pre-stored in the memory 502, and the execution process specifically includes:
receiving a picture detection request sent by a client; the picture detection request comprises the target picture and the acquisition area selected by the user; the target picture is a face sample picture;
determining a target picture set according to the acquisition area;
traversing the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold;
and sending the matching picture to the client if a matching picture whose similarity to the target picture is greater than the preset similarity exists in the target picture set.
Optionally, before the server determines the target picture set according to the acquisition region selected by the user and the selected time period, the executing process further includes:
receiving a time period sent by the client;
judging whether the time period meets a preset rule or not;
the server determines a target picture set according to the acquisition area selected by the user and the selected time period, and the method comprises the following steps:
and when the time period meets a preset rule, the server determines a target picture set according to the acquisition area selected by the user and the selected time period.
Optionally, the acquisition region comprises a plurality of regions; the executing process further comprises:
if matched pictures with the similarity higher than a preset similarity exist in the target picture set, determining the region to which each matched picture belongs;
counting the number of the matched pictures contained in each region;
and sending the identification of each region and the number of the matched pictures corresponding to each region to the client.
It can be seen that, in the scheme of the embodiment of the present invention, the server receives a picture detection request sent by the client, where the request comprises the target picture and the acquisition area selected by the user; determines a target picture set according to the acquisition area; traverses the target picture set to determine whether it contains a matching picture whose similarity to the target picture is greater than a preset threshold; and, if such a matching picture exists, sends it to the client. In this way, the face pictures uploaded from surveillance video can be searched against the picture provided by the user to determine whether any of them matches it, and successfully matched pictures are presented at a preset position; face recognition is thus completed without manual participation, which greatly improves working efficiency.
In the embodiments shown in fig. 4 and fig. 5, the method flows of the steps may be implemented based on the structure of the server.
In the embodiments shown in fig. 2 and fig. 3, the functions of the units may be implemented based on the structure of the server.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (1)

1. A system for face recognition is characterized by comprising a client and a server;
the client is used for prompting the user to upload the target picture when the fact that the user clicks the preset area is detected; the target picture is a face sample picture;
the client is further used for prompting a user to select an acquisition area when the target picture is successfully uploaded;
the client is also used for sending a picture detection request to the server; the picture detection request comprises the target picture and the acquisition area selected by the user;
the server is used for determining a target picture set according to the acquisition area;
the server is further used for traversing the target picture set to determine whether a matched picture with the similarity greater than a preset threshold exists in the target picture set;
the server is further used for sending the matching picture to the client if the matching picture with the similarity larger than the preset similarity exists in the target picture set;
the client is used for displaying the matched picture in a search result display area according to a preset mode;
wherein,
the client is also used for prompting the user to select a time period;
the client is further used for sending the time period selected by the user to the server;
the server is also used for judging whether the time period accords with a preset rule or not;
the server is specifically configured to determine a target picture set according to the acquisition region selected by the user and the selected time period when the time period meets a preset rule;
wherein the acquisition region comprises a plurality of regions;
the server is further configured to determine, if there is a matching picture in the target picture set, which has a similarity greater than a preset similarity with the target picture, an area to which each matching picture belongs;
the server is further used for counting the number of the matched pictures contained in each region;
the server is further used for sending the identification of each region and the number of the matched pictures corresponding to each region to the client;
the client is further used for displaying the identification of each region and the number of the matched pictures corresponding to the identification of each region;
wherein,
the client is further configured to send an information acquisition request to the server when detecting that the user selects the identifier of the first area, where the information acquisition request includes the identifier of the first area; wherein the first region is any one of the plurality of regions;
the server is used for determining a target area corresponding to the identifier of the first area and acquiring the identifier of a camera which shoots a matched picture in the target area;
the server is further used for feeding back the identification of the camera to the client;
the client is used for displaying the camera identification of the matched picture shot in the target area;
the client is further used for displaying the geographic position of the first camera, the shot matching pictures and the shooting time of each matching picture when the fact that the first camera identification is selected is detected; the first camera identification is any one of the camera identifications.
CN201611213878.9A 2016-12-24 2016-12-24 A kind of method of recognition of face, server and system Active CN106845355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611213878.9A CN106845355B (en) 2016-12-24 2016-12-24 A kind of method of recognition of face, server and system

Publications (2)

Publication Number Publication Date
CN106845355A CN106845355A (en) 2017-06-13
CN106845355B true CN106845355B (en) 2018-05-11

Family

ID=59135646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611213878.9A Active CN106845355B (en) 2016-12-24 2016-12-24 A kind of method of recognition of face, server and system

Country Status (1)

Country Link
CN (1) CN106845355B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319916A (en) * 2018-02-01 2018-07-24 广州市君望机器人自动化有限公司 Face identification method, device, robot and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101859374A (en) * 2010-05-20 2010-10-13 上海洪剑智能科技有限公司 Distributed face identification system and identification method thereof
CN103745223A (en) * 2013-12-11 2014-04-23 深圳先进技术研究院 Face detection method and apparatus
CN104133899A (en) * 2014-08-01 2014-11-05 百度在线网络技术(北京)有限公司 Method and device for generating picture search library and method and device for searching for picture
CN105913037A (en) * 2016-04-26 2016-08-31 广东技术师范学院 Face identification and radio frequency identification based monitoring and tracking system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6161271B2 (en) * 2011-12-22 2017-07-12 キヤノン株式会社 Information processing apparatus, control method thereof, and program
JP6080940B2 (en) * 2013-02-28 2017-02-15 株式会社日立国際電気 Person search method and home staying person search device
CN105374055B (en) * 2014-08-20 2018-07-03 腾讯科技(深圳)有限公司 Image processing method and device
CN105139470B (en) * 2015-09-30 2018-02-16 杭州海康威视数字技术股份有限公司 Work attendance method, apparatus and system based on recognition of face
CN105488478B (en) * 2015-12-02 2020-04-07 深圳市商汤科技有限公司 Face recognition system and method


Also Published As

Publication number Publication date
CN106845355A (en) 2017-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant