CN114613058A - Access control system with attendance checking function, attendance checking method and related device - Google Patents


Info

Publication number
CN114613058A
Authority
CN
China
Prior art keywords
target face
face feature
target
extracting
undetermined
Prior art date
Legal status
Pending
Application number
CN202210302321.1A
Other languages
Chinese (zh)
Inventor
赵春梅
王资鑫
刘海洋
Current Assignee
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN202210302321.1A priority Critical patent/CN114613058A/en
Publication of CN114613058A publication Critical patent/CN114613058A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/30Individual registration on entry or exit not involving the use of a pass
    • G07C9/32Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses an access control system with an attendance checking function, an attendance checking method and a related device. The system comprises an image acquisition unit, a tracking unit, an identification unit and an attendance system display unit. The tracking unit comprises a determining module used for determining a template area corresponding to the target face from the (i-1)th frame image and determining a search area from the ith frame image according to the position of the target face in the (i-1)th frame image. The tracking unit further comprises a feed-forward network module used for extracting the undetermined face features in the search area and the target face features of the target face in the template area, and determining the target position of the target face in the search area according to the target face features and the undetermined face features. The identification unit is used for identifying the identity information of the target face according to the target position and determining whether to open the door according to that identity information. The attendance system display unit is used for determining an attendance result according to the identity information of the target face.

Description

Access control system with attendance checking function, attendance checking method and related device
Technical Field
The invention relates to the technical field of data processing, in particular to an access control system with an attendance checking function, an attendance checking method and a related device.
Background
An access control system is a system for controlling entry and exit passages. It is a modern security management system that integrates image acquisition, automatic identification and modern safety management measures, and is an effective means of realizing security management of the entry and exit passages of an organization.
Companies generally use an access control system to control employee entry into office premises. In the related art, face recognition technology can be used in the access control system: by capturing a face image of an entering person, the system recognizes whether the person is a company employee and then determines whether to let the person in.
However, during face image acquisition only one person may stand in the acquisition area of the access control system, so that the captured image contains a single face; if the captured image contains multiple people, recognition fails. As a result, during the morning rush many employees queue up in the acquisition area, which reduces attendance efficiency.
Disclosure of Invention
In view of the above problems, the present application provides an access control system with an attendance function, an attendance method and a related device, which solve the problem that a captured image containing multiple people cannot be recognized and thereby improve attendance efficiency.
Based on this, the embodiment of the application discloses the following technical scheme:
on the one hand, an embodiment of the present application provides an access control system with an attendance function, the system comprising: an image acquisition unit, a tracking unit, an identification unit and an attendance system display unit;
the image acquisition unit is used for acquiring the ith frame image and the (i-1) th frame image;
the tracking unit comprises a determining module used for determining a template area corresponding to a target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; wherein the target face is one of a plurality of faces in the i-1 frame image;
the tracking unit comprises a feed-forward network module used for extracting the face features to be determined in the search area and the target face features of the target face in the template area; determining a target position of the target face in the search area according to the target face feature and the undetermined face feature;
the identification unit is used for identifying the identity information of the target face according to the target position; determining whether to open a door according to the identity information of the target face;
and the attendance system display unit is used for determining an attendance result according to the identity information of the target face.
Optionally, if the target face is determined from the ith frame image, the tracking unit is further configured to:
and determining the face different from the target face again from the faces in the image of the (i-1) th frame, taking the face different from the target face as the target face, and executing the step of determining the template area corresponding to the target face from the image of the (i-1) th frame.
Optionally, the feed-forward network module includes a first convolution layer, a first pooling layer, a first connection layer, a second convolution layer, a second pooling layer, a third convolution layer, a second connection layer, a fourth convolution layer, a fifth convolution layer, and a fusion layer;
the first convolution layer is used for extracting a first undetermined face feature in the search area and extracting a first target face feature of the target face in the template area;
the first pooling layer is used for reducing the dimensionality of the first undetermined face feature to obtain a second undetermined face feature and reducing the dimensionality of the first target face feature to obtain a second target face feature;
the first connecting layer is used for extracting the second undetermined face feature to obtain a third undetermined face feature and extracting the second target face feature to obtain a third target face feature;
the second convolutional layer is used for extracting the second undetermined face feature to obtain a fourth undetermined face feature and extracting the second target face feature to obtain a fourth target face feature;
the second pooling layer is used for reducing the dimensionality of the fourth undetermined face feature to obtain a fifth undetermined face feature and reducing the dimensionality of the fourth target face feature to obtain a fifth target face feature;
the third convolutional layer is used for extracting the fifth undetermined face feature to obtain a sixth undetermined face feature and extracting the fifth target face feature to obtain a sixth target face feature;
the second connecting layer is used for extracting the sixth undetermined face feature to obtain a seventh undetermined face feature and extracting the sixth target face feature to obtain a seventh target face feature;
the fourth convolution layer is used for extracting the sixth undetermined face feature to obtain an eighth undetermined face feature and extracting the sixth target face feature to obtain an eighth target face feature;
the fifth convolutional layer is used for extracting the eighth undetermined face feature to obtain a ninth undetermined face feature and extracting the eighth target face feature to obtain a ninth target face feature;
and the fusion layer is used for obtaining undetermined face features in the search area according to the third undetermined face feature, the seventh undetermined face feature and the ninth undetermined face feature, and obtaining target face features of a target face in the template area according to the third target face feature, the seventh target face feature and the ninth target face feature.
Optionally, the first connection layer and the second connection layer comprise a hole (dilated) convolution.
Optionally, the attendance system display unit receives the identity information of the target face in real time, and determines an attendance result according to the identity information of the target face and the current time.
On the other hand, the embodiment of the application provides an attendance checking method, the method is used for an access control system, and the method comprises the following steps:
acquiring an ith frame image and an ith-1 frame image;
determining a template area corresponding to a target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; wherein the target face is one of a plurality of faces in the i-1 frame image;
extracting the face features to be determined in the search area and the target face features of the target face in the template area; determining a target position of the target face in the search area according to the target face feature and the undetermined face feature;
identifying the identity information of the target face according to the target position; determining whether to open a door according to the identity information of the target face;
and determining an attendance checking result according to the identity information of the target face.
Optionally, if the target face is determined from the ith frame image, the method further includes:
and determining the face different from the target face again from the faces in the image of the (i-1) th frame, taking the face different from the target face as the target face, and executing the step of determining the template area corresponding to the target face from the image of the (i-1) th frame.
Optionally, the extracting the face feature to be determined in the search region and the target face feature of the target face in the template region includes:
extracting the face features to be determined in the search area and the target face features of the target face in the template area through a neural network model, wherein the neural network model comprises: the first convolution layer, the first pooling layer, the first connection layer, the second convolution layer, the second pooling layer, the third convolution layer, the second connection layer, the fourth convolution layer, the fifth convolution layer and the fusion layer;
the first convolution layer is used for extracting a first undetermined face feature in the search area and extracting a first target face feature of the target face in the template area;
the first pooling layer is used for reducing the dimensionality of the first undetermined face feature to obtain a second undetermined face feature and reducing the dimensionality of the first target face feature to obtain a second target face feature;
the first connecting layer is used for extracting the second undetermined face feature to obtain a third undetermined face feature and extracting the second target face feature to obtain a third target face feature;
the second convolutional layer is used for extracting the second undetermined face feature to obtain a fourth undetermined face feature and extracting the second target face feature to obtain a fourth target face feature;
the second pooling layer is used for reducing the dimensionality of the fourth undetermined face feature to obtain a fifth undetermined face feature and reducing the dimensionality of the fourth target face feature to obtain a fifth target face feature;
the third convolutional layer is used for extracting the fifth undetermined face feature to obtain a sixth undetermined face feature and extracting the fifth target face feature to obtain a sixth target face feature;
the second connecting layer is used for extracting the sixth undetermined face feature to obtain a seventh undetermined face feature and extracting the sixth target face feature to obtain a seventh target face feature;
the fourth convolution layer is used for extracting the sixth undetermined face feature to obtain an eighth undetermined face feature and extracting the sixth target face feature to obtain an eighth target face feature;
the fifth convolutional layer is used for extracting the eighth undetermined face feature to obtain a ninth undetermined face feature and extracting the eighth target face feature to obtain a ninth target face feature;
and the fusion layer is used for obtaining undetermined face features in the search area according to the third undetermined face feature, the seventh undetermined face feature and the ninth undetermined face feature, and obtaining target face features of a target face in the template area according to the third target face feature, the seventh target face feature and the ninth target face feature.
In another aspect, the present application provides a computer device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of the above aspect according to instructions in the program code.
In another aspect, the present application provides a computer-readable storage medium storing a computer program for performing the method of the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of the above aspect.
The advantages of the above technical solutions of the present application lie in the following:
the application provides an access control system with attendance function, this access control system includes acquisition unit, tracking unit, recognition cell and attendance system display element. The image acquisition unit is used for acquiring the ith frame image and the (i-1) th frame image. The tracking unit comprises a determining module and a feedforward network module, wherein the determining module is used for determining a template area corresponding to the target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; the feedforward network module is used for extracting the face features to be determined in the search area and the target face features of the target face in the template area; and determining the target position of the target face in the search area according to the target face feature and the undetermined face feature. The identification unit is used for identifying the identity information of the target face according to the target position; and determining whether to open the door or not according to the identity information of the target face. And the attendance system display unit is used for determining an attendance result according to the identity information of the target face. Therefore, if the acquired image comprises a plurality of people, one of the faces in the i-1 frame is determined as a target face, the position of the target face in the i-1 frame in the i frame is tracked, the identity information of the target face is recognized, and the entrance guard function and the attendance checking function are realized according to the identity information of the target face. The problem that multiple persons cannot be identified in the acquired image is solved, and the attendance checking efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an access control system with attendance checking function according to an embodiment of the present application;
fig. 2 is a schematic diagram of a neural network model architecture according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a tracking unit according to an embodiment of the present application;
fig. 4 is a flowchart of an attendance checking method provided in an embodiment of the present application;
fig. 5 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
An access control system with attendance checking function provided in the embodiment of the present application is described below with reference to fig. 1. Referring to fig. 1, which is a schematic view of an access control system with attendance function provided in an embodiment of the present application, the system includes an acquisition unit 101, a tracking unit 102, an identification unit 103, and an attendance system display unit 104, which are respectively described below.
(1) And the image acquisition unit 101 is used for acquiring the ith frame image and the (i-1) th frame image.
In practical applications, an employee may gradually approach the company gate from a distance. The image acquisition unit 101 of the access control system with the attendance checking function continuously captures images of the employee as he or she approaches; the ith frame image and the (i-1)th frame image of this multi-frame sequence are taken as an example in the following description.
As a possible implementation, the image acquisition unit 101 is further configured to preprocess the captured images to obtain the ith frame image and the (i-1)th frame image.
(2) The tracking unit 102 is mainly used for tracking a target human face, and includes a determining module 1021 and a feed-forward network module 1022, which are respectively described below.
(a) A determining module 1021, configured to determine a template area corresponding to the target face from the i-1 th frame image; and determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image.
The target face is one of the faces in the (i-1)th frame image: for example, the faces in the (i-1)th frame image are detected through feature extraction, face recognition and similar means, and one of them is then randomly selected as the target face for subsequent tracking and recognition. The template region is the region of the (i-1)th frame image that contains the target face; its size is not specifically limited in this embodiment of the application and may be, for example, 127 × 127 × 3.
Since a face moves little between adjacent frames, a region may be expanded outwards (for example, by a factor of two in each direction) around the same position in the ith frame image as the target face occupied in the (i-1)th frame image, and the search region is then obtained by candidate-box cropping and size transformation of the ith frame image.
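A minimal NumPy sketch of the two crops follows. The 127 and 255 sizes follow the example above, while the `crop_region` helper, the frame sizes and the edge-padding strategy are illustrative assumptions, not from the patent:

```python
import numpy as np

def crop_region(frame, center, size):
    """Crop a size x size window centred on `center`, padding with edge
    values when the window extends past the frame border."""
    half = size // 2
    cy, cx = center
    padded = np.pad(frame, ((half, half), (half, half), (0, 0)), mode="edge")
    # index cy in the original frame is index cy + half in the padded frame,
    # so padded[cy : cy + size] is centred on the original (cy, cx)
    return padded[cy:cy + size, cx:cx + size]

frame_prev = np.zeros((480, 640, 3), dtype=np.uint8)   # frame i-1
frame_curr = np.zeros((480, 640, 3), dtype=np.uint8)   # frame i
target_center = (200, 300)   # position of the target face in frame i-1

template = crop_region(frame_prev, target_center, 127)  # template region
search = crop_region(frame_curr, target_center, 255)    # enlarged search region
```

Because the search window is roughly twice the template size in each direction, small inter-frame motion of the face stays inside it.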
(b) A feed-forward network module 1022, configured to extract a to-be-determined face feature in the search area and a target face feature of a target face in the template area; and determining the target position of the target face in the search area according to the target face feature and the face feature to be determined.
The feed-forward network module 1022 is mainly configured to extract image features, and determine the position of the target face in the search region by comparing the face features to be determined in the search region with the target face features in the template region, that is, determine the target face from multiple persons included in the i-th frame of image.
The structure of the feed-forward network module is not specifically limited in the embodiments of the present application; a feed-forward network module containing a trained neural network model is described below as an example.
The neural network model comprises a first convolutional layer, a first pooling layer, a first connecting layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a second connecting layer, a fourth convolutional layer, a fifth convolutional layer and a fusion layer.
Referring to fig. 2, the figure is a schematic diagram of a neural network model architecture provided in an embodiment of the present application.
Inputting the search area into a first convolution layer, extracting a first to-be-determined face feature of the search area through the first convolution layer, inputting the template area into the first convolution layer, and extracting a first target face feature of a target face in the template area through the first convolution layer.
The method comprises the steps of inputting a first to-be-determined face feature into a first pooling layer, reducing the first to-be-determined face feature through the first pooling layer to obtain a second to-be-determined face feature, inputting a first target face feature into the first pooling layer, and reducing the dimensionality of the first target face feature through the first pooling layer to obtain a second target face feature.
Inputting the second undetermined face features into the first connecting layer, extracting the second undetermined face features through the first connecting layer to obtain third undetermined face features, inputting the second target face features into the first connecting layer, and extracting the second target face features through the first connecting layer to obtain third target face features.
Inputting the second undetermined face feature into a second convolution layer, extracting the second undetermined face feature through the second convolution layer to obtain a fourth undetermined face feature, inputting the second target face feature into the second convolution layer, extracting the second target face feature through the second convolution layer to obtain a fourth target face feature.
Inputting the fourth undetermined face feature into the second pooling layer, reducing the dimensionality of the fourth undetermined face feature through the second pooling layer to obtain a fifth undetermined face feature, inputting the fourth target face feature into the second pooling layer, and reducing the dimensionality of the fourth target face feature through the second pooling layer to obtain a fifth target face feature.
Inputting the fifth undetermined face feature into a third convolutional layer, extracting the fifth undetermined face feature through the third convolutional layer to obtain a sixth undetermined face feature, inputting the fifth target face feature into the third convolutional layer, and extracting the fifth target face feature through the third convolutional layer to obtain a sixth target face feature.
Inputting the sixth undetermined face feature into the second connecting layer, extracting the sixth undetermined face feature through the second connecting layer to obtain a seventh undetermined face feature, inputting the sixth target face feature into the second connecting layer, and extracting the sixth target face feature through the second connecting layer to obtain a seventh target face feature.
Inputting the sixth undetermined face feature into a fourth convolutional layer, extracting the sixth undetermined face feature through the fourth convolutional layer to obtain an eighth undetermined face feature, inputting the sixth target face feature into the fourth convolutional layer, and extracting the sixth target face feature through the fourth convolutional layer to obtain an eighth target face feature.
Inputting the eighth undetermined face feature into a fifth convolutional layer, extracting the eighth undetermined face feature through the fifth convolutional layer to obtain a ninth undetermined face feature, inputting the eighth target face feature into the fifth convolutional layer, and extracting the eighth target face feature through the fifth convolutional layer to obtain a ninth target face feature.
Inputting the third to-be-determined face feature, the seventh to-be-determined face feature and the ninth to-be-determined face feature into the fusion layer, obtaining the to-be-determined face feature in the search area through the fusion layer, inputting the third target face feature, the seventh target face feature and the ninth target face feature into the fusion layer, and obtaining the target face feature of the target face in the template area through the fusion layer.
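The layer sequence above shrinks the spatial size of the feature maps step by step, and this shrinkage can be reproduced with a small size calculator. The kernel sizes and strides below are assumptions in the style of an AlexNet/SiamFC-like backbone, since the patent's actual per-layer sizes appear only in Table 1, which is reproduced as an image:

```python
# (layer name, kernel size, stride); a valid convolution or pooling shrinks
# the spatial size as floor((in - kernel) / stride) + 1.  All values assumed.
LAYERS = [
    ("conv1", 11, 2),
    ("pool1", 3, 2),
    ("conv2", 5, 1),
    ("pool2", 3, 2),
    ("conv3", 3, 1),
    ("conv4", 3, 1),
    ("conv5", 3, 1),
]

def spatial_sizes(input_size):
    """Spatial size of the feature map after each layer."""
    sizes = {}
    s = input_size
    for name, kernel, stride in LAYERS:
        s = (s - kernel) // stride + 1
        sizes[name] = s
    return sizes

template_sizes = spatial_sizes(127)   # template branch
search_sizes = spatial_sizes(255)     # search branch
```

Under these assumed hyperparameters the template branch ends at a 6 × 6 feature map and the search branch at 22 × 22, which is consistent with cross-correlating the two branches into a 17 × 17 response map.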
In this way, the undetermined face feature is obtained by fusing the third undetermined face feature (representing shallow features), the seventh undetermined face feature (middle layers) and the ninth undetermined face feature (deep layers), and is therefore more accurate; likewise, the target face feature fuses the third, seventh and ninth target face features across the shallow, middle and deep layers and is more accurate.
As a possible implementation, the first connecting layer and the second connecting layer include a hole (dilated) convolution, which reduces the amount of computation while maintaining the receptive field.
Referring to fig. 3, the figure is a schematic diagram of a tracking unit provided in an embodiment of the present application. The tracking unit comprises two identical neural network models; fig. 3 shows their internal structure.
And respectively inputting the template area and the search area into corresponding neural network models, and obtaining undetermined human face features and target human face features through the neural network models. Wherein, compared to fig. 2, the first connection layer and the second connection layer in the neural network in fig. 3 include a hole convolution.
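The effect of the hole (dilated) convolution can be illustrated with a small NumPy sketch: dilating a 3 × 3 kernel enlarges its receptive field to 5 × 5 while keeping the same nine multiplications per output position. The `conv2d` helper and the toy input are illustrative assumptions, not from the patent:

```python
import numpy as np

def conv2d(x, kernel, dilation=1):
    """Valid 2-D cross-correlation with an optionally dilated kernel."""
    k = kernel.shape[0]
    span = dilation * (k - 1) + 1          # receptive field of one application
    h = x.shape[0] - span + 1
    w = x.shape[1] - span + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            # sample the input every `dilation` pixels inside the span
            patch = x[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))

dense = conv2d(x, k)                 # 3x3 receptive field, 5x5 output
dilated = conv2d(x, k, dilation=2)   # 5x5 receptive field, still 9 multiplies
```

A stack of dilated layers therefore covers the same receptive field as larger dense kernels at a fraction of the cost.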
If the size of the template region is 127 × 127 × 3 and the size of the search region is 255 × 255 × 3, the input and output sizes of each layer of the neural network model shown in fig. 3 are listed in Table 1.
TABLE 1
[Table 1 is reproduced as an image in the original publication; it lists the input and output size of each layer of the neural network model.]
Because the template region and the search region pass through the same neural network model, the resulting target face features and undetermined face features emphasize the same characteristics. The target face features and the undetermined face features are input into the cross-correlation layer to obtain a correlation map of the target face over the search area; the position with the maximum correlation is the position of the target face in the search area.
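The cross-correlation step can be sketched in NumPy as follows. This is a toy illustration: the `response_map` helper, the 6 × 6 and 22 × 22 feature sizes, and the planted-target setup are assumptions, not from the patent:

```python
import numpy as np

def response_map(search_feat, template_feat):
    """Slide the template feature over the search feature and record the
    correlation at every offset (the cross-correlation step)."""
    th, tw = template_feat.shape
    h = search_feat.shape[0] - th + 1
    w = search_feat.shape[1] - tw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(search_feat[i:i + th, j:j + tw] * template_feat)
    return out

# Toy features: plant a copy of the template inside an empty search map.
rng = np.random.default_rng(0)
template_feat = rng.random((6, 6))
search_feat = np.zeros((22, 22))
search_feat[10:16, 4:10] = template_feat   # target located at offset (10, 4)

resp = response_map(search_feat, template_feat)
target_pos = np.unravel_index(np.argmax(resp), resp.shape)
```

The argmax of the response map recovers the planted offset (10, 4), mirroring how the maximum-correlation position gives the target face's location in the search area.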
(3) The identification unit 103 is used for identifying the identity information of the target face according to the target position; and determining whether to open the door or not according to the identity information of the target face.
The recognition unit 103 takes the target position as input: the tracked target face is fed to the recognition unit for identity recognition, which the access control system completes automatically. If the target face belongs to a registered company employee, the door is opened; if not, the door is not opened. The recognition result is stored and pushed to the attendance system as the basis for attendance card-punching.
(4) And the attendance system display unit 104 is used for determining an attendance result according to the identity information of the target face.
The attendance system display unit 104 is mainly used for receiving real-time card-punching data from the access control system, interacting with the user, and analyzing and displaying the user's attendance in real time.
The attendance system handles user interaction and mainly analyzes the data pushed by third-party systems, including the card-punching data pushed by the access control system. The attendance system display comprises a user side and a management side: the user side shows the personal attendance record, where the daily card-punching time and daily attendance can be checked; the management side shows, organization by organization, the attendance of the employees under each organization, including leave, late arrival, early departure, business trips and other specific data.
As a possible implementation, consider that some attendance systems analyze data offline: the data is analyzed periodically after a batch job starts, so a manager cannot check the attendance of the staff in real time, and if an attendance anomaly occurs, the staff cannot handle it in time, which affects assessment. Therefore, the card punching, attendance anomalies, business trips, various leave requests, and other conditions of the staff in each department of the head office are analyzed and recorded in real time, and data pushed by other external office systems is received and analyzed, so that the attendance manager can produce real-time attendance statistics and the staff can handle attendance anomalies in real time.
As a possible implementation manner, if the target face is determined from the ith frame image, the tracking unit 102 is further configured to:
A face different from the target face is determined again from the faces in the (i-1)-th frame image, the face different from the target face is taken as the new target face, and the step of determining the template area corresponding to the target face from the (i-1)-th frame image is executed again.
In this way, by sequentially recognizing the faces appearing in the frame images, the limitation in the related art that the access control system captures an image of a single target and recognizes a single target is removed, the card punching speed at the access control is increased, and attendance efficiency during the morning peak is improved.
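The sequential multi-face handling can be sketched as a loop in which each detected face becomes the target once. The tracker, recognizer, and door policy below are trivial stand-ins for the units described above, used only to show the control flow:

```python
def process_frame_pair(faces_prev, track, recognize, open_door):
    """For each face detected in frame i-1, take it in turn as the target
    face, track it into frame i, and recognize it -- so one captured image
    containing several people yields several recognitions."""
    results = []
    for face in faces_prev:          # each face becomes the target once
        position = track(face)       # target position in frame i
        identity = recognize(position)
        results.append((identity, open_door(identity)))
    return results

# Hypothetical stand-ins for the tracking/recognition units:
employees = {"alice", "bob"}
track = lambda face: face            # trivial tracker for illustration
recognize = lambda pos: pos          # position already labels identity here
open_door = lambda ident: ident in employees

print(process_frame_pair(["alice", "carol"], track, recognize, open_door))
# -> [('alice', True), ('carol', False)]
```

One image with several people thus produces one recognition and one door decision per person, instead of forcing each person to queue for a separate capture.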
According to the technical scheme, the access control system with the attendance checking function comprises an acquisition unit, a tracking unit, an identification unit, and an attendance system display unit. The acquisition unit is used for acquiring the i-th frame image and the (i-1)-th frame image. The tracking unit comprises a determining module and a feedforward network module. The determining module is used for determining a template area corresponding to the target face from the (i-1)-th frame image, and determining a search area from the i-th frame image according to the position of the target face in the (i-1)-th frame image. The feedforward network module is used for extracting the undetermined face features in the search area and the target face features of the target face in the template area, and determining the target position of the target face in the search area according to the target face features and the undetermined face features. The identification unit is used for identifying the identity information of the target face according to the target position, and determining whether to open the door according to the identity information of the target face. The attendance system display unit is used for determining an attendance result according to the identity information of the target face. Therefore, if the acquired image includes a plurality of people, one of the faces in the (i-1)-th frame is determined as the target face, the position of that face in the i-th frame is tracked, the identity information of the target face is recognized, and the access control function and the attendance checking function are realized according to the identity information of the target face. The problem that multiple persons in the acquired image cannot be identified is solved, and attendance checking efficiency is improved.
The access control system not only realizes the access control function but also the attendance checking function, achieving the goal of intelligently integrating access control and attendance card punching through target tracking and target recognition. The access control is managed through target tracking and target recognition, and staff attendance card punching is completed through real-time data pushing, which greatly reduces queuing time at the access control card punching system, while the attendance data is updated in real time.
The embodiment of the application provides an access control system with an attendance checking function, and also provides an attendance checking method. As shown in fig. 4, the method comprises the following steps:
s401: acquiring an ith frame image and an ith-1 frame image;
s402: determining a template area corresponding to a target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; the target face is one of a plurality of faces in the i-1 frame image;
s403: extracting the face features to be determined in the search area and the target face features of the target face in the template area; determining a target position of the target face in the search area according to the target face feature and the undetermined face feature;
s404: identifying the identity information of the target face according to the target position; determining whether to open a door according to the identity information of the target face;
s405: and determining an attendance checking result according to the identity information of the target face.
As a possible implementation manner, if the target face is determined from the ith frame image, the method further includes:
A face different from the target face is determined again from the faces in the (i-1)-th frame image, the face different from the target face is taken as the new target face, and the step of determining the template area corresponding to the target face from the (i-1)-th frame image is executed again.
As a possible implementation manner, the extracting of the undetermined face features in the search area and the target face features of the target face in the template area includes:
extracting the undetermined face features in the search area and the target face features of the target face in the template area through a neural network model, wherein the neural network model comprises: a first convolution layer, a first pooling layer, a first connection layer, a second convolution layer, a second pooling layer, a third convolution layer, a second connection layer, a fourth convolution layer, a fifth convolution layer, and a fusion layer;
the first convolution layer is used for extracting a first undetermined face feature in the search area and extracting a first target face feature of the target face in the template area;
the first pooling layer is used for reducing the dimensionality of the first undetermined face feature to obtain a second undetermined face feature and reducing the dimensionality of the first target face feature to obtain a second target face feature;
the first connecting layer is used for extracting the second undetermined face feature to obtain a third undetermined face feature and extracting the second target face feature to obtain a third target face feature;
the second convolutional layer is used for extracting the second undetermined face feature to obtain a fourth undetermined face feature and extracting the second target face feature to obtain a fourth target face feature;
the second pooling layer is used for reducing the dimensionality of the fourth undetermined face feature to obtain a fifth undetermined face feature and reducing the dimensionality of the fourth target face feature to obtain a fifth target face feature;
the third convolutional layer is used for extracting the fifth undetermined face feature to obtain a sixth undetermined face feature and extracting the fifth target face feature to obtain a sixth target face feature;
the second connecting layer is used for extracting the sixth undetermined face feature to obtain a seventh undetermined face feature and extracting the sixth target face feature to obtain a seventh target face feature;
the fourth convolution layer is used for extracting the sixth undetermined face feature to obtain an eighth undetermined face feature and extracting the sixth target face feature to obtain an eighth target face feature;
the fifth convolutional layer is used for extracting the eighth undetermined face feature to obtain a ninth undetermined face feature and extracting the eighth target face feature to obtain a ninth target face feature;
and the fusion layer is used for obtaining undetermined face features in the search area according to the third undetermined face features, the seventh undetermined face features and the ninth undetermined face features, and obtaining target face features of the target face in the template area according to the third target face features, the seventh target face features and the ninth target face features.
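The layer sequence above can be sketched structurally. The stand-in layers below (pooling halves the spatial size; convolutions and connection layers are identity maps) are assumptions chosen only to show the topology — which features branch where, and that the same network serves both the template branch and the search branch — not the patent's actual kernels:

```python
import numpy as np

def forward(x):
    """Run one input (template or search region) through the layer
    sequence described above; the same weights serve both branches."""
    pool = lambda a: a[::2, ::2]   # stand-in pooling: halve spatial size
    conv = lambda a: a * 1.0       # stand-in conv / connection layer
    f1 = conv(x)                   # first convolution layer
    f2 = pool(f1)                  # first pooling layer
    f3 = conv(f2)                  # first connection layer  -> kept
    f4 = conv(f2)                  # second convolution layer
    f5 = pool(f4)                  # second pooling layer
    f6 = conv(f5)                  # third convolution layer
    f7 = conv(f6)                  # second connection layer -> kept
    f8 = conv(f6)                  # fourth convolution layer
    f9 = conv(f8)                  # fifth convolution layer -> kept
    # fusion layer: combine the 3rd, 7th and 9th features (how they are
    # combined, e.g. concatenation after resizing, is not fixed here)
    return [f3, f7, f9]

template_feats = forward(np.zeros((16, 16)))
search_feats = forward(np.zeros((32, 32)))
print([f.shape for f in template_feats])   # [(8, 8), (4, 4), (4, 4)]
print([f.shape for f in search_feats])     # [(16, 16), (8, 8), (8, 8)]
```

The kept features come from different depths, so the fused output mixes shallow detail (third feature) with deeper semantics (seventh and ninth features), which matches the multi-depth fusion the fusion layer performs.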
As a possible implementation, the first connection layer and the second connection layer comprise hole convolution, i.e. dilated convolution.
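A hole (dilated) convolution spaces its kernel taps apart, so the same number of parameters covers a wider receptive field. A minimal 1-D sketch (the layers in the patent are 2-D; the kernel and signal here are made up for illustration):

```python
def dilated_conv1d(x, kernel, dilation):
    """1-D convolution with holes: kernel taps are spaced `dilation`
    apart, enlarging the receptive field without extra parameters."""
    span = (len(kernel) - 1) * dilation + 1   # receptive field width
    out = []
    for i in range(len(x) - span + 1):
        out.append(sum(kernel[j] * x[i + j * dilation]
                       for j in range(len(kernel))))
    return out, span

x = list(range(10))           # [0, 1, ..., 9]
y1, rf1 = dilated_conv1d(x, [1, 1, 1], dilation=1)
y2, rf2 = dilated_conv1d(x, [1, 1, 1], dilation=2)
print(rf1, rf2)               # 3 5 -- same 3 taps, wider receptive field
print(y1[0], y2[0])           # 0+1+2 = 3 ; 0+2+4 = 6
```

With dilation 2, the first output sums x[0], x[2], x[4] instead of three adjacent samples: the kernel still has three parameters but sees a window of five positions.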
As a possible implementation manner, the determining an attendance result according to the identity information of the target face includes:
and receiving the identity information of the target face in real time, and determining an attendance result according to the identity information of the target face and the current time.
According to the technical scheme, the i-th frame image and the (i-1)-th frame image are acquired; a template area corresponding to the target face is determined from the (i-1)-th frame image; a search area is determined from the i-th frame image according to the position of the target face in the (i-1)-th frame image; the undetermined face features in the search area and the target face features of the target face in the template area are extracted; the target position of the target face in the search area is determined according to the target face features and the undetermined face features; the identity information of the target face is identified according to the target position; whether to open the door is determined according to the identity information of the target face; and the attendance result is determined according to the identity information of the target face. Therefore, if the acquired image includes a plurality of people, one of the faces in the (i-1)-th frame is determined as the target face, the position of that face in the i-th frame is tracked, the identity information of the target face is recognized, and the access control function and the attendance checking function are realized according to the identity information of the target face. The problem that multiple persons in the acquired image cannot be identified is solved, and attendance checking efficiency is improved.
An embodiment of the present application further provides a computer device. Referring to fig. 5, which shows a structural diagram of a computer device provided in an embodiment of the present application, the device includes a processor 510 and a memory 520:
the memory 520 is used for storing program code and transmitting the program code to the processor;
the processor 510 is configured to execute any of the attendance checking methods provided by the above embodiments according to instructions in the program code.
The embodiment of the application provides a computer-readable storage medium for storing a computer program, and the computer program is used for executing any attendance checking method provided by the above embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the attendance checking method provided in the various alternative implementations of the above aspects.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An access control system with an attendance checking function, characterized in that the system comprises: an acquisition unit, a tracking unit, an identification unit and an attendance system display unit;
the acquisition unit is used for acquiring an i-th frame image and an (i-1)-th frame image;
the tracking unit comprises a determining module used for determining a template area corresponding to a target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; wherein the target face is one of a plurality of faces in the i-1 frame image;
the tracking unit comprises a feed-forward network module used for extracting the face features to be determined in the search area and the target face features of the target face in the template area; determining a target position of the target face in the search area according to the target face feature and the undetermined face feature;
the identification unit is used for identifying the identity information of the target face according to the target position; determining whether to open a door according to the identity information of the target face;
and the attendance system display unit is used for determining an attendance result according to the identity information of the target face.
2. The system of claim 1, wherein if the target face is determined from the i-th frame image, the tracking unit is further configured to:
and determining the face different from the target face again from the faces in the image of the (i-1) th frame, taking the face different from the target face as the target face, and executing the step of determining the template area corresponding to the target face from the image of the (i-1) th frame.
3. The system of claim 1, wherein the feed-forward network module comprises a first convolutional layer, a first pooling layer, a first connection layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a second connection layer, a fourth convolutional layer, a fifth convolutional layer, and a fusion layer;
the first convolution layer is used for extracting a first undetermined face feature in the search area and extracting a first target face feature of the target face in the template area;
the first pooling layer is used for reducing the dimensionality of the first undetermined face feature to obtain a second undetermined face feature and reducing the dimensionality of the first target face feature to obtain a second target face feature;
the first connecting layer is used for extracting the second undetermined face feature to obtain a third undetermined face feature and extracting the second target face feature to obtain a third target face feature;
the second convolutional layer is used for extracting the second undetermined face feature to obtain a fourth undetermined face feature and extracting the second target face feature to obtain a fourth target face feature;
the second pooling layer is used for reducing the dimensionality of the fourth undetermined face feature to obtain a fifth undetermined face feature and reducing the dimensionality of the fourth target face feature to obtain a fifth target face feature;
the third convolutional layer is used for extracting the fifth undetermined face feature to obtain a sixth undetermined face feature and extracting the fifth target face feature to obtain a sixth target face feature;
the second connecting layer is used for extracting the sixth undetermined face feature to obtain a seventh undetermined face feature and extracting the sixth target face feature to obtain a seventh target face feature;
the fourth convolution layer is used for extracting the sixth undetermined face feature to obtain an eighth undetermined face feature and extracting the sixth target face feature to obtain an eighth target face feature;
the fifth convolutional layer is used for extracting the eighth undetermined face feature to obtain a ninth undetermined face feature and extracting the eighth target face feature to obtain a ninth target face feature;
and the fusion layer is used for obtaining undetermined face features in the search area according to the third undetermined face features, the seventh undetermined face features and the ninth undetermined face features, and obtaining target face features of the target face in the template area according to the third target face features, the seventh target face features and the ninth target face features.
4. The system of claim 3, wherein the first connection layer and the second connection layer comprise hole (dilated) convolution.
5. The system of claim 1, wherein the attendance system display unit receives the identity information of the target face in real time and determines an attendance result according to the identity information of the target face and the current time.
6. An attendance checking method, which is used for an access control system, and comprises the following steps:
acquiring an ith frame image and an ith-1 frame image;
determining a template area corresponding to a target face from the i-1 frame image; determining a search area from the ith frame image according to the position of the target face in the ith-1 frame image; wherein the target face is one of a plurality of faces in the i-1 frame image;
extracting the face features to be determined in the search area and the target face features of the target face in the template area; determining a target position of the target face in the search area according to the target face feature and the undetermined face feature;
identifying the identity information of the target face according to the target position; determining whether to open a door according to the identity information of the target face;
and determining an attendance checking result according to the identity information of the target face.
7. The method of claim 6, wherein if the target face is determined from the ith frame of image, the method further comprises:
and determining the face different from the target face again from the faces in the image of the (i-1) th frame, taking the face different from the target face as the target face, and executing the step of determining the template area corresponding to the target face from the image of the (i-1) th frame.
8. The method of claim 6, wherein the extracting the undetermined face features in the search area and the target face features of the target face in the template area comprises:
extracting the undetermined face features in the search area and the target face features of the target face in the template area through a neural network model, wherein the neural network model comprises: a first convolution layer, a first pooling layer, a first connection layer, a second convolution layer, a second pooling layer, a third convolution layer, a second connection layer, a fourth convolution layer, a fifth convolution layer, and a fusion layer;
the first convolution layer is used for extracting a first undetermined face feature in the search area and extracting a first target face feature of the target face in the template area;
the first pooling layer is used for reducing the dimensionality of the first undetermined face feature to obtain a second undetermined face feature and reducing the dimensionality of the first target face feature to obtain a second target face feature;
the first connecting layer is used for extracting the second undetermined face feature to obtain a third undetermined face feature and extracting the second target face feature to obtain a third target face feature;
the second convolutional layer is used for extracting the second undetermined face feature to obtain a fourth undetermined face feature and extracting the second target face feature to obtain a fourth target face feature;
the second pooling layer is used for reducing the dimensionality of the fourth undetermined face feature to obtain a fifth undetermined face feature and reducing the dimensionality of the fourth target face feature to obtain a fifth target face feature;
the third convolutional layer is used for extracting the fifth undetermined face feature to obtain a sixth undetermined face feature and extracting the fifth target face feature to obtain a sixth target face feature;
the second connecting layer is used for extracting the sixth undetermined face feature to obtain a seventh undetermined face feature and extracting the sixth target face feature to obtain a seventh target face feature;
the fourth convolution layer is used for extracting the sixth undetermined face feature to obtain an eighth undetermined face feature and extracting the sixth target face feature to obtain an eighth target face feature;
the fifth convolutional layer is used for extracting the eighth undetermined face feature to obtain a ninth undetermined face feature and extracting the eighth target face feature to obtain a ninth target face feature;
and the fusion layer is used for obtaining undetermined face features in the search area according to the third undetermined face feature, the seventh undetermined face feature and the ninth undetermined face feature, and obtaining target face features of a target face in the template area according to the third target face feature, the seventh target face feature and the ninth target face feature.
9. A computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of any of claims 6-8 according to instructions in the program code.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program for performing the method of any of claims 6-8.
CN202210302321.1A 2022-03-25 2022-03-25 Access control system with attendance checking function, attendance checking method and related device Pending CN114613058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210302321.1A CN114613058A (en) 2022-03-25 2022-03-25 Access control system with attendance checking function, attendance checking method and related device


Publications (1)

Publication Number Publication Date
CN114613058A true CN114613058A (en) 2022-06-10

Family

ID=81866416


Country Status (1)

Country Link
CN (1) CN114613058A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633258A (en) * 2017-08-21 2018-01-26 北京精密机电控制设备研究所 A kind of deep learning identifying system and method based on feed-forward character extraction
CN108090403A (en) * 2016-11-22 2018-05-29 上海银晨智能识别科技有限公司 A kind of face dynamic identifying method and system based on 3D convolutional neural networks
CN110211266A (en) * 2019-05-29 2019-09-06 甘肃万华金慧科技股份有限公司 A kind of gate inhibition's face identification system and method
KR20190137384A (en) * 2018-06-01 2019-12-11 서울대학교산학협력단 Attendance check system and method thereof
WO2020001083A1 (en) * 2018-06-30 2020-01-02 东南大学 Feature multiplexing-based face recognition method
CN110852254A (en) * 2019-11-08 2020-02-28 杭州网易云音乐科技有限公司 Face key point tracking method, medium, device and computing equipment
CN111612943A (en) * 2020-04-24 2020-09-01 上海图丽信息技术有限公司 Face recognition entrance guard attendance system
WO2020244174A1 (en) * 2019-06-05 2020-12-10 深圳云天励飞技术有限公司 Face recognition method, apparatus and device, and computer readable storage medium
CN113553990A (en) * 2021-08-09 2021-10-26 深圳智必选科技有限公司 Method and device for tracking and identifying multiple faces, computer equipment and storage medium
CN113723375A (en) * 2021-11-02 2021-11-30 杭州魔点科技有限公司 Double-frame face tracking method and system based on feature extraction



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination