CN109308584A - Non-sensing attendance system and method - Google Patents

Non-sensing attendance system and method

Info

Publication number
CN109308584A
CN109308584A · CN201811137079.7A
Authority
CN
China
Prior art keywords
attendance
cloud server
noninductive
face
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811137079.7A
Other languages
Chinese (zh)
Inventor
陈晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jooan Technology Co Ltd
Original Assignee
Shenzhen Jooan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jooan Technology Co Ltd
Priority to CN201811137079.7A
Publication of CN109308584A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • G06Q10/063114Status monitoring or status determination for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Operations Research (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of face recognition, and in particular to a non-sensing attendance system and method. The non-sensing attendance system includes an attendance recording terminal, a cloud server and a management terminal; the cloud server communicates with the attendance terminal and with the management terminal respectively. The cloud server receives the face image information of the person being checked in, extracted by the attendance terminal; performs face recognition on that information and generates entry/exit attendance information; and feeds the entry/exit attendance information back to the management terminal. With the proposed system and method, the person being checked in no longer needs to perform a deliberate check-in action: simply walking normally past the position where the fast-video-capture attendance terminal is installed is enough for attendance to be recorded without the person noticing. The cloud server performs face recognition and comparison on the video data captured by the attendance terminal and forms the entry/exit attendance information.

Description

Non-sensing attendance system and method
Technical field
The present invention relates to the field of face recognition, and in particular to a non-sensing attendance system and method.
Background technique
In existing attendance systems, fingerprint recognition is criticised because fingerprints are easy to copy. Prior-art attendance systems based on face recognition require the person being checked in to cooperate actively: templates must be enrolled from several angles, and at check-in time the person must stand at a designated position, within a given angular range, in front of the attendance recorder to complete the check-in. First, such systems depend on the person's active cooperation, so their degree of intelligence is low. Second, beyond the cooperative check-in itself they cannot present the person's attendance over the whole day; for example, someone may leave the premises immediately after checking in, or unidentified persons may enter without authorisation. Existing attendance systems cannot solve these problems.
Summary of the invention
In view of the defects of the prior art, the present invention provides a non-sensing attendance system and method intended to overcome these deficiencies.
To achieve the above goal, in a first aspect the present invention provides a non-sensing attendance system comprising an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively. The cloud server receives the face image information of the person being checked in, extracted by the attendance terminal; the cloud server performs face recognition on the face image information and generates entry/exit attendance information; the cloud server feeds the entry/exit attendance information back to the management terminal.
In a second aspect, the present invention provides a non-sensing attendance method suitable for the non-sensing attendance system of the first aspect. The non-sensing attendance system comprises an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively. The non-sensing attendance method comprises the following steps: the cloud server receives the face image information of the person being checked in, extracted by the attendance terminal; the cloud server performs face recognition on the face image information and generates entry/exit attendance information; the cloud server feeds the entry/exit attendance information back to the management terminal.
The beneficial effects of the present invention are as follows: with the proposed non-sensing attendance system and method, the person being checked in no longer needs to perform an explicit check-in action. Simply walking normally past the position where the fast-video-capture attendance terminal is installed is enough for attendance to be recorded without the person noticing; the cloud server performs face recognition and comparison on the video data captured by the attendance terminal and forms the entry/exit attendance information.
Detailed description of the invention
Fig. 1 is a schematic diagram of the non-sensing attendance system of an embodiment of the present invention;
Fig. 2 is a schematic diagram of a face shape according to an embodiment of the present invention;
Fig. 3 is a flow chart of the non-sensing attendance method of an embodiment of the present invention.
Specific embodiment
Specific embodiments of the present invention are described in detail below. It should be noted that the embodiments described herein are for illustration only and are not intended to limit the invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; it will be apparent to those skilled in the art, however, that the invention may be practised without these specific details. In other instances, well-known circuits, software or methods are not described in detail in order not to obscure the present invention.
Throughout the specification, a reference to "one embodiment", "an embodiment", "an example" or "example" means that a particular feature, structure or characteristic described in connection with that embodiment or example is included in at least one embodiment of the present invention. The phrases "in one embodiment", "in an embodiment", "an example" or "example" appearing in various places throughout the specification therefore do not necessarily all refer to the same embodiment or example. Furthermore, particular features, structures or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. In addition, those of ordinary skill in the art should understand that the drawings provided herein are for purposes of illustration and are not necessarily drawn to scale.
As shown in Fig. 1, the non-sensing attendance system of the first embodiment of the present invention includes an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively. The cloud server receives the face image information of the person being checked in, extracted by the attendance terminal; the cloud server performs face recognition on the face image information and generates entry/exit attendance information; the cloud server feeds the entry/exit attendance information back to the management terminal.
The attendance recording terminal includes a housing, a global-shutter image sensor integrated in the housing, and a video master-control IC, the signal output of the global-shutter sensor being connected to the signal input of the video master-control IC. The cloud server may be a single server or a server cluster. The management terminal may be a computer, a mobile phone, or another mobile or fixed terminal.
The cloud server receiving the face image information of the person being checked in, as captured by the attendance terminal, mainly means that the cloud server receives the face image information that the attendance terminal extracts from real-time pictures, the real-time pictures being frames of the person being checked in captured by the attendance terminal at a rate of 240 frames per second.
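For illustration only (this sketch is not part of the original disclosure), the terminal-side capture loop described above might look as follows in Python, assuming the global-shutter sensor is exposed to the video master-control IC as an ordinary OpenCV capture device; the device index, the requested frame rate and the upload_face_info helper are illustrative assumptions, not features fixed by the patent.

```python
# Sketch of the terminal-side capture loop (assumptions: the global-shutter
# sensor appears as OpenCV device 0, and upload_face_info is a hypothetical
# helper that extracts the face information and sends it to the cloud server).
import cv2

def run_capture(upload_face_info):
    cap = cv2.VideoCapture(0)              # hypothetical device index
    cap.set(cv2.CAP_PROP_FPS, 240)         # request the 240 fps mentioned above
    try:
        while True:
            ok, frame = cap.read()         # one real-time picture
            if not ok:
                break
            upload_face_info(frame)        # face extraction + upload to the cloud server
    finally:
        cap.release()
```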
In this embodiment, the extraction of the face image information of the person being checked in from the real-time pictures by the attendance terminal specifically includes the following steps.
Obtain an image to be detected. The image to be detected is an image in which a face shape needs to be detected, where the face shape may include the face contour and the positions and shapes of the facial features. The face shape can be represented by the positions of feature points marked on the face, as shown in Fig. 2, which is a schematic diagram of a face shape composed of feature points in one embodiment: every labelled point in Fig. 2 is a feature point, and the face shape is formed from the positions of these feature points. Feature points 1 to 17 represent the face contour, points 18 to 27 the eyebrow positions and shapes, points 28 to 37 the nose position and shape, points 38 to 47 the eye positions and shapes, and points 48 to 67 the mouth position and shape.
In one embodiment, if the image to be detected obtained by the video master-control IC is a colour image, the colour image can be converted into a greyscale image using the corresponding transformation matrix, a greyscale image being an image in which each pixel has only one sample colour. Based on the image characteristics of the greyscale image, the video master-control IC can first roughly detect whether the image to be detected contains a face; if it does, the detected face can be extracted from the greyscale image and placed into a preset unit rectangular region. If the image to be detected contains several faces, each extracted face can be placed into its own preset unit rectangular region and the face shapes are then detected one by one.
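A minimal sketch of this pre-processing step is given below; it uses OpenCV's stock Haar cascade purely as a stand-in rough face detector (the patent does not name a particular detector), and the 128×128 side length of the "unit rectangle" is an assumption made only for the example.

```python
import cv2

UNIT_SIZE = 128  # assumed side length of the preset unit rectangular region

def extract_face_regions(image_bgr):
    """Convert a colour frame to grey, roughly detect faces, and place each
    detected face into a fixed-size unit rectangular region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    regions = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        regions.append(cv2.resize(face, (UNIT_SIZE, UNIT_SIZE)))
    return regions   # one unit-rectangle image per detected face
```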
Obtain the initial shape of the current regression tree in the pre-constructed probability regression model. The probability regression model consists of cascaded random forests: the model may contain multiple stages of random forests, each stage may contain several regression trees, and the stages, as well as the regression trees within a stage, are cascaded. The estimated shape output by one stage is the initial shape of the adjacent next stage, and the estimated shape output by one regression tree is the initial shape of the adjacent next regression tree within the same stage. A regression tree uses a binary tree to divide the prediction space into several subsets; each leaf node corresponds to a different region of that division, and each image entering the regression tree is ultimately assigned to a unique leaf node.
The video master-control IC can obtain a pre-generated model file, parse it, rebuild the cascaded probability regression model from the information contained in the file, and detect the face shape in the face image to be detected according to that model. The information contained in the model file may include the number of stages of random forests, the number of regression trees in each stage, the depth of each regression tree, and the node information of every node in each regression tree.
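The information said to be stored in the model file can be pictured with the following data structures; they are a sketch under the assumptions stated in the comments, not the actual file format used by the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    """One split node; its node information carries the dividing pixel pair."""
    pixel_a: Tuple[int, int]      # coordinates of the first dividing pixel
    pixel_b: Tuple[int, int]      # coordinates of the second dividing pixel
    threshold: float              # split threshold on the intensity difference (assumed rule)

@dataclass
class RegressionTree:
    depth: int
    nodes: List[Node]             # internal split nodes
    leaf_errors: list             # per-leaf shape errors, each with the same layout as the face shape

@dataclass
class RandomForest:
    trees: List[RegressionTree]   # regression trees of one cascade stage

@dataclass
class ProbabilityRegressionModel:
    stages: List[RandomForest]               # cascaded stages of random forests
    mean_shape: List[Tuple[float, float]]    # average shape of the sample set; initial shape of the first tree
```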
For every stage of random forest in the probability regression model, an iterative calculation is performed over every regression tree of that stage, and the detected face shape is finally obtained. When the video master-control IC performs the iterative calculation it needs to obtain the initial shape of the current regression tree in the probability regression model, the current regression tree being the tree for which the estimated shape is being calculated. Further, when parsing the model file the video master-control IC can also obtain the average shape of the sample images in the sample image set used to construct the probability regression model, and use this average shape as the initial shape of the first regression tree of the first stage of the model.
Extract image features from the image to be detected, and calculate the probability of each leaf node of the current regression tree from those features. The video master-control IC can extract image features from the image to be detected according to the node information of each node in the current regression tree, the node information indicating the division rule of the corresponding node. Further, the node information may include the coordinates of a pair of dividing pixels; according to these coordinates the video master-control IC extracts image features at the corresponding positions of the preset unit rectangular region that holds the extracted face. From the image features corresponding to each node of the current regression tree, the video master-control IC calculates the probability of each leaf node, a leaf node being a node of degree zero, i.e. a node without child nodes, also called a terminal node.
Extract the error of each leaf node from the current regression tree. The video master-control IC can read the error of each leaf node of the current regression tree from the model file, the error of a leaf node being the difference between the estimated shape computed at that leaf node and the true shape of the image to be detected; the errors of the leaf nodes are computed from a large number of sample images in the sample image set when the probability regression model is built.
Determine the shape error of the current regression tree from the probabilities and the errors of its leaf nodes. The video master-control IC can weight the error of each leaf node of the current regression tree by that node's probability: it computes the product of each leaf node's probability and its error and accumulates these products to obtain the shape error of the current regression tree. The shape error of the current regression tree is the difference between the estimated shape computed by that tree and the true shape of the image to be detected.
Calculate the estimated shape of the current regression tree from the initial shape and the shape error. The video master-control IC can add the shape error to the initial shape of the current regression tree to obtain its estimated shape. Denoting the estimated shape of the current regression tree by S_k, its initial shape by S_{k-1}, and the computed shape error by ΔS_k, then S_k = S_{k-1} + ΔS_k, where S_{k-1} may be the estimated shape computed by the adjacent previous regression tree.
Take the estimated shape as the initial shape of the adjacent next regression tree and iterate the calculation until the last regression tree of the probability regression model is reached; the estimated shape of the last regression tree is the detected face shape. After the estimated shape of the current regression tree has been calculated, it serves as the initial shape of the adjacent next regression tree; the above steps are repeated to obtain the estimated shape of that tree, which in turn serves as the initial shape of the tree after it, and so on. The iteration proceeds through the probability regression model until the last regression tree of the last stage of random forests has been computed; its estimated shape is the detected face shape. Every stage of random forest, and every regression tree within every stage, brings the estimate closer to the true shape of the image to be detected. Iterating in this way gradually refines the average shape of the initial sample image set towards the true shape of the image to be detected, thereby extracting the face image information of the person being checked in.
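Putting the above together, the iteration over the cascade can be sketched as below; the leaf_distribution callback (returning the leaf-node probabilities of a tree for the current shape) is a hypothetical helper standing in for the feature-extraction step, and the data structures are the ones sketched earlier.

```python
import numpy as np

def detect_face_shape(model, leaf_distribution):
    """Iterate over every regression tree of every cascaded stage, accumulating
    the per-tree correction S_k = S_{k-1} + delta_S_k described above."""
    shape = np.asarray(model.mean_shape, dtype=float)    # initial shape of the very first tree
    for forest in model.stages:                          # every stage of random forest
        for tree in forest.trees:                        # every regression tree in the stage
            probs = leaf_distribution(tree, shape)       # probability of each leaf node
            delta = sum(p * np.asarray(err, dtype=float)
                        for p, err in zip(probs, tree.leaf_errors))  # probability-weighted leaf errors
            shape = shape + delta                        # this estimate seeds the next tree
    return shape                                         # estimate of the last tree = detected face shape
```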
In this embodiment, the face recognition performed by the cloud server according to the face image information specifically includes the following steps.
The cloud server performs feature extraction according to the face image information to obtain features. The cloud server first divides the face image into multiple sub-domains according to the face image information; preferably the image is divided into 3×3 sub-domains, although the number of sub-domains can be chosen according to the actual situation, and in other embodiments the face image may be divided into a different number of sub-domains. The cloud server convolves the sub-domains with k operators to obtain the edge responses in the corresponding k directions. Further, the operators are the 8 Kirsch operators; convolving with the 8 Kirsch operators yields the edge responses in the corresponding 8 directions, m_i = I * M_i, where i = 0, 1, …, 7. The edge responses are then differenced in a preset order and the absolute values are taken, giving the k face grey-level response differences in the k directions; specifically, absolute differences are taken between neighbouring edge responses m_i in a fixed order, giving 8 face grey-level response differences n_j (j = 0, 1, …, 7). Finally, the face grey-level response differences n_j are arranged in ascending order, one of them is taken as the reference (control) value, every difference greater than or equal to the reference is coded as 1, and every difference less than the reference is coded as 0.
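The directional coding just described can be sketched as follows. The eight Kirsch compass kernels are the standard ones (the patent only refers to "8 Kirsch operators" without listing them), and the choice of the median-rank response difference as the reference value is an assumption for the example, since the text does not say which of the sorted differences is taken as the control point.

```python
import numpy as np
import cv2

# The eight standard Kirsch compass masks (assumed; the patent does not list the kernels).
KIRSCH = [np.array(k, dtype=np.float32) for k in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
)]

def directional_code(sub_domain):
    """Edge responses m_i = I * M_i in 8 directions, absolute differences of
    neighbouring responses, and a binary code against a reference value."""
    m = [cv2.filter2D(sub_domain.astype(np.float32), -1, k) for k in KIRSCH]
    n = np.stack([np.abs(m[j] - m[(j + 1) % 8]) for j in range(8)])  # 8 grey-level response differences
    ref = np.sort(n, axis=0)[4]                     # reference ("control") value; rank 4 is an assumed choice
    bits = (n >= ref).astype(np.uint8)              # 1 if >= reference, else 0
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return np.sum(bits * weights, axis=0).astype(np.uint8)   # one 8-bit code per pixel
```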
The cloud server divides the face image into M×N sub-blocks. Further, N is 13 and M is 13; it should be noted that the values of M and N can be adjusted according to the actual situation and other values may be used in other embodiments, which are not enumerated here one by one. Each sub-block may be denoted I*(x, y), where x ∈ [1, 13] and y ∈ [1, 13].
The cloud server calculates the local information entropy of each pixel in each sub-block. Because the feature information represented by the different parts of a face image differs, the local information entropy of each pixel in each sub-block of the face image needs to be calculated, i.e. E = −Σ_i p_i log p_i, where p_i denotes the probability of occurrence of the i-th grey level.
The cloud server obtains the contribution degree of each sub-block from its local information entropy and draws a statistical histogram. The contribution degree of each sub-block is computed from the entropies over the P rows and Q columns of the sub-block, with x ∈ [1, 13] and y ∈ [1, 13].
The cloud server concatenates the statistical histograms of all sub-blocks and fuses them into a single histogram. The feature vector of the fused histogram is then reduced in dimensionality, and face recognition is completed by comparing it with the training samples. Further, the comparison with a training sample can be judged by computing a face recognition rate from the test sample feature vector SH* to be compared and the training sample feature vector MH* to be compared; a judgment threshold is set and the recognition rate is compared with it to decide whether face recognition succeeds: if the recognition rate is greater than or equal to the threshold, face recognition succeeds, and if it is less than the threshold, face recognition fails.
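The sub-block statistics and the final comparison could be sketched as below. Because the exact contribution-degree expression and the recognition-rate formula are not reproduced in the text, the entropy value itself is used as the weight and cosine similarity stands in for the recognition rate; both are assumptions made only for the example, as is the 0.8 judgment threshold.

```python
import numpy as np

def local_entropy(block):
    """Local information entropy E = -sum(p_i * log2 p_i) of one sub-block."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fused_histogram(code_image, m=13, n=13):
    """Split the coded face into m x n sub-blocks, weight each sub-block's
    histogram by an entropy-based contribution degree, and concatenate."""
    h, w = code_image.shape
    parts = []
    for i in range(m):
        for j in range(n):
            block = code_image[i * h // m:(i + 1) * h // m,
                               j * w // n:(j + 1) * w // n]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            parts.append(local_entropy(block) * hist)   # entropy-weighted histogram (assumed weighting)
    return np.concatenate(parts).astype(np.float32)

def recognised(test_vec, train_vec, threshold=0.8):
    """Compare the (possibly dimension-reduced) feature vectors; recognition
    succeeds when the similarity score reaches the judgment threshold."""
    score = float(np.dot(test_vec, train_vec) /
                  (np.linalg.norm(test_vec) * np.linalg.norm(train_vec) + 1e-9))
    return score >= threshold
```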
In this embodiment, the generation of the entry/exit attendance information by the cloud server specifically includes the following.
When face recognition succeeds, the cloud server generates entry/exit attendance information, which includes but is not limited to the name, post, work number, department and entry/exit time of the attendance person. When face recognition fails, the face image information of the person being checked in can be extracted again from the real-time pictures and the recognition steps repeated; if the number of failed recognition attempts reaches a preset value, the person is deemed not to be an employee of the company, and the cloud server generates entry/exit attendance information containing information such as a photograph of the person entering or leaving and the time of entry or exit.
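The record-generation logic, including the retry on failed recognition, might be sketched as follows; the retry limit, the field names and the staff_db lookup are illustrative assumptions.

```python
import time

MAX_RETRIES = 3   # stand-in for the "preset value" of repeated recognition attempts

def make_attendance_record(recognise, frames, staff_db):
    """Generate an entry/exit attendance record: fill in the staff details on
    success, or log the person as unidentified after repeated failures."""
    for frame in frames[:MAX_RETRIES]:
        person_id = recognise(frame)                  # returns a staff id, or None on failure
        if person_id is not None:
            info = staff_db[person_id]
            return {"status": "recognised",
                    "name": info["name"], "post": info["post"],
                    "work_no": info["work_no"], "department": info["department"],
                    "time": time.strftime("%Y-%m-%d %H:%M:%S")}
    return {"status": "unidentified",                 # deemed not to be a company employee
            "photo": frames[0],
            "time": time.strftime("%Y-%m-%d %H:%M:%S")}
```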
In this embodiment, the feedback of the entry/exit attendance information from the cloud server to the management terminal specifically includes the following.
When the person on whom the server performs face recognition is an employee of the company, the entry/exit attendance information is sent to the management terminal; in this way attendance is recorded accurately, and if someone leaves their post partway through the day this is also clearly visible. In addition, for persons who are not employees of the company, the cloud server can, while feeding the entry/exit attendance information of those persons back to the management terminal, also push a warning of suspected unauthorised entry; in this way unidentified persons entering without authorisation can be effectively prevented.
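Forwarding the record to the management terminal, with the suspected-intrusion warning for unidentified persons, might look like the following sketch; the endpoint URL and the JSON payload layout are hypothetical, since the patent does not specify the transport between the cloud server and the management terminal.

```python
import json
import urllib.request

MANAGEMENT_ENDPOINT = "http://management.example/attendance"   # hypothetical endpoint

def push_to_management(record):
    """Send the entry/exit record to the management terminal; unidentified
    persons additionally carry a suspected unauthorised-entry warning."""
    payload = {k: v for k, v in record.items() if k != "photo"}  # photo omitted in this sketch
    if record.get("status") == "unidentified":
        payload["alert"] = "suspected unauthorised entry"
    req = urllib.request.Request(
        MANAGEMENT_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```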
The non-sensing attendance method of the second embodiment of the present invention is suitable for the non-sensing attendance system of the first embodiment. The non-sensing attendance system includes an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively. The non-sensing attendance method includes the following steps:
S1: the cloud server receives the face image information of the person being checked in, extracted by the attendance terminal.
The structure of the attendance terminal (housing, global-shutter sensor and video master-control IC), the capture of real-time pictures at 240 frames per second, and the extraction of the face image information by means of the cascaded probability regression model are the same as described for the first embodiment above.
S2: the cloud server performs face recognition according to the face image information and generates entry/exit attendance information.
The feature extraction with the 8 Kirsch operators, the division of the face image into 13×13 sub-blocks, the entropy-weighted fusion of the sub-block histograms, the comparison with the training samples against a judgment threshold, and the generation of the entry/exit attendance information (including the handling of repeated recognition failures) are the same as described for the first embodiment above.
S3: the cloud server feeds the entry/exit attendance information back to the management terminal.
As in the first embodiment, the records of company employees are sent to the management terminal for accurate attendance tracking, and the records of unidentified persons are accompanied by a warning of suspected unauthorised entry.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the present invention, and they should all be covered by the claims and the description of the invention.

Claims (10)

1. A non-sensing attendance system, characterised in that it comprises an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively;
the cloud server receives the face image information of the person being checked in, extracted by the attendance terminal;
the cloud server performs face recognition according to the face image information and generates entry/exit attendance information;
the cloud server feeds the entry/exit attendance information back to the management terminal.
2. The non-sensing attendance system according to claim 1, characterised in that the cloud server receiving the face image information of the person being checked in captured by the attendance terminal specifically comprises:
the cloud server receives the face image information of the person being checked in that the attendance terminal extracts from real-time pictures, the real-time pictures being frames of the person being checked in captured by the attendance terminal at 240 frames per second.
3. The non-sensing attendance system according to claim 1, characterised in that the cloud server performing face recognition according to the face image information specifically comprises:
the cloud server performs feature extraction according to the face image information to obtain features;
the cloud server divides the face image into M×N sub-blocks;
the cloud server calculates the local information entropy of each pixel in each sub-block;
the cloud server obtains the contribution degree of each sub-block from the local information entropy and draws a statistical histogram;
the cloud server concatenates and fuses the statistical histograms of all sub-blocks into one histogram;
the cloud server reduces the dimensionality of the feature vector of the histogram and completes face recognition by comparing it with the training samples.
4. The non-sensing attendance system according to claim 3, characterised in that N is 13 and M is 13.
5. The non-sensing attendance system according to claim 3, characterised in that the cloud server performing feature extraction according to the face image information to obtain features specifically comprises:
the cloud server divides the face image into multiple sub-domains according to the face image information;
the cloud server convolves the sub-domains with k operators to obtain the edge responses in the corresponding k directions;
the cloud server differences the edge responses in a preset order and takes the absolute values to obtain the k face grey-level response differences in the k directions;
the cloud server arranges the face grey-level response differences in ascending order, takes one of them as the reference (control) value, codes every difference greater than or equal to the reference as 1, and codes every difference less than the reference as 0.
6. A non-sensing attendance method, the non-sensing attendance method being suitable for the non-sensing attendance system of any one of claims 1-5, the non-sensing attendance system comprising an attendance recording terminal, a cloud server and a management terminal, the cloud server communicating with the attendance terminal and with the management terminal respectively, characterised in that the non-sensing attendance method comprises the following steps:
the cloud server receives the face image information of the person being checked in, extracted by the attendance terminal;
the cloud server performs face recognition according to the face image information and generates entry/exit attendance information;
the cloud server feeds the entry/exit attendance information back to the management terminal.
7. The non-sensing attendance method according to claim 6, characterised in that the cloud server receiving the face image information of the person being checked in captured by the attendance terminal specifically comprises:
the cloud server receives the face image information of the person being checked in that the attendance terminal extracts from real-time pictures, the real-time pictures being frames of the person being checked in captured by the attendance terminal at 240 frames per second.
8. The non-sensing attendance method according to claim 6, characterised in that the cloud server performing face recognition according to the face image information specifically comprises:
the cloud server performs feature extraction according to the face image information to obtain features;
the cloud server divides the face image into M×N sub-blocks;
the cloud server calculates the local information entropy of each pixel in each sub-block;
the cloud server obtains the contribution degree of each sub-block from the local information entropy and draws a statistical histogram;
the cloud server concatenates and fuses the statistical histograms of all sub-blocks into one histogram;
the cloud server reduces the dimensionality of the feature vector of the histogram and completes face recognition by comparing it with the training samples.
9. The non-sensing attendance method according to claim 8, characterised in that N is 13 and M is 13.
10. The non-sensing attendance method according to claim 8, characterised in that the cloud server performing feature extraction according to the face image information to obtain features specifically comprises:
the cloud server divides the face image into multiple sub-domains according to the face image information;
the cloud server convolves the sub-domains with k operators to obtain the edge responses in the corresponding k directions;
the cloud server differences the edge responses in a preset order and takes the absolute values to obtain the k face grey-level response differences in the k directions;
the cloud server arranges the face grey-level response differences in ascending order, takes one of them as the reference (control) value, codes the positions of the differences greater than or equal to the reference as 1, and codes the positions less than the reference as 0.
CN201811137079.7A — filed 2018-09-27, priority 2018-09-27 — Non-sensing attendance system and method — published as CN109308584A, status: Pending

Priority Applications (1)

Application Number: CN201811137079.7A · Priority date: 2018-09-27 · Filing date: 2018-09-27 · Title: Non-sensing attendance system and method

Publications (1)

Publication Number: CN109308584A · Publication Date: 2019-02-05

Family

ID=65224284

Family Applications (1)

Application Number: CN201811137079.7A · Title: Non-sensing attendance system and method · Status: Pending

Country Status (1)

Country Link
CN (1) CN109308584A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310384A * 2019-06-20 2019-10-08 珠海鼎日电子科技有限公司 Non-sensing intelligent attendance method and system
CN110349285A * 2019-07-11 2019-10-18 深圳市三宝创新智能有限公司 Attendance method and robot attendance system with suspected-illness detection function

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143091A * 2014-08-18 2014-11-12 江南大学 Single-sample face recognition method based on improved mLBP
CN104933819A * 2015-06-17 2015-09-23 福建永易信息科技有限公司 Alarm device and alarm method based on face recognition and sign recognition
CN106204780A * 2016-07-04 2016-12-07 武汉理工大学 Face recognition attendance system and method based on deep learning and cloud services
CN106599870A * 2016-12-22 2017-04-26 山东大学 Face recognition method based on adaptive weighting and local feature fusion
CN106910258A * 2016-08-31 2017-06-30 彭青 Intelligent dynamic face recognition attendance record management system
CN107578005A * 2017-09-01 2018-01-12 宜宾学院 LBP face recognition method in the complex wavelet transform domain
CN107679447A * 2017-08-17 2018-02-09 平安科技(深圳)有限公司 Facial feature point detection method, device and storage medium
CN107798739A * 2016-08-31 2018-03-13 彭青 Intelligent dynamic face recognition attendance record management system
US20180082111A1 * 2012-12-12 2018-03-22 Verint Systems Ltd. Time-in-store estimation using facial recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 2019-02-05