CN111860357B - Attendance rate calculating method and device based on living body identification, terminal and storage medium


Info

Publication number: CN111860357B (application CN202010718722.6A)
Authority: CN (China)
Prior art keywords: face, target, living body, LBP, calculating
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Application number: CN202010718722.6A
Other languages: Chinese (zh)
Other versions: CN111860357A (application publication)
Inventor: 熊军
Current Assignee: Ping An Life Insurance Company of China Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Ping An Life Insurance Company of China Ltd
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN202010718722.6A
Publication of application CN111860357A, followed by grant and publication of CN111860357B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00 - Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10 - Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to artificial intelligence technology and provides an attendance calculation method, device, terminal and storage medium based on living body identification, comprising the following steps: extracting face images from living face videos, converting them into a color space and calculating first LBP features, and extracting face images from non-living face videos, converting them into the color space and calculating second LBP features; processing the plurality of first LBP features of the same living body to obtain a first face feature, and the plurality of second LBP features of the same non-living body to obtain a second face feature; training a living face recognition model on first feature pairs constructed from the first face features and second feature pairs constructed from the second face features; and identifying the student video stream through the living face recognition model to obtain an identification result, and calculating student attendance according to the identification result. The invention can be applied to intelligent education and calculates attendance accurately even when celebrity portraits are attached to classroom walls. The invention also relates to blockchain technology: the living face recognition model may be stored in a blockchain.

Description

Attendance rate calculating method and device based on living body identification, terminal and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to an attendance calculation method, device, terminal and storage medium based on living body identification.
Background
Class attendance is an important reflection of a teacher's teaching level. At present, however, attendance is usually registered manually, which is highly subjective and lacks the support of objective data. This not only makes assessment inaccurate, but also means that negative teaching situations cannot be identified and improved in time.
In the prior art, face detection, which can both locate face positions and automatically count the number of faces, is applied to the classroom scene to obtain a head count and thereby estimate student attendance.
However, during actual teaching many students have their heads down, and a face detection algorithm cannot detect them; in addition, classrooms typically display celebrity portraits, and face detection also counts the faces in those portraits as students. The detection result is therefore inaccurate, and the computed attendance does not reflect reality.
Disclosure of Invention
In view of the above, it is necessary to provide an attendance calculation method, apparatus, terminal and storage medium based on living body identification that can calculate attendance accurately even when celebrity portraits are attached to classroom walls.
A first aspect of the present invention provides an attendance calculation method based on living body identification, comprising:
Collecting a plurality of living face videos and extracting a plurality of first face images in each living face video, and collecting a plurality of non-living face videos and extracting a plurality of second face images in each non-living face video;
Converting the color space of each first face image into a preset color space to obtain target first face images, calculating first LBP characteristics of each target first face image, converting the color space of each second face image into the preset color space to obtain target second face images, and calculating second LBP characteristics of each target second face image;
splicing and averaging a plurality of first LBP features of the same living body to obtain a first face feature, and splicing and averaging a plurality of second LBP features of the same non-living body to obtain a second face feature;
Constructing a first feature pair based on each first face feature and a preset first mark, constructing a second feature pair based on each second face feature and a preset second mark, and training a support vector machine based on a plurality of constructed first feature pairs and a plurality of second feature pairs to obtain a living body face recognition model;
And identifying the student video stream through the living face recognition model to obtain an identification result, and calculating the attendance rate of the students according to the identification result.
According to an optional embodiment of the present invention, the identifying the student video stream by the living face recognition model to obtain an identification result, and calculating the student attendance according to the identification result includes:
extracting a plurality of third face images from the acquired student video stream, and converting the color space of the plurality of third face images into the preset color space to obtain a plurality of candidate third face images;
calling a face detection model to detect each face area in each candidate third face image, and determining at least one target third face image from the candidate third face images according to the detection result;
and calling the living face recognition model to identify each face area in the at least one target third face image to obtain an identification result, and calculating the student attendance according to the identification result.
According to an optional embodiment of the present invention, there is one target third face image, and the calculating the student attendance according to the identification result includes:
calculating the number of face areas whose identification result is the first identification;
acquiring the number of students due to attend;
and calculating the student attendance from the calculated number and the due number.
According to an optional embodiment of the present invention, there are two target third face images, and the calculating the student attendance according to the identification result includes:
outputting the identification result and the recognition rate corresponding to each face area in the first target third face image and in the second target third face image;
determining a first number of face areas that have the same sequence number and whose identification results are both the first identification;
determining, as target face areas, face areas that have the same sequence number and whose identification results are both the second identification, or that have the same sequence number but different identification results;
comparing the recognition rates of the target face area with the same sequence number in the first target third face image and in the second target third face image;
taking the result with the higher recognition rate as the final identification result of the target face area;
calculating a second number of first identifications among the final identification results;
and calculating the attendance according to the sum of the first number and the second number.
According to an alternative embodiment of the present invention, the calculating the first LBP feature of each target first face image includes: extracting the image of each target first face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the first LBP feature of that target first face image.
According to an alternative embodiment of the present invention, the calculating the second LBP feature of each target second face image includes: extracting the image of each target second face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the second LBP feature of that target second face image.
According to an optional embodiment of the invention, the splicing and averaging of the plurality of first LBP features of the same living body to obtain the first face feature includes: calculating a first number of the plurality of first LBP features of the same living body; adding the plurality of first LBP features of the same living body to obtain a first new LBP feature; and dividing the first new LBP feature by the first number to obtain the first face feature.
According to an optional embodiment of the present invention, the splicing and averaging of the plurality of second LBP features of the same non-living body to obtain the second face feature includes: calculating a second number of the plurality of second LBP features of the same non-living body; adding the plurality of second LBP features of the same non-living body to obtain a second new LBP feature; and dividing the second new LBP feature by the second number to obtain the second face feature.
According to an optional embodiment of the invention, the attendance calculation method based on living body identification further comprises:
reading the lesson information of the classroom at the current time from a campus database;
storing the target third face image, the attendance rate and the lesson information in association;
and sending the attendance rate to the teaching teacher.
A second aspect of the present invention provides a living body identification-based attendance calculation apparatus comprising:
the video acquisition module is used for collecting a plurality of living face videos and extracting a plurality of first face images in each living face video, and collecting a plurality of non-living face videos and extracting a plurality of second face images in each non-living face video;
The feature calculation module is used for converting the color space of each first face image into a preset color space to obtain target first face images and calculating first LBP features of each target first face image, and converting the color space of each second face image into the preset color space to obtain target second face images and calculating second LBP features of each target second face image;
The feature stitching module is used for stitching and averaging a plurality of first LBP features of the same living body to obtain first face features, and stitching and averaging a plurality of second LBP features of the same non-living body to obtain second face features;
the model training module is used for constructing a first feature pair based on each first face feature and a preset first mark, constructing a second feature pair based on each second face feature and a preset second mark, and training a support vector machine based on a plurality of constructed first feature pairs and a plurality of second feature pairs to obtain a living face recognition model;
And the attendance calculation module is used for identifying the student video stream through the living face recognition model to obtain an identification result, and calculating the attendance rate of the students according to the identification result.
A third aspect of the present invention provides a terminal including a processor for implementing the living body identification-based attendance calculation method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the living body identification-based attendance calculation method.
In summary, in the attendance calculation method, device, terminal and storage medium based on living body identification disclosed by the invention, converting the image from the RGB color space to the YUV color space makes living and non-living bodies easier to distinguish, and splicing-and-averaging the extracted LBP features yields texture features that better represent living and non-living bodies. The living body recognition model trained on these features can therefore better determine whether a face region in a key frame of the video is living or non-living, so celebrity faces in the classroom are effectively removed and the attendance calculation is more accurate.
Drawings
Fig. 1 is a flowchart of an attendance calculation method based on living body identification according to an embodiment of the present invention.
Fig. 2 is a block diagram of an attendance calculation device based on living body recognition according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Example 1
Fig. 1 is a flowchart of an attendance calculation method based on living body identification according to an embodiment of the present invention. The method can be applied to the field of intelligent education to promote educational development and thereby the construction of smart cities. It comprises the following steps; the order of the steps in the flowchart may be changed according to different requirements, and some steps may be omitted.
S11, collecting a plurality of living face videos and extracting a plurality of first face images in each living face video, and collecting a plurality of non-living face videos and extracting a plurality of second face images in each non-living face video.
The terminal is provided with a camera, which collects video containing the living body's face over a preset period and video containing the non-living body's face over a preset period.
The terminal can preset an extraction frame rate, extract a plurality of first face images from the collected living face video at intervals of the extraction frame rate, and extract a plurality of second face images from the collected non-living face video at intervals of the extraction frame rate.
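As an illustration of this frame-extraction step, a minimal Python sketch follows, assuming OpenCV is used for video decoding; the function name and the step parameter are illustrative assumptions, not part of the patent.

import cv2

def extract_frames(video_path, step):
    """Extract one face image every `step` frames from a collected face video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:  # sample at the preset extraction frame rate
            frames.append(frame)
        index += 1
    cap.release()
    return frames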
S12, converting the color space of each first face image into a preset color space to obtain target first face images, calculating first LBP characteristics of each target first face image, converting the color space of each second face image into the preset color space to obtain target second face images, and calculating second LBP characteristics of each target second face image.
The color space to be converted is preset in the terminal, and the color space can be a YUV color space.
Since living and non-living subjects differ considerably in color and texture in video, converting the image from the RGB color space to the YUV color space makes the face regions of living and non-living subjects easy to distinguish.
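As a sketch of this conversion step, assuming OpenCV with its default BGR channel order (the patent does not mandate a particular library):

import cv2

def to_yuv(face_image_bgr):
    """Convert a face image from OpenCV's BGR representation to the YUV color space."""
    return cv2.cvtColor(face_image_bgr, cv2.COLOR_BGR2YUV)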
In an alternative embodiment, the calculating the first LBP feature of each target first face image includes: extracting the image of each target first face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the first LBP feature of that target first face image.
In an alternative embodiment, the calculating the second LBP feature of each target second face image includes: extracting the image of each target second face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the second LBP feature of that target second face image.
In this alternative embodiment, the terminal computes the LBP feature on each color component to obtain a 256-dimensional vector, then concatenates the LBP features of the three color components into a 3×256-dimensional vector used as the final color texture feature. This feature has stronger expressive power, and the living face recognition model trained on it achieves higher recognition accuracy.
The computation of the LBP feature itself is prior art.
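A sketch of the per-component LBP computation, assuming scikit-image's classic 8-neighbor LBP operator; the 256-bin histogram per component corresponds to the 256-dimensional vector mentioned above:

import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_feature(face_yuv):
    """Compute a 256-bin LBP histogram per YUV component and concatenate them
    into the 3x256-dimensional color texture feature."""
    histograms = []
    for c in range(3):  # Y, U, V components
        lbp = local_binary_pattern(face_yuv[:, :, c], P=8, R=1)  # values in 0..255
        hist, _ = np.histogram(lbp, bins=256, range=(0, 256), density=True)
        histograms.append(hist)
    return np.concatenate(histograms)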
S13, splicing and averaging a plurality of first LBP features of the same living body to obtain a first face feature, and splicing and averaging a plurality of second LBP features of the same non-living body to obtain a second face feature.
The same subject (living or non-living) corresponds to multiple target face images. Extracting LBP features from these images and then splicing and averaging the extracted features yields a better representation of the subject's texture.
In an optional embodiment, the splicing and averaging of the plurality of first LBP features of the same living body to obtain the first face feature includes: calculating a first number of the plurality of first LBP features of the same living body; adding the plurality of first LBP features of the same living body to obtain a first new LBP feature; and dividing the first new LBP feature by the first number to obtain the first face feature.
In an optional embodiment, the splicing and averaging of the plurality of second LBP features of the same non-living body to obtain the second face feature includes: calculating a second number of the plurality of second LBP features of the same non-living body; adding the plurality of second LBP features of the same non-living body to obtain a second new LBP feature; and dividing the second new LBP feature by the second number to obtain the second face feature.
In this optional embodiment, a living body moves from moment to moment in the video while a non-living body remains static. Splicing and averaging the first LBP features of multiple target first face images of the same living body therefore yields a first face feature that captures the living body's color texture. The second face feature obtained the same way from a non-living body differs little from the second LBP feature of any single target second face image of that non-living body, so the trained living face recognition model distinguishes living from non-living bodies more effectively.
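A sketch of the splicing-and-averaging operation exactly as defined in this embodiment (sum the features of one subject, then divide by their number); the names are illustrative:

import numpy as np

def splice_and_average(lbp_features):
    """Average the LBP features of all target face images of one subject."""
    count = len(lbp_features)              # the first (or second) number
    summed = np.sum(lbp_features, axis=0)  # the new LBP feature
    return summed / count                  # the first (or second) face feature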
S14, constructing a first feature pair based on each first face feature and a preset first mark, constructing a second feature pair based on each second face feature and a preset second mark, and training a support vector machine based on the constructed first feature pairs and the constructed second feature pairs to obtain a living body face recognition model.
A support vector machine (SVM) classification model trained on the color texture features of living and non-living face images has strong classification capability: it can classify the face areas in a captured image and determine which belong to living bodies, such as students, and which belong to non-living bodies, such as celebrity portraits on the classroom walls.
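A sketch of the SVM training step using scikit-learn; the label values (1 for the first identification, 0 for the second) and probability=True, used later to obtain a recognition rate, are assumptions for illustration:

import numpy as np
from sklearn.svm import SVC

def train_live_face_model(first_face_features, second_face_features):
    """Train an SVM on the constructed first and second feature pairs."""
    X = np.vstack(first_face_features + second_face_features)
    y = np.array([1] * len(first_face_features)      # first identification: living
                 + [0] * len(second_face_features))  # second identification: non-living
    model = SVC(kernel="rbf", probability=True)      # probabilities serve as recognition rates
    model.fit(X, y)
    return model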
S15, identifying the student video stream through the living face recognition model to obtain an identification result, and calculating the attendance rate of the students according to the identification result.
A camera is installed in the classroom; the video stream of the students in class is captured by the camera and sent to the terminal.
To prevent some detected face areas from being faces in celebrity portraits, the terminal can call the living face recognition model online to distinguish living from non-living face areas in the student video stream, so that face areas belonging to celebrity portraits are excluded and the calculated attendance is more accurate. The terminal outputs, through the living face recognition model, the identification result corresponding to each face region; the identification result is either a first identification or a second identification.
In an optional embodiment, the identifying the student video stream by the living face recognition model to obtain an identification result, and calculating the student attendance according to the identification result includes:
extracting a plurality of third face images from the acquired student video stream, and converting the color space of the plurality of third face images into the preset color space to obtain a plurality of candidate third face images;
calling a face detection model to detect each face area in each candidate third face image, and determining at least one target third face image from the candidate third face images according to the detection result;
and calling the living face recognition model to identify each face area in the at least one target third face image to obtain an identification result, and calculating the student attendance according to the identification result.
The terminal extracts a plurality of third face images from the student video stream at the preset extraction frame rate, and then converts the color space of each extracted third face image into the preset color space.
The terminal may train the face detection model offline in advance and use it online to detect face regions. The face detection model detects each face region in a candidate third face image, and the terminal counts the detected face regions to determine how many faces the image contains. The candidate third face image with the largest number of face areas is taken as the target third face image; there may be one or two target third face images.
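A sketch of selecting the target third face image(s) by face count; detect_faces stands in for the trained face detection model and is an assumption:

def select_target_frames(candidate_images, detect_faces, top_k=1):
    """Return the candidate image(s) containing the most detected face areas."""
    ranked = sorted(candidate_images,
                    key=lambda img: len(detect_faces(img)),
                    reverse=True)
    return ranked[:top_k]  # top_k is 1 or 2 in this embodiment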
In an alternative embodiment, the training process of the face detection model includes: acquiring a plurality of sample pictures; marking each face area in each sample picture with the ImageLab tool; inputting the sample pictures with labeled face regions into a Yolo_v3 network for training; acquiring the face areas predicted by the Yolo_v3 network; calculating the error rate between the marked face regions and the predicted face regions; when the error rate is lower than a preset error rate threshold, ending the training of the Yolo_v3 network to obtain the face detection model; and when the error rate is higher than the preset error rate threshold, continuing to train the Yolo_v3 network through back-propagation until the error rate falls below the threshold.
In this alternative embodiment, a camera can be used to record video of students in class, and the video is split into frame-by-frame sample pictures. A face region covers both the face and the back of the head. Yolo_v3 is a fully convolutional network whose structure is mainly a backbone network plus a detection network; the backbone is Darknet-53, and convolutions with stride 2 are used for downsampling. Upsampling and route operations are also used, and detection is performed at 3 scales within one network structure. The Yolo_v3 architecture is prior art and is not described here in detail.
A face detection model trained on Yolo_v3 achieves good detection results on small targets, and because it is trained to detect only face regions, the process of this embodiment is simpler than traditional multi-target detection.
In this embodiment, converting the image from the RGB color space to the YUV color space makes living and non-living bodies easier to distinguish, and splicing-and-averaging the extracted LBP features yields texture features that better represent living and non-living bodies. The living body recognition model trained on these features can therefore better determine whether a face region in a key frame of the video is living or non-living, so celebrity faces in the classroom are effectively rejected and the attendance calculation is more accurate.
In an optional embodiment, when there is one target third face image, the calculating the student attendance according to the identification result includes:
calculating the number of face areas whose identification result is the first identification;
acquiring the number of students due to attend;
and calculating the student attendance from the calculated number and the due number.
In this optional embodiment, the terminal may read the student list of the classroom at the current time from a campus database; the number of names in the list is the number of students due to attend. The student attendance is the ratio between the number of first identifications and the due number: the higher the ratio, the higher the attendance. A ratio of 1 indicates an attendance rate of 100%.
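The attendance computation itself reduces to a ratio; a sketch under the label convention assumed above (1 denotes the first identification):

def attendance_rate(identification_results, due_count):
    """Ratio of face areas identified as living to the number of students due to attend."""
    living = sum(1 for label in identification_results if label == 1)
    return living / due_count  # 1.0 corresponds to 100% attendance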
In an optional embodiment, when there are two target third face images, the calculating the student attendance according to the identification result includes:
outputting the identification result and the recognition rate corresponding to each face area in the first target third face image and in the second target third face image;
determining a first number of face areas that have the same sequence number and whose identification results are both the first identification;
determining, as target face areas, face areas that have the same sequence number and whose identification results are both the second identification, or that have the same sequence number but different identification results;
comparing the recognition rates of the target face area with the same sequence number in the first target third face image and in the second target third face image;
taking the result with the higher recognition rate as the final identification result of the target face area;
calculating a second number of first identifications among the final identification results;
and calculating the attendance according to the sum of the first number and the second number.
For example, assume that 10 face areas exist in each of the first target third face image and the second target third face image, with sequence numbers 1 through 10. The table below lists the identification result of each face area:

Sequence number |  1   2   3   4   5   6   7   8   9  10
F1              | B1  B1  B1  B1  B1  B1  B2  B1  B2  B2
F2              | B1  B1  B1  B1  B1  B1  B1  B2  B2  B2
Here F1 denotes the first target third face image, F2 the second target third face image, B1 the first identification, and B2 the second identification.
It can be seen that the first number of face areas that have the same sequence number and are both identified as the first identification is 6, and the target face areas with the same sequence number but different identification results are those with sequence numbers 7 and 8.
For sequence number 7, if the recognition rate of the second identification in the first target third face image is 90% while the recognition rate of the first identification in the second target third face image is 99%, the final identification result of face area 7 is the first identification.
For sequence number 8, if the recognition rate of the first identification in the first target third face image is 98% while the recognition rate of the second identification in the second target third face image is 92%, the final identification result of face area 8 is the first identification.
Thus the second number of first identifications among the final identification results is 2, and the living head count is calculated as 6 + 2 = 8.
In this alternative embodiment, by identifying face areas that have the same sequence number but different identification results and determining the final result according to the recognition rate, the identification result of a wrongly detected face area can be corrected, yielding an accurate identification result and a higher recognition rate.
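A sketch of the two-frame merging rule described above; each frame is assumed to map a sequence number to a (label, recognition rate) pair, with label 1 for the first identification:

def merge_two_frames(frame1, frame2):
    """Combine the per-face results of two target frames into a living head count."""
    first_number = sum(1 for n in frame1
                       if frame1[n][0] == 1 and frame2[n][0] == 1)
    second_number = 0
    for n in frame1:
        label1, rate1 = frame1[n]
        label2, rate2 = frame2[n]
        if label1 == 1 and label2 == 1:
            continue  # agreeing living faces were already counted above
        final_label = label1 if rate1 >= rate2 else label2  # higher rate wins
        if final_label == 1:
            second_number += 1
    return first_number + second_number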
In an alternative embodiment, the attendance calculation method based on living body identification further includes:
reading the lesson information of the classroom at the current time from a campus database;
storing the target third face image, the attendance rate and the lesson information in association;
and sending the attendance rate to the teaching teacher.
In this optional embodiment, the campus database contains the lesson information of every class period in every classroom, including: the teaching teacher's information, the subject taught, the start and end times of the lesson, the attending class, the number of students in the class, and each student's information.
Storing the target third face image, the calculated attendance and the lesson information in association provides objective data support for evaluating a teacher's teaching level, and sending the attendance to the teaching teacher allows the teacher to adjust the teaching plan in time according to attendance and improve teaching quality.
It is emphasized that to further ensure privacy and security of the living face recognition model and/or attendance, the living face recognition model and/or attendance may also be stored in a blockchain node.
Blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and so on.
Example 2
Fig. 2 is a block diagram of an attendance calculation device based on living body recognition according to a second embodiment of the present invention.
In some embodiments, the living body identification-based attendance calculation apparatus 20 may comprise a plurality of functional modules composed of computer program segments. The computer programs of these program segments may be stored in a memory of the terminal and executed by at least one processor to perform the attendance calculation functions based on living body identification (described in detail in connection with Fig. 1).
In this embodiment, the attendance calculation apparatus 20 based on living body recognition may be divided into a plurality of functional modules according to the functions it performs: the video acquisition module 201, the feature calculation module 202, the feature stitching module 203, the model training module 204, the attendance calculation module 205 and the association storage module 206. A module in the present invention refers to a series of computer program segments, stored in a memory, that can be executed by at least one processor and perform a fixed function. The functions of the respective modules are described in detail below.
The video acquisition module 201 is configured to acquire a plurality of live face videos and extract a plurality of first face images in each live face video, and acquire a plurality of non-live face videos and extract a plurality of second face images in each non-live face video.
The terminal is provided with a camera, and the camera is used for collecting videos including the face of the living body in a preset period of the living body and collecting videos including the face of the non-living body in a preset period of the non-living body.
The terminal can preset an extraction frame rate, extract a plurality of first face images from the collected living face video at intervals of the extraction frame rate, and extract a plurality of second face images from the collected non-living face video at intervals of the extraction frame rate.
The feature calculation module 202 is configured to convert the color space of each first face image into a preset color space to obtain target first face images and calculate first LBP features of each target first face image, and convert the color space of each second face image into the preset color space to obtain target second face images and calculate second LBP features of each target second face image.
The color space to be converted is preset in the terminal, and the color space can be a YUV color space.
Since living and non-living subjects differ considerably in color and texture in video, converting the image from the RGB color space to the YUV color space makes the face regions of living and non-living subjects easy to distinguish.
In an alternative embodiment, the feature calculation module 202 calculating the first LBP feature of each target first face image includes: extracting the image of each target first face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the first LBP feature of that target first face image.
In an alternative embodiment, the feature calculation module 202 calculating the second LBP feature of each target second face image includes: extracting the image of each target second face image on each color component, calculating the LBP feature of the image on each color component, and concatenating the LBP features on the color components to obtain the second LBP feature of that target second face image.
In this alternative embodiment, the terminal computes the LBP feature on each color component to obtain a 256-dimensional vector, then concatenates the LBP features of the three color components into a 3×256-dimensional vector used as the final color texture feature. This feature has stronger expressive power, and the living face recognition model trained on it achieves higher recognition accuracy.
The computation of the LBP feature itself is prior art.
The feature stitching module 203 is configured to stitch and average a plurality of first LBP features of the same living body to obtain a first face feature, and stitch and average a plurality of second LBP features of the same non-living body to obtain a second face feature.
The same subject (living or non-living) corresponds to multiple target face images. Extracting LBP features from these images and then splicing and averaging the extracted features yields a better representation of the subject's texture.
In an alternative embodiment, the feature stitching module 203 splicing and averaging the plurality of first LBP features of the same living body to obtain the first face feature includes: calculating a first number of the plurality of first LBP features of the same living body; adding the plurality of first LBP features of the same living body to obtain a first new LBP feature; and dividing the first new LBP feature by the first number to obtain the first face feature.
In an alternative embodiment, the feature stitching module 203 splicing and averaging the plurality of second LBP features of the same non-living body to obtain the second face feature includes: calculating a second number of the plurality of second LBP features of the same non-living body; adding the plurality of second LBP features of the same non-living body to obtain a second new LBP feature; and dividing the second new LBP feature by the second number to obtain the second face feature.
In this optional embodiment, a living body moves from moment to moment in the video while a non-living body remains static. Splicing and averaging the first LBP features of multiple target first face images of the same living body therefore yields a first face feature that captures the living body's color texture. The second face feature obtained the same way from a non-living body differs little from the second LBP feature of any single target second face image of that non-living body, so the trained living face recognition model distinguishes living from non-living bodies more effectively.
The model training module 204 is configured to construct a first feature pair based on each of the first face features and a preset first identifier, construct a second feature pair based on each of the second face features and a preset second identifier, and train a support vector machine based on the constructed plurality of first feature pairs and the constructed plurality of second feature pairs to obtain a living face recognition model.
A support vector machine (SVM) classification model trained on the color texture features of living and non-living face images has strong classification capability: it can classify the face areas in a captured image and determine which belong to living bodies, such as students, and which belong to non-living bodies, such as celebrity portraits on the classroom walls.
The attendance calculation module 205 is configured to identify a student video stream through the living face recognition model to obtain an identification result, and calculate a student attendance rate according to the identification result.
A camera is installed in the classroom; the video stream of the students in class is captured by the camera and sent to the terminal.
To prevent some detected face areas from being faces in celebrity portraits, the terminal can call the living face recognition model online to distinguish living from non-living face areas in the student video stream, so that face areas belonging to celebrity portraits are excluded and the calculated attendance is more accurate. The terminal outputs, through the living face recognition model, the identification result corresponding to each face region; the identification result is either a first identification or a second identification.
In an alternative embodiment, the attendance calculation module 205 identifies the student video stream by the living face recognition model to obtain an identification result, and calculates the student attendance according to the identification result includes:
extracting a plurality of third face images from the acquired student video stream, and converting the color space of the plurality of third face images into the preset color space to obtain a plurality of candidate third face images;
calling a face detection model to detect each face area in each candidate third face image, and determining at least one target third face image from the candidate third face images according to the detection result;
and calling the living face recognition model to identify each face area in the at least one target third face image to obtain an identification result, and calculating the student attendance according to the identification result.
The terminal extracts a plurality of third face images from the student video stream at the preset extraction frame rate, and then converts the color space of each extracted third face image into the preset color space.
The terminal may train the face detection model offline in advance and use it online to detect face regions. The face detection model detects each face region in a candidate third face image, and the terminal counts the detected face regions to determine how many faces the image contains. The candidate third face image with the largest number of face areas is taken as the target third face image; there may be one or two target third face images.
In an alternative embodiment, the training process of the face detection model includes: acquiring a plurality of sample pictures; marking each face area in each sample picture with the ImageLab tool; inputting the sample pictures with labeled face regions into a Yolo_v3 network for training; acquiring the face areas predicted by the Yolo_v3 network; calculating the error rate between the marked face regions and the predicted face regions; when the error rate is lower than a preset error rate threshold, ending the training of the Yolo_v3 network to obtain the face detection model; and when the error rate is higher than the preset error rate threshold, continuing to train the Yolo_v3 network through back-propagation until the error rate falls below the threshold.
In this alternative embodiment, a camera can be used to record video of students in class, and the video is split into frame-by-frame sample pictures. A face region covers both the face and the back of the head. Yolo_v3 is a fully convolutional network whose structure is mainly a backbone network plus a detection network; the backbone is Darknet-53, and convolutions with stride 2 are used for downsampling. Upsampling and route operations are also used, and detection is performed at 3 scales within one network structure. The Yolo_v3 architecture is prior art and is not described here in detail.
A face detection model trained on Yolo_v3 achieves good detection results on small targets, and because it is trained to detect only face regions, the process of this embodiment is simpler than traditional multi-target detection.
In this embodiment, converting the image from the RGB color space to the YUV color space makes living and non-living bodies easier to distinguish, and splicing-and-averaging the extracted LBP features yields texture features that better represent living and non-living bodies. The living body recognition model trained on these features can therefore better determine whether a face region in a key frame of the video is living or non-living, so celebrity faces in the classroom are effectively rejected and the attendance calculation is more accurate.
In an alternative embodiment, when there is one target third face image, the attendance calculation module 205 calculating the student attendance according to the identification result includes:
calculating the number of face areas whose identification result is the first identification;
acquiring the number of students due to attend;
and calculating the student attendance from the calculated number and the due number.
In this optional embodiment, the terminal may read the student list of the classroom at the current time from a campus database; the number of names in the list is the number of students due to attend. The student attendance is the ratio between the number of first identifications and the due number: the higher the ratio, the higher the attendance. A ratio of 1 indicates an attendance rate of 100%.
In an alternative embodiment, when there are two target third face images, the attendance calculation module 205 calculating the student attendance according to the identification result includes:
outputting the identification result and the recognition rate corresponding to each face area in the first target third face image and in the second target third face image;
determining a first number of face areas that have the same sequence number and whose identification results are both the first identification;
determining, as target face areas, face areas that have the same sequence number and whose identification results are both the second identification, or that have the same sequence number but different identification results;
comparing the recognition rates of the target face area with the same sequence number in the first target third face image and in the second target third face image;
taking the result with the higher recognition rate as the final identification result of the target face area;
calculating a second number of first identifications among the final identification results;
and calculating the attendance according to the sum of the first number and the second number.
For example, assume that 10 face areas exist in each of the first target third face image and the second target third face image, with sequence numbers 1 through 10. The table below lists the identification result of each face area:

Sequence number |  1   2   3   4   5   6   7   8   9  10
F1              | B1  B1  B1  B1  B1  B1  B2  B1  B2  B2
F2              | B1  B1  B1  B1  B1  B1  B1  B2  B2  B2
Here F1 denotes the first target third face image, F2 the second target third face image, B1 the first identification, and B2 the second identification.
It can be seen that the first number of face areas that have the same sequence number and are both identified as the first identification is 6, and the target face areas with the same sequence number but different identification results are those with sequence numbers 7 and 8.
For sequence number 7, if the recognition rate of the second identification in the first target third face image is 90% while the recognition rate of the first identification in the second target third face image is 99%, the final identification result of face area 7 is the first identification.
For sequence number 8, if the recognition rate of the first identification in the first target third face image is 98% while the recognition rate of the second identification in the second target third face image is 92%, the final identification result of face area 8 is the first identification.
Thus the second number of first identifications among the final identification results is 2, and the living head count is calculated as 6 + 2 = 8.
In this alternative embodiment, by identifying face areas that have the same sequence number but different identification results and determining the final result according to the recognition rate, the identification result of a wrongly detected face area can be corrected, yielding an accurate identification result and a higher recognition rate.
The association storage module 206 is configured to read the lesson information of the classroom at the current time from a campus database, store the target third face image, the attendance rate and the lesson information in association, and send the attendance rate to the teaching teacher.
In this optional embodiment, the campus database contains the lesson information of every class period in every classroom, including: the teaching teacher's information, the subject taught, the start and end times of the lesson, the attending class, the number of students in the class, and each student's information.
Storing the target third face image, the calculated attendance and the lesson information in association provides objective data support for evaluating a teacher's teaching level, and sending the attendance to the teaching teacher allows the teacher to adjust the teaching plan in time according to attendance and improve teaching quality.
It is emphasized that, to further ensure the privacy and security of the living body face recognition model and/or the attendance rate, the living body face recognition model and/or the attendance rate may also be stored in a blockchain node.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Example III
Fig. 3 is a schematic structural diagram of a terminal according to a third embodiment of the present invention. In the preferred embodiment of the invention, the terminal 3 comprises a memory 31, at least one processor 32, at least one communication bus 33 and a transceiver 34.
It will be appreciated by those skilled in the art that the structure of the terminal shown in fig. 3 does not limit the embodiments of the present invention; the terminal 3 may adopt a bus or star configuration, may include more or less hardware or software than shown, or may arrange its components differently.
In some embodiments, the terminal 3 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The terminal 3 may further include a client device, which includes, but is not limited to, any electronic product capable of man-machine interaction with a user through a keyboard, mouse, remote controller, touch pad, or voice control device, for example, a personal computer, tablet computer, smart phone, or digital camera.
It should be noted that the terminal 3 is only an example; other existing or hereafter-developed electronic products adaptable to the present invention also fall within the protection scope of the present invention and are incorporated herein by reference.
In some embodiments, the memory 31 stores a computer program, and the at least one processor 32 may call the computer program stored in the memory 31 to perform the relevant functions. For example, each module described in the above embodiments is a computer program stored in the memory 31 and executed by the at least one processor 32, thereby realizing the functions of the respective modules. The memory 31 includes read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc memory, magnetic tape memory, or any other medium that can be used to carry or store data.
An embodiment of the present invention also provides a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements all or part of the steps of the living body identification-based attendance calculation method described in Embodiment One, or all or part of the functions of the living body identification-based attendance calculation apparatus described in Embodiment Two.
The processor 32 is configured to, when executing the computer program stored in the memory 31, implement all or part of the steps of the living body identification-based attendance calculation method described in Embodiment One, or all or part of the functions of the living body identification-based attendance calculation apparatus described in Embodiment Two.
In some embodiments, the at least one processor 32 is the control unit of the terminal 3; it connects the various components of the terminal 3 using various interfaces and lines, and performs the various functions of the terminal 3 and processes data by running or executing programs or modules stored in the memory 31 and invoking data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of the living body identification-based attendance calculation method described in the embodiments of the present invention. The at least one processor 32 may consist of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits with the same or different functions packaged together, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
In some embodiments, the at least one communication bus 33 is arranged to enable connected communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the terminal 3 may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 32 through a power management device, so as to perform functions of managing charging, discharging, power consumption management, etc. through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The terminal 3 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional module is stored in a storage medium, and includes several instructions for causing a computer device (which may be a personal computer, a terminal, or a network device, etc.) or a processor (processor) to execute a part of the attendance calculation method based on living body identification according to the embodiments of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (8)

1. The attendance rate calculating method based on living body identification is characterized by comprising the following steps of:
Collecting a plurality of living face videos and extracting a plurality of first face images in each living face video, and collecting a plurality of non-living face videos and extracting a plurality of second face images in each non-living face video;
Converting the color space of each first face image into a preset color space to obtain target first face images, calculating first LBP characteristics of each target first face image, converting the color space of each second face image into the preset color space to obtain target second face images, and calculating second LBP characteristics of each target second face image;
performing splicing and averaging on a plurality of first LBP features of the same living body to obtain a first face feature, and performing splicing and averaging on a plurality of second LBP features of the same non-living body to obtain a second face feature;
Constructing a first feature pair based on each first face feature and a preset first mark, constructing a second feature pair based on each second face feature and a preset second mark, and training a support vector machine based on a plurality of constructed first feature pairs and a plurality of second feature pairs to obtain a living body face recognition model;
identifying a student video stream through the living body face recognition model to obtain an identification result, and calculating a student attendance rate according to the identification result, which comprises: extracting a plurality of third face images from the acquired student video stream, and converting the color space of the plurality of third face images into the preset color space to obtain a plurality of candidate third face images; invoking a face detection model to detect each face area in each candidate third face image, and determining at least one target third face image from the candidate third face images according to the detection result; and invoking the living body face recognition model to recognize each face area in the at least one target third face image to obtain an identification result, and calculating the student attendance rate according to the identification result;
The target third face images are two, and the calculating the attendance rate of the students according to the identification result comprises: outputting an identification result and a recognition rate corresponding to each face area in the first target third face image and the second target third face image; determining a first number of face areas with the same sequence number and the same identification result of the first mark; determining face areas with the same sequence number but different identification results as target face areas; comparing, for each target face area, the recognition rate in the first target third face image with the recognition rate in the second target third face image; taking the identification result with the higher recognition rate as the final identification result of the target face area; calculating a second number of first marks in the final identification results; and calculating the attendance rate according to the sum of the first number and the second number.
2. The living body recognition-based attendance calculation method as claimed in claim 1, wherein the target third face image is one, and the calculating the student attendance based on the recognition result comprises:
calculating the number of first marks in the identification result;
Acquiring the attendance quantity of the students;
and calculating the attendance rate of the students according to the calculated quantity and the attendance quantity.
3. The living body identification-based attendance calculation method as claimed in claim 1 or 2, characterized in that,
The calculating the first LBP feature of each target first face image includes: respectively extracting the image of each target first face image on each color component, calculating the LBP feature of the image on each color component, and connecting the LBP features on the color components to obtain the first LBP feature of each target first face image;
The calculating the second LBP feature of each target second face image includes: respectively extracting the image of each target second face image on each color component, calculating the LBP feature of the image on each color component, and connecting the LBP features on the color components to obtain the second LBP feature of each target second face image.
4. The living body identification-based attendance calculation method as claimed in claim 1 or 2, characterized in that,
The step of performing splicing and averaging on the plurality of first LBP features of the same living body to obtain a first face feature includes: calculating a first number of the plurality of first LBP features of the same living body; adding the plurality of first LBP features of the same living body to obtain a first new LBP feature; and dividing the first new LBP feature by the first number to obtain the first face feature;
The step of performing splicing and averaging on the plurality of second LBP features of the same non-living body to obtain a second face feature includes: calculating a second number of the plurality of second LBP features of the same non-living body; adding the plurality of second LBP features of the same non-living body to obtain a second new LBP feature; and dividing the second new LBP feature by the second number to obtain the second face feature.
5. The living body identification-based attendance calculation method as claimed in claim 1, further comprising:
reading lesson information of a current time classroom from a campus database;
the target third face image, the attendance rate and the lesson information are stored in a correlated mode;
and sending the attendance rate to a teaching teacher.
6. A living body identification-based attendance calculation apparatus for implementing the living body identification-based attendance calculation method as claimed in claim 1, characterized in that the living body identification-based attendance calculation apparatus comprises:
the video acquisition module is used for acquiring a plurality of living face videos and extracting a plurality of first face images from each living face video, and acquiring a plurality of non-living face videos and extracting a plurality of second face images from each non-living face video;
The feature calculation module is used for converting the color space of each first face image into a preset color space to obtain target first face images and calculating first LBP features of each target first face image, and converting the color space of each second face image into the preset color space to obtain target second face images and calculating second LBP features of each target second face image;
The feature stitching module is used for stitching and averaging a plurality of first LBP features of the same living body to obtain first face features, and stitching and averaging a plurality of second LBP features of the same non-living body to obtain second face features;
the model training module is used for constructing a first feature pair based on each first face feature and a preset first mark, constructing a second feature pair based on each second face feature and a preset second mark, and training a support vector machine based on a plurality of constructed first feature pairs and a plurality of second feature pairs to obtain a living face recognition model;
And the attendance calculation module is used for identifying the student video stream through the living face recognition model to obtain an identification result, and calculating the attendance rate of the students according to the identification result.
7. A terminal comprising a processor for implementing the living body identification-based attendance calculation method as claimed in any one of claims 1 to 5 when executing a computer program stored in a memory.
8. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the living body identification-based attendance calculation method as claimed in any one of claims 1 to 5.
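To make the feature-construction steps recited in claims 3 and 4 concrete, the following is a minimal sketch, assuming an image already converted to the preset color space (e.g., HSV) and using scikit-image's local_binary_pattern; the P/R parameters, the histogram binning, and the function names are illustrative assumptions, not values fixed by the claims.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(image: np.ndarray) -> np.ndarray:
    """Claim 3: compute an LBP feature on each color component,
    then connect (concatenate) the per-component features."""
    parts = []
    for c in range(image.shape[2]):
        lbp = local_binary_pattern(image[:, :, c], P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        parts.append(hist)
    return np.concatenate(parts)

def face_feature(lbp_features: list) -> np.ndarray:
    """Claim 4: splice and average -- add the per-frame LBP features
    of one subject, then divide by their number."""
    return np.add.reduce(lbp_features) / len(lbp_features)
```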
CN202010718722.6A 2020-07-23 2020-07-23 Attendance rate calculating method and device based on living body identification, terminal and storage medium Active CN111860357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010718722.6A CN111860357B (en) 2020-07-23 2020-07-23 Attendance rate calculating method and device based on living body identification, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111860357A CN111860357A (en) 2020-10-30
CN111860357B true CN111860357B (en) 2024-05-14

Family

ID=72950470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010718722.6A Active CN111860357B (en) 2020-07-23 2020-07-23 Attendance rate calculating method and device based on living body identification, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111860357B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268024B (en) * 2021-05-14 2023-10-13 广东工业大学 Intelligent classroom supervision system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106169071A (en) * 2016-07-05 2016-11-30 厦门理工学院 A kind of Work attendance method based on dynamic human face and chest card recognition and system
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium
CN107992864A (en) * 2018-01-15 2018-05-04 武汉神目信息技术有限公司 A kind of vivo identification method and device based on image texture
CN108269333A (en) * 2018-01-08 2018-07-10 平安科技(深圳)有限公司 Face identification method, application server and computer readable storage medium
CN108986245A (en) * 2018-06-14 2018-12-11 深圳市商汤科技有限公司 Work attendance method and terminal based on recognition of face
CN109492858A (en) * 2018-09-25 2019-03-19 平安科技(深圳)有限公司 Employee performance prediction technique and device, equipment, medium based on machine learning
WO2019137178A1 (en) * 2018-01-12 2019-07-18 杭州海康威视数字技术股份有限公司 Face liveness detection
CN110378665A (en) * 2019-06-13 2019-10-25 平安科技(深圳)有限公司 Data processing method, device, medium and electronic equipment under a kind of with no paper scene


Also Published As

Publication number Publication date
CN111860357A (en) 2020-10-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant