CN113807229A - Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom - Google Patents

Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom

Info

Publication number
CN113807229A
CN113807229A
Authority
CN
China
Prior art keywords
character, images, information, similarity, frame number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111065834.7A
Other languages
Chinese (zh)
Inventor
孙成智 (Sun Chengzhi)
罗同贵 (Luo Tonggui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jvt Technology Co ltd
Original Assignee
Shenzhen Jvt Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jvt Technology Co ltd filed Critical Shenzhen Jvt Technology Co ltd
Priority to CN202111065834.7A priority Critical patent/CN113807229A/en
Publication of CN113807229A publication Critical patent/CN113807229A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/24: Classification techniques
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/10: Office automation; Time management
    • G06Q 10/109: Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q 10/1091: Recording time for administrative or management purposes
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 1/00: Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C 1/10: Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

A non-contact attendance device, method, equipment, and storage medium for a smart classroom, wherein the method comprises the following steps: establishing a character information database and collecting character features, the character features comprising face feature information, arm feature information, and body type feature information of a character; acquiring continuous frame number images when a person enters a classroom; performing image processing on the continuous frame number images and identifying the character outlines in them to obtain continuous frame number character outline images; extracting character features from the continuous frame number character outline images and clustering those features to obtain a character feature matrix; and sequentially comparing the character feature matrix against a preset character information database to obtain a character similarity, matching the identity information of each character according to the character similarity, and querying and registering attendance information for the attending characters according to that identity information. The invention can recognize a person's identity even when the face is occluded.

Description

Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom
Technical Field
The invention relates to the technical field of intelligent classroom attendance checking, in particular to a non-contact type attendance checking device, method, equipment and storage medium for an intelligent classroom.
Background
With the rapid development and wide application of the internet of things technology, more and more industries enjoy the high-efficiency convenience brought by the internet of things. The technology of the internet of things is applied to the education industry, and convenience and universality of education are improved. The intelligent classroom is used as a teaching kit frequently applied in the modern teaching process and is applied to a plurality of schools, colleges and universities.
Attendance is an important basis for measuring a person's learning or working state, so class attendance is an important link in daily teaching management. Managing class attendance well plays an important role in improving teaching quality.
The usual in-class attendance methods are a teacher calling the roll or passing around a paper sign-in sheet. The inventor believes that although these methods do record attendance, they are not convenient for quickly registering everyone's attendance information, and they make it easy for one person to check in on behalf of another.
Disclosure of Invention
The invention aims to provide a non-contact attendance checking method for an intelligent classroom that can recognize a person's identity even when the person's face is occluded.
The above object of the present invention is achieved by the following technical solutions:
a non-contact attendance checking method for a smart classroom is characterized by comprising the following steps:
establishing a character information database, and collecting character characteristics, wherein the character characteristics comprise face characteristic information, arm characteristic information and body type characteristic information of a character;
acquiring continuous frame number images when a person enters a classroom;
carrying out image processing on the continuous frame number images and identifying the character outline in the continuous frame number images to obtain continuous frame number character outline images;
extracting character features in the character outline images with the continuous frames, and clustering the character features to obtain a character feature matrix;
and sequentially comparing the character feature matrix with a preset character information database to obtain character similarity, matching the identity information of the characters according to the character similarity, and inquiring and registering attendance information of the attendance characters according to the identity information.
By adopting the above technical solution, a character information database is first established in advance from character features, recording the face feature information, arm feature information, and body type feature information of each character as reference data. When a person enters the classroom and passes through the classroom door, the person is photographed to obtain continuous frame number images, so that the person's feature information in different states is captured and recognition is not limited by a single image. The continuous frame number images are processed several times to make the details of the captured images more distinct, and character outline recognition is performed on the processed images to separate the characters from environmental features, yielding continuous frame number character outline images. From these, the face feature information, arm feature information, and body type feature information of each character are extracted as key recognition points and clustered separately, so that related feature information is grouped into three clustered feature matrices. The character feature matrices are compared with the character feature information in the character information database to obtain similarity values for the face, arm, and body type features respectively, and the character similarity is calculated from preset weights and the corresponding similarities, thereby confirming the identity information of the attending character.
The present invention in a preferred example may be further configured to: and the image processing and character outline identification of the continuous frame number image comprises gray processing, character edge sharpening processing and denoising processing of the image to obtain a continuous frame number character processing image.
By adopting the above technical solution, when the continuous frame number images are received, grayscale processing greatly reduces the amount of data contained in the images, which speeds up subsequent image processing and improves recognition efficiency while still reflecting image detail; character edge sharpening improves the definition of image edges and increases image contrast; denoising reduces the influence of noise on image quality. After this processing, the continuous frame number character processing images are better suited to identifying character information.
The present invention in a preferred example may be further configured to: the image processing and person contour recognition of the continuous frame number images further comprises:
identifying and segmenting the figure outline boundary in the figure processing images with the continuous frame number to obtain figure segmentation images with the continuous frame number;
carrying out figure outline compensation on the figure segmentation images with the continuous frame number and carrying out associative filling on cavities of the figure segmentation images with the continuous frame number to obtain figure outline processing images with the continuous frame number;
and extracting character features from the character profile images with the continuous frames, and identifying the face feature information, the arm feature information and the body shape feature information of the characters to obtain the character profile images with the continuous frames.
By adopting the technical scheme, the figure outline information in the figure processing images with continuous frames is identified and segmented, so that the figures are distinguished from the background, interference factors are reduced for subsequent figure feature identification, and the identification speed is improved; the figure outline can be more completely reflected by the figure outline compensation and the cavity filling, and the figure feature extraction is facilitated. The character features are extracted according to the character outline images with the continuous frames, and the face feature information, the arm feature information and the body shape feature information are recognized, so that the face feature information, the arm feature information and the body shape feature information can be used as feature references for character identity recognition, and the character features in a character information database can be compared conveniently.
The present invention in a preferred example may be further configured to: the extracting of the character features in the character outline images with the continuous frames and the clustering of the character features to obtain the character feature matrix comprise:
the method comprises the steps of obtaining face characteristic information, arm characteristic information and body type characteristic information in figure outline images with continuous frames, and clustering the face characteristic information, the arm characteristic information and the body type characteristic information respectively to obtain a face characteristic matrix, an arm characteristic matrix and a body type characteristic matrix, wherein each row vector data in the matrix is an image with different frames of the same person.
By adopting the technical scheme, the face characteristic information, the arm characteristic information and the body type characteristic information are respectively clustered, so that objects with the same characteristics are classified, a face characteristic matrix, an arm characteristic matrix and a body type characteristic matrix are obtained, and the similarity between elements and an image to be compared is higher when the characteristic matrices are compared.
The present invention in a preferred example may be further configured to: the step of sequentially comparing the character feature matrix with a preset character information database to obtain the character similarity comprises the following steps:
and respectively comparing the arm characteristic information and the body type characteristic information in the face characteristic matrix, the arm characteristic matrix, the body type characteristic matrix and the similar figure information matrix to obtain the face similarity, the arm similarity and the body type similarity.
By adopting the above technical solution, when the person's face is occluded and the collected face information is incomplete, the arm similarity and body type similarity can serve as references for identifying the person, improving the likelihood of successful recognition.
The present invention in a preferred example may be further configured to: the step of sequentially comparing the character feature matrix with a preset character information database to obtain the character similarity further comprises the following steps:
the figure similarity is obtained by respectively calculating the face similarity, the arm similarity and the body type similarity with a preset face weight, an arm weight and a body type weight, and the identity information of the attendance figure is determined by comparing the figure similarity with a preset figure threshold value.
By adopting the technical scheme, the figure similarity is obtained by calculating the preset weight and the corresponding feature similarity, and the influence on the recognition result caused by one or more of the face similarity, the arm similarity and the body type similarity is avoided.
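The weighted fusion described above can be sketched as follows. The weight values and the decision threshold here are illustrative assumptions for the sketch; the patent does not disclose concrete numbers.

```python
# Hypothetical weights and threshold; the patent only states that preset
# face, arm, and body type weights are combined with the similarities.
FACE_WEIGHT, ARM_WEIGHT, BODY_WEIGHT = 0.6, 0.2, 0.2
PERSON_THRESHOLD = 0.75

def person_similarity(face_sim, arm_sim, body_sim):
    """Fuse the three feature similarities into one character similarity."""
    return (FACE_WEIGHT * face_sim
            + ARM_WEIGHT * arm_sim
            + BODY_WEIGHT * body_sim)

def is_match(face_sim, arm_sim, body_sim):
    """A character is matched when the fused similarity clears the threshold."""
    return person_similarity(face_sim, arm_sim, body_sim) >= PERSON_THRESHOLD
```

Because no single similarity dominates, a strong arm and body type match can still carry an occluded face, which is the behavior the text attributes to the method.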
The invention also aims to provide a non-contact attendance checking device for the intelligent classroom that can recognize a person's identity even when the person's face is occluded.
The second aim of the invention is realized by the following technical scheme:
a smart classroom contactless attendance device, the device comprising:
the figure information database module is used for storing the identity information, attendance checking information, face characteristic information, arm characteristic information and body type characteristic information of a figure;
the image acquisition module is used for acquiring images of people entering a classroom to obtain images with continuous frame numbers;
the image processing module is used for carrying out image processing on the continuous frame number images and identifying the character outline in the continuous frame number images to obtain a character outline processing image with the continuous frame number;
the character feature clustering module is used for clustering character features in the continuous frame number character contour processing images to obtain a character feature matrix, wherein each row of the character feature matrix corresponds to the character features of the same person in different frames;
and the figure characteristic comparison module is used for comparing the figure characteristics in the figure characteristic matrix with the figure characteristics stored in a preset figure information database to respectively obtain the face similarity, the arm similarity and the body type similarity.
By adopting the above technical solution, the character information database module first records character feature information, identity, attendance, and related information as the reference standard for identification and comparison. The image acquisition module obtains continuous frame number images by photographing people entering the classroom and sends them to the image processing module. The image processing module processes the received images so that the character outlines become clearer and more distinct, performs character outline recognition and processing, and sends the character outline processing images to the character feature clustering module, which clusters the face feature information, arm feature information, and body type feature information separately so that related feature information is grouped into a character feature matrix. The matrix is sent to the character feature comparison module, which compares the character features with the preset character information database to obtain the feature similarities, from which the character similarity can be calculated and the identity information of the character determined.
The third purpose of the present invention is to provide an electronic device that stores and executes the non-contact attendance checking method, ensuring the normal operation of the method.
The third object of the invention is realized by the following technical scheme:
an electronic device comprises a memory and a processor, wherein the memory stores a computer program which can be loaded by the processor and executes any one of the intelligent classroom non-contact attendance checking methods.
By adopting the technical scheme, the memory is used for storing the computer program using the non-contact attendance checking method of the intelligent classroom, and the computer program stored in the memory can control the operation of each module through the processor.
The invention also provides a computer storage medium which can store corresponding programs and has the characteristic of being convenient for realizing the application of the intelligent classroom non-contact attendance method.
The fourth object of the invention is realized by the following technical scheme:
a computer readable storage medium storing a computer program capable of being loaded by a processor and executing any one of the above-mentioned smart classroom non-contact attendance checking methods.
In summary, the invention includes at least one of the following beneficial technical effects:
by integrating face recognition, arm recognition, and body type recognition, identity information can be recognized even for people whose faces are occluded, reducing as much as possible the chance of others checking in on their behalf;
by adopting face-recognition clock-in attendance, the convenience of checking in is improved and the congestion caused by queuing for attendance is reduced as much as possible.
Drawings
FIG. 1 is an overall schematic block diagram of the present invention;
FIG. 2 is a schematic diagram of the process for image processing and contour recognition shown in FIG. 1;
FIG. 3 is a schematic flow chart for obtaining the character similarity in FIG. 1.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The embodiment of the invention provides a non-contact attendance checking method for an intelligent classroom. Referring to FIG. 1, the method comprises the following steps:
s10, establishing a character information database, and collecting character characteristics, wherein the character characteristics comprise face characteristic information, arm characteristic information and body type characteristic information of a character.
In this embodiment, when a student enrolls, a group of character feature images is taken of each person. These images are used to acquire the person's latest face feature information, arm feature information, and body type feature information, which are matched with the person's enrollment information and course information and entered into the character information database.
In order to improve the recognition accuracy, the sampling angles should be as many as possible when taking a character feature picture, and in this embodiment, the sampling angles include side view of the character, raising hand of the character, turning head of the character, walking motion of the character, and the like. The more detailed the angle and posture of the sampling, the more accurate the character identity comparison result. The character characteristics of the character are collected from different angles and different postures, and the face characteristic information, the arm characteristic information and the body type characteristic information of the character are recorded, wherein the body type characteristic information comprises shoulder width, crotch width and the like.
When the human face is collected, the positions of the five sense organs and the structural parameters between the five sense organs of the person are recorded according to the image information shot from different angles, the human face characteristic information presentation effects of the head of the person under various postures are recorded, and the human face characteristic information under different states is recorded in the person information database.
S20, acquiring continuous frame number images when a person enters a classroom;
specifically, the reference shooting positions of the images with the continuous frames are determined, the door frame of a classroom can be shot in the collected images, and the size of the door frame of the classroom is used as a reference, so that the person characteristic information of a person can be calculated conveniently.
The shooting time of the continuous frame number images is also determined: when a person enters the classroom through the door, the person is photographed to obtain the continuous frame number images. In this embodiment, ten consecutive frames are captured, covering the person's passage from entering the classroom doorway to clearing it. Once the continuous frame number images are acquired, the image acquisition module transmits them to the image processing module, and the following step S30 is performed.
S30, processing the continuous frame number image and identifying the human contour in the continuous frame number image to obtain the continuous frame number human contour image.
And S31, carrying out gray processing, character edge sharpening processing and denoising processing on the image to obtain a character processing image with continuous frame numbers.
Specifically, referring to FIG. 2, grayscale processing converts each color image in the continuous frame number images into a grayscale image. A color image contains the three components R, G, and B, which combine to form the various colors. Each component ranges from 0 to 255, so the number of possible combinations is 256 x 256 x 256, about 16.78 million.
Grayscale processing makes the R, G, and B components of each pixel equal. A pixel with the maximum gray value of 255 is white, and one with the minimum value of 0 is black, so the data range for each pixel is reduced to 256 levels.
Converting the continuous frame number images to grayscale improves their usable quality and reveals more detail: for example, when an image is dark, details in shadow are hard to identify, but after grayscale processing they can be displayed more clearly, which aids subsequent processing. It also reduces the amount of data to be computed during image processing. The grayscale method used in this embodiment may be the maximum value method, the average value method, or the weighted average method.
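The three grayscale options named above can be sketched as follows. The embodiment does not specify weights for the weighted average method, so the common ITU-R BT.601 luma weights are assumed here.

```python
import numpy as np

def to_gray(rgb, method="weighted"):
    """Convert an H x W x 3 uint8 RGB image to a single-channel gray image.

    Implements the three options mentioned in the text: maximum value,
    average value, and weighted average (assumed BT.601 weights)."""
    rgb = rgb.astype(np.float64)
    if method == "max":
        gray = rgb.max(axis=2)          # brightest of the three components
    elif method == "average":
        gray = rgb.mean(axis=2)         # plain arithmetic mean
    else:
        # weighted average; 0.299/0.587/0.114 are an assumed choice
        gray = rgb @ np.array([0.299, 0.587, 0.114])
    return gray.astype(np.uint8)
```

Whichever method is chosen, the three channels collapse to one, which is the data reduction from roughly 16.78 million combinations to 256 levels described above.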
When the USM sharpening enhancement algorithm is used for sharpening the gray image, the expression of the USM sharpening enhancement algorithm is as follows:
y(n,m)=x(n,m)+λz(n,m);
where x(n, m) is the input image, y(n, m) is the output image, and n and m are the horizontal and vertical coordinates in the image, so (n, m) denotes the pixel position. λ is a scaling factor controlling the strength of the enhancement; the result is adjusted by changing λ. z(n, m) is the correction signal, typically obtained by high-pass filtering x. In the USM algorithm, z(n, m) is usually computed as:
z(n, m) = 4x(n, m) - x(n-1, m) - x(n+1, m) - x(n, m-1) - x(n, m+1);
the content of the high-frequency part of the image can be enhanced by sharpening the gray image, so that the visual effect of the image can be greatly improved. In this embodiment, the sharpening process is to use a gray image as an input image, and to calculate a relationship between the input image and a correction signal, a bright line and a dark line may be generated on both sides of the edge of a person, so that the whole image is clearer, and the person and the background are conveniently separated, thereby obtaining a sharpened image of persons with consecutive frames.
A denoising algorithm based on partial differential equations (PDEs) is then applied to the continuous frame number character sharpened images. During denoising, the strength and direction of image features are detected simultaneously: regions with strong image features are smoothed only slightly, while regions with weak features are smoothed strongly. This removes noise while preserving the image edges well, yielding the continuous frame number character processing images.
And S32, recognizing and dividing the human outline boundary in the human processing images with the continuous frame number to obtain human divided images with the continuous frame number.
Specifically, referring to FIG. 2, the character outline boundaries in the continuous frame number character processing images are identified and segmented by the Mask R-CNN algorithm. Mask R-CNN is an instance segmentation framework that, with different branches added, can perform tasks such as object classification, object detection, semantic segmentation, instance segmentation, and human pose estimation.
First, the continuous frame number character processing images are received and input into a pre-trained neural network, such as ResNeXt, to obtain the corresponding feature maps;
setting an ROI (region of interest) for each point in the feature maps, yielding multiple candidate ROIs; an ROI is a region of interest in the original image, which can be understood as a candidate box for object detection, and in this embodiment the region of interest is the character outline region;
inputting the candidate ROIs into an RPN (region proposal network) for binary classification and bounding-box (BB) regression to screen them, performing an ROI Align (region feature aggregation) operation on the screened ROIs, and then aligning the character outline features in the feature maps with the actual character outlines;
and performing N-class classification, box regression, and mask generation on these ROIs, i.e., performing a pixel-level classification operation inside each ROI.
When only one person exists in the image, segmenting the person outline from the background environment by using a Mask Rcnn algorithm to obtain a single person outline image;
when multiple people appear in an image and occlude one another, the character outlines must be segmented not only from the background environment but also from each other, so that the mutually occluded character features are identified and separated into independent outline information; this yields the continuous frame number character segmentation images and makes it convenient to extract each person's character feature information.
And S33, performing character contour compensation on the continuous frame number character segmentation images and performing associative filling on holes of the continuous frame number character segmentation images to obtain continuous frame number character contour processing images.
Specifically, a flood-fill algorithm is used to fill the holes generated during edge detection. When a person's body is occluded by an object or by other persons, the contour obtained after the consecutive-frame person images are identified and segmented by the Mask R-CNN algorithm may be discontinuous or cover only part of the complete contour; the consecutive-frame person contour processed images are therefore obtained after filling with the flood-fill method.
Flood filling fills a connected region with a specific value; different filling effects are achieved by setting the upper and lower bounds of connectable pixel values and the connectivity mode. In image processing, a seed point is given as the starting point, the fill spreads from the seed to the neighbouring pixels, all points with the same or similar values are found, and a new value is written into them; these points form one connected region, filled like a spreading flood. After recognition and segmentation, the pixel values along a discontinuity in the person contour are the same, so by giving one seed point and spreading to the neighbouring pixels, the discontinuity in the contour is filled.
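The seed-and-spread behaviour described above can be sketched as a breadth-first flood fill on a pixel grid (a minimal sketch; the `tolerance` parameter stands in for the upper/lower connectivity bounds, and 4-connectivity is assumed):

```python
from collections import deque

def flood_fill(grid, seed, new_value, tolerance=0):
    """Flood-fill a connected region: starting from the seed point, spread to
    4-connected neighbours whose value is within `tolerance` of the seed value
    and overwrite them with `new_value`."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = seed
    seed_value = grid[r0][c0]
    if seed_value == new_value:
        return grid
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if (0 <= r < rows and 0 <= c < cols
                and grid[r][c] != new_value
                and abs(grid[r][c] - seed_value) <= tolerance):
            grid[r][c] = new_value
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

# A hole (0s) inside a contour of 1s is filled to match the contour
grid = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(flood_fill(grid, (1, 1), 1))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

OpenCV's `cv2.floodFill` provides the same operation with configurable `loDiff`/`upDiff` bounds for production use.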
S34, extracting the person features from the consecutive-frame person contour processed images and recognizing the face feature information, arm feature information and body-shape feature information of each person, and then executing the following step S40.
And S40, extracting the person features in the consecutive-frame person contour images and clustering them to obtain a person feature matrix.
Specifically, once the face feature information, arm feature information and body-shape feature information in the consecutive-frame person contour images have been obtained, the three kinds of information are clustered separately to obtain a face feature matrix, an arm feature matrix and a body-shape feature matrix, where each row vector of a matrix holds the images of the same person across different frames.
In this embodiment, the Chinese Whispers (CW) clustering algorithm may be used. CW is a randomized graph-clustering algorithm whose running time is linear in the number of edges: an undirected graph is built with each feature vector as a node and the similarity between feature vectors as the edge weight between nodes, and categories are found by iteratively assigning each node the label with the largest accumulated similarity weight among its neighbours.
In this embodiment, the face feature information, arm feature information and body-shape feature information are clustered as feature vectors, yielding three undirected graphs. Persons across different frames are thereby organized into a two-dimensional person feature matrix in which each row vector holds the features of one person across different frames, and different attendance subjects occupy different rows; person feature recognition can thus be carried out for multiple persons simultaneously, improving recognition efficiency.
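The iterative label-passing step of Chinese Whispers can be sketched on a small similarity graph (a minimal sketch; edge weights and the fixed random seed are illustrative):

```python
import random

def chinese_whispers(edges, n_nodes, iterations=20, seed=0):
    """Chinese Whispers graph clustering: each node repeatedly adopts the
    label with the largest accumulated edge weight among its neighbours."""
    rng = random.Random(seed)
    labels = list(range(n_nodes))          # every node starts in its own class
    neighbours = {i: [] for i in range(n_nodes)}
    for a, b, w in edges:                  # undirected graph: similarity = weight
        neighbours[a].append((b, w))
        neighbours[b].append((a, w))
    for _ in range(iterations):
        order = list(range(n_nodes))
        rng.shuffle(order)                 # visit nodes in random order
        for node in order:
            weight_per_label = {}
            for nb, w in neighbours[node]:
                weight_per_label[labels[nb]] = weight_per_label.get(labels[nb], 0) + w
            if weight_per_label:
                labels[node] = max(weight_per_label, key=weight_per_label.get)
    return labels

# Two tight groups of feature nodes joined by one weak cross edge
edges = [(0, 1, 0.9), (1, 2, 0.9), (0, 2, 0.9),
         (3, 4, 0.9), (4, 5, 0.9), (3, 5, 0.9),
         (2, 3, 0.1)]
labels = chinese_whispers(edges, 6)
print(labels[0] == labels[1] == labels[2], labels[3] == labels[4] == labels[5])  # True True
```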
And S50, sequentially comparing the person feature matrix with a preset person information database to obtain the person similarity, matching the identity information of each person according to the person similarity, and querying and registering the attendance information of the attendance subject according to the identity information. The step of sequentially comparing the person feature matrix with the preset person information database to obtain the person similarity comprises the following steps:
S51, referring to FIG. 3, comparing the face feature matrix, the arm feature matrix and the body-shape feature matrix with the corresponding similar-person information to obtain the face similarity, the arm similarity and the body-shape similarity respectively.
Specifically, the face feature matrix is sequentially compared with the face feature information in the person information database to obtain the face similarity. The facial features in the face image are located as feature points to obtain facial feature data, from which a face model is constructed; the face model is compared with the face feature information stored in the person information database to obtain similarity data, and the following step S52 is executed.
This embodiment does not restrict how the feature points of the face image to be recognized are located. For example, the positions of the facial feature points (eyes, eyebrows, nose, mouth and facial outer contour) may be determined by the ASM algorithm (Active Shape Model), the AAM algorithm (Active Appearance Model) or the dlib face-detection algorithm, and a face feature training set is established. The face feature training set is fed into a convolutional neural network (CNN) model for training, yielding the face features in each frame of the face image to be recognized. Preferably, the consecutive-frame image data are grouped in order, each group containing N consecutive frames with N > 1; face feature points are located for each group, and the face features in each frame are extracted through a preset convolutional neural network model. In this embodiment of the invention N is 10 and, correspondingly, the length of the face feature list is 10: the received consecutive-frame image data are analysed in successive groups of 10 frames. It should be noted that the consecutive-frame image data received by the device are video data; the video is divided into several fixed-length sub-videos, and a target frame of each sub-video is screened by face image quality to serve as the face image to be recognized, which reduces the amount of computation and improves recognition efficiency.
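The grouping of consecutive frames into fixed-length batches described above is straightforward to sketch (function name is illustrative; a trailing partial group is kept here, which the patent does not specify):

```python
def group_frames(frames, n=10):
    """Split consecutive frame data into fixed-length groups of n frames;
    a trailing partial group is kept so no frames are dropped."""
    return [frames[i:i + n] for i in range(0, len(frames), n)]

frames = list(range(25))            # 25 consecutive frames
groups = group_frames(frames, n=10)
print(len(groups), [len(g) for g in groups])  # 3 [10, 10, 5]
```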
Specifically, the convolutional neural network treats the acquired image as a numeric matrix in which each pixel stores part of the image information. The pixels are multiplied element-wise by a defined convolution kernel and summed, and the result serves as a feature value of the convolutional layer, which extracts the features of each small part of the image. Intuitively, the image is first divided into many small parts, and the convolution kernel can be understood as a sieve shaped like a human face: each part is passed through the sieve to obtain a feature response for the face shape. The larger the response, the more relevant that part is to the face shape; conversely, a small response indicates an environmental feature irrelevant to the face.
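The multiply-and-sum operation just described is ordinary 2D convolution; a minimal sketch with a simple averaging kernel (not the patent's trained kernels):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image, multiply the
    overlapping pixels element-wise and sum them into one feature value."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2)) / 4           # a simple averaging kernel
feat = conv2d(image, kernel)
print(feat.shape)  # (3, 3) — each value is the mean of one 2x2 patch
```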
The images processed by the convolutional layer then pass through the pooling layer, while the fully connected layer classifies and identifies the extracted features. Pooling reduces the number of training parameters and the dimensionality of the feature vectors output by the convolutional layer, mitigates the overfitting that can arise during convolutional processing, and keeps only the most useful image information while suppressing the propagation of noise. After the convolution kernels have been applied, several feature maps are obtained, each containing many feature elements; the pooling layer computes one representative element from each region of a feature map to replace the others, compressing each feature map and reducing the amount of data to process. Two forms of pooling are commonly used:
maximum pooling: the largest value in the designated region is selected to represent the whole region;
mean pooling: the average of the values in the designated region is selected to represent the whole region.
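Both pooling forms can be sketched in a few lines of NumPy (non-overlapping 2x2 windows assumed; function name is illustrative):

```python
import numpy as np

def pool2d(feature, size=2, mode="max"):
    """Non-overlapping pooling: compress each size x size window of a
    feature map into one value (max or mean), shrinking the data volume."""
    h, w = feature.shape
    view = feature[:h - h % size, :w - w % size].reshape(h // size, size, w // size, size)
    if mode == "max":
        return view.max(axis=(1, 3))
    return view.mean(axis=(1, 3))

feature = np.array([[1, 2, 5, 6],
                    [3, 4, 7, 8],
                    [0, 0, 1, 1],
                    [0, 4, 1, 1]], dtype=float)
print(pool2d(feature, mode="max").tolist())   # [[4.0, 8.0], [4.0, 1.0]]
print(pool2d(feature, mode="mean").tolist())  # [[2.5, 6.5], [1.0, 1.0]]
```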
The images processed by the pooling layer are mapped through the fully connected layer, which maps the multi-dimensional feature input to a two-dimensional feature output, thereby identifying the features in the image. If the processed image contains a face, the fully connected layer picks out the facial features of the person and classifies them, compares the classified features with the preset images, determines that the features are human facial features rather than, say, a door panel or a wall, and thus distinguishes persons from other objects.
After the face similarities are obtained, they are sorted into a face-similarity queue. By comparing each similarity in the queue with a preset face threshold, the persons whose face similarity exceeds the threshold are selected as similar persons, and their arm feature information and body-shape feature information are queried to form a similar-person arm group and a similar-person body-shape group.
The arm feature information and body-shape feature information in the arm feature matrix and the body-shape feature matrix are extracted separately: the shoulders, wrists and hips are located as feature points, the shoulder-to-wrist distance is taken as the arm feature information, and the shoulder width (distance between the shoulders) together with the hip width is taken as the body-shape feature information;
the arm feature information and the body-shape feature information are then compared with the similar-person arm group and the similar-person body-shape group respectively, obtaining the corresponding arm similarity and body-shape similarity;
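The keypoint-distance features described above can be sketched directly (keypoint names and coordinates are illustrative; only the left arm is shown):

```python
import math

def body_features(keypoints):
    """Compute the features described above from located keypoints:
    arm length = shoulder-to-wrist distance; body shape = shoulder width
    plus hip width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return {
        "arm_length": dist(keypoints["left_shoulder"], keypoints["left_wrist"]),
        "shoulder_width": dist(keypoints["left_shoulder"], keypoints["right_shoulder"]),
        "hip_width": dist(keypoints["left_hip"], keypoints["right_hip"]),
    }

kp = {"left_shoulder": (0, 0), "right_shoulder": (40, 0),
      "left_wrist": (0, 60), "left_hip": (5, 55), "right_hip": (35, 55)}
feats = body_features(kp)
print(feats)  # {'arm_length': 60.0, 'shoulder_width': 40.0, 'hip_width': 30.0}
```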
and S52, referring to FIG. 3, calculating the face similarity, the arm similarity and the body type similarity with the face weight, the arm weight and the body type weight respectively to obtain the person similarity, and comparing the person similarity with a preset person threshold value to determine the identity information of the attendance checking person.
When the face similarity, the arm similarity and the body type similarity are obtained, calculating the figure similarity according to corresponding weights, wherein the weights comprise a face weight, an arm weight and a body type weight which respectively correspond to the face similarity, the arm similarity and the body type similarity;
the person similarity is calculated as follows:
person similarity = face weight × face similarity + arm weight × arm similarity + body-shape weight × body-shape similarity;
in this embodiment, the face weight is 0.7, the arm weight is 0.1, and the body-shape weight is 0.2.
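The weighted fusion with this embodiment's weights (0.7 / 0.1 / 0.2) is a one-liner:

```python
def person_similarity(face, arm, body, w_face=0.7, w_arm=0.1, w_body=0.2):
    """Weighted fusion of the three similarities using the weights
    given in this embodiment."""
    return w_face * face + w_arm * arm + w_body * body

print(round(person_similarity(0.9, 0.8, 0.7), 2))  # 0.85
```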
The obtained person similarities are compared with the preset person threshold; the largest similarity exceeding the threshold is selected, the person information corresponding to it is queried as the attendance subject information, and the attendance registration time is recorded;
when the registration time of the attendance subject is earlier than the specified attendance time and the difference between the two is smaller than a preset first time threshold, the registration is confirmed as valid and a prompt is issued with the subject's identity information and the valid attendance result;
when the registration time of the attendance subject is later than the specified attendance time, the registration is confirmed as invalid and a prompt is issued with the subject's identity information and the invalid attendance result.
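The validity rule above can be sketched with standard datetimes. Note the patent leaves unspecified what happens when registration is earlier than the scheduled time by more than the first threshold; treating that case as invalid is an assumption made here, as are the function name and the 30-minute window:

```python
from datetime import datetime, timedelta

def check_attendance(registered_at, scheduled_at, early_window=timedelta(minutes=30)):
    """Validity rule from the steps above: registration is valid only if it
    occurs before the scheduled time and within the first time threshold."""
    if registered_at > scheduled_at:
        return "invalid: late"
    if scheduled_at - registered_at > early_window:
        return "invalid: too early"   # assumption: patent does not cover this case
    return "valid"

scheduled = datetime(2021, 9, 13, 8, 0)
print(check_attendance(datetime(2021, 9, 13, 7, 50), scheduled))  # valid
print(check_attendance(datetime(2021, 9, 13, 8, 5), scheduled))   # invalid: late
```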
Example two:
the embodiment of the invention provides a non-contact attendance checking device for a smart classroom, which comprises:
the figure information database module is used for storing the identity information, attendance checking information, face characteristic information, arm characteristic information and body type characteristic information of a figure;
the image acquisition module is used for acquiring images of people entering a classroom to obtain images with continuous frame numbers;
the image processing module is used for carrying out image processing on the continuous frame number images and identifying the character outline in the continuous frame number images to obtain a character outline processing image with the continuous frame number;
the person feature clustering module is used for clustering the person features in the consecutive-frame person contour processed images to obtain a person feature matrix, wherein each row of the person feature matrix corresponds to the features of the same person in images of different frames;
and the figure characteristic comparison module is used for comparing the figure characteristics in the figure characteristic matrix with the figure characteristics stored in a preset figure information database to respectively obtain the face similarity, the arm similarity and the body type similarity.
Wherein, the image processing module includes:
the image gray-scale processing unit is used for performing gray-scale conversion on the consecutive-frame images acquired by the image acquisition module, reducing the data contained in the color images so as to speed up subsequent image processing;
the image sharpening processing unit receives the gray level image from the image gray level processing unit and sharpens the gray level image to enable the edge of a person in the image to be more obvious, so that a person sharpened image with continuous frames is obtained;
the image denoising processing unit is internally preset with a PDE-based denoising algorithm and is used for denoising the person sharpened image with the continuous frame number, reducing interference noise in the image and obtaining a person processed image with the continuous frame number;
the person contour recognition and segmentation unit is used for recognizing the persons in the consecutive-frame processed images through the Mask R-CNN algorithm, segmenting them according to their contours and extracting the person contour images;
when a person contour is occluded by an object or by other persons and is therefore incomplete, the discontinuous contour boundaries are connected using morphological processing, making the contour clearer and the person features easier to extract.
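The morphological processing mentioned here is typically a closing (dilation followed by erosion); a minimal sketch with a horizontal 1x3 structuring element, which is enough to bridge a one-pixel break in a contour line (the structuring element and function names are illustrative):

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a horizontal 1x3 structuring element."""
    out = mask.copy()
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion, defined as the dual of dilation."""
    return ~dilate(~mask)

def close_gaps(mask):
    """Morphological closing (dilation followed by erosion) bridges small
    breaks in a discontinuous contour without thickening it overall."""
    return erode(dilate(mask))

# A contour line with a one-pixel break at column 3 is reconnected
mask = np.zeros((3, 7), dtype=bool)
mask[1, :3] = True
mask[1, 4:] = True
closed = close_gaps(mask)
print(closed[1].astype(int).tolist())  # [1, 1, 1, 1, 1, 1, 1]
```

In practice `cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)` performs the same operation with an arbitrary structuring element.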
Example three:
an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program that can be loaded by the processor to execute any one of the methods above. Specifically, the electronic device includes a computer, a mobile phone, a tablet, an e-reader, and the like.
Example four:
the embodiment of the invention provides a computer-readable storage medium storing a computer program that can be loaded by a processor to execute any one of the methods above. Those skilled in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the related hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM) and direct Rambus dynamic RAM (DRDRAM).
The above embodiments merely illustrate the present invention and do not limit it. After reading this specification, those skilled in the art can modify these embodiments as needed without inventive contribution, but all such modifications are protected by patent law within the scope of the claims of the present invention.

Claims (9)

1. A non-contact attendance checking method for a smart classroom is characterized by comprising the following steps:
establishing a character information database, and collecting character characteristics, wherein the character characteristics comprise face characteristic information, arm characteristic information and body type characteristic information of a character;
acquiring continuous frame number images when a person enters a classroom;
carrying out image processing on the continuous frame number images and identifying the character outline in the continuous frame number images to obtain continuous frame number character outline images;
extracting character features in the character outline images with the continuous frames, and clustering the character features to obtain a character feature matrix;
and sequentially comparing the character feature matrix with a preset character information database to obtain character similarity, matching the identity information of the characters according to the character similarity, and inquiring and registering attendance information of the attendance characters according to the identity information.
2. The method of claim 1, wherein the image processing and the person contour recognition for the consecutive frames of images comprises performing a gray scale process, a person edge sharpening process, and a denoising process for the images to obtain consecutive frames of images of the person process.
3. The method of claim 2, wherein the image processing and person contour recognition of the consecutive frame number images further comprises:
identifying and segmenting the figure outline boundary in the figure processing images with the continuous frame number to obtain figure segmentation images with the continuous frame number;
carrying out figure outline compensation on the figure segmentation images with the continuous frame number and carrying out associative filling on cavities of the figure segmentation images with the continuous frame number to obtain figure outline processing images with the continuous frame number;
and extracting character features from the character profile images with the continuous frames, and identifying the face feature information, the arm feature information and the body shape feature information of the characters to obtain the character profile images with the continuous frames.
4. The method of claim 1, wherein the extracting the character features of the character outline images with the consecutive frames and clustering the character features to obtain the character feature matrix comprises:
the method comprises the steps of obtaining face characteristic information, arm characteristic information and body type characteristic information in figure outline images with continuous frames, and clustering the face characteristic information, the arm characteristic information and the body type characteristic information respectively to obtain a face characteristic matrix, an arm characteristic matrix and a body type characteristic matrix, wherein each row vector data in the matrix is an image with different frames of the same person.
5. The method of claim 4, wherein the sequentially comparing the character feature matrix with a preset character information database to obtain the similarity of characters comprises:
and respectively comparing the arm characteristic information and the body type characteristic information in the face characteristic matrix, the arm characteristic matrix, the body type characteristic matrix and the similar figure information matrix to obtain the face similarity, the arm similarity and the body type similarity.
6. The method of claim 5, wherein the sequentially comparing the character feature matrix with a predetermined character information database to obtain the similarity of characters further comprises:
the figure similarity is obtained by respectively calculating the face similarity, the arm similarity and the body type similarity with a preset face weight, an arm weight and a body type weight, and the identity information of the attendance figure is determined by comparing the figure similarity with a preset figure threshold value.
7. A non-contact attendance checking device for a smart classroom, characterized in that the device comprises:
the figure information database module is used for storing the identity information, attendance checking information, face characteristic information, arm characteristic information and body type characteristic information of a figure;
the image acquisition module is used for acquiring images of people entering a classroom to obtain images with continuous frame numbers;
the image processing module is used for carrying out image processing on the continuous frame number images and identifying the character outline in the continuous frame number images to obtain a character outline processing image with the continuous frame number;
the character feature clustering module is used for clustering character features in the character contour processing images with continuous frames to obtain a character feature matrix, wherein each row of the character feature matrix corresponds to the character features of the same person in images of different frames;
and the figure characteristic comparison module is used for comparing the figure characteristics in the figure characteristic matrix with the figure characteristics stored in a preset figure information database to respectively obtain the face similarity, the arm similarity and the body type similarity.
8. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor and that executes the method according to any of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which can be loaded by a processor and which executes the method of any one of claims 1 to 6.
CN202111065834.7A 2021-09-13 2021-09-13 Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom Pending CN113807229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111065834.7A CN113807229A (en) 2021-09-13 2021-09-13 Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom


Publications (1)

Publication Number Publication Date
CN113807229A true CN113807229A (en) 2021-12-17

Family

ID=78895104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111065834.7A Pending CN113807229A (en) 2021-09-13 2021-09-13 Non-contact attendance checking device, method, equipment and storage medium for intelligent classroom

Country Status (1)

Country Link
CN (1) CN113807229A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825176A (en) * 2016-03-11 2016-08-03 东华大学 Identification method based on multi-mode non-contact identity characteristics
CN110110593A (en) * 2019-03-27 2019-08-09 广州杰赛科技股份有限公司 Face Work attendance method, device, equipment and storage medium based on self study
CN110119673A (en) * 2019-03-27 2019-08-13 广州杰赛科技股份有限公司 Noninductive face Work attendance method, device, equipment and storage medium
CN110648415A (en) * 2019-08-23 2020-01-03 上海科技发展有限公司 Automatic identification attendance checking method, automatic identification attendance checking system, electronic device and medium
CN111914742A (en) * 2020-07-31 2020-11-10 辽宁工业大学 Attendance checking method, system, terminal equipment and medium based on multi-mode biological characteristics
CN112464850A (en) * 2020-12-08 2021-03-09 东莞先知大数据有限公司 Image processing method, image processing apparatus, computer device, and medium
CN112597850A (en) * 2020-12-15 2021-04-02 浙江大华技术股份有限公司 Identity recognition method and device
CN112884961A (en) * 2021-01-21 2021-06-01 吉林省吉科软信息技术有限公司 Face recognition gate system for epidemic situation prevention and control


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173448A (en) * 2023-07-18 2023-12-05 国网湖北省电力有限公司经济技术研究院 Method and device for intelligently controlling and early warning progress of foundation engineering
CN117173448B (en) * 2023-07-18 2024-05-24 国网湖北省电力有限公司经济技术研究院 Method and device for intelligently controlling and early warning progress of foundation engineering


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination