CN115019396A - Learning state monitoring method, device, equipment and medium - Google Patents

Learning state monitoring method, device, equipment and medium Download PDF

Info

Publication number
CN115019396A
Authority
CN
China
Prior art keywords
head
data set
upper limb
image data
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210664223.2A
Other languages
Chinese (zh)
Inventor
Jiang Man (蒋曼)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Education
Original Assignee
Chongqing University of Education
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Education filed Critical Chongqing University of Education
Priority to CN202210664223.2A priority Critical patent/CN115019396A/en
Publication of CN115019396A publication Critical patent/CN115019396A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06398Performance of employee with respect to a job function
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

The invention relates to the technical field of computer vision, and provides a learning state monitoring method, device, equipment and medium. The method comprises: acquiring a learning image data set of students in a classroom, and performing image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set; performing head posture recognition on the head image data set in a second model to obtain a head rotation posture; extracting a skeleton key point sequence from the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior posture; and comparing the head rotation posture and the upper limb behavior posture respectively with preset posture thresholds to obtain a learning state monitoring result. Through image segmentation, the invention avoids mutual interference between different parts of the image during subsequent posture recognition, ensuring the accuracy of the monitoring result; through head posture recognition and upper limb behavior posture recognition, the learning state monitoring efficiency is improved.

Description

Learning state monitoring method, device, equipment and medium
Technical Field
The invention relates to the technical field of computer vision, in particular to a learning state monitoring method, a learning state monitoring device, learning state monitoring equipment and a learning state monitoring medium.
Background
The classroom learning process is the core of school education. The learning state of students during classroom learning not only affects their learning efficiency, but also indirectly reflects the teaching quality of teachers. Assessing the learning state of students can produce effective feedback and teaching guidance, promoting classroom teaching and student development.
At present, assessment of students' learning states mostly depends on teachers judging information such as in-class speaking, behavior and expression while lecturing. This approach consumes too much of the teachers' energy, is inefficient, and also affects the quality of their lectures. In some existing schemes, a video of the teaching process is captured by a camera and the teacher analyzes the students' learning states from the video after class; this still requires manual inspection and cannot guarantee the real-time performance and accuracy of the state information.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a learning state monitoring method, a learning state monitoring device, learning state monitoring equipment and a learning state monitoring medium, so as to solve the problems that the learning state monitoring efficiency of students is low, and the real-time performance and the accuracy cannot be ensured.
In order to achieve the above and other objects, the present invention adopts the following technical solutions.
In an embodiment of the present application, a learning state monitoring method is provided, including:
acquiring a learning image data set of students in a classroom, and performing image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set;
performing head posture recognition on the head image data set in a second model to obtain a head rotation posture;
extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior posture;
and respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain a learning state monitoring result.
In an embodiment of the present application, performing image segmentation on the learning image dataset in a first model to obtain a head image dataset and an upper limb image dataset, includes:
according to a preset graying processing weight, carrying out weighted average on the color components in the learning image data set to obtain a grayed first learning image data set, wherein the weighted average is calculated as: I(x, y) = 0.3*I_R(x, y) + 0.59*I_G(x, y) + 0.11*I_B(x, y), where I(x, y) is the gray value, I_R is the red component of the image, I_G is the green component of the image, and I_B is the blue component of the image;
according to a preset binarization threshold value, performing binarization processing on the first learning image data set to obtain a second learning image data set;
and respectively intercepting a head part and an upper limb part in the second learning image data set according to a preset region of interest to obtain the head image data set and the upper limb image data set.
In an embodiment of the present application, performing head pose recognition on the head image data set in the second model to obtain a head rotation pose includes:
obtaining a face rectangular frame in the head image data set by using a face detection algorithm;
converting the face rectangular frame into face coordinates in an array form, and storing the face coordinates frame by frame to obtain a face coordinate data set;
carrying out feature point detection on the face coordinate data set to obtain a face feature point set, wherein the face feature point set comprises six types of feature points, namely the left canthus, right canthus, nose tip, left mouth corner, right mouth corner and chin, and the feature points are obtained as follows: a feature point detection matrix is established,
H(x, σ) = [[L_xx(x, σ), L_xy(x, σ)], [L_xy(x, σ), L_yy(x, σ)]]
where σ is a scale coefficient, L_xx(x, σ) is the second-order partial derivative in the x direction, L_xy(x, σ) is the second-order partial derivative in the xy direction, and L_yy(x, σ) is the second-order partial derivative in the y direction; the feature points are obtained according to a preset feature point detection matrix threshold parameter δ;
performing head posture recognition on the six types of feature point sets in the second model to obtain a deflection angle;
if the deflection angle is within a preset head-swinging angle range, obtaining a head-swinging posture;
if the deflection angle is within a preset head lowering angle range, obtaining a head lowering posture;
and if the deflection angle is within a preset head raising angle range, obtaining a head raising posture.
In an embodiment of the present application, converting the face rectangle frame into face coordinates in an array form includes:
converting the two-dimensional coordinates in the face rectangular frame into camera coordinates according to the pinhole imaging relationship:
Z_c * [u, v, 1]^T = K * [X_c, Y_c, Z_c]^T
where [u, v] are the two-dimensional image coordinates, [X_c, Y_c, Z_c] are the camera coordinates, and K is the camera intrinsic parameter matrix;
and converting the camera coordinates into world coordinates according to:
[X_c, Y_c, Z_c]^T = R * [X_w, Y_w, Z_w]^T + T
[X_w, Y_w, Z_w]^T = R^(-1) * ([X_c, Y_c, Z_c]^T - T)
where [X_w, Y_w, Z_w] are the world coordinates, T is a translation matrix, and R is a rotation matrix;
and obtaining the face coordinates according to the world coordinates in the head image data set.
In an embodiment of the present application, the extracting a skeleton key point sequence in the upper limb image data set includes:
positioning an upper limb target in the upper limb image data set to obtain an upper limb positioning coordinate;
tracking and identifying the upper limb positioning coordinates through a multi-target fusion tracking network to obtain a tracking result sequence of the upper limb positioning coordinates, wherein the tracking result sequence comprises the association degree of the feature vectors of the upper limb positioning coordinates of the upper and lower frame images in the upper limb image data set;
and obtaining a skeleton key point sequence according to the tracking result sequence.
In an embodiment of the present application, performing behavior recognition on the bone key point sequence in the third model to obtain an upper limb behavior pose includes:
inputting the bone key point sequence into a third model, wherein the third model is a pre-trained bone key point behavior recognition model whose loss function combines a loss term for the predicted values with a loss term for the labeled values, and the bone key point behavior recognition model comprises a confidence detection convolutional layer and a connection relation convolutional layer;
predicting the bone key point sequence in the confidence coefficient detection convolutional layer to obtain the confidence coefficient of the upper limb joint;
predicting the connection relation among all the bone key points in the bone key point sequence in the connection relation convolution layer to obtain a connection relation;
obtaining an upper limb posture skeleton according to the upper limb joint confidence coefficient and the connection relation;
obtaining distance parameters between joint parts in the upper limb posture skeleton, and comparing the distance parameters with preset distance parameter thresholds to obtain an upper limb behavior posture, wherein the upper limb behavior posture comprises lying on the desk, turning around, and standing.
In an embodiment of the present application, after comparing the head rotation posture and the upper limb behavior posture with preset posture thresholds respectively to obtain the learning state monitoring result, the method further comprises:
inputting the head image data set into a pre-trained face recognition model to obtain student identity information;
converting the learning state detection result into a learning state score according to preset weight parameters, wherein the learning state score is calculated as a weighted combination of the monitored behavior counts: S = w_1*N_raise + w_2*N_lower + w_3*N_turn + w_4*N_lie, where N_raise is the number of head-raising occurrences within a preset time range, N_lower is the number of head-lowering occurrences within the preset time range, N_turn is the number of head-turning occurrences within the preset time range, N_lie is the number of lying-on-desk occurrences within the preset time range, and w_1, w_2, w_3 and w_4 are the weights corresponding to the head-raising, head-lowering, head-turning and lying-on-desk behaviors respectively; the learning state score is matched with the student identity information to obtain the learning state score of each student;
and if the learning state score is lower than a preset learning state score threshold value, acquiring the class, seat and course arrangement information of the corresponding student according to the identity information, so that a teacher can use this information to mark the student.
In an embodiment of the present application, a learning state monitoring device is provided, including:
the learning image segmentation module is used for acquiring a learning image data set of students in a classroom and carrying out image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set;
the head posture recognition module is used for carrying out head posture recognition on the head image data set in the second model to obtain a head rotation posture;
the upper limb behavior gesture recognition module is used for extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior gesture;
and the learning state detection result acquisition module is used for respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain a learning state monitoring result.
In an embodiment of the present application, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps of the learning state monitoring method are implemented.
In one embodiment of the application, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of:
acquiring a learning image data set of students in a classroom, and performing image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set;
performing head posture recognition on the head image data set in a second model to obtain a head rotation posture;
extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior posture;
and respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain the learning state information of the student.
In the solutions implemented by the learning state monitoring method, device, computer equipment and storage medium, the learning image data set is first segmented to obtain a head image data set and an upper limb image data set; separating these two different types of image parts effectively avoids interference in the subsequent posture and behavior recognition process and ensures the accuracy of the monitoring result. Head posture recognition and behavior recognition are then performed in the second model and the third model respectively to obtain the head rotation posture and the upper limb behavior posture, and the two postures are combined to obtain the learning state monitoring result; using different recognition models for different image contents ensures the accuracy of the recognition result. The solution in the present application can monitor the learning state on image data acquired in real time, without analyzing the video after class, which ensures the real-time performance of the monitoring result. Compared with a teacher watching the video to obtain the students' learning states, monitoring the learning state through recognition models is more efficient and saves the teacher's energy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
fig. 1 is a schematic diagram of an implementation environment of a learning state monitoring method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flow diagram of a learning state monitoring method shown in an exemplary embodiment of the present application;
fig. 3 is a schematic structural view of a learning state monitoring apparatus according to an exemplary embodiment of the present application;
FIG. 4 illustrates a schematic structural diagram of a computer system suitable for use in implementing the computer device of the embodiments of the present application.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
Reference to "a plurality" in this application means two or more. "and/or" describe the association relationship of the associated objects, meaning that there may be three relationships, e.g., A and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
First, Computer Vision (CV) is a science that studies how to make machines "see"; more specifically, it refers to using cameras and computers instead of human eyes to perform machine vision tasks such as recognition, tracking and measurement of targets, and further performing image processing so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, research on computer-vision-related theories and techniques attempts to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Image segmentation, which is a technique and process for dividing an image into several specific regions with unique properties and proposing an object of interest. It is a key step from image processing to image analysis. The existing image segmentation methods mainly include the following categories: a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, a particular theory-based segmentation method, and the like. From a mathematical point of view, image segmentation is the process of dividing a digital image into mutually disjoint regions. The image segmentation process is also a labeling process, i.e. image indexes belonging to the same region are assigned with the same number.
The skeleton key points are used for describing human body postures and predicting human body behaviors. Therefore, human skeletal key point detection is the basis of many computer vision tasks, such as motion classification, abnormal behavior detection, and automatic driving. In recent years, with the development of deep learning technology, the detection effect of key points of human bones is continuously improved, and the method has started to be widely applied to the related field of computer vision.
A posture recognition model addresses human posture recognition, which is defined as the problem of locating human body key points and has long been an important concern in the field of computer vision. Through deep learning algorithms and convolutional neural networks, a posture recognition model can recognize various joint postures, such as joint points too small to be seen, occluded joint points, and joint points that must be judged from context.
The technical scheme in the embodiment of the application relates to technologies such as big data and computer vision, and is specifically explained by the following embodiment:
fig. 1 is a schematic diagram of an implementation environment of a learning state monitoring method according to an exemplary embodiment of the present application.
Referring to fig. 1, the implementation environment may include a learning state monitoring device 101, a cloud device 102, and a server group 103.
Illustratively, the learning status monitoring device 101 may obtain the learning image data through the cloud device 102 and the server group 103, and may also input the image data into the learning status monitoring device 101 by a relevant technician. The learning state monitoring device 101 acquires a learning image data set of a student in a classroom, and performs image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set; performing head posture recognition on the head image data set in the second model to obtain a head rotation posture; extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior posture; and respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain the learning state information of the student.
Referring to fig. 2, fig. 2 is a flowchart illustrating a learning status monitoring method according to an exemplary embodiment of the present application. The method may be applied to the implementation environment shown in fig. 1 and executed by the learning state monitoring device 101 in that implementation environment. It should be understood that the method may also be applied to other exemplary implementation environments and executed by devices in those environments; this embodiment does not limit the implementation environment to which the method is applied.
As shown in fig. 2, in an exemplary embodiment, the learning state monitoring method includes at least steps S210-S240, which are described in detail as follows:
in step S210, a learning image dataset of a student in a classroom is acquired, and the learning image dataset is subjected to image segmentation in a first model to obtain a head image dataset and an upper limb image dataset.
In an embodiment of the application, the learning images of the students are integrated into a data set, and the learning images in the data set are subjected to image segmentation, the first model may be an image segmentation model, and the head image and the upper limb image are separated in the image segmentation model, so that the influence on the accuracy of the learning state monitoring result due to mutual interference in the subsequent image identification process is avoided.
In one embodiment of the present application, the acquiring of the learning image data set of the student in the classroom in step S210 in fig. 2 includes the following steps:
acquiring video code stream data of students in a classroom in a preset time interval during learning;
decoding the video code stream data to obtain a plurality of frames of learning images;
and integrating the multiple frames of learning images to obtain the learning image data set.
In this embodiment, since the present application mainly performs posture recognition on objects in images, an image data set needs to be acquired first. Learning videos of the students in the classroom are first captured in real time by a camera, and the learning videos are then divided according to a preset interval time. In this embodiment, the learning video is an ordinary code stream video, and the code stream video is decoded to obtain the learning image data set.
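For illustration, a minimal Python sketch of this frame-extraction step is given below, assuming OpenCV (cv2) is used for decoding; the sampling interval and the video source are illustrative assumptions rather than values fixed by the method:

```python
import cv2

def extract_learning_images(video_source, interval_seconds=2.0):
    """Decode a classroom video stream and keep one frame every interval_seconds."""
    capture = cv2.VideoCapture(video_source)          # file path, device index or RTSP URL
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0       # fall back to 25 fps if metadata is missing
    frame_step = max(1, int(round(fps * interval_seconds)))

    learning_images = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % frame_step == 0:
            learning_images.append(frame)              # BGR frame sampled at the preset interval
        frame_index += 1
    capture.release()
    return learning_images                             # the learning image data set for this window
```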
In an embodiment of the application, when performing image segmentation on the learning image dataset in the first model in step S210 in fig. 2 to obtain a head image dataset and an upper limb image dataset, the method includes the following steps:
according to a preset graying processing weight, the color components in the learning image data set are weighted and averaged to obtain a grayed first learning image data set, and the weighted average is calculated as: I(x, y) = 0.3*I_R(x, y) + 0.59*I_G(x, y) + 0.11*I_B(x, y), where I(x, y) is the gray value, I_R is the red component of the image, I_G is the green component of the image, and I_B is the blue component of the image;
according to a preset binarization threshold value, performing binarization processing on the first learning image data set to obtain a second learning image data set;
and respectively intercepting a head part and an upper limb part in the second learning image data set according to a preset region of interest to obtain the head image data set and the upper limb image data set.
In this embodiment, before the head portion and the upper limb portion are cropped, graying and binarization preprocessing are performed to ensure the accuracy of the acquired images; the first model may be an image segmentation model, in which the image is segmented according to the preset regions of interest.
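A minimal sketch of the graying, binarization and region-of-interest cropping described above, assuming NumPy arrays in OpenCV's BGR channel order; the binarization threshold and the two ROI rectangles are illustrative assumptions:

```python
import numpy as np

def segment_head_and_upper_limb(image_bgr, head_roi, limb_roi, binar_threshold=128):
    """Gray the image with the 0.3R + 0.59G + 0.11B weights, binarize it, and crop two ROIs.

    head_roi / limb_roi are (x, y, w, h) tuples; their values and the threshold
    are illustrative presets, not values taken from the patent."""
    blue = image_bgr[..., 0].astype(np.float32)
    green = image_bgr[..., 1].astype(np.float32)
    red = image_bgr[..., 2].astype(np.float32)
    gray = 0.3 * red + 0.59 * green + 0.11 * blue                  # weighted-average graying
    binary = np.where(gray >= binar_threshold, 255, 0).astype(np.uint8)

    def crop(img, roi):
        x, y, w, h = roi
        return img[y:y + h, x:x + w]

    return crop(binary, head_roi), crop(binary, limb_roi)          # head image, upper limb image
```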
In step S220, head pose recognition is performed on the head image data set in the second model to obtain a head rotation pose.
In one embodiment of the present application, the second model is a head pose recognition model; and inputting the head image data set into a head posture recognition model to obtain the rotation postures of the students on the seat, wherein the rotation postures can comprise head raising, head lowering and head rotating.
In an embodiment of the present application, step S220 in fig. 2 specifically includes the following steps:
obtaining a face rectangular frame in the head image data set by using a face detection algorithm;
converting the face rectangular frame into face coordinates in an array form, and storing the face coordinates frame by frame to obtain a face coordinate data set;
carrying out feature point detection on the face coordinate data set to obtain a face feature point set, wherein the face feature point set comprises six types of feature points, namely the left canthus, right canthus, nose tip, left mouth corner, right mouth corner and chin, and the feature points are obtained as follows: a feature point detection matrix is established,
H(x, σ) = [[L_xx(x, σ), L_xy(x, σ)], [L_xy(x, σ), L_yy(x, σ)]]
where σ is a scale coefficient, L_xx(x, σ) is the second-order partial derivative in the x direction, L_xy(x, σ) is the second-order partial derivative in the xy direction, and L_yy(x, σ) is the second-order partial derivative in the y direction; the feature points are obtained according to a preset feature point detection matrix threshold parameter δ;
performing head posture recognition on the six types of feature point sets in the second model to obtain a deflection angle;
if the deflection angle is within a preset head-swinging angle range, obtaining a head-swinging posture;
if the deflection angle is within a preset head lowering angle range, obtaining a head lowering posture;
and if the deflection angle is within a preset head raising angle range, obtaining a head raising posture.
In this embodiment, the head pose in the head image data is recognized through a face detection algorithm; face coordinates are obtained during recognition and classified according to the facial features, the deflection angle is then calculated, and the pose is judged from the deflection angle.
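A minimal sketch of mapping the computed deflection angles to the head rotation postures; all angle ranges below are illustrative presets, not values taken from the patent:

```python
def classify_head_pose(pitch_degrees, yaw_degrees,
                       raise_range=(15.0, 60.0),
                       lower_range=(-60.0, -15.0),
                       swing_range=(20.0, 90.0)):
    """Map deflection angles to a head rotation posture.

    Pitch is taken as positive when the head tilts up and yaw measures left/right
    turning; the three angle ranges are assumed presets."""
    if swing_range[0] <= abs(yaw_degrees) <= swing_range[1]:
        return "head_swinging"        # head turned sideways
    if lower_range[0] <= pitch_degrees <= lower_range[1]:
        return "head_lowered"
    if raise_range[0] <= pitch_degrees <= raise_range[1]:
        return "head_raised"
    return "neutral"
```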
In one embodiment of the application, the method for converting the face rectangular frame into the face coordinates in the array form comprises the following steps:
converting the two-dimensional coordinates in the face rectangular frame into camera coordinates according to the pinhole imaging relationship:
Z_c * [u, v, 1]^T = K * [X_c, Y_c, Z_c]^T
where [u, v] are the two-dimensional image coordinates, [X_c, Y_c, Z_c] are the camera coordinates, and K is the camera intrinsic parameter matrix;
converting the camera coordinates into world coordinates according to:
[X_c, Y_c, Z_c]^T = R * [X_w, Y_w, Z_w]^T + T
[X_w, Y_w, Z_w]^T = R^(-1) * ([X_c, Y_c, Z_c]^T - T)
where [X_w, Y_w, Z_w] are the world coordinates, T is a translation matrix, and R is a rotation matrix;
and obtaining the face coordinates according to the world coordinates in the head image data set.
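One common way to realize this two-dimensional-to-camera-to-world conversion in practice is to solve a PnP problem over the six facial landmarks; the sketch below uses OpenCV's solvePnP, and the generic 3D model points and assumed camera intrinsics are illustrative placeholders rather than parameters specified by the method:

```python
import cv2
import numpy as np

# Generic 3D reference positions (in millimetres) for the six landmarks used by the method:
# nose tip, chin, left eye corner, right eye corner, left mouth corner, right mouth corner.
# These model coordinates are illustrative assumptions.
MODEL_POINTS = np.array([
    [0.0,      0.0,    0.0],     # nose tip
    [0.0,   -330.0,  -65.0],     # chin
    [-225.0,  170.0, -135.0],    # left eye corner
    [225.0,   170.0, -135.0],    # right eye corner
    [-150.0, -150.0, -125.0],    # left mouth corner
    [150.0,  -150.0, -125.0],    # right mouth corner
], dtype=np.float64)

def estimate_head_angles(image_points, image_size):
    """Recover the head rotation (deflection angles) from six 2D facial landmarks."""
    height, width = image_size
    focal = float(width)                                        # rough focal-length guess
    camera_matrix = np.array([[focal, 0, width / 2.0],
                              [0, focal, height / 2.0],
                              [0, 0, 1]], dtype=np.float64)     # assumed camera intrinsics K
    dist_coeffs = np.zeros((4, 1))                              # assume no lens distortion

    ok, rotation_vec, translation_vec = cv2.solvePnP(
        MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    rotation_mat, _ = cv2.Rodrigues(rotation_vec)               # R relating world and camera frames
    # Decompose R into Euler angles (pitch, yaw, roll) for the posture thresholds.
    sy = np.sqrt(rotation_mat[0, 0] ** 2 + rotation_mat[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(rotation_mat[2, 1], rotation_mat[2, 2]))
    yaw = np.degrees(np.arctan2(-rotation_mat[2, 0], sy))
    roll = np.degrees(np.arctan2(rotation_mat[1, 0], rotation_mat[0, 0]))
    return pitch, yaw, roll
```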
In step S230, a skeleton key point sequence in the upper limb image data set is extracted, and the skeleton key point sequence is subjected to behavior recognition in the third model, so as to obtain an upper limb behavior posture.
In one embodiment of the present application, student upper limb postures are identified through skeleton key points. In this embodiment, the third model may be a human posture recognition model trained in advance on training samples; during training, error parameters are obtained, a total error is computed as a weighted sum of the error parameters, and the parameters in the objective function of the human posture recognition model are then updated using stochastic gradient descent.
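A minimal training-loop sketch of the weighted-error, stochastic-gradient-descent update described above, written with PyTorch; the model, data loader and individual loss terms are placeholders for the actual posture recognition network:

```python
import torch

def train_pose_model(model, data_loader, loss_terms, loss_weights, epochs=10, lr=1e-3):
    """Train a posture recognition model with a weighted sum of error terms and SGD.

    loss_terms is a list of callables (outputs, targets) -> scalar tensor; loss_weights
    holds the corresponding weights. All of these are assumed placeholders."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in data_loader:
            outputs = model(images)
            # Total error = weighted sum of the individual error parameters.
            total_loss = sum(weight * loss_fn(outputs, targets)
                             for weight, loss_fn in zip(loss_weights, loss_terms))
            optimizer.zero_grad()
            total_loss.backward()
            optimizer.step()                     # stochastic gradient descent update
    return model
```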
In an embodiment of the present application, when the bone key point sequence in the upper limb image data set is extracted in step S230 in fig. 2, the method specifically includes the following steps:
positioning an upper limb target in the upper limb image data set to obtain an upper limb positioning coordinate;
tracking and identifying the upper limb positioning coordinates through a multi-target fusion tracking network to obtain a tracking result sequence of the upper limb positioning coordinates, wherein the tracking result sequence comprises the association degree of the feature vectors of the upper limb positioning coordinates of the upper and lower frame images in the upper limb image data set;
and obtaining a skeleton key point sequence according to the tracking result sequence.
In this embodiment, the multi-target fusion tracking network consists of a fully convolutional network and a re-identification network. When multiple student targets exist in the image, target recognition and tracking are carried out through these two networks, which avoids monitoring errors caused by the posture recognition target being switched.
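A minimal sketch of associating upper-limb targets between consecutive frames by the similarity of their re-identification feature vectors, which is one way to realize the association degree mentioned above; the similarity threshold is an illustrative assumption:

```python
import numpy as np

def associate_targets(prev_features, curr_features, min_similarity=0.5):
    """Match targets between the previous and current frame by feature similarity.

    prev_features / curr_features are lists of embedding vectors, one per detected
    student; the cosine-similarity threshold is an assumed preset."""
    matches, used = {}, set()
    for i, f_prev in enumerate(prev_features):
        best_j, best_sim = None, min_similarity
        for j, f_curr in enumerate(curr_features):
            if j in used:
                continue
            sim = float(np.dot(f_prev, f_curr) /
                        (np.linalg.norm(f_prev) * np.linalg.norm(f_curr) + 1e-8))
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            matches[i] = (best_j, best_sim)      # association degree between the two frames
            used.add(best_j)
    return matches                               # previous index -> (current index, similarity)
```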
In an embodiment of the present application, when performing behavior recognition on the bone key point sequence in the third model in step S230 in fig. 2 to obtain an upper limb behavior pose, the method specifically includes the following steps:
inputting the skeleton key point sequence into a third model, wherein the third model is a pre-trained skeleton key point behavior recognition model whose loss function combines a loss term for the predicted values with a loss term for the labeled values, and the skeleton key point behavior recognition model comprises a confidence detection convolutional layer and a connection relation convolutional layer;
predicting the bone key point sequence in the confidence coefficient detection convolutional layer to obtain the confidence coefficient of the upper limb joint;
predicting the connection relation among all the bone key points in the bone key point sequence in the connection relation convolution layer to obtain a connection relation;
obtaining an upper limb posture framework according to the upper limb joint confidence and the connection relation;
obtaining distance parameters between joint parts in the upper limb posture skeleton, and comparing the distance parameters with preset distance parameter thresholds to obtain an upper limb behavior posture, wherein the upper limb behavior posture comprises lying on the desk, turning around, and standing.
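A minimal sketch of the post-processing after the two convolutional branches: joints above a confidence threshold are kept, linked into an upper limb skeleton, and simple distance rules yield the behavior posture; the joint list, connection pairs, reference lines and thresholds are all illustrative assumptions:

```python
# Illustrative upper-limb joints and connections; the actual model's layout may differ.
JOINTS = ["head", "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow", "l_wrist", "r_wrist"]
LIMB_PAIRS = [("head", "neck"), ("neck", "l_shoulder"), ("neck", "r_shoulder"),
              ("l_shoulder", "l_elbow"), ("r_shoulder", "r_elbow"),
              ("l_elbow", "l_wrist"), ("r_elbow", "r_wrist")]

def build_and_classify(joint_positions, joint_confidence, desk_line_y, stand_line_y,
                       conf_threshold=0.3, turn_width=20.0):
    """Assemble the upper limb skeleton and apply distance rules to name the posture.

    joint_positions maps joint name -> (x, y) in image pixels; the two reference
    lines and all thresholds are assumed presets."""
    skeleton = {name: joint_positions[name]
                for name in JOINTS
                if joint_confidence.get(name, 0.0) >= conf_threshold}
    links = [(a, b) for a, b in LIMB_PAIRS if a in skeleton and b in skeleton]

    posture = "normal"
    if "head" in skeleton and skeleton["head"][1] > desk_line_y:
        posture = "lying_on_desk"                 # head at or below the desk line (image y grows downward)
    elif ("l_shoulder" in skeleton and "r_shoulder" in skeleton and
          abs(skeleton["l_shoulder"][0] - skeleton["r_shoulder"][0]) < turn_width):
        posture = "turning_around"                # shoulders nearly overlap when the body turns
    elif "neck" in skeleton and skeleton["neck"][1] < stand_line_y:
        posture = "standing"                      # neck well above the seated reference line
    return skeleton, links, posture
```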
In step S240, the head rotation posture and the upper limb behavior posture are respectively compared with preset posture thresholds to obtain learning state monitoring results.
In one embodiment of the present application, the head rotation posture and the upper limb behavior posture are compared according to a preset threshold value to determine the learning state monitoring result of the target object. In this embodiment, a score threshold may be preset, and when the head rotation posture and the upper limb behavior posture exceed the preset posture threshold, a score is obtained, and different postures correspond to different scores.
In an embodiment of the present application, after obtaining the learning state monitoring result, the method further includes the following steps:
inputting the head image data set into a pre-trained face recognition model to obtain student identity information;
converting the learning state detection result into a learning state score according to preset weight parameters, wherein the learning state score is calculated as a weighted combination of the monitored behavior counts: S = w_1*N_raise + w_2*N_lower + w_3*N_turn + w_4*N_lie, where N_raise is the number of head-raising occurrences within a preset time range, N_lower is the number of head-lowering occurrences within the preset time range, N_turn is the number of head-turning occurrences within the preset time range, N_lie is the number of lying-on-desk occurrences within the preset time range, and w_1, w_2, w_3 and w_4 are the weights corresponding to the head-raising, head-lowering, head-turning and lying-on-desk behaviors respectively; the learning state score is matched with the student identity information to obtain the learning state score of each student;
and if the learning state score is lower than a preset learning state score threshold value, acquiring the class, seat and course arrangement information of the corresponding student according to the identity information, so that a teacher can use this information to mark the student.
In this embodiment, the learning state of each student is combined with the identity information and the preset weights to obtain a comprehensive score, so that the teacher can mark students with poor learning states in the background and give them study and rest suggestions.
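A minimal sketch of the weighted scoring and flagging step; the weight values and threshold below are illustrative assumptions:

```python
def learning_state_score(counts, weights):
    """Weighted combination of posture counts within the preset time window."""
    return sum(weights[behavior] * counts.get(behavior, 0) for behavior in weights)

def flag_students(per_student_counts, weights, score_threshold):
    """Score each identified student and flag those below the preset threshold."""
    flagged = []
    for student_id, counts in per_student_counts.items():
        score = learning_state_score(counts, weights)
        if score < score_threshold:
            flagged.append((student_id, score))   # teacher then retrieves class/seat/schedule info
    return flagged

# Example with assumed weights: head raising counts positively, the other behaviors negatively.
example_weights = {"head_raised": 1.0, "head_lowered": -0.5, "head_turned": -0.5, "lying_on_desk": -1.0}
```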
Embodiments of the apparatus of the present application are described below, which may be used to implement the learning state monitoring method in the above-described embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the learning state monitoring method described above in the present application.
Fig. 3 is a schematic structural diagram of a learning state monitoring device according to an exemplary embodiment of the present application. The apparatus can be applied to the implementation environment shown in fig. 1, and is specifically configured in the learning state monitoring device 101. The apparatus may also be applied to other exemplary implementation environments, and is specifically configured in other devices, and the embodiment does not limit the implementation environment to which the apparatus is applied.
As shown in fig. 3, the exemplary learning state monitoring device includes: a learning image segmentation module 401, a head posture recognition module 402, an upper limb behavior posture recognition module 403, and a learning state detection result acquisition module 404.
The learning image segmentation module 401 is configured to obtain a learning image dataset of a student in a classroom, and perform image segmentation on the learning image dataset in a first model to obtain a head image dataset and an upper limb image dataset.
The head pose recognition module 402 performs head pose recognition on the head image data set in the second model to obtain a head rotation pose.
An upper limb behavior gesture recognition module 403, configured to extract a skeleton key point sequence in the upper limb image data set, perform behavior recognition on the skeleton key point sequence in a third model, and obtain an upper limb behavior gesture.
A learning state detection result obtaining module 404, configured to compare the head rotation posture and the upper limb behavior posture with preset posture thresholds, respectively, to obtain a learning state monitoring result.
It should be noted that the learning state monitoring apparatus provided in the foregoing embodiment and the learning state monitoring method provided in the foregoing embodiment belong to the same concept, and specific manners of operations executed by the modules and units have been described in detail in the method embodiment, and are not described again here. In practical applications, the learning state monitoring apparatus provided in the above embodiment may distribute the functions to different functional modules according to needs, that is, divide the internal structure of the apparatus into different functional modules to complete all or part of the functions described above, which is not limited herein.
In an embodiment of the present application, there is also provided a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the learning state monitoring method provided in each of the above embodiments when executing the computer program.
FIG. 4 illustrates a schematic structural diagram of a computer system suitable for use in implementing the computer device of the embodiments of the present application. It should be noted that the computer system 400 of the electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for system operation are also stored. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input portion 406 including a keyboard, a mouse, and the like; an output section 407 including a Display device such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 409 performs communication processing via a network such as the internet. A driver 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 410 as necessary, so that a computer program read out therefrom is mounted into the storage section 408 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. The computer program executes various functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 401.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with a computer-readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the learning state monitoring method according to the above embodiment. The computer-readable storage medium may be included in the computer device described in the above embodiments, or may exist separately without being incorporated in the computer device.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the learning state monitoring method provided in the above-described embodiments.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
In the above embodiments, unless otherwise specified, the description of common objects by using "first", "second", etc. ordinal numbers only indicate that they refer to different instances of the same object, rather than indicating that the objects being described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
In the above-described embodiments, reference in the specification to "the embodiment," "an embodiment," "another embodiment," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of the phrase "the present embodiment," "one embodiment," or "another embodiment" are not necessarily all referring to the same embodiment. If the specification states a component, feature, structure, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, or characteristic is not necessarily included. If the specification or claim refers to "a" or "an" element, that does not mean there is only one of the element. If the specification or claim refers to "a further" element, that does not preclude there being more than one of the further element.
In the embodiments described above, although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory structures (e.g., dynamic RAM (DRAM)) may be used with the discussed embodiments. The embodiments of the invention are intended to embrace all such alternatives, modifications and variances that fall within the broad scope of the appended claims.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention are covered by the claims of the present invention.

Claims (10)

1. A learning state monitoring method, comprising:
acquiring a learning image data set of students in a classroom, and performing image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set;
performing head posture recognition on the head image data set in a second model to obtain a head rotation posture;
extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior posture;
and respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain a learning state monitoring result.
2. The learning state monitoring method according to claim 1, wherein image segmentation is performed on the learning image dataset in a first model to obtain a head image dataset and an upper limb image dataset, and the method comprises:
according to a preset graying processing weight, carrying out weighted average on the color components in the learning image data set to obtain a grayed first learning image data set, wherein the weighted average is calculated as: I(x, y) = 0.3*I_R(x, y) + 0.59*I_G(x, y) + 0.11*I_B(x, y), where I(x, y) is the gray value, I_R is the red component of the image, I_G is the green component of the image, and I_B is the blue component of the image;
according to a preset binarization threshold value, performing binarization processing on the first learning image data set to obtain a second learning image data set;
and respectively intercepting a head part and an upper limb part in the second learning image data set according to a preset region of interest to obtain the head image data set and the upper limb image data set.
3. The learning state monitoring method according to claim 1, wherein performing head pose recognition on the head image data set in a second model to obtain a head rotation pose comprises:
obtaining a face rectangular frame in the head image data set by using a face detection algorithm;
converting the face rectangular frame into face coordinates in an array form, and storing the face coordinates frame by frame to obtain a face coordinate data set;
carrying out feature point detection on the face coordinate data set to obtain a face feature point set, wherein the face feature point set comprises six types of feature points, namely the left canthus, right canthus, nose tip, left mouth corner, right mouth corner and chin, and the feature points are obtained as follows: a feature point detection matrix is established,
H(x, σ) = [[L_xx(x, σ), L_xy(x, σ)], [L_xy(x, σ), L_yy(x, σ)]]
where σ is a scale coefficient, L_xx(x, σ) is the second-order partial derivative in the x direction, L_xy(x, σ) is the second-order partial derivative in the xy direction, and L_yy(x, σ) is the second-order partial derivative in the y direction; the feature points are obtained according to a preset feature point detection matrix threshold parameter δ;
performing head posture recognition on the six types of feature point sets in the second model to obtain a deflection angle;
if the deflection angle is within a preset oscillation angle range, obtaining an oscillation gesture;
if the deflection angle is within a preset head lowering angle range, obtaining a head lowering posture;
and if the deflection angle is within a preset head raising angle range, obtaining a head raising posture.
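The feature-point detection matrix of claim 3 is built from second-order partial derivatives and thresholded by the parameter δ, and the recognized deflection angle is then bucketed into swing, head-lowering and head-raising postures. The sketch below illustrates both steps under stated assumptions: the derivative estimation ignores the scale coefficient σ, and the angle ranges are hypothetical, as the claim only calls them preset.

```python
# Sketch of claim 3's numerical pieces: determinant-of-Hessian feature response and
# deflection-angle bucketing. Thresholds and angle ranges are illustrative assumptions.
import numpy as np

def hessian_response(gray: np.ndarray, delta: float = 0.04) -> np.ndarray:
    """Return a boolean mask of candidate feature points where det(H) exceeds delta."""
    gy, gx = np.gradient(gray.astype(float))   # first derivatives along y then x
    lxx = np.gradient(gx, axis=1)              # second-order partial derivative in x
    lyy = np.gradient(gy, axis=0)              # second-order partial derivative in y
    lxy = np.gradient(gx, axis=0)              # mixed xy second-order partial derivative
    det_h = lxx * lyy - lxy ** 2               # determinant of the 2x2 detection matrix
    return det_h > delta

def classify_head_pose(pitch_deg: float, yaw_deg: float) -> str:
    """Map a deflection angle to a posture using hypothetical preset ranges."""
    if abs(yaw_deg) > 20:
        return "swing"       # head turned left/right beyond the swing range
    if pitch_deg < -15:
        return "lowered"     # within the head-lowering range
    if pitch_deg > 10:
        return "raised"      # within the head-raising range
    return "neutral"
```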
4. The learning state monitoring method according to claim 3, wherein converting the face rectangle frame into face coordinates in an array form includes:
converting the two-dimensional coordinates in the face rectangular frame into camera coordinates in the following manner:
\( Z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} \)
wherein \((u, v)\) are the two-dimensional image coordinates, \((X_c, Y_c, Z_c)\) are the camera coordinates, and \(K\) is the camera intrinsic matrix;
converting the camera coordinates into world coordinates in the following manner:
\( \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + T \)
wherein \((X_w, Y_w, Z_w)\) are the world coordinates, T is the translation matrix, and R is the rotation matrix;
and obtaining the face coordinates according to the world coordinates in the head image data set.
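Claim 4's coordinate chain is the standard pinhole relationship written out above: pixels are back-projected into camera coordinates through the intrinsic matrix, and camera coordinates are mapped to world coordinates through R and T. A minimal NumPy sketch follows; the intrinsic values, depth, R and T are hypothetical illustration values, not parameters from the patent.

```python
# Sketch of the claim 4 coordinate transforms. All numeric values are assumptions.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx  (assumed camera intrinsics)
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # rotation matrix (identity for the sketch)
T = np.array([0.0, 0.0, 0.0])          # translation vector

def pixel_to_camera(u: float, v: float, depth: float) -> np.ndarray:
    """Back-project pixel (u, v) at a known depth Z_c into camera coordinates."""
    uv1 = np.array([u, v, 1.0])
    return depth * np.linalg.inv(K) @ uv1          # (X_c, Y_c, Z_c)

def camera_to_world(p_cam: np.ndarray) -> np.ndarray:
    """Invert X_c = R @ X_w + T to recover world coordinates."""
    return np.linalg.inv(R) @ (p_cam - T)

corner_world = camera_to_world(pixel_to_camera(300.0, 220.0, depth=2.5))
```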
5. The learning state monitoring method according to claim 1, wherein extracting a sequence of skeletal key points in the upper limb image dataset comprises:
positioning an upper limb target in the upper limb image data set to obtain an upper limb positioning coordinate;
tracking and identifying the upper limb positioning coordinates through a multi-target fusion tracking network to obtain a tracking result sequence of the upper limb positioning coordinates, wherein the tracking result sequence comprises the association degree of the feature vectors of the upper limb positioning coordinates of the upper and lower frame images in the upper limb image data set;
and obtaining a skeleton key point sequence according to the tracking result sequence.
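Claim 5 links upper-limb detections across consecutive frames by the association degree of their feature vectors. Below is a minimal sketch of one way to compute such an association, assuming cosine similarity and a hypothetical acceptance threshold of 0.5; the claim does not fix either choice.

```python
# Sketch of frame-to-frame association by feature-vector similarity (claim 5).
import numpy as np

def associate(prev_feats: np.ndarray, curr_feats: np.ndarray, min_sim: float = 0.5):
    """prev_feats: (M, D), curr_feats: (N, D). Returns (prev_idx, curr_idx) matches."""
    prev_n = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    curr_n = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    sim = prev_n @ curr_n.T                     # cosine-similarity association matrix
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))              # greedy best match for track i
        if sim[i, j] >= min_sim:
            matches.append((i, j))              # track i continues as detection j
    return matches
```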
6. The learning state monitoring method of claim 1, wherein the performing behavior recognition on the skeletal key point sequence in a third model to obtain an upper limb behavior posture comprises:
inputting the skeleton key point sequence into a third model, wherein the third model is a pre-trained skeleton key point behavior recognition model whose loss function combines a loss term computed on the predicted values with a loss term computed on the labeled values, and the skeleton key point behavior recognition model comprises a confidence detection convolutional layer and a connection relation convolutional layer;
predicting the bone key point sequence in the confidence coefficient detection convolutional layer to obtain the confidence coefficient of the upper limb joint;
predicting the connection relation among all the bone key points in the bone key point sequence in the connection relation convolution layer to obtain a connection relation;
obtaining an upper limb posture skeleton according to the upper limb joint confidence coefficient and the connection relation;
obtaining distance parameters between joint parts in the upper limb posture skeleton, comparing the distance parameters with a preset distance parameter threshold value to obtain an upper limb behavior posture, wherein the upper limb behavior posture comprises lying down on a table, turning around and standing.
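The final step of claim 6 compares distance parameters between joints of the assembled upper-limb skeleton against preset thresholds to label the behavior. The sketch below shows one such rule set; the joint names, the assumed desk line and all threshold values are hypothetical, and a real rule set would be calibrated to the camera geometry.

```python
# Sketch of distance-parameter behavior labeling (claim 6). Keys and thresholds are assumptions.
import math

def classify_behavior(joints: dict, lying_thresh: float = 0.15,
                      standing_thresh: float = 0.6) -> str:
    """joints: name -> (x, y) in normalized image coordinates (y grows downward).
    Required keys: head, neck, hip, l_shoulder, r_shoulder; optional: desk_line."""
    head, neck, hip = joints["head"], joints["neck"], joints["hip"]
    desk_y = joints.get("desk_line", (0.0, 0.7))[1]   # assumed desk height in the image
    head_to_desk = desk_y - head[1]                   # small when the head is near the desk
    torso_len = math.dist(neck, hip)                  # long when the student stands up
    l_sh, r_sh = joints["l_shoulder"], joints["r_shoulder"]
    shoulder_span = abs(l_sh[0] - r_sh[0])            # foreshortened when the torso rotates
    if head_to_desk < lying_thresh:
        return "lying on the desk"
    if torso_len > standing_thresh:
        return "standing"
    if shoulder_span < 0.3 * torso_len:
        return "turning around"
    return "seated / other"
```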
7. The learning state monitoring method according to claim 1, wherein after the head rotation posture and the upper limb behavior posture are respectively compared with the preset posture thresholds to obtain the learning state monitoring result, the method further comprises:
inputting the head image data set into a pre-trained face recognition model to obtain student identity information;
converting the learning state detection result into a learning state score according to preset weight parameters, wherein the learning state score is calculated as:
Score = ω1·N_raise + ω2·N_lower + ω3·N_turn + ω4·N_lie
wherein N_raise is the number of head-raising times within a preset time range, N_lower is the number of head-lowering times within the preset time range, N_turn is the number of head-turning times within the preset time range, N_lie is the number of lying-on-the-desk times within the preset time range, and ω1, ω2, ω3 and ω4 are the weights corresponding to the head-raising, head-lowering, head-turning and lying-on-the-desk behaviors respectively; and matching the learning state score with the student identity information to obtain the learning state score of each student;
and if the learning state score is lower than a preset learning state score threshold, acquiring the class, seat and course arrangement information of the corresponding student according to the identity information, so that a teacher can use the class, seat and course arrangement information to mark the student for attention.
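Under the weighted-sum reading of claim 7's scoring formula, the score and the follow-up flag reduce to a few lines; the weight values and the score threshold below are hypothetical placeholders.

```python
# Sketch of the claim 7 scoring rule. Weights and threshold are illustrative assumptions.

def learning_state_score(n_raise: int, n_lower: int, n_turn: int, n_lie: int,
                         w=(1.0, -0.5, -0.8, -1.0)) -> float:
    """Positive weight rewards head-up time; negative weights penalize off-task behaviors."""
    w1, w2, w3, w4 = w
    return w1 * n_raise + w2 * n_lower + w3 * n_turn + w4 * n_lie

def needs_attention(score: float, threshold: float = 0.0) -> bool:
    """Flag the student for the teacher when the score falls below the preset threshold."""
    return score < threshold

# Example: 20 head-ups, 8 head-downs, 3 turns, 1 lie-down in the time window.
flag = needs_attention(learning_state_score(20, 8, 3, 1))
```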
8. A learning state monitoring device comprising:
the learning image segmentation module is used for acquiring a learning image data set of students in a classroom and carrying out image segmentation on the learning image data set in a first model to obtain a head image data set and an upper limb image data set;
the head posture recognition module is used for carrying out head posture recognition on the head image data set in the second model to obtain a head rotation posture;
the upper limb behavior gesture recognition module is used for extracting a skeleton key point sequence in the upper limb image data set, and performing behavior recognition on the skeleton key point sequence in a third model to obtain an upper limb behavior gesture;
and the learning state detection result acquisition module is used for respectively comparing the head rotation posture and the upper limb behavior posture with preset posture threshold values to obtain a learning state monitoring result.
9. Computer equipment comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the learning state monitoring method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the learning state monitoring method according to any one of claims 1 to 7.
CN202210664223.2A 2022-06-13 2022-06-13 Learning state monitoring method, device, equipment and medium Withdrawn CN115019396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210664223.2A CN115019396A (en) 2022-06-13 2022-06-13 Learning state monitoring method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210664223.2A CN115019396A (en) 2022-06-13 2022-06-13 Learning state monitoring method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115019396A true CN115019396A (en) 2022-09-06

Family

ID=83075539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210664223.2A Withdrawn CN115019396A (en) 2022-06-13 2022-06-13 Learning state monitoring method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115019396A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205764A (en) * 2022-09-15 2022-10-18 深圳市企鹅网络科技有限公司 Online learning concentration monitoring method, system and medium based on machine vision

Similar Documents

Publication Publication Date Title
CN111126272B (en) Posture acquisition method, and training method and device of key point coordinate positioning model
CN111709409B (en) Face living body detection method, device, equipment and medium
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
CN111563502B (en) Image text recognition method and device, electronic equipment and computer storage medium
CN111310731A (en) Video recommendation method, device and equipment based on artificial intelligence and storage medium
CN111444828B (en) Model training method, target detection method, device and storage medium
Oszust et al. Polish sign language words recognition with Kinect
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN108399386A (en) Information extracting method in pie chart and device
CN111222486B (en) Training method, device and equipment for hand gesture recognition model and storage medium
CN112257665A (en) Image content recognition method, image recognition model training method, and medium
CN113706562B (en) Image segmentation method, device and system and cell segmentation method
CN114332911A (en) Head posture detection method and device and computer equipment
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN114332927A (en) Classroom hand-raising behavior detection method, system, computer equipment and storage medium
CN115019396A (en) Learning state monitoring method, device, equipment and medium
CN110008922A (en) Image processing method, unit, medium for terminal device
Xu et al. A novel method for hand posture recognition based on depth information descriptor
CN111144374A (en) Facial expression recognition method and device, storage medium and electronic equipment
Saman et al. Image Processing Algorithm for Appearance-Based Gesture Recognition
CN115116117A (en) Learning input data acquisition method based on multi-mode fusion network
CN114663835A (en) Pedestrian tracking method, system, equipment and storage medium
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
Abdulhamied et al. Real-time recognition of American sign language using long-short term memory neural network and hand detection
CN114639132A (en) Feature extraction model processing method, device and equipment in face recognition scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220906
