CN112597888A - Online education scene student attention recognition method optimized for CPU operation

Online education scene student attention recognition method optimized for CPU operation

Info

Publication number
CN112597888A
CN112597888A (application CN202011530619.5A; granted as CN112597888B)
Authority
CN
China
Prior art keywords
face
engage
attention
image
student
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011530619.5A
Other languages
Chinese (zh)
Other versions
CN112597888B (en
Inventor
王琦 (Wang Qi)
吴越 (Wu Yue)
李学龙 (Li Xuelong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011530619.5A priority Critical patent/CN112597888B/en
Publication of CN112597888A publication Critical patent/CN112597888A/en
Application granted granted Critical
Publication of CN112597888B publication Critical patent/CN112597888B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification


Abstract

The invention discloses a student attention recognition method for online education scenes, optimized for CPU operation. First, an MTCNN face recognition model performs face detection and facial key-point detection on each frame of the training data set, obtaining a face image and facial key points; the face is aligned by an affine transformation based on the key points, and the face is given an attention score. An Engage-CNN model is then built on the basis of the Engage-Detection network, trained under full supervision with the aligned face images and attention scores, and optimized for running speed on an ordinary CPU, yielding the optimized Engage-CNN model. Finally, the optimized Engage-CNN model evaluates the attention of students' face images during class. The method is fast, accurate, and can perform face detection on low-resolution images.

Description

Online education scene student attention recognition method optimized for CPU operation
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a student attention identification method.
Background
As society develops, the importance attached to education keeps rising; the teaching process is being studied ever more deeply, and the complexity of student behavior in classroom teaching is increasingly recognized. Online education differs from classroom teaching: the teacher cannot directly see all the students and therefore cannot judge how engaged they are in class. Obtaining students' class-engagement states by technical means thus provides powerful help for teachers to improve students' learning efficiency. Student attention recognition is an important research topic in intelligent classrooms. At present there are two main technical routes for the attention recognition task: methods based on wearable physical sensors and methods based on camera video information.
Among the wearable-sensor methods is the EEG-headset approach proposed by M. Hassib et al. in "M. Hassib, S. Schneegass, P. Eiglsperger, N. Henze, A. Schmidt, and F. Alt. EngageMeter: A system for implicit audience engagement sensing using electroencephalography. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 5114-5119, 2017", which estimates students' attention states by collecting the alpha, beta, and theta waves in their electroencephalogram signals.
Among the camera-video methods is the model of X. Niu et al., which combines Gaze-AU-Pose (GAP) features with a GRU network for multi-frame video attention recognition, proposed in "X. Niu, H. Han, J. Zeng, X. Sun, S. Shan, Y. Huang, S. Yang, X. Chen. Automatic engagement prediction with GAP feature. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 599-603, 2018".
Both approaches have limitations. The wearable-sensor approach, while accurate, requires an additional sensor for every student, which is costly. The camera-video approach must process and recognize continuous video frames, so its performance overhead is high and it struggles to meet the requirements of practical applications.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention provides a student attention recognition method for online education scenes, optimized for CPU operation. First, an MTCNN face recognition model performs face detection and facial key-point detection on each frame of the training data set, obtaining a face image and facial key points; the face is aligned by an affine transformation based on the key points and given an attention score. An Engage-CNN model is then built on the basis of the Engage-Detection network, trained under full supervision with the aligned face images and attention scores, and optimized for running speed on an ordinary CPU. Finally, the optimized Engage-CNN model evaluates the attention of students' face images during class. The method is fast, accurate, and can perform face detection on low-resolution images.
The technical solution adopted by the invention comprises the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: use the MTCNN face recognition model to perform face detection on each image in the face image data set, obtaining a face image I, and perform key-point detection, obtaining face key points L; score the attention of the face in image I to obtain an attention score S ∈ [0, 1.0], where 0 means attention is not focused and 1.0 means attention is fully focused;
step 1-2: define the affine transformation matrix

    M = [ a1  b1  c1 ]
        [ a2  b2  c2 ]

wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solve for M by the least-squares method, where Q are the predefined standard face coordinates onto which the key points L are mapped;
use the affine transformation matrix M to align the face image I, obtaining the aligned face image I_A;
step 1-3: replace the Dropout layer in the Engage-Detection network with L2 regularization and change the output-layer softmax classifier to a sigmoid output, so that the model outputs an attention score; this yields the Engage-CNN model;
step 1-4: use the aligned face image I_A as the input of the Engage-CNN model and the attention score S as the label; train the Engage-CNN model under full supervision, obtaining the final Engage-CNN model;
step 1-5: optimizing the Engage-CNN model by using a TVM deep learning compiler to obtain a finally optimized Engage-CNN model;
step 2: student attention recognition;
step 2-1: acquire an image from the student's computer camera during class and perform face detection with the MTCNN face recognition model, obtaining the student face image and the student face key points;
step 2-2: obtain the aligned student face image using the method of step 1-2;
step 2-3: input the aligned student face image obtained in step 2-2 into the finally optimized Engage-CNN model to obtain the attention score corresponding to that face image.
Preferably, the face key points L include three key points: left eye center, right eye center, and mouth center.
The invention has the following beneficial effects:
1. The invention needs no additional wearable sensing equipment; data are acquired only with the student's computer camera, which effectively reduces cost.
2. The proposed CPU-optimized student attention recognition method needs only a single student face image per recognition, which greatly reduces the algorithm's performance overhead. The attention computation runs directly on the student's computer at a low CPU occupancy, avoiding both the heavy network transmission cost of sending face images to the teacher's computer and the computation cost there. In an actual test, single-frame processing took 0.052 s at 5.1% CPU occupancy, so real-time operation is well guaranteed, the student's computer is not slowed down by excessive use of computing resources, and the attention detection effect is effectively improved.
3. The invention adopts the MTCNN network, which is fast, accurate, and able to detect faces in low-resolution images.
4. The invention models the relationship between the face image and the attention score end to end, giving the Engage-CNN model more room to adapt itself to the data and increasing the overall fit between the model and the data; the attention-state error on the test set is 0.118.
Drawings
FIG. 1 is a flowchart of the method of the present invention, wherein the left diagram is a flowchart of the training of the Engage-CNN model, and the right diagram is a flowchart of student attention recognition.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, the online education scene student attention recognition method optimized for CPU operation includes the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: use the MTCNN face recognition model to perform face detection on each image in the face image data set, obtaining a face image I, and perform key-point detection, obtaining face key points L, where L comprises three key points: the left-eye center, the right-eye center, and the mouth center; score the attention of the face in image I to obtain an attention score S ∈ [0, 1.0], where 0 means attention is not focused and 1.0 means attention is fully focused; the MTCNN face recognition model is a multi-task convolutional neural network that performs fast face detection and face key-point detection;
step 1-2: define the affine transformation matrix

    M = [ a1  b1  c1 ]
        [ a2  b2  c2 ]

wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solve for M by the least-squares method, where Q are the predefined standard face coordinates onto which the key points L are mapped;
use the affine transformation matrix M to align the face image I, obtaining the aligned face image I_A;
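The least-squares solve of step 1-2 can be sketched in a few lines of numpy. The key-point and template coordinates below are made-up illustrative values, not the patent's actual standard face coordinates Q:

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Least-squares solve for the 2x3 affine matrix M mapping src_pts to dst_pts.

    src_pts, dst_pts: (N, 2) arrays with N >= 3 (here the left-eye, right-eye
    and mouth centers). Returns M such that [x', y'] ~= M @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Homogeneous source coordinates: each row is (x, y, 1).
    A = np.hstack([src, np.ones((src.shape[0], 1))])
    # Solve A @ M.T ~= dst in the least-squares sense.
    M_T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M_T.T  # shape (2, 3)

# Hypothetical detected key points L and standard face coordinates Q.
L_pts = [(38.0, 52.0), (74.0, 50.0), (56.0, 92.0)]   # left eye, right eye, mouth
Q_pts = [(30.0, 45.0), (70.0, 45.0), (50.0, 85.0)]   # predefined template

M = solve_affine(L_pts, Q_pts)
print(M.shape)  # (2, 3)
```

With exactly three non-collinear points the system has six equations and six unknowns, so the least-squares solution maps the key points onto Q exactly; with more points it gives the best fit.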
Step 1-3: replacing a Dropout layer in an Engage-Detection network with L2 regularization, modifying an output layer softmax classifier into sigmoid output, and enabling the model output to be attention scoring to obtain an Engage-CNN model;
step 1-4: will align the face image IAPerforming full-supervision training on the Engage-CNN model by taking the attention score S as a label as input of the Engage-CNN model, and obtaining a final Engage-CNN model after the training is finished;
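The patent does not spell out the training objective; assuming a mean-squared-error loss against the annotated scores, the two architectural changes of step 1-3 (a single sigmoid output, and an L2 weight penalty in place of Dropout) can be illustrated with a toy numpy loss function:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def engage_loss(logits, scores, weights, weight_decay=1e-4):
    """MSE between sigmoid outputs and attention-score labels in [0, 1],
    plus an L2 penalty on the weights (the stand-in for the removed Dropout).
    The MSE objective and the decay value are assumptions for illustration."""
    preds = sigmoid(logits)                      # predictions squashed into (0, 1)
    mse = np.mean((preds - scores) ** 2)
    l2 = weight_decay * sum(np.sum(w ** 2) for w in weights)
    return mse + l2

logits = np.array([2.0, -1.0, 0.0])              # raw network outputs
labels = np.array([0.9, 0.2, 0.5])               # annotator attention scores
weights = [np.array([[0.5, -0.3], [0.1, 0.2]])]  # toy weight matrix
loss = engage_loss(logits, labels, weights)
```

The sigmoid keeps every prediction inside the score range [0, 1.0], which is why the patent swaps it in for the softmax classifier when the task becomes regression rather than classification.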
step 1-5: optimize the final Engage-CNN model of step 1-4 with the TVM deep learning compiler, applying optimizations such as operator fusion and branch-convolution optimization, and then perform assembly-level optimization for the AVX2 parallel instruction set supported by most current CPUs, obtaining the finally optimized Engage-CNN model and the dynamic-link library file needed at run time;
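Operator fusion, one of the TVM optimizations named above, merges a chain of element-wise operators into a single kernel so that intermediate arrays are never materialized. A minimal numpy illustration of a multiply-add-ReLU chain, showing that the fused form computes the same result (the real TVM passes and AVX2 code generation are not reproduced here):

```python
import numpy as np

def unfused(x, w, b):
    # Three separate passes, each materializing a full intermediate array.
    t1 = x * w
    t2 = t1 + b
    return np.maximum(t2, 0.0)

def fused(x, w, b):
    # One pass over the data: the whole multiply-add-ReLU chain evaluated
    # together, the kind of single kernel operator fusion would emit.
    return np.maximum(x * w + b, 0.0)

x = np.random.default_rng(0).standard_normal(1024)
w, b = 0.5, -0.1
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

Fusion does not change the mathematics; it removes the memory traffic of the intermediate buffers, which is exactly where much of the CPU-side speedup comes from.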
step 2: student attention recognition;
step 2-1: acquire an image from the student's computer camera during class and perform face detection with the MTCNN face recognition model, obtaining the student face image and the student face key points;
step 2-2: obtain the aligned student face image using the method of step 1-2;
step 2-3: input the aligned student face image obtained in step 2-2 into the finally optimized Engage-CNN model to obtain the attention score corresponding to that face image.
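Steps 2-1 to 2-3 can be pulled together in a short pipeline sketch. All three components below are stand-in stubs with hypothetical interfaces; a real deployment would call the MTCNN detector, the affine alignment of step 1-2, and the TVM-compiled Engage-CNN in their place:

```python
import numpy as np

def detect_face_and_keypoints(frame):
    # Stub for step 2-1: a real system would run MTCNN here.
    h, w = frame.shape[:2]
    face = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    keypoints = np.array([[0.3 * w, 0.4 * h],   # left eye (dummy values)
                          [0.7 * w, 0.4 * h],   # right eye
                          [0.5 * w, 0.8 * h]])  # mouth
    return face, keypoints

def align_face(face, keypoints):
    # Stub for step 2-2: the least-squares affine alignment goes here.
    return face

def engage_cnn_score(aligned_face):
    # Stub for step 2-3: dummy score in [0, 1] in place of the real model.
    return float(np.clip(aligned_face.mean(), 0.0, 1.0))

def attention_for_frame(frame):
    face, kps = detect_face_and_keypoints(frame)   # step 2-1
    aligned = align_face(face, kps)                # step 2-2
    return engage_cnn_score(aligned)               # step 2-3

frame = np.random.default_rng(1).random((120, 160, 3))  # fake camera frame
score = attention_for_frame(frame)
```

Because the whole pipeline needs only a single frame, it can run periodically on the student's machine with the low per-frame cost reported in the embodiment.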
The specific embodiment is as follows:
1. Implementation conditions
This embodiment was run on a machine with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz, 32 GB of memory, and an NVIDIA GeForce GTX 1070 GPU under the Windows 10 operating system, using the PyTorch deep learning framework and the TVM model inference framework.
The data used in this embodiment were collected from the computer cameras of 112 students in an online education environment: 9068 video segments of 10 seconds each, of which 7255 are training videos and 1813 are test videos. The attention score S ∈ [0, 1.0], where 0 means attention is not focused and 1.0 means fully focused; the scores were labelled by 5 annotators.
2. Implementation content
First, the deep model is trained on the training set. Then the TVM inference framework is used to optimize the model for inference, and the error between predicted attention scores and ground truth, the running speed, and the performance overhead are measured on the test set.
To demonstrate the effectiveness of the algorithm, two comparison algorithms were chosen: the GAP-GRU model, which feeds Gaze-AU-Pose features into a GRU network and is described in detail in "X. Niu, H. Han, J. Zeng, X. Sun, S. Shan, Y. Huang, S. Yang, X. Chen. Automatic engagement prediction with GAP feature. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 599-603, 2018"; and the convolutional Engage-Detection model, proposed in "M. Murshed, M. A. Dewan, F. Lin, D. Wen. Engagement Detection in e-Learning Environments using Convolutional Neural Networks. In 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing / Pervasive Intelligence and Computing / Cloud and Big Data Computing / Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), pp. 80-86, 2019". The comparison results are shown in Table 1.
TABLE 1
(Table 1 appeared as an image in the original document; its contents are not reproduced here.)
As can be seen from Table 1, the mean absolute error of the attention score of the invention is 0.033 lower than that of the Engage-Detection model, which likewise uses a single frame, and 0.032 higher than that of the GAP-GRU model, which uses multiple frames. Table 2 shows the running time and performance overhead of each algorithm.
TABLE 2
(Table 2 appeared as an image in the original document; its contents are not reproduced here.)
As can be seen from Table 2, the optimized algorithm of the invention is significantly better than the other algorithms in running speed and performance overhead. These experiments verify the practicability and effectiveness of the invention.

Claims (2)

1. A student attention recognition method for online education scenes, optimized for CPU operation, characterized by comprising the following steps:
step 1: establishing an Engage-CNN model;
step 1-1: use the MTCNN face recognition model to perform face detection on each image in the face image data set, obtaining a face image I, and perform key-point detection, obtaining face key points L; score the attention of the face in image I to obtain an attention score S ∈ [0, 1.0], where 0 means attention is not focused and 1.0 means attention is fully focused;
step 1-2: define the affine transformation matrix

    M = [ a1  b1  c1 ]
        [ a2  b2  c2 ]

wherein a1 and a2 are image rotation parameters, b1 and b2 are image scaling parameters, and c1 and c2 are image translation parameters;
solve for M by the least-squares method, where Q are the predefined standard face coordinates onto which the key points L are mapped;
use the affine transformation matrix M to align the face image I, obtaining the aligned face image I_A;
step 1-3: replace the Dropout layer in the Engage-Detection network with L2 regularization and change the output-layer softmax classifier to a sigmoid output, so that the model outputs an attention score; this yields the Engage-CNN model;
step 1-4: use the aligned face image I_A as the input of the Engage-CNN model and the attention score S as the label; train the Engage-CNN model under full supervision, obtaining the final Engage-CNN model;
step 1-5: optimizing the Engage-CNN model by using a TVM deep learning compiler to obtain a finally optimized Engage-CNN model;
step 2: student attention recognition;
step 2-1: acquire an image from the student's computer camera during class and perform face detection with the MTCNN face recognition model, obtaining the student face image and the student face key points;
step 2-2: obtain the aligned student face image using the method of step 1-2;
step 2-3: input the aligned student face image obtained in step 2-2 into the finally optimized Engage-CNN model to obtain the attention score corresponding to that face image.
2. The online education scene student attention recognition method optimized for CPU operation according to claim 1, wherein the face key points L comprise three key points: the left-eye center, the right-eye center, and the mouth center.
CN202011530619.5A 2020-12-22 2020-12-22 Online education scene student attention recognition method aiming at CPU operation optimization Active CN112597888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011530619.5A CN112597888B (en) 2020-12-22 2020-12-22 Online education scene student attention recognition method aiming at CPU operation optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011530619.5A CN112597888B (en) 2020-12-22 2020-12-22 Online education scene student attention recognition method aiming at CPU operation optimization

Publications (2)

Publication Number Publication Date
CN112597888A true CN112597888A (en) 2021-04-02
CN112597888B CN112597888B (en) 2024-03-08

Family

ID=75200091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011530619.5A Active CN112597888B (en) 2020-12-22 2020-12-22 Online education scene student attention recognition method aiming at CPU operation optimization

Country Status (1)

Country Link
CN (1) CN112597888B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011054200A (en) * 2010-11-11 2011-03-17 Fuji Electric Systems Co Ltd Neural network learning method
CN106503669A (en) * 2016-11-02 2017-03-15 重庆中科云丛科技有限公司 A kind of based on the training of multitask deep learning network, recognition methods and system
CN107103281A (en) * 2017-03-10 2017-08-29 中山大学 Face identification method based on aggregation Damage degree metric learning
CN109543606A (en) * 2018-11-22 2019-03-29 中山大学 A kind of face identification method that attention mechanism is added
CN110532920A (en) * 2019-08-21 2019-12-03 长江大学 Smallest number data set face identification method based on FaceNet method
CN110956082A (en) * 2019-10-17 2020-04-03 江苏科技大学 Face key point detection method and detection system based on deep learning
CN111178242A (en) * 2019-12-27 2020-05-19 上海掌学教育科技有限公司 Student facial expression recognition method and system for online education
CN111368830A (en) * 2020-03-03 2020-07-03 西北工业大学 License plate detection and identification method based on multi-video frame information and nuclear phase light filtering algorithm
CN111539370A (en) * 2020-04-30 2020-08-14 华中科技大学 Image pedestrian re-identification method and system based on multi-attention joint learning
CN111563476A (en) * 2020-05-18 2020-08-21 哈尔滨理工大学 Face recognition method based on deep learning
CN112101074A (en) * 2019-06-18 2020-12-18 深圳市优乐学科技有限公司 Online education auxiliary scoring method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Kun; Zhang Dongping; Yang Li: "Research on a sample-augmented face recognition algorithm", Journal of China University of Metrology (中国计量大学学报), no. 02 *
Fang Shuya; Liu Shouyin: "Perception-free classroom attendance method based on student body detection", Journal of Computer Applications (计算机应用), no. 09 *

Also Published As

Publication number Publication date
CN112597888B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
Fan et al. Inferring shared attention in social scene videos
US20190311188A1 (en) Face emotion recognition method based on dual-stream convolutional neural network
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
CN111726586A (en) Production system operation standard monitoring and reminding system
CN111563452B (en) Multi-human-body gesture detection and state discrimination method based on instance segmentation
Yang et al. Facs3d-net: 3d convolution based spatiotemporal representation for action unit detection
Abdulkader et al. Optimizing student engagement in edge-based online learning with advanced analytics
Li et al. Sign language recognition based on computer vision
CN109241830A (en) It listens to the teacher method for detecting abnormality in the classroom for generating confrontation network based on illumination
CN112541529A (en) Expression and posture fusion bimodal teaching evaluation method, device and storage medium
CN112036276A (en) Artificial intelligent video question-answering method
Wu et al. Pose-Guided Inflated 3D ConvNet for action recognition in videos
CN111666829A (en) Multi-scene multi-subject identity behavior emotion recognition analysis method and intelligent supervision system
CN113723277B (en) Learning intention monitoring method and system integrated with multi-mode visual information
CN112102129A (en) Intelligent examination cheating identification system based on student terminal data processing
CN111723756A (en) Facial feature point tracking method based on self-supervision and semi-supervision learning
CN111626197B (en) Recognition method based on human behavior recognition network model
Islam et al. A deep Spatio-temporal network for vision-based sexual harassment detection
Zhao et al. Human action recognition based on improved fusion attention CNN and RNN
CN112597888B (en) Online education scene student attention recognition method aiming at CPU operation optimization
CN115719497A (en) Student concentration degree identification method and system
Huang et al. Research on learning state based on students’ attitude and emotion in class learning
CN110427920B (en) Real-time pedestrian analysis method oriented to monitoring environment
CN113688789A (en) Online learning investment recognition method and system based on deep learning
Agarwal et al. Semi-Supervised Learning to Perceive Children's Affective States in a Tablet Tutor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant