CN117574098A - Learning concentration analysis method and related device - Google Patents


Info

Publication number
CN117574098A
Authority
CN
China
Prior art keywords
data
sensor data
feature
target
target object
Prior art date
Legal status
Granted
Application number
CN202410054897.XA
Other languages
Chinese (zh)
Other versions
CN117574098B (en)
Inventor
甘俊杰
陈一丰
肖慈婉
Current Assignee
Zhuhai Gutin Technology Co ltd
Original Assignee
Zhuhai Gutin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Gutin Technology Co ltd
Priority to CN202410054897.XA
Publication of CN117574098A
Application granted
Publication of CN117574098B
Legal status: Active


Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a learning concentration analysis method and a related device. The method comprises the following steps: acquiring first sensor data, second sensor data and third sensor data of a target object in a learning process; preprocessing the first sensor data, the second sensor data and the third sensor data; respectively performing feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data; performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data; inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process; inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process; and displaying the concentration level of the target object in the learning process.

Description

Learning concentration analysis method and related device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a learning concentration analysis method and a related device.
Background
With the development and popularization of educational informatization, more and more schools and teaching institutions have begun to pay attention to students' learning situation and learning effect. Concentration is one of the important factors influencing students' learning effect. However, conventional concentration evaluation methods have several shortcomings: they rely solely on teachers' subjective assessments, consume considerable time and manpower, and are prone to error.
Disclosure of Invention
The embodiment of the invention mainly aims to provide a learning concentration analysis method and a related device, so as to solve the problems in the related art that concentration evaluation relies solely on teachers' subjective assessment, consumes considerable time and labor, and is prone to error, resulting in poor and unreliable concentration analysis results.
In a first aspect, an embodiment of the present invention provides a learning concentration analysis method, including:
acquiring first sensor data, second sensor data and third sensor data of a target object in a learning process;
preprocessing the first sensor data, the second sensor data and the third sensor data;
respectively performing feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data;
performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data;
inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process;
inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process;
and visually displaying the concentration level of the target object in the learning process.
In a second aspect, an embodiment of the present invention provides a learning concentration analysis apparatus, including:
the data acquisition module is used for acquiring first sensor data, second sensor data and third sensor data of the target object in the learning process;
the data processing module is used for preprocessing the first sensor data, the second sensor data and the third sensor data;
the feature extraction module is used for respectively performing feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data;
the feature fusion module is used for performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data;
the information determining module is used for inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process;
the result determining module is used for inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process;
and the result display module is used for visually displaying the concentration level of the target object in the learning process.
In a third aspect, an embodiment of the present invention further provides a terminal device, the terminal device comprising a processor, a memory, a computer program stored in the memory and executable by the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of any of the learning concentration analysis methods provided in the present specification.
In a fourth aspect, an embodiment of the present invention further provides a storage medium for computer readable storage, wherein the storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of any of the learning concentration analysis methods provided in the present specification.
The embodiment of the invention provides a learning concentration analysis method and a related device. By acquiring the first sensor data, the second sensor data and the third sensor data of the target object in the learning process, the behavior patterns and environmental characteristics of the target object can be analyzed, so as to better understand its learning state. Preprocessing the first sensor data, the second sensor data and the third sensor data removes noise and unnecessary information from each sensor data stream, thereby improving the efficiency and accuracy of subsequent processing. Feature extraction on the preprocessed sensor data then yields first feature data corresponding to the first sensor data, second feature data corresponding to the second sensor data and third feature data corresponding to the third sensor data; these feature data describe information such as the physical state, limb micro-motion, concentration and cognitive state of the target object. Feature fusion of the first feature data, the second feature data and the third feature data produces more comprehensive target feature data that describes the state of the target object more accurately. The target feature data is used by the action recognition model to obtain the target limb micro-motion information of the target object in the learning process, and the concentration recognition model then analyzes this micro-motion information to obtain the concentration level of the target object in the learning process. The learning effect and progress of the target object can be measured according to the concentration level, and feedback and support can be provided as needed to better promote learning. Finally, the concentration level of the target object in the learning process is visually displayed, which intuitively reflects the learning effect and progress of the target object and guides further learning activities, thereby providing better feedback and support and improving learning. This solves the problems in the related art that concentration evaluation relies solely on teachers' subjective assessments, consumes considerable time and labor, and is prone to error, resulting in poor and unreliable concentration analysis results.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a learning concentration analysis method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a learning concentration analysis device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the invention provides a learning concentration analysis method and a related device. The learning concentration analysis method can be applied to terminal equipment, and the terminal equipment can be electronic equipment such as tablet computers, notebook computers, desktop computers, personal digital assistants, wearable equipment and the like. The terminal device may be a server or a server cluster.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flow chart of a learning concentration analysis method according to an embodiment of the invention.
As shown in fig. 1, the learning concentration analysis method includes steps S101 to S107.
Step S101, acquiring first sensor data, second sensor data and third sensor data of a target object in a learning process.
For example, a corresponding sensor is installed in a learning environment of the target object, and then sensor data corresponding to the target object is acquired according to the sensor.
For example, if a camera, an inertial sensor and a pressure sensor are installed, the first sensor data of the target object is collected by the camera, the second sensor data is measured by the inertial sensor, and the third sensor data is measured by the pressure sensor. The first sensor data is camera data, such as image data or video data; the second sensor data is inertial measurement unit data; and the third sensor data is pressure measurement data.
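As an informal illustration (not part of the original disclosure), the three data streams might be collected and timestamped roughly as follows; the reader callables read_camera_frame, read_imu_sample and read_pressure_sample are hypothetical placeholders for the actual device drivers:

    import time

    def acquire_sensor_data(duration_s, read_camera_frame, read_imu_sample, read_pressure_sample):
        """Collect timestamped samples from three hypothetical sensor readers.

        Each reader is assumed to be a callable returning the latest measurement;
        a real system would use the vendor SDKs or device drivers instead.
        """
        first, second, third = [], [], []   # camera, IMU, pressure streams
        t0 = time.time()
        while time.time() - t0 < duration_s:
            t = time.time() - t0
            first.append((t, read_camera_frame()))     # image/video frame
            second.append((t, read_imu_sample()))      # angular velocity + acceleration
            third.append((t, read_pressure_sample()))  # pressure value(s)
        return first, second, third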
Step S102, preprocessing the first sensor data, the second sensor data, and the third sensor data.
Illustratively, the first sensor data, the second sensor data, and the third sensor data are respectively denoised using median filtering or gaussian filtering to eliminate noise and outliers in the respective sensor data.
For example, if video data is to be processed, a denoising method based on a frame difference method may be used; if inertial measurement unit data is to be processed, a low pass filter may be used to remove high frequency noise.
Illustratively, the denoised first sensor data, second sensor data and third sensor data are trimmed, the invalid portions in each sensor data stream are removed, and only the useful portions are retained.
For example, if the time span of the first sensor data is longer than that of the second sensor data or the third sensor data, the redundant duration in the first sensor data is removed, so as to ensure consistent subsequent processing.
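A rough Python sketch of the preprocessing described above (median/Gaussian filtering, a low-pass filter for the inertial stream, and trimming to a common duration); the filter orders, cutoff frequency and kernel sizes are illustrative assumptions rather than values specified by the patent:

    import numpy as np
    from scipy.ndimage import median_filter, gaussian_filter1d
    from scipy.signal import butter, filtfilt

    def denoise_pressure(pressure, kernel=5):
        # Median filtering suppresses impulsive noise and outliers in the pressure stream.
        return median_filter(np.asarray(pressure, dtype=float), size=kernel)

    def denoise_imu(imu, fs=100.0, cutoff_hz=10.0):
        # Low-pass Butterworth filter removes high-frequency noise from the IMU channels.
        b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
        return filtfilt(b, a, np.asarray(imu, dtype=float), axis=0)

    def denoise_frames(frames, sigma=1.0):
        # Per-frame Gaussian smoothing, used here as a simple stand-in for video denoising.
        return [gaussian_filter1d(f.astype(float), sigma=sigma, axis=0) for f in frames]

    def trim_to_common_length(*streams):
        # Keep only the overlapping portion so all streams cover the same time span.
        n = min(len(s) for s in streams)
        return tuple(s[:n] for s in streams)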
Optionally, a suitable preprocessing mode may be selected and adjusted according to the specific situation to process the first sensor data, the second sensor data and the third sensor data so as to achieve the best effect. This is not specifically limited in the present application, and the user may choose according to actual requirements.
In some embodiments, preprocessing the first sensor data, the second sensor data and the third sensor data includes: characterizing the first sensor data to obtain a first vector corresponding to the first sensor data, and characterizing the second sensor data to obtain a second vector corresponding to the second sensor data; performing consistency calibration on the first sensor data and the second sensor data according to the first vector and the second vector to obtain calibrated first sensor data and calibrated second sensor data; characterizing the calibrated second sensor data again, and updating the characterization result into the second vector; characterizing the third sensor data to obtain a third vector corresponding to the third sensor data; and performing consistency calibration on the calibrated second sensor data and the third sensor data according to the third vector and the second vector to obtain calibrated third sensor data.
Illustratively, the first sensor data is camera data, from which the rotation angle of the head, the displacement vector of the shoulders and the position coordinates of the fingers of the target object can be extracted; the second sensor data is inertial measurement unit data, from which the angular velocity and acceleration of the head and shoulders of the target object can be extracted. The measurement data of each part of the target object in the first sensor data and in the second sensor data are then aligned and calibrated, thereby providing good support for the subsequent concentration analysis.
For example, feature extraction is performed on the first sensor data and the second sensor data, and common signal processing and image processing technologies are generally adopted, for example, feature extraction methods based on Wavelet and FFT, morphological filtering, SIFT feature extraction and the like, so as to extract important features in the first sensor data and the second sensor data, and further obtain a first vector corresponding to the important features in the first sensor data and a second vector in the second sensor data, which are used for guidance in a subsequent consistency calibration process.
After the first vector corresponding to the important features in the first sensor data and the second vector corresponding to the important features in the second sensor data are obtained, the similarity between each vector in the first vector and each vector in the second vector is computed, and the maximum value among the similarity results is obtained. The maximum value is compared with a preset value; when the maximum value is greater than the preset value, the first sensor data and the second sensor data are aligned according to the vector pair corresponding to the maximum value; and when the maximum value is smaller than or equal to the preset value, the important features are extracted from the first sensor data and the second sensor data again until the maximum value is greater than the preset value.
For example, the important features of the first sensor data include features 11, 12 and 13, with corresponding vectors 11, 12 and 13, and the important features of the second sensor data include features 21, 22 and 23, with corresponding vectors 21, 22 and 23; the similarity between each pair of vectors is then computed. When the similarity between vector 12 and vector 23 is the largest and greater than the preset value, the first sensing position of feature 12 (corresponding to vector 12 in the first sensor data) and the second sensing position of feature 23 (corresponding to vector 23 in the second sensor data) are obtained, and data calibration between the first sensor data and the second sensor data is performed according to the first sensing position and the second sensing position.
For example, after the second sensor data is calibrated, the original second sensor data may be deleted; the calibrated second sensor data therefore needs to be characterized again, and the characterization result is updated into the second vector to ensure the accuracy of the subsequent processing.
The third sensor data may be pressure sensor data, from which the strength and timing information of the finger movements of the target object can be extracted; the third sensor data is characterized to obtain the third vector corresponding to the third sensor data. The second sensor data is inertial measurement unit data, from which the angular velocity and acceleration of the head and shoulders of the target object can be extracted. Consistency calibration is then performed between the amplitude of the shoulder angular velocity and acceleration in the calibrated second sensor data and the strength of the finger movements in the third sensor data, so as to obtain the calibrated third sensor data. Finally, data consistency calibration across the first sensor data, the second sensor data and the third sensor data is achieved, providing support for subsequently obtaining the concentration level of the target object.
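A minimal sketch of the similarity-based consistency calibration described above, assuming cosine similarity between feature vectors and a simple offset alignment; the similarity threshold and the alignment step are illustrative assumptions:

    import numpy as np

    def cosine_similarity(u, v):
        u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def best_matching_pair(first_vectors, second_vectors):
        # Return indices and similarity of the most similar (first, second) vector pair.
        best = (-1, -1, -np.inf)
        for i, u in enumerate(first_vectors):
            for j, v in enumerate(second_vectors):
                s = cosine_similarity(u, v)
                if s > best[2]:
                    best = (i, j, s)
        return best

    def calibrate(first_positions, second_positions, first_vectors, second_vectors, threshold=0.9):
        # Align the second stream to the first one using the best matching feature pair.
        i, j, sim = best_matching_pair(first_vectors, second_vectors)
        if sim <= threshold:
            return None  # caller should re-extract features and try again
        offset = first_positions[i] - second_positions[j]
        return [p + offset for p in second_positions]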
Step S103, respectively performing feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data.
The first sensor data, the second sensor data and the third sensor data after the preprocessing are respectively subjected to characteristic extraction by using a characteristic extraction algorithm, so that first characteristic data corresponding to the first sensor data, second characteristic data corresponding to the second sensor data and third characteristic data corresponding to the third sensor data are obtained.
Alternatively, feature extraction algorithms include, but are not limited to, wavelet transforms, wavelet packet transforms, discrete cosine transforms, principal component analysis, local binary patterns, and the like.
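As one possible realization of this step (not prescribed by the patent), wavelet coefficients could be computed for each windowed signal and reduced with PCA; the wavelet family, decomposition level and component count below are assumptions:

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_features(signal, wavelet="db4", level=3):
        # Concatenate wavelet decomposition coefficients as a simple feature vector.
        coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
        return np.concatenate([c.ravel() for c in coeffs])

    def extract_features(windows, n_components=8):
        """windows: list of equal-length 1-D signal windows from one sensor stream."""
        feats = np.stack([wavelet_features(w) for w in windows])
        # PCA keeps the most informative directions of the wavelet features.
        n_components = min(n_components, feats.shape[0], feats.shape[1])
        return PCA(n_components=n_components).fit_transform(feats)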
Step S104, performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data.
Illustratively, the first feature data, the second feature data and the third feature data may be mapped into a higher-dimensional feature space by a linear or nonlinear transformation, and the mapped features are then fused to obtain the corresponding target feature data. Common feature transformation methods include principal component analysis, linear discriminant analysis, kernel principal component analysis, and the like.
In some embodiments, feature fusion is performed on the first feature data, the second feature data, and the third feature data to obtain target feature data, including: respectively inputting the first characteristic data, the second characteristic data and the third characteristic data into a time sequence model to obtain a hidden state corresponding to each time step; and carrying out fusion processing on the hidden state corresponding to each time step to obtain target characteristic data.
Illustratively, a time series model is constructed: a model suitable for processing time series data is selected, such as a recurrent neural network (RNN) family model (e.g., LSTM, GRU), a Transformer, etc. The first feature data, the second feature data and the third feature data are respectively input into the time series model. In the time series model, each time step generates a hidden state. By running the time series model, the hidden state corresponding to each time step can be obtained. These hidden states can be seen as abstract representations of the input data and contain its important information.
Illustratively, the hidden states corresponding to the time steps are fused to obtain the target feature data. The fusion may use a weighted sum: a weight is assigned to the hidden state of each time step, and the hidden states are then combined according to these weights. The weights can be set according to the importance of each time step, or learned during model training.
Illustratively, the fused target feature data may be dimensionally adjusted using linear transformation, a dimension reduction algorithm (e.g., PCA), or the like, for subsequent tasks or analysis.
Specifically, inputting the first feature data, the second feature data and the third feature data into the time series model preserves the temporal information of the data. The temporal information includes the evolution trend and periodic changes of the data over time, and can provide a richer feature representation. Secondly, each time step generates a corresponding hidden state through the time series model. The hidden state can be regarded as an abstract representation of the input data and contains its important information. Fusing these hidden states yields richer and more representative target feature data. By fusing the hidden states, the features of different time steps can be integrated and synthesized, which helps expand the feature space and enhance the representational power of the features, thereby better capturing patterns, associations and underlying regularities in the data. In addition, inputting the feature data into the time series model and fusing the hidden states provides target feature data that is richer and contains temporal information, thereby providing more valuable input for subsequent analysis.
In some embodiments, the fusing processing is performed on the hidden state corresponding to each time step to obtain target feature data, including: and carrying out fusion processing on the hidden state corresponding to each time step according to the following steps to obtain target characteristic data:
output = FC(fusion(h_1, h_2, …, h_t));
h_t = RNN(x1_t, x2_t, …, xn_t, h_{t-1});
wherein output represents the target feature data, FC represents a fully connected layer, fusion represents a fusion function, h_t represents the hidden state of the RNN at time step t, h_{t-1} represents the hidden state of the RNN at time step t-1, RNN represents a recurrent neural network, x1_t represents the data feature of the first sensor at time step t, and xn_t represents the data feature of the n-th sensor at time step t.
Illustratively, the present application includes first sensor data, second sensor data and third sensor data. The data feature x1_t of the first sensor data at time step t, the data feature x2_t of the second sensor data at time step t and the data feature x3_t of the third sensor data at time step t are obtained, together with the hidden state h_{t-1} of the RNN at time step t-1, and the hidden state of the RNN at time step t is then obtained as h_t = RNN(x1_t, x2_t, x3_t, h_{t-1}).
Illustratively, after the hidden state h_t = RNN(x1_t, x2_t, x3_t, h_{t-1}) of the RNN at time step t is obtained, a fully connected computation is performed according to output = FC(fusion(h_1, h_2, …, h_t)), thereby obtaining the target feature data.
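A minimal PyTorch-style sketch of the two formulas above, assuming a GRU as the recurrent unit and an attention-style weighted sum as the fusion function; the layer sizes and the specific fusion weights are illustrative assumptions:

    import torch
    import torch.nn as nn

    class HiddenStateFusion(nn.Module):
        def __init__(self, in_dim, hidden_dim=64, out_dim=32):
            super().__init__()
            self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)  # h_t = RNN(x1_t,...,xn_t, h_{t-1})
            self.attn = nn.Linear(hidden_dim, 1)                      # scores for the weighted-sum fusion
            self.fc = nn.Linear(hidden_dim, out_dim)                  # output = FC(fusion(h_1,...,h_t))

        def forward(self, x):
            # x: (batch, T, in_dim), where in_dim concatenates the per-sensor features x1_t..xn_t.
            h, _ = self.rnn(x)                       # hidden state at every time step
            w = torch.softmax(self.attn(h), dim=1)   # one weight per time step
            fused = (w * h).sum(dim=1)               # weighted sum of the hidden states
            return self.fc(fused)                    # target feature data

    # Usage sketch: three per-sensor feature streams concatenated along the feature axis.
    f1 = torch.randn(4, 50, 16); f2 = torch.randn(4, 50, 8); f3 = torch.randn(4, 50, 4)
    target_features = HiddenStateFusion(in_dim=16 + 8 + 4)(torch.cat([f1, f2, f3], dim=-1))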
Step S105, inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process.
Illustratively, a model suitable for action recognition, such as a convolutional neural network or a recurrent neural network, is selected to construct the action recognition model. The action recognition model is trained using a labeled training dataset. The training dataset should contain different action samples of the target object and should correspond to the target feature data. Through training, the model learns to recognize the limb micro-motion information of the target object in the learning process.
Illustratively, the target feature data should include the limb information and the related time series information of the target object. After the action recognition model is obtained, the target feature data is passed as input to the action recognition model, and the target limb micro-motion information of the target object in the learning process is then obtained through model inference.
In some embodiments, the action recognition model includes an action generation network and an action evaluation network; inputting the target feature data into the action recognition model to obtain the target limb micro-motion information of the target object in the learning process includes: acquiring state information corresponding to the target object by using the target feature data; determining a mapping relation between the state information and initial limb micro-motion information, and obtaining a continuous action vector corresponding to the target object according to the action generation network and the mapping relation; performing action value evaluation on the state information and the continuous action vector through the action evaluation network to obtain an action value evaluation function; optimizing the action value evaluation function to obtain a target action value evaluation function; and guiding the action generation network, according to the target action value evaluation function, to generate the corresponding target limb micro-motion information from the state information.
Illustratively, the acquired limb motion data is preprocessed and annotated, and converted into state information and action sequences. The mapping relation between the state information and the initial limb micro-motion information is then determined according to the task requirements. Such a mapping may be modeled based on prior knowledge or data-driven methods, for example using neural networks, support vector machines (SVMs), and the like.
Illustratively, the action evaluation network takes the state information and the initial limb micro-motion information or the continuous action vector as input, and a new limb micro-motion information sequence is obtained. The initial limb micro-motion information or the continuous action vector can also be used as network input according to the mapping relation to obtain new state information. The corresponding action value evaluation function is then determined according to the new limb micro-motion information sequence and the initial limb micro-motion information, or according to the state information and the new state information.
Illustratively, model parameters in the action evaluation network are adjusted, and then the action value evaluation function is optimized to obtain a more accurate and stable target action value evaluation function.
Illustratively, according to the target action value evaluation function, the value of each possible action under the given state information is evaluated. With the state information as input, candidate limb micro-motion information can be generated through the action generation network; the target action value evaluation function is then used as a label or auxiliary signal to evaluate the candidate limb micro-motion information and obtain the corresponding action value. The optimal limb micro-motion information is selected as the target limb micro-motion information according to the action value.
In some embodiments, the expression corresponding to the action generation network is:
a = G(s; θ_G);
wherein s is the state information corresponding to the target object, a is the continuous action vector, G is the network function corresponding to the action generation network, and θ_G represents the network parameters corresponding to the network function;
the expression corresponding to the action evaluation network is:
Q = C(s, a; θ_C);
wherein s is the state information corresponding to the target object, a is the continuous action vector, C is the network function corresponding to the action evaluation network, and θ_C represents the network parameters corresponding to the network function.
Illustratively, the action generation network is a function that takes state information as input and outputs an action, and may be expressed as a = G(s; θ_G), where s is the state information corresponding to the target object, a is the continuous action vector, G is the network function corresponding to the action generation network, and θ_G represents the corresponding network parameters.
Illustratively, the goal of the action generation network is to maximize the target action value evaluation function, thereby achieving control with the optimal action.
Illustratively, the action evaluation network is a deep neural network model for evaluating the action value corresponding to the state information, and may be expressed as Q = C(s, a; θ_C), where s is the state information corresponding to the target object, a is the continuous action vector, C is the network function corresponding to the action evaluation network, and θ_C represents the corresponding network parameters.
Illustratively, the goal of the action evaluation network is to minimize the loss between the predicted value and the target value, and a common loss function is the mean square error. The network parameters corresponding to the action evaluation network are determined according to the loss function.
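A compact actor-critic style sketch of the action generation and action evaluation networks and the mean-square-error loss described above, written in PyTorch; the network sizes and the deterministic actor are assumptions made for illustration only:

    import torch
    import torch.nn as nn

    class ActionGenerationNet(nn.Module):          # a = G(s; theta_G)
        def __init__(self, state_dim, action_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, action_dim), nn.Tanh())
        def forward(self, s):
            return self.net(s)

    class ActionEvaluationNet(nn.Module):          # Q = C(s, a; theta_C)
        def __init__(self, state_dim, action_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))
        def forward(self, s, a):
            return self.net(torch.cat([s, a], dim=-1))

    # Evaluation-network update: minimize the MSE between predicted and target action values.
    def critic_loss(critic, s, a, target_value):
        return nn.functional.mse_loss(critic(s, a), target_value)

    # Generation-network update: maximize the evaluated value of the generated action.
    def actor_loss(actor, critic, s):
        return -critic(s, actor(s)).mean()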
Step S106, inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process.
Illustratively, a concentration recognition model is constructed, for example using a convolutional neural network, a recurrent neural network or a Transformer in deep learning. Using a supervised learning approach, the target limb micro-motion information is taken as input and the concentration recognition model is trained with labeled concentration levels, thereby obtaining the trained concentration recognition model.
Illustratively, the target limb micro-motion information is input into the trained concentration recognition model, and the output of the model is obtained through forward propagation. This output represents the concentration level of the target object in the learning process. The concentration of the target object is then analyzed and judged according to the output result.
In some embodiments, inputting the target limb micro-motion information into the concentration recognition model to obtain the concentration level of the target object in the learning process includes: predicting the target limb micro-motion information to obtain a probability value corresponding to each candidate concentration level of the target object in the learning process:
P(y = k | x) = exp(f_k(x)) / Σ_{j=1}^{K} exp(f_j(x));
wherein P(y = k | x) represents the probability value corresponding to the candidate concentration level, y represents the category label, k represents the category index, exp represents the exponential function, f_k(x) represents the decision value that the target limb micro-motion information x belongs to the k-th category, K is the total number of categories, and Σ_{j=1}^{K} exp(f_j(x)) represents the sum over all categories of the exponentiated decision functions;
and taking the candidate concentration level whose probability value is greater than a probability threshold as the concentration level.
For example, the target limb micro-motion information may belong to one of several categories. For each category k, a decision function f_k(x) is defined, representing the decision or confidence that a sample x belongs to category k. Assuming there are K categories, the probability that sample x belongs to category k can be obtained by converting the decision function f_k(x) into probability form.
Illustratively, a common conversion method is the softmax function. The softmax function converts a set of real numbers into a probability distribution in which the probability values of all categories sum to 1. For a multi-class support vector machine, the decision functions f_k(x) can be converted using the softmax function to obtain the probability that sample x belongs to category k: P(y = k | x) = exp(f_k(x)) / Σ_{j=1}^{K} exp(f_j(x)), where P(y = k | x) represents the probability value of the candidate concentration level, y represents the category label, k represents the category index, exp represents the exponential function, f_k(x) represents the decision value that the target limb micro-motion information belongs to the k-th category, K is the total number of categories, and Σ_{j=1}^{K} exp(f_j(x)) represents the sum over all categories of the exponentiated decision functions.
For example, after the probability values corresponding to the candidate concentration levels of the target limb micro-motion information of the target object in the learning process are obtained, each probability value is compared with the probability threshold, and the candidate concentration level whose probability value is greater than the probability threshold is taken as the concentration level.
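A small sketch of the softmax conversion and threshold selection described above; the decision scores, level names and threshold value are placeholders:

    import numpy as np

    def softmax(scores):
        # Convert raw per-class decision values f_k(x) into probabilities that sum to 1.
        z = np.asarray(scores, dtype=float)
        e = np.exp(z - z.max())          # subtract max for numerical stability
        return e / e.sum()

    def select_concentration_level(decision_scores, levels, prob_threshold=0.5):
        probs = softmax(decision_scores)
        k = int(np.argmax(probs))
        return levels[k] if probs[k] > prob_threshold else None  # None: no level exceeds the threshold

    # Example with assumed decision scores for three candidate levels.
    print(select_concentration_level([0.2, 1.5, 0.1], ["low", "medium", "high"]))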
Step S107, visually displaying the concentration level of the target object in the learning process.
Illustratively, a suitable visual display mode is selected according to the analysis result of the concentration level of the target object in the learning process and the task requirements. Common visualization forms include line charts, bar charts, heat maps, radar charts, scatter plots, and the like. A suitable display form can be selected according to factors such as the change trend of concentration and the comparison of concentration across different time periods. The concentration level is then converted into a visual chart using a programming tool or a data visualization tool.
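For instance, a simple line chart of the concentration level over time could be produced as follows; the sample data is invented purely for illustration:

    import matplotlib.pyplot as plt

    # Hypothetical per-minute concentration levels (0 = low, 1 = medium, 2 = high).
    minutes = list(range(10))
    levels = [2, 2, 1, 1, 0, 1, 2, 2, 1, 2]

    plt.plot(minutes, levels, marker="o")
    plt.yticks([0, 1, 2], ["low", "medium", "high"])
    plt.xlabel("Time (minutes)")
    plt.ylabel("Concentration level")
    plt.title("Concentration of the target object during learning")
    plt.tight_layout()
    plt.show()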
Illustratively, the visual presentation can convert the concentration level into a graphical form, and intuitively present the concentration change trend and the real-time state of the target object, so that an observer can quickly understand the data. The target object can also know the concentration level of the target object in time, so that the learning strategy is adjusted, and the concentration level is improved.
In addition, the visual display can help teachers, researchers and the like monitor and analyze the concentration of the target object. Information such as concentration level, time interval difference and the like can be found, and references are provided for subsequent improvement and optimization.
In summary, visually displaying the concentration level of the target object in the learning process can provide intuitive and accurate information feedback, help the target object and observers understand the change trend and state of concentration, and provide convenience for monitoring, analyzing and sharing concentration data.
Referring to fig. 2, fig. 2 shows a learning concentration analysis device 200 provided in an embodiment of the present application. The learning concentration analysis device 200 includes a data acquisition module 201, a data processing module 202, a feature extraction module 203, a feature fusion module 204, an information determination module 205, a result determination module 206 and a result display module 207. The data acquisition module 201 is configured to acquire first sensor data, second sensor data and third sensor data of a target object in a learning process; the data processing module 202 is configured to preprocess the first sensor data, the second sensor data and the third sensor data; the feature extraction module 203 is configured to respectively perform feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data; the feature fusion module 204 is configured to perform feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data; the information determination module 205 is configured to input the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process; the result determination module 206 is configured to input the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process; and the result display module 207 is configured to visually display the concentration level of the target object in the learning process.
In some embodiments, in the process of preprocessing the first sensor data, the second sensor data and the third sensor data, the data processing module 202 performs:
characterizing the first sensor data to obtain a first vector corresponding to the first sensor data, and characterizing the second sensor data to obtain a second vector corresponding to the second sensor data;
performing consistency calibration on the first sensor data and the second sensor data according to the first vector and the second vector to obtain calibrated first sensor data and calibrated second sensor data;
carrying out characteristic characterization on the calibrated second sensor data again, and updating a characteristic characterization result to the second vector;
characterizing the third sensor data to obtain a third vector corresponding to the third sensor data;
and carrying out consistency calibration on the calibrated second sensor data and the third sensor data according to the third vector and the second vector to obtain the calibrated third sensor data.
In some embodiments, the feature fusion module 204 performs, in a process of performing feature fusion on the first feature data, the second feature data, and the third feature data to obtain target feature data:
respectively inputting the first characteristic data, the second characteristic data and the third characteristic data into a time sequence model to obtain a hidden state corresponding to each time step;
and carrying out fusion processing on the hidden state corresponding to each time step to obtain target characteristic data.
In some embodiments, the feature fusion module 204 performs, in a process of performing fusion processing on the hidden state corresponding to each time step to obtain target feature data, the following steps:
and carrying out fusion processing on the hidden state corresponding to each time step according to the following steps to obtain target characteristic data:
output = FC(fusion(h_1, h_2, …, h_t));
h_t = RNN(x1_t, x2_t, …, xn_t, h_{t-1});
wherein output represents the target feature data, FC represents a fully connected layer, fusion represents a fusion function, h_t represents the hidden state of the RNN at time step t, h_{t-1} represents the hidden state of the RNN at time step t-1, RNN represents a recurrent neural network, x1_t represents the data feature of the first sensor at time step t, and xn_t represents the data feature of the n-th sensor at time step t.
In some embodiments, the action recognition model includes an action generation network, an action evaluation network;
the information determination module 205, in the process of inputting the target feature data into the action recognition model to obtain the target limb micro-motion information of the target object in the learning process, performs:
acquiring state information corresponding to the target object by using the target feature data;
determining a mapping relation between the state information and initial limb micro-motion information, and obtaining a continuous action vector corresponding to the target object according to the action generation network and the mapping relation;
performing action value evaluation on the state information and the continuous action vector through the action evaluation network to obtain an action value evaluation function;
optimizing the action value evaluation function to obtain a target action value evaluation function;
and guiding the action generation network, according to the target action value evaluation function, to generate the corresponding target limb micro-motion information from the state information.
In some implementations, the information determination module 205 performs: the expression corresponding to the action generation network is:
a = G(s; θ_G);
wherein s is the state information corresponding to the target object, a is the continuous action vector, G is the network function corresponding to the action generation network, and θ_G represents the network parameters corresponding to the network function;
the expression corresponding to the action evaluation network is:
Q = C(s, a; θ_C);
wherein s is the state information corresponding to the target object, a is the continuous action vector, C is the network function corresponding to the action evaluation network, and θ_C represents the network parameters corresponding to the network function.
In some embodiments, the result determination module 206, in the process of inputting the target limb micro-motion information into the concentration recognition model to obtain the concentration level of the target object in the learning process, performs:
predicting the target limb micro-motion information to obtain a probability value corresponding to each candidate concentration level of the target object in the learning process:
P(y = k | x) = exp(f_k(x)) / Σ_{j=1}^{K} exp(f_j(x));
wherein P(y = k | x) represents the probability value corresponding to the candidate concentration level, y represents the category label, k represents the category index, exp represents the exponential function, f_k(x) represents the decision value that the target limb micro-motion information x belongs to the k-th category, K is the total number of categories, and Σ_{j=1}^{K} exp(f_j(x)) represents the sum over all categories of the exponentiated decision functions;
and taking the candidate concentration level whose probability value is greater than a probability threshold as the concentration level.
In some embodiments, the learning concentration analysis apparatus 200 may be applied to a terminal device.
It should be noted that, for convenience and brevity of description, the specific working process of the learning concentration analysis device 200 described above may refer to the corresponding process in the foregoing learning concentration analysis method embodiment, which is not described herein again.
Referring to fig. 3, fig. 3 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present invention.
As shown in fig. 3, the terminal device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 being connected by a bus 303, such as an I2C (Inter-integrated Circuit) bus.
In particular, the processor 301 is used to provide computing and control capabilities, supporting the operation of the entire terminal device. The processor 301 may be a central processing unit (Central Processing Unit, CPU), the processor 301 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Specifically, the memory 302 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of a portion of the structure related to the embodiment of the present invention, and does not constitute a limitation of the terminal device to which the embodiment of the present invention is applied, and that a specific server may include more or less components than those shown in the drawings, or may combine some components, or have a different arrangement of components.
The processor is used for running a computer program stored in the memory, and implementing any one of the learning concentration analysis methods provided by the embodiment of the invention when the computer program is executed.
In an embodiment, the processor is configured to run a computer program stored in a memory and to implement the following steps when executing the computer program:
acquiring first sensor data, second sensor data and third sensor data of a target object in a learning process;
preprocessing the first sensor data, the second sensor data and the third sensor data;
respectively performing feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data;
performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data;
inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process;
inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process;
and visually displaying the concentration level of the target object in the learning process.
In some embodiments, in the process of preprocessing the first sensor data, the second sensor data and the third sensor data, the processor 301 performs:
characterizing the first sensor data to obtain a first vector corresponding to the first sensor data, and characterizing the second sensor data to obtain a second vector corresponding to the second sensor data;
Performing consistency calibration on the first sensor data and the second sensor data according to the first vector and the second vector to obtain calibrated first sensor data and calibrated second sensor data;
carrying out characteristic characterization on the calibrated second sensor data again, and updating a characteristic characterization result to the second vector;
characterizing the third sensor data to obtain a third vector corresponding to the third sensor data;
and carrying out consistency calibration on the calibrated second sensor data and the third sensor data according to the third vector and the second vector to obtain the calibrated third sensor data.
In some embodiments, the processor 301 performs, in performing feature fusion on the first feature data, the second feature data, and the third feature data to obtain target feature data:
respectively inputting the first characteristic data, the second characteristic data and the third characteristic data into a time sequence model to obtain a hidden state corresponding to each time step;
and carrying out fusion processing on the hidden state corresponding to each time step to obtain target characteristic data.
In some embodiments, the processor 301 performs, in the process of fusing the hidden states corresponding to each of the time steps to obtain the target feature data, the following steps:
and carrying out fusion processing on the hidden state corresponding to each time step according to the following steps to obtain target characteristic data:
output = FC(fusion(h_1, h_2, …, h_t));
h_t = RNN(x1_t, x2_t, …, xn_t, h_{t-1});
wherein output represents the target feature data, FC represents a fully connected layer, fusion represents a fusion function, h_t represents the hidden state of the RNN at time step t, h_{t-1} represents the hidden state of the RNN at time step t-1, RNN represents a recurrent neural network, x1_t represents the data feature of the first sensor at time step t, and xn_t represents the data feature of the n-th sensor at time step t.
In some embodiments, the action recognition model includes an action generation network, an action evaluation network;
the processor 301, in the process of inputting the target feature data into the action recognition model to obtain the target limb micro-motion information of the target object in the learning process, performs:
acquiring state information corresponding to the target object by using the target feature data;
determining a mapping relation between the state information and initial limb micro-motion information, and obtaining a continuous action vector corresponding to the target object according to the action generation network and the mapping relation;
performing action value evaluation on the state information and the continuous action vector through the action evaluation network to obtain an action value evaluation function;
optimizing the action value evaluation function to obtain a target action value evaluation function;
and guiding the action generation network, according to the target action value evaluation function, to generate the corresponding target limb micro-motion information from the state information.
In some implementations, the processor 301 performs: the expression corresponding to the action generation network is:
a = G(s; θ_G);
wherein s is the state information corresponding to the target object, a is the continuous action vector, G is the network function corresponding to the action generation network, and θ_G represents the network parameters corresponding to the network function;
the expression corresponding to the action evaluation network is:
Q = C(s, a; θ_C);
wherein s is the state information corresponding to the target object, a is the continuous action vector, C is the network function corresponding to the action evaluation network, and θ_C represents the network parameters corresponding to the network function.
In some embodiments, the processor 301, in the process of inputting the target limb micro-motion information into the concentration recognition model to obtain the concentration level of the target object in the learning process, performs:
predicting the target limb micro-motion information to obtain a probability value corresponding to each candidate concentration level of the target object in the learning process:
P(y = k | x) = exp(f_k(x)) / Σ_{j=1}^{K} exp(f_j(x));
wherein P(y = k | x) represents the probability value corresponding to the candidate concentration level, y represents the category label, k represents the category index, exp represents the exponential function, f_k(x) represents the decision value that the target limb micro-motion information x belongs to the k-th category, K is the total number of categories, and Σ_{j=1}^{K} exp(f_j(x)) represents the sum over all categories of the exponentiated decision functions;
and taking the candidate concentration level with the probability value larger than a probability threshold value as the concentration level.
It should be noted that, for convenience and brevity of description, a person skilled in the art may clearly understand that, in the specific working process of the terminal device described above, reference may be made to a corresponding process in the foregoing embodiment of the learning concentration analysis method, which is not described herein again.
Embodiments of the present invention also provide a storage medium for computer readable storage, where the storage medium stores one or more programs executable by one or more processors to implement the steps of any learning concentration analysis method as provided in the embodiments of the present invention.
The storage medium may be an internal storage unit of the terminal device according to the foregoing embodiment, for example, a hard disk or a memory of the terminal device. The storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (9)

1. A method of learning concentration analysis, the method comprising:
acquiring first sensor data, second sensor data and third sensor data of a target object in a learning process;
preprocessing the first sensor data, the second sensor data and the third sensor data;
respectively extracting features of the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data;
performing feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data;
inputting the target feature data into an action recognition model to obtain target limb micro-motion information of the target object in the learning process;
inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process;
visually displaying the concentration level of the target object in the learning process;
wherein performing feature fusion on the first feature data, the second feature data and the third feature data to obtain the target feature data comprises:
respectively inputting the first feature data, the second feature data and the third feature data into a time sequence model to obtain a hidden state corresponding to each time step;
and carrying out fusion processing on the hidden state corresponding to each time step to obtain the target feature data.
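For orientation, the following minimal Python sketch shows the data flow of the claimed method as a sequence of function calls. Every function name here is a hypothetical placeholder standing in for the corresponding step, not an identifier defined by the patent.

```python
def analyze_concentration(sensor1, sensor2, sensor3,
                          preprocess, extract_features, fuse_features,
                          action_model, concentration_model, display):
    """End-to-end flow of claim 1; every callable is an assumed stand-in."""
    # Preprocess the three acquired sensor streams.
    s1, s2, s3 = preprocess(sensor1), preprocess(sensor2), preprocess(sensor3)
    # Per-modality feature extraction.
    f1, f2, f3 = extract_features(s1), extract_features(s2), extract_features(s3)
    # Feature fusion into the target feature data.
    target_features = fuse_features(f1, f2, f3)
    # Action recognition -> target limb micro-motion information.
    micro_motion = action_model(target_features)
    # Concentration recognition -> concentration level.
    level = concentration_model(micro_motion)
    # Visual display of the result.
    display(level)
    return level
```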
2. The method of claim 1, wherein preprocessing the first sensor data, the second sensor data and the third sensor data comprises:
characterizing the first sensor data to obtain a first vector corresponding to the first sensor data, and characterizing the second sensor data to obtain a second vector corresponding to the second sensor data;
performing consistency calibration on the first sensor data and the second sensor data according to the first vector and the second vector to obtain calibrated first sensor data and calibrated second sensor data;
performing feature characterization on the calibrated second sensor data again, and updating the second vector with the resulting characterization result;
characterizing the third sensor data to obtain a third vector corresponding to the third sensor data;
and carrying out consistency calibration on the calibrated second sensor data and the third sensor data according to the third vector and the second vector to obtain calibrated third sensor data.
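The claim does not spell out how the characterization vectors drive the consistency calibration. The sketch below assumes one simple reading in which each vector holds per-channel mean and standard deviation and calibration rescales the later stream to match the earlier one; the functions, the statistics-based approach, and the random data are assumptions for illustration only.

```python
import numpy as np

def characterize(data):
    """Hypothetical feature characterization: per-channel mean and std."""
    data = np.asarray(data, dtype=float)
    return np.concatenate([data.mean(axis=0), data.std(axis=0) + 1e-8])

def calibrate(reference_vec, target_vec, target_data):
    """Rescale target_data so its statistics match the reference characterization."""
    n = len(reference_vec) // 2
    ref_mean, ref_std = reference_vec[:n], reference_vec[n:]
    tgt_mean, tgt_std = target_vec[:n], target_vec[n:]
    return (np.asarray(target_data, dtype=float) - tgt_mean) / tgt_std * ref_std + ref_mean

# Claim 2 flow under these assumptions.
first = np.random.randn(100, 3)
second = 2.0 * np.random.randn(100, 3) + 5.0
v1, v2 = characterize(first), characterize(second)
second_cal = calibrate(v1, v2, second)   # calibrate second against first
v2 = characterize(second_cal)            # re-characterize and update the second vector
third = 0.5 * np.random.randn(100, 3) - 1.0
v3 = characterize(third)
third_cal = calibrate(v2, v3, third)     # calibrate third against the updated second
```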
3. The method of claim 1, wherein the fusion processing of the hidden state corresponding to each time step to obtain the target feature data comprises:
carrying out fusion processing on the hidden state corresponding to each time step according to the following formulas to obtain the target feature data:
output = FC(fusion(h_1, h_2, …, h_t));
h_t = RNN(x1_t, x2_t, …, xn_t, h_{t-1});
wherein output represents the target feature data, FC represents a fully connected layer, fusion represents a fusion function, h_t represents the hidden state of the RNN at time step t, h_{t-1} represents the hidden state of the RNN at time step t-1, RNN represents a recurrent neural network, x1_t represents the data feature of the first sensor at time step t, and xn_t represents the data feature of the n-th sensor at time step t.
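The two formulas above describe feeding the per-sensor features of each time step into a recurrent network and passing the fused hidden states through a fully connected layer. The PyTorch sketch below is one way to realize that structure; the layer sizes, the concatenation of sensor features, and mean pooling over hidden states are assumptions, not details fixed by the claim.

```python
import torch
import torch.nn as nn

class TimeSeriesFusion(nn.Module):
    """h_t = RNN(x1_t, ..., xn_t, h_{t-1}); output = FC(fusion(h_1, ..., h_t))."""

    def __init__(self, per_sensor_dim, n_sensors, hidden_dim=64, out_dim=32):
        super().__init__()
        # Per-step input: concatenation of all sensor features (assumed fusion of inputs).
        self.rnn = nn.RNN(per_sensor_dim * n_sensors, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, out_dim)

    def forward(self, sensor_feats):
        # sensor_feats: list of n tensors, each of shape (batch, T, per_sensor_dim).
        x = torch.cat(sensor_feats, dim=-1)    # (batch, T, n * per_sensor_dim)
        hidden_states, _ = self.rnn(x)         # h_1 ... h_T, shape (batch, T, hidden_dim)
        fused = hidden_states.mean(dim=1)      # fusion over time steps (assumed: mean pooling)
        return self.fc(fused)                  # target feature data

# Usage with three hypothetical sensor feature streams.
model = TimeSeriesFusion(per_sensor_dim=16, n_sensors=3)
feats = [torch.randn(8, 20, 16) for _ in range(3)]
target_feature_data = model(feats)             # shape (8, 32)
```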
4. The method of claim 1, wherein the action recognition model comprises an action generating network and an action evaluation network;
inputting the target feature data into the action recognition model to obtain the target limb micro-motion information of the target object in the learning process comprises:
acquiring state information corresponding to the target object by utilizing the target feature data;
determining a mapping relation between the state information and initial limb micro-motion information, and obtaining a continuous action vector corresponding to the target object according to the action generating network and the mapping relation;
performing action value evaluation on the state information and the continuous action vector through the action evaluation network to obtain an action value evaluation function;
optimizing the action value evaluation function to obtain a target action value evaluation function;
and guiding, according to the target action value evaluation function, the action generating network to generate the corresponding target limb micro-motion information according to the state information.
5. The method of claim 4, wherein the expression corresponding to the action generating network is:
a = G(s; θ_G)
wherein s is the state information corresponding to the target object, a is the continuous action vector, G is the network function corresponding to the action generating network, and θ_G represents the network parameters corresponding to the network function;
the expression corresponding to the action evaluation network is:
Q(s, a) = C(s, a; θ_C)
wherein s is the state information corresponding to the target object, a is the continuous action vector, C is the network function corresponding to the action evaluation network, and θ_C represents the network parameters corresponding to the network function.
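Claims 4 and 5 describe a generate-and-evaluate pair: one network maps state information to a continuous action vector, the other scores state-action pairs, and the value estimate guides generation. The PyTorch sketch below mirrors that actor-critic-style structure; the network sizes, activations, and the names ActionGenerator/ActionEvaluator are illustrative assumptions rather than the patent's definitions.

```python
import torch
import torch.nn as nn

class ActionGenerator(nn.Module):
    """Maps state information s to a continuous action vector a (first expression)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded continuous actions (assumed)
        )

    def forward(self, state):
        return self.net(state)

class ActionEvaluator(nn.Module):
    """Scores a (state, action) pair with an action value (second expression)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# The evaluator's value estimate can guide the generator (claim 4): maximize Q(s, G(s)).
state_dim, action_dim = 32, 8
generator = ActionGenerator(state_dim, action_dim)
evaluator = ActionEvaluator(state_dim, action_dim)
state = torch.randn(16, state_dim)
action = generator(state)
value = evaluator(state, action)
generator_loss = -value.mean()   # gradient ascent on the action value evaluation function
generator_loss.backward()
```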
6. The method of claim 1, wherein inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process comprises:
predicting the target limb micro-motion information to obtain a probability value corresponding to each candidate concentration level of the target object in the learning process:
p(y=k) = exp(f_k) / Σ_{j=1}^{K} exp(f_j)
wherein p(y=k) represents the probability value corresponding to the k-th candidate concentration level, y represents the category label, k represents the category index, exp represents the exponential function, f_k represents the decision that the target limb micro-motion information belongs to the k-th category, K is the total number of categories, and Σ_{j=1}^{K} exp(f_j) represents the summation of the decision functions over all categories;
and taking the candidate concentration level with the probability value larger than a probability threshold value as the concentration level.
7. A learning concentration analysis device, comprising:
the data acquisition module is used for acquiring first sensor data, second sensor data and third sensor data of the target object in the learning process;
the data processing module is used for preprocessing the first sensor data, the second sensor data and the third sensor data;
the feature extraction module is used for respectively carrying out feature extraction on the preprocessed first sensor data, the preprocessed second sensor data and the preprocessed third sensor data to obtain first feature data, second feature data and third feature data;
the feature fusion module is used for carrying out feature fusion on the first feature data, the second feature data and the third feature data to obtain target feature data;
the information determining module is used for inputting the target feature data into the action recognition model to obtain target limb micro-motion information of the target object in the learning process;
the result determining module is used for inputting the target limb micro-motion information into a concentration recognition model to obtain the concentration level of the target object in the learning process;
the result display module is used for visually displaying the concentration level of the target object in the learning process;
wherein, when performing feature fusion on the first feature data, the second feature data and the third feature data to obtain the target feature data, the feature fusion module is configured to:
respectively input the first feature data, the second feature data and the third feature data into a time sequence model to obtain a hidden state corresponding to each time step;
and carry out fusion processing on the hidden state corresponding to each time step to obtain the target feature data.
8. A terminal device, characterized in that the terminal device comprises a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and to implement the learning concentration analysis method according to any one of claims 1 to 6 when the computer program is executed.
9. A computer storage medium for computer storage, wherein the computer storage medium stores one or more programs executable by one or more processors to implement the steps of the learning concentration analysis method of any one of claims 1 to 6.
CN202410054897.XA 2024-01-15 2024-01-15 Learning concentration analysis method and related device Active CN117574098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410054897.XA CN117574098B (en) 2024-01-15 2024-01-15 Learning concentration analysis method and related device

Publications (2)

Publication Number Publication Date
CN117574098A true CN117574098A (en) 2024-02-20
CN117574098B CN117574098B (en) 2024-04-02

Family

ID=89862767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410054897.XA Active CN117574098B (en) 2024-01-15 2024-01-15 Learning concentration analysis method and related device

Country Status (1)

Country Link
CN (1) CN117574098B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283334A (en) * 2021-05-21 2021-08-20 浙江师范大学 Classroom concentration analysis method and device and storage medium
CN115690867A (en) * 2021-07-30 2023-02-03 奇酷软件(深圳)有限公司 Classroom concentration detection method, device, equipment and storage medium
CN115719497A (en) * 2022-11-29 2023-02-28 华中师范大学 Student concentration degree identification method and system
CN116127350A (en) * 2022-12-12 2023-05-16 华中师范大学 Learning concentration monitoring method based on Transformer network

Also Published As

Publication number Publication date
CN117574098B (en) 2024-04-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant