CN118038559A - Statistical analysis method, device, system and storage medium for learning - Google Patents

Statistical analysis method, device, system and storage medium for learning

Info

Publication number
CN118038559A
Authority
CN
China
Prior art keywords
standard; standard action; sub; models; model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410422316.3A
Other languages
Chinese (zh)
Other versions
CN118038559B (en)
Inventor
李贞海
徐旭
李贞伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202410422316.3A priority Critical patent/CN118038559B/en
Publication of CN118038559A publication Critical patent/CN118038559A/en
Application granted granted Critical
Publication of CN118038559B publication Critical patent/CN118038559B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a statistical analysis method, device, system and storage medium for learning, belonging to the technical field of data processing. The method comprises: responding to a received instruction to acquire all objects in a video stream; analyzing the actions of each object in the video stream and constructing a standard action model from those actions; comparing any two standard action models and obtaining a similarity value, wherein each standard action model is compared with all other standard action models; grouping the models according to the similarity values; and assigning a label to each group according to the standard action models it contains. By means of the video stream, the application realizes overall analysis and diversity analysis of a group; the group is then divided into subgroups according to the analysis results, and the individuals in each subgroup share a certain degree of consistency, so that problems can be discovered and appropriate intervention measures adopted.

Description

Statistical analysis method, device, system and storage medium for learning
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a statistical analysis method, apparatus, system, and storage medium for learning.
Background
The development of neural networks, artificial intelligence, big data and other technologies has brought new directions to the education and teaching industry, such as promoting values guidance based on algorithmic governance. This direction analyzes the data obtained during teaching in order to improve teaching content, teaching modes, and individualized education.
At present, most teaching takes the form of undifferentiated lecturing in a teacher-dominated classroom. This mode is simple to implement, but it does not accurately capture students' needs. More importantly, problems should be discovered in time from students' classroom performance during daily learning, and appropriate interventions adopted to guide and educate them; implementing this is difficult, however, because it requires analyzing and judging massive amounts of individual data.
Taking a class as an example: with hundreds of students, it is not feasible for the lecturer alone to observe and record each one, but analysis of a massive number of individuals can be achieved using data obtained from an image acquisition device together with automated data analysis. In terms of technical implementation, however, it is difficult to perform such analysis against a custom standard.
Disclosure of Invention
The application provides a statistical analysis method, device, system and storage medium for learning, which realize overall analysis and differential analysis of a student group by comparing and analyzing a classroom video stream. The student group is then divided into subgroups according to the analysis results, and the individuals in each subgroup share a certain degree of consistency, so that problems can be discovered and appropriate educational content and means adopted.
The above object of the present application is achieved by the following technical solutions:
in a first aspect, the present application provides a statistical analysis method for learning, comprising:
responding to the received instruction, and acquiring all objects in the video stream;
analyzing the actions of each object in the video stream and constructing a standard action model for each object from its actions; a standard action model is a set of actions performed by an object in the video stream, comprising a corresponding action for each time point or time period in the video stream;
comparing the standard action models of any two objects and obtaining a similarity value, wherein each standard action model is compared with all other standard action models; the similarity value is the ratio of the time points or time periods at which the two standard action models exhibit similar actions to the complete time length of the standard action models;
grouping the standard action models according to the similarity values, and assigning a label to each group according to the standard action models in the group; specifically, if the similarity value of two standard action models is greater than or equal to a threshold value, the two models are placed in the same group; otherwise, they are placed in different groups.
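The grouping rule above (same group when the similarity value meets the threshold, applied over every pair) can be sketched as follows. This is an illustrative reading of the disclosure: the similarity values are assumed to be precomputed, and the union-find merging is an implementation choice not mandated by the text.

```python
def group_models(similarity, n, threshold):
    """Group n standard action models given pairwise similarity values.

    similarity maps an (i, j) pair (i < j) to a value in [0, 1].
    Models whose similarity meets the threshold are merged into one
    group, transitively, via a small union-find structure.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (i, j), value in similarity.items():
        if value >= threshold:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[rj] = ri

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Note that because membership propagates transitively, two models that never meet the threshold directly can still end up in one group through an intermediate model.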
In a possible implementation manner of the first aspect, the standard action model is laid out along a timeline, on which there are standard action model time periods and blank time periods;
When comparing the two standard motion models, only the overlapping areas of the standard motion model time periods in the two standard motion models are compared.
In a possible implementation manner of the first aspect, the method further includes:
Dividing the standard action model time period into a plurality of sub-standard action model time periods;
Comparing the similarity of two corresponding sub-standard action model time periods belonging to the two standard action models;
The sub-standard action model time period can move forwards and backwards on a time sequence in the comparison process, and the distance of the forward movement and the distance of the backward movement are smaller than or equal to the allowable distance.
In a possible implementation manner of the first aspect, the method further includes:
Dividing the standard action model time period multiple times into sub-standard action model time periods, wherein the sub-period length differs between divisions but is the same for all sub-periods within any single division;
And merging sub-standard action model time periods which are divided for a plurality of times and have similar comparison results.
In a possible implementation manner of the first aspect, the actions of the object in the video stream include a plurality of sub-actions belonging to different regions, and the sub-actions belonging to different regions are processed separately when comparing any two standard action models.
In a possible implementation manner of the first aspect, counting the frequency with which each sub-action occurs across all the standard action models in each group at a time point or time period, and building a sub-action model from the sub-actions whose frequency is greater than or equal to a set frequency;
Screening the standard action models in other groups by using the sub-action model;
grouping adjustment is carried out on the standard action model according to the screening result;
Wherein the sub-action model is generated based on all of the standard action models in a group.
In a possible implementation manner of the first aspect, when there are a plurality of sub-actions at a time point or a time length, a plurality of sub-action models are generated according to the number of sub-actions.
In a second aspect, the present application provides a statistical analysis device for learning, comprising:
The acquisition unit is used for responding to the received instruction and acquiring all objects in the video stream;
An analysis unit for analyzing the actions of each object in the video stream and constructing a standard action model from those actions; a standard action model is a set of actions performed by an object in the video stream, comprising a corresponding action for each time point or time period in the video stream;
a first comparison unit for comparing any two standard action models and obtaining a similarity value, wherein each standard action model is compared with all other standard action models; the similarity value is the ratio of the time points or time periods at which the two standard action models exhibit similar actions to the complete time length of the standard action models;
a grouping unit, configured to group the standard action models according to the similarity values and assign a label to each group according to the standard action models in the group; specifically, if the similarity value of two standard action models is greater than or equal to a threshold value, the two models are placed in the same group; otherwise, they are placed in different groups.
In a third aspect, the present application provides a statistical analysis system for learning, the system comprising:
one or more memories for storing instructions; and
One or more processors configured to invoke and execute the instructions from the memory, to perform the method as described in the first aspect and any possible implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium comprising:
a program which, when executed by a processor, performs a method as described in the first aspect and any possible implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising program instructions which, when executed by a computing device, perform a method as described in the first aspect and any possible implementation of the first aspect.
In a sixth aspect, the present application provides a chip system comprising a processor for implementing the functions involved in the above aspects, e.g. generating, receiving, transmitting, or processing data and/or information involved in the above methods.
The chip system may consist of chips alone, or may comprise chips together with other discrete devices.
In one possible design, the system on a chip also includes memory to hold the necessary program instructions and data. The processor and the memory may be decoupled, provided on different devices, respectively, connected by wire or wirelessly, or the processor and the memory may be coupled on the same device.
The beneficial effects of the invention are as follows:
The application provides a statistical analysis method, device, system and storage medium for learning, which realize overall analysis and differential analysis of a group by means of a video stream. The method is universal and widely applicable: it groups and analyzes the group through a non-standardized comparison mode, which can reveal both commonality and distinguishability among a number of individuals and provides a basis for subsequent intervention measures.
Drawings
FIG. 1 is a schematic block diagram of a statistical analysis method for learning according to the present application.
Fig. 2 is a schematic diagram of a comparison of two standard motion models provided by the present application.
Fig. 3 is a schematic diagram of accumulation of sub-standard motion model time periods according to the present application.
Fig. 4 is a schematic diagram of a method for adjusting packets using a sub-action model according to the present application.
Detailed Description
The statistical analysis method for learning disclosed by the application runs on an analysis server, which can be deployed locally or in the cloud; videos recorded during the learning process are sent to the server for analysis, and analysis results are then obtained.
In addition, the server can count attendance, question-answering performance and the like from the video, and can exchange data with a grade system and a scoring system so as to collect multidimensional data. Gathering these data together enables comprehensive recording and statistics of a student's attendance, learning progress, question answering, exercise performance, and final examination results.
The data can be applied to analysis and early warning, for example giving students targeted guidance based on attendance, learning progress, question answering, exercise performance and so on; and, depending on the analysis models used, multidimensional analysis of students can also be realized.
The technical scheme in the application is further described in detail below with reference to the accompanying drawings.
The application discloses a statistical analysis method for learning, referring to fig. 1, in some examples, the statistical analysis method for learning disclosed in the embodiment comprises the following steps:
s101, responding to a received instruction, and acquiring all objects in a video stream;
s102, analyzing the motion of each object in the video stream and constructing a standard motion model for each object by using the motion of each object;
s103, comparing standard action models of any two objects, and obtaining a similarity value, wherein one standard action model needs to be compared with all other standard action models;
s104, grouping the standard action models according to the similarity value, and assigning labels to each group according to the standard action models in each group.
In step S101, the server receives an analysis instruction that designates a video stream; the server first obtains all objects in the video stream, where an object is a person appearing in it.
It should be understood that the video stream is composed of images arranged sequentially in time. To acquire the objects, faces can be detected directly in the images and then deduplicated to yield the objects. To ensure that all objects are obtained, several images in the video stream are selected and processed as described above; in addition, when positions in the image are fixed, the objects can be obtained by counting.
In step S102, the actions of each object in the video stream are analyzed and a standard action model is constructed from them. First, the actions belonging to the object are acquired; these mainly comprise facial actions and limb actions, the latter covering the shoulders, hands, legs, torso and so on. Building a standard action model means associating the above actions with time, e.g. recording which actions an object performs at a certain time point or time period.
Here, the start time point of the standard motion model is the start time point of the video stream, and the end time point of the standard motion model is the end time point of the video stream.
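As a minimal sketch, a standard action model of this kind can be represented as a mapping from time points to the set of actions observed there. The action names and timestamps below are illustrative assumptions, not taken from the disclosure.

```python
# A toy "standard action model": timestamps (seconds from the start of
# the video stream) mapped to the actions observed at that moment.
standard_model = {
    0.0: {"face_forward", "hands_on_desk"},
    1.0: {"head_turn_left"},
    2.0: {"writing"},
}

def actions_at(model, t):
    """Return the recorded actions at time t, or an empty set if the
    model has no entry there (e.g. a blank period)."""
    return model.get(t, set())
```

Representing a missing timestamp as an empty set mirrors the blank time periods discussed later: absent entries simply contribute nothing to a comparison.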
Next, in step S103, any two standard action models are compared and a similarity value is obtained, where each standard action model must be compared with all other standard action models; for example, with ten standard action models in total, each must be compared with the remaining nine.
For action comparison in the standard action model, the specific way is as follows:
for the actions belonging to the two objects respectively, the presence or absence of an action is the most basic criterion; on this basis, amplitude serves as an additional criterion, referring mainly to the action amplitude. Taking rotation (e.g. of the head) as an example, there are two criteria: whether rotation occurs, and the rotation amplitude.
However, whether the amplitude criterion is used must be determined from experience and practice, since action amplitude naturally varies between individuals. In a further, higher-accuracy analysis, amplitude can be introduced as an additional criterion, with the accuracy adjusted through a given difference range.
A difference range is the allowable difference between the amplitudes of two actions (e.g. the angle of head rotation); two actions whose difference falls within this range are considered similar.
A concrete interpretation of the similarity value: where two standard action models exhibit similar actions at certain time points or time periods, the similarity value is the proportion those time points or periods occupy within the complete time length of the models. It is a specific number, greater than or equal to zero and less than or equal to one.
The specific value used here is entered manually or taken as a built-in fixed value; it relates to the calculation precision and can be adjusted manually.
A time length here refers to a continuous time period.
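Under that reading, the similarity value can be computed as the fraction of time points at which the two models' actions are judged similar. The `is_similar` predicate stands in for the presence/amplitude criteria described above and is an assumption of this sketch, as is the dict-of-sets model layout.

```python
def similarity_value(model_a, model_b, is_similar):
    """Fraction of time points (over the union of both models'
    timelines) at which the two models' actions are judged similar.
    Always a value in [0, 1]."""
    times = set(model_a) | set(model_b)
    if not times:
        return 0.0
    similar = sum(
        1 for t in times
        if is_similar(model_a.get(t, set()), model_b.get(t, set()))
    )
    return similar / len(times)
```

With `is_similar` defined as "the two action sets share at least one action", two models agreeing at half their time points score 0.5; an amplitude-aware predicate could be substituted without changing this function.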
In step S104, the standard motion models are grouped according to the similarity value, and the specific grouping process is as follows:
First, a threshold value is set; it can be a manually entered value or a preset fixed value. If the similarity value of two standard action models is greater than or equal to the threshold, the two models are placed in the same group; otherwise they are placed in different groups.
Finally, a label is assigned to each group according to the standard action models in the group. Labels are of various kinds, such as understanding, not understanding, and confusion. In the initial stage of applying the statistical analysis method for learning disclosed by the application, labels are assigned manually; after the initial stage, they are assigned automatically by the server.
In general, all objects in a video stream can be grouped and labelled through action analysis and grouping. The grouping belongs to overall analysis and the labels belong to diversity analysis; for certain specific content, objects belonging to the same group should exhibit similar actions.
On this basis, all objects in the video stream can be classified, and overall analysis can then be performed on each class of objects. Analysis by data is clearly more practical than one-to-one inquiry, interviews and the like, because the earlier stages of analysis are completed by the server.
For objects belonging to one group, the whole group can be understood by inquiring about or getting to know any one of its members. The advantage is that preparatory and analytical work in the early stage is greatly reduced: one-to-one inquiry, interviews and the like require much time and labour, and the results they yield lag considerably.
It should be understood that, for learning, an object cannot be evaluated by simple examination scores or other single criteria; rather, possible problems should be discovered, and appropriate guidance and targeted instruction conducted, through analysis of the learning process. When capturing video, more accurate actions attributable to an object can be obtained by presenting certain specific questions or specific scenes.
Based on the method, in the process of analyzing the video stream, an important part or a core part in the video stream can be selected for analysis in a manual interception mode, so that a more accurate analysis result is obtained.
Meanwhile, for an object, gaps may exist in the video stream; that is, limited by the environment and other practical factors during acquisition, not all of an object's actions on the timeline can be captured. Thus, in some examples, the standard action model is laid out along a timeline on which there are standard action model time periods and blank time periods, where a blank time period is one of the aforementioned gaps; the division between the two is based on the content of the video stream.
When comparing two standard action models, only the overlapping regions of their standard action model time periods are compared; wherever a standard action model time period in one model corresponds to a blank time period in the other, it is excluded from the comparison, as shown in fig. 2.
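Restricting the comparison to overlapping non-blank periods amounts to intersecting the two models' interval lists. A hedged sketch, with intervals as (start, end) pairs in seconds and blank periods simply absent from the lists:

```python
def overlap(intervals_a, intervals_b):
    """Intersect two lists of (start, end) action-model time periods.
    Blank periods are not listed, so any time not covered by both
    models drops out of the comparison automatically."""
    result = []
    for a0, a1 in intervals_a:
        for b0, b1 in intervals_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:  # keep only non-empty intersections
                result.append((lo, hi))
    return result
```

Only the returned intervals would then be fed into the similarity computation; everything else behaves as if it did not exist in either model.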
In some examples, the following steps are added:
S201, dividing a standard action model time period into a plurality of sub-standard action model time periods;
S202, comparing the similarity of two corresponding sub-standard action model time periods belonging to two standard action models;
The sub-standard action model time period can move forwards and backwards on a time sequence in the comparison process, and the distance of the forward movement and the distance of the backward movement are smaller than or equal to the allowable distance.
The contents of step S201 and step S202 mainly address the difference in reaction time between objects: for example, when two objects perform the same action, its onset time and duration differ. The problem is solved by dividing the standard action model time period into sub-periods and allowing those sub-periods to shift.
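One plausible realization of the bounded forward/backward shift is to slide the second model's window by at most the permitted number of steps and keep the best match. The frame-by-frame action labels and the exact-match score used here are assumptions of this sketch.

```python
def best_shifted_similarity(seq_a, seq_b, start, length, max_shift):
    """Compare a sub-period of seq_a against seq_b, letting seq_b's
    window slide up to max_shift steps forwards or backwards; return
    the best fraction of matching positions. Sequences are lists of
    per-frame action labels."""
    window_a = seq_a[start:start + length]
    best = 0.0
    for shift in range(-max_shift, max_shift + 1):
        s = start + shift
        if s < 0 or s + length > len(seq_b):
            continue  # shifted window would fall off the sequence
        window_b = seq_b[s:s + length]
        score = sum(x == y for x, y in zip(window_a, window_b)) / length
        best = max(best, score)
    return best
```

With a shift allowance of one frame, a model that performs the same actions one frame later still scores a perfect match; with no allowance, the same pair scores zero, which is exactly the reaction-time problem the steps are meant to absorb.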
Further, the following is added:
S301, dividing the standard action model time period multiple times into sub-standard action model time periods, wherein the sub-period length differs between divisions but is the same for all sub-periods within any single division;
s302, merging sub-standard action model time periods with similar comparison results of sub-standard action model time periods divided for a plurality of times.
The contents of step S301 and step S302 further optimize those of step S201 and step S202, because when the lengths of the sub-standard action model time periods are fixed, the number of actions each contains is indeterminate. Suppose the first sub-period contains three actions and the second contains one: the comparison result may then be dissimilar, giving a lower similarity value.
By dividing multiple times and adjusting the length at each division, as provided in this embodiment, similar portions can be found as far as possible when comparing the resulting sub-standard action model time periods. Finally, sub-periods whose comparison results are similar across the multiple divisions are merged, as shown in fig. 3, accumulating the sub-standard action model time periods.
In this way, the number of groups can be effectively reduced. For example, if a video stream contains ten objects, dividing them into two to four groups is generally appropriate; dividing them into ten groups would render the grouping meaningless. For objects in a group, it may only be required that they have some degree of similarity, not that they be identical.
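The multi-scale division and merge of steps S301 and S302 can be sketched as below: divide the same pair of sequences at several window lengths, keep the spans that matched at any scale, and merge overlapping spans. The aligned, non-shifting windows and the exact-match score are simplifying assumptions of this sketch.

```python
def multiscale_similar_spans(seq_a, seq_b, lengths, min_match):
    """For each window length, tile aligned windows over the two
    sequences and collect (start, end) spans whose match fraction
    meets min_match; spans found at different scales are merged."""
    spans = []
    n = min(len(seq_a), len(seq_b))
    for length in lengths:
        for start in range(0, n - length + 1, length):
            a = seq_a[start:start + length]
            b = seq_b[start:start + length]
            score = sum(x == y for x, y in zip(a, b)) / length
            if score >= min_match:
                spans.append((start, start + length))
    # merge overlapping or adjacent spans accumulated across scales
    spans.sort()
    merged = []
    for s, e in spans:
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged
```

A span missed at one window length (because the window straddled a mismatch) can still be recovered at another, which is the point of dividing multiple times with different lengths.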
In some examples, the actions of an object in the video stream comprise a plurality of sub-actions belonging to different regions, and the sub-actions belonging to different regions are processed separately when comparing any two standard action models. In this method the actions are divided into several sub-actions that are processed separately; the purpose, again, is to reduce the number of groups.
For example, in reaction to a question or similar content there are several possible actions, and different objects may exhibit one of them or a combination of them; therefore, in the application, the actions are divided into a plurality of sub-actions belonging to different regions and then processed separately.
In some examples, the following is added:
S401, counting occurrence frequencies of all the standard action models in each group at a time point or a time length, and manufacturing sub-action models by using sub-actions with the frequency larger than or equal to the set frequency;
S402, screening standard action models in other groups by using the sub-action models;
s403, carrying out grouping adjustment on the standard action model according to the screening result;
Wherein the sub-action model is generated based on all of the standard action models in a group.
In steps S401 to S403, the sub-actions with high occurrence frequency in each group are extracted to obtain a sub-action model, and this sub-action model is then used to adjust the grouping of the standard action models; the purpose of the adjustment is again to reduce the number of groups.
The specific principle is that common content reduces the number of objects in some groups and may even disband some groups entirely. Because the grouping performed above is subject to individual (object) influence during comparison, the contents of steps S401 to S403 can eliminate that influence, as shown in fig. 4.
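The first stage of steps S401 to S403, extracting the sub-actions that occur frequently enough within a group to form a sub-action model, might look like this. The frequency cut-off and the dict-of-sets model layout are assumptions carried over from the earlier sketches.

```python
from collections import Counter

def frequent_subactions(group_models, min_freq):
    """Build a 'sub-action model' for one group: per time point, keep
    the sub-actions that appear in at least min_freq (a fraction) of
    the group's standard action models."""
    result = {}
    times = set().union(*group_models)  # all time points in the group
    for t in times:
        counts = Counter()
        for model in group_models:
            for action in model.get(t, set()):
                counts[action] += 1
        frequent = {
            a for a, c in counts.items()
            if c / len(group_models) >= min_freq
        }
        if frequent:
            result[t] = frequent
    return result
```

The resulting model could then be matched against standard action models in other groups to decide whether they should migrate, which is the screening in step S402.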
In some possible implementations, when a plurality of sub-actions exist at one time point or time period, a plurality of sub-action models are generated according to the number of sub-actions. The purpose is to reduce the misjudgement caused by using several sub-actions simultaneously: the more sub-actions there are at one time point or time period, the more easily the comparison result comes out dissimilar. The sub-actions coexisting at one time point or time period are therefore split.
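Splitting a timeline that carries several coexisting sub-actions into one single-action model per sub-action can be sketched as follows; the per-action model layout is an assumption of this sketch, matching the structures used above.

```python
def split_subactions(submodel):
    """Split a sub-action model that has several sub-actions per time
    point into one model per distinct sub-action, so each can be
    matched independently."""
    per_action = {}
    for t, actions in submodel.items():
        for action in sorted(actions):  # sorted for deterministic order
            per_action.setdefault(action, {})[t] = {action}
    return list(per_action.values())
```

Each resulting model tracks a single sub-action over time, so a comparison can no longer fail merely because one of several simultaneous sub-actions disagrees.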
The application also provides a statistical analysis device for learning, comprising:
The acquisition unit is used for responding to the received instruction and acquiring all objects in the video stream;
An analysis unit for analyzing the actions of each object in the video stream and constructing a standard action model from those actions; a standard action model is a set of actions performed by an object in the video stream, comprising a corresponding action for each time point or time period in the video stream;
a first comparison unit for comparing any two standard action models and obtaining a similarity value, wherein each standard action model is compared with all other standard action models; the similarity value is the ratio of the time points or time periods at which the two standard action models exhibit similar actions to the complete time length of the standard action models;
a grouping unit, configured to group the standard action models according to the similarity values and assign a label to each group according to the standard action models in the group; specifically, if the similarity value of two standard action models is greater than or equal to a threshold value, the two models are placed in the same group; otherwise, they are placed in different groups.
Further, the standard action model is laid out along a timeline, on which there are standard action model time periods and blank time periods;
When comparing the two standard motion models, only the overlapping areas of the standard motion model time periods in the two standard motion models are compared.
Further, the method further comprises the following steps:
a first dividing unit for dividing the standard action model period into a plurality of sub-standard action model periods;
The second comparison unit is used for comparing the similarity of the time periods of the two corresponding sub-standard action models belonging to the two standard action models;
The sub-standard action model time period can move forwards and backwards on a time sequence in the comparison process, and the distance of the forward movement and the distance of the backward movement are smaller than or equal to the allowable distance.
Further, the method further comprises the following steps:
The second dividing unit is used for dividing the standard action model time period multiple times into sub-standard action model time periods, wherein the sub-period length differs between divisions but is the same for all sub-periods within any single division;
and the merging processing unit is used for merging sub-standard action model time periods which are divided for a plurality of times and have similar comparison results.
Further, the actions of the object in the video stream comprise a plurality of sub-actions belonging to different areas, and the sub-actions belonging to the different areas are respectively processed when any two standard action models are compared.
Further, counting the frequency with which each sub-action occurs across all the standard action models in each group at a time point or time period, and building a sub-action model from the sub-actions whose frequency is greater than or equal to a set frequency;
Screening the standard action models in other groups by using the sub-action model;
grouping adjustment is carried out on the standard action model according to the screening result;
Wherein the sub-action model is generated based on all of the standard action models in a group.
Further, when there are a plurality of sub-actions at one point in time or one time length, a plurality of sub-action models are generated according to the number of sub-actions.
In one example, a unit in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), one or more field-programmable gate arrays (FPGA), or a combination of at least two of these integrated circuit forms.
For another example, when the units in the apparatus are implemented by a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of invoking the program. For another example, the units may be integrated together and implemented in the form of a system-on-chip (SoC).
Various objects, such as messages, information, devices, network elements, systems, apparatuses, actions, operations, processes, and concepts, may be named in the present application. It should be understood that these specific names do not limit the related objects: the names may change with the scenario, context, or usage habit, and the technical meaning of technical terms in the present application should be determined mainly from the functions and technical effects embodied in the technical solution.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that in the various embodiments of the present application, "first", "second", etc. merely indicate that multiple objects are different. For example, a first time window and a second time window merely represent different time windows, without any effect on the time windows themselves; "first", "second", etc. should not impose any limitation on the embodiments of the present application.
It is also to be understood that in the various embodiments of the application, where no special description or logic conflict exists, the terms and/or descriptions between the various embodiments are consistent and may reference each other, and features of the various embodiments may be combined to form new embodiments in accordance with their inherent logic relationships.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a computer-readable storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned computer-readable storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The application also provides a statistical analysis system for learning, the system comprising:
one or more memories for storing instructions; and
one or more processors, configured to invoke and execute the instructions from the memory to perform the method described above.
The present application also provides a computer program product comprising instructions which, when executed, cause the statistical analysis system to perform the operations corresponding to the above method.
The present application also provides a chip system comprising a processor for implementing the functions involved in the above, e.g. generating, receiving, transmitting, or processing data and/or information involved in the above method.
The chip system may consist of chips, or may comprise chips and other discrete devices.
The processor mentioned in any of the foregoing may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the programs of the above method.
In one possible design, the chip system further includes a memory for holding the necessary program instructions and data. The processor and the memory may be decoupled, disposed on different devices, and connected by wired or wireless means, so as to support the chip system in implementing the various functions of the foregoing embodiments; or the processor and the memory may be coupled on the same device.
Optionally, the computer instructions are stored in a memory.
Alternatively, the memory may be a storage unit in the chip, such as a register, a cache, etc., and the memory may also be a storage unit in the terminal located outside the chip, such as a ROM or other type of static storage device, a RAM, etc., that may store static information and instructions.
It will be appreciated that the memory in the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The non-volatile memory may be a ROM, a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory.
The volatile memory may be RAM, which acts as an external cache. There are many different types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The above embodiments are preferred embodiments of the present application and do not limit its scope of protection; therefore, all equivalent changes made according to the structure, shape, and principle of the present application shall be covered by the scope of protection of the present application.

Claims (10)

1. A statistical analysis method for learning, comprising:
responding to the received instruction, and acquiring all objects in the video stream;
analyzing the actions of each object in the video stream, and constructing a standard action model for each object using those actions; a standard action model is the set of actions performed by an object in the video stream, comprising a corresponding action for each time point or time period in the video stream;
comparing the standard action models of any two objects and obtaining a similarity value, wherein one standard action model is compared with all other standard action models, and wherein the similarity value is the ratio of the time points or time periods at which the two standard action models have similar actions to the complete time length of the standard action models;
grouping the standard action models according to the similarity values, and assigning a label to each group according to the standard action models in the group; specifically, if the similarity value of two standard action models is greater than or equal to a threshold, the two standard action models are placed in one group; otherwise, they are placed in two separate groups.
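As a non-authoritative sketch of the similarity value and threshold grouping defined in claim 1, the following assumes each standard action model is a list of per-time-point action labels and that "similar actions" reduces to label equality (the representation, the greedy grouping strategy, and the `threshold` value are illustrative assumptions, not prescribed by the claim):

```python
def similarity(model_a, model_b):
    """Similarity value of claim 1: the ratio of time points at which
    the two models have similar actions (here: equal labels) to the
    complete time length of the models."""
    length = max(len(model_a), len(model_b))
    matching = sum(1 for a, b in zip(model_a, model_b) if a == b)
    return matching / length

def group_models(models, threshold=0.6):
    """Greedy threshold grouping: a model joins the first group in which
    it is similar enough to every member; otherwise it opens a new group."""
    groups = []
    for model in models:
        for group in groups:
            if all(similarity(model, member) >= threshold for member in group):
                group.append(model)
                break
        else:
            groups.append([model])
    return groups
```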
2. The statistical analysis method for learning of claim 1, wherein the standard action model unfolds over time, and the time series contains standard action model time periods and blank time periods;
when comparing two standard action models, only the overlapping regions of the standard action model time periods of the two standard action models are compared.
3. The statistical analysis method for learning of claim 2, further comprising:
Dividing the standard action model time period into a plurality of sub-standard action model time periods;
Comparing the similarity of two corresponding sub-standard action model time periods belonging to the two standard action models;
wherein, during the comparison, a sub-standard action model time period may shift forward or backward along the time series, the shift distance in each direction being less than or equal to an allowed distance.
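The bounded forward/backward movement of claim 3 can be sketched as follows, again assuming models are per-time-point label sequences; the window is slid by at most `allowed` steps in either direction and the best match ratio is kept (a hypothetical realization, not the prescribed one):

```python
def best_shifted_similarity(seq_a, seq_b, start, length, allowed):
    """Compare the sub-period [start, start + length) of seq_a against the
    same window of seq_b, letting the window in seq_b shift forward or
    backward by at most `allowed` steps; return the best match ratio."""
    sub_a = seq_a[start:start + length]
    best = 0.0
    for shift in range(-allowed, allowed + 1):
        s = start + shift
        if s < 0 or s + length > len(seq_b):
            continue  # shifted window falls outside the sequence
        sub_b = seq_b[s:s + length]
        best = max(best, sum(1 for a, b in zip(sub_a, sub_b) if a == b) / length)
    return best
```

Allowing this small slack lets two models that perform the same action slightly out of phase still be recognized as similar.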
4. A statistical analysis method for learning as claimed in claim 3, further comprising:
dividing the standard action model time period a plurality of times into sub-standard action model time periods, wherein the length of the sub-standard action model time periods differs between divisions, while within any single division all sub-standard action model time periods have the same length;
and merging the sub-standard action model time periods, obtained from the plurality of divisions, whose comparison results are similar.
5. A statistical analysis method for learning according to any one of claims 1 to 4, wherein the actions of the object in the video stream comprise a plurality of sub-actions belonging to different regions, and the sub-actions belonging to different regions are processed separately when comparing any two standard action models.
6. The statistical analysis method for learning of claim 5, wherein the occurrence frequency of a sub-action at one time point or over one time length is counted across all the standard action models in each group, and a sub-action model is created using the sub-actions whose frequency is greater than or equal to a set frequency;
screening the standard action models in the other groups using the sub-action model;
adjusting the grouping of the standard action models according to the screening result;
Wherein the sub-action model is generated based on all of the standard action models in a group.
7. The statistical analysis method for learning of claim 6, wherein when there are a plurality of sub-actions at one time point or one time length, a plurality of sub-action models are generated according to the number of sub-actions.
8. A statistical analysis device for learning, comprising:
The acquisition unit is used for responding to the received instruction and acquiring all objects in the video stream;
an analysis unit, configured to analyze the actions of each object in the video stream and construct a standard action model using those actions; a standard action model is the set of actions performed by an object in the video stream, comprising a corresponding action for each time point or time period in the video stream;
a first comparison unit, configured to compare any two standard action models and obtain a similarity value, wherein one standard action model is compared with all other standard action models, and the similarity value is the ratio of the time points or time periods at which the two standard action models have similar actions to the complete time length of the standard action models;
a grouping unit, configured to group the standard action models according to the similarity values and assign a label to each group according to the standard action models in the group; specifically, if the similarity value of two standard action models is greater than or equal to a threshold, the two standard action models are placed in one group; otherwise, they are placed in two separate groups.
9. A statistical analysis system for learning, the system comprising:
A memory for storing instructions; and a processor for invoking and executing said instructions from said memory to perform the method of any of claims 1-7.
10. A computer-readable storage medium, comprising a program which, when executed by a processor, performs the method according to any one of claims 1 to 7.
CN202410422316.3A 2024-04-09 2024-04-09 Statistical analysis method, device, system and storage medium for learning Active CN118038559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410422316.3A CN118038559B (en) 2024-04-09 2024-04-09 Statistical analysis method, device, system and storage medium for learning


Publications (2)

Publication Number Publication Date
CN118038559A true CN118038559A (en) 2024-05-14
CN118038559B CN118038559B (en) 2024-06-18

Family

ID=90989524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410422316.3A Active CN118038559B (en) 2024-04-09 2024-04-09 Statistical analysis method, device, system and storage medium for learning

Country Status (1)

Country Link
CN (1) CN118038559B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190371144A1 (en) * 2018-05-31 2019-12-05 Henry Shu Method and system for object motion and activity detection
CN111783650A (en) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Model training method, action recognition method, device, equipment and storage medium
US20220318555A1 (en) * 2021-03-31 2022-10-06 International Business Machines Corporation Action recognition using limited data
CN115690635A (en) * 2021-07-21 2023-02-03 广州视源电子科技股份有限公司 Video processing method and device, computer storage medium and intelligent interactive panel
CN116980717A (en) * 2023-09-22 2023-10-31 北京小糖科技有限责任公司 Interaction method, device, equipment and storage medium based on video decomposition processing
CN117615182A (en) * 2024-01-23 2024-02-27 江苏欧帝电子科技有限公司 Live broadcast and interaction dynamic switching method, system and terminal based on number of participants


Also Published As

Publication number Publication date
CN118038559B (en) 2024-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant