CN115064275B - Method, equipment and medium for quantifying and training children computing capacity - Google Patents

Method, equipment and medium for quantifying and training children computing capacity

Info

Publication number
CN115064275B
Authority
CN
China
Prior art keywords
user
target point
determining
finger
glance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210995597.2A
Other languages
Chinese (zh)
Other versions
CN115064275A (en)
Inventor
宋业臻
肖维斌
王荣全
韩伟
黄岩
曲继新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Xinfa Technology Co ltd
Original Assignee
Shandong Xinfa Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Xinfa Technology Co ltd
Priority to CN202210995597.2A
Publication of CN115064275A
Application granted
Publication of CN115064275B
Legal status: Active

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Primary Health Care (AREA)
  • Educational Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Educational Administration (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The application relates to the field of data identification and data representation, and in particular to a method, equipment and medium for quantifying children's computing power and providing auxiliary training. The method comprises the following steps: determining a preset first number set and a preset second number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, the first target point coordinate and the second target point coordinate being located on the left side and the right side of the display screen respectively; alternately showing the first number set and the second number set to a user, and showing a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately showing the first number set and the second number set to the user and acquiring the finger movement data of the user; and performing auxiliary computing-power training on the user according to the finger movement data.

Description

Method, equipment and medium for quantifying and training children computing capacity
Technical Field
The application relates to the field of data identification and data representation, and in particular to a method, equipment and medium for quantifying children's computing power and providing auxiliary training.
Background
Computing power is one of the most important components of learning ability. When a child's computing power is significantly below the age norm, developmental dyscalculia may be present.
In current clinical practice, screening and diagnosis of calculation-learning ability in young children rely mainly on rating scales and clinician observation; commonly used scale tools include the number-sense scale, the dyscalculia screening scale and the TEDI-MATH battery. However, quantifying computing power with scales makes it difficult to obtain an objective and precise multi-level analysis and exclusion of causes.
Disclosure of Invention
In order to solve the above problem, the present application proposes a method for quantifying children's computing power and providing auxiliary training, wherein the method comprises: determining a preset first number set and a preset second number set, the numbers in the second number set being larger than the numbers in the first number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, the first target point coordinate and the second target point coordinate being located on the left side and the right side of the display screen respectively; alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately displaying the first number set and the second number set to the user, and acquiring the finger movement data of the user; and performing auxiliary computing-power training, according to the finger movement data, on users whose auxiliary training level is lower than a preset threshold.
In one example, determining the computing power level of the user according to the eye movement data specifically includes: determining a first saccade duration set, a second saccade duration set, a third saccade duration set and a fourth saccade duration set of the user from the eye movement data; determining a first difference value between the first saccade duration set and the second saccade duration set; determining a second difference value between the third saccade duration set and the fourth saccade duration set; and determining the computing power level of the user according to the first difference value and the second difference value.
In one example, determining the computing power level of the user according to the first difference value and the second difference value specifically includes: determining a computing power quantized value of the user from the first difference value and the second difference value according to the following formula;
[Formula image in the original; it gives the computing power quantized value G as a function of the first difference value Δ1 and the second difference value Δ2.]
Here G is the computing power quantized value; Z, a and b are coefficients related to the number of numbers in the number sets; C is a preset constant; Δ1 is the first difference value; and Δ2 is the second difference value. The computing power level of the user is then determined according to the computing power quantized value and a preset level-division threshold.
In one example, the first saccade duration is the saccade duration when the display screen shows the first number set and the target point is located at the first target point coordinate; the second saccade duration is the saccade duration when the display screen shows the first number set and the target point is located at the second target point coordinate; the third saccade duration is the saccade duration when the display screen shows the second number set and the target point is located at the first target point coordinate; and the fourth saccade duration is the saccade duration when the display screen shows the second number set and the target point is located at the second target point coordinate.

In one example, alternately presenting the first number set and the second number set to the user and acquiring the finger movement data of the user specifically includes: showing the finger actions corresponding to the different numbers to the user through the display screen; acquiring a finger image of the user; and inputting the finger image into a pre-trained finger discrimination model to obtain the finger movement data. The finger movement data include at least the finger action type of the user, the hand type corresponding to the finger action type, and the completion time.
In one example, performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold specifically includes: determining the action completion degree of the user according to the finger action type and the hand type of the user; determining the completion accuracy of the user according to the action completion degree and the completion time; and determining whether the auxiliary training needs to be restarted according to the completion accuracy and a preset completion threshold.
In one example, determining the completion accuracy of the user according to the action completion degree and the completion time specifically includes: determining the completion accuracy of the user by the following formula:
[Formula image in the original; it defines the completion accuracy P from the per-action completion degrees and completion times.]
Here P is the completion accuracy; sign(x) is the sign function; N is the number of numbers in the number set; f_i is the action completion degree of the i-th action; f_0 is a preset completion-degree threshold; t_i is the completion time of the i-th action; and t_0 is a preset time threshold.
In one example, before inputting the finger image into the pre-trained finger discrimination model, the method further comprises: determining a preset initial model and acquiring training data, wherein the training data are finger actions collected in advance and the corresponding finger images; and training the initial model with the training data to obtain the finger discrimination model.
The application also provides a device for quantifying children's computing power and providing auxiliary training, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform: determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively; alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately displaying the first number set and the second number set to the user, and acquiring the finger movement data of the user; and performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold.
The present application further provides a non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform: determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively; alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately displaying the first number set and the second number set to the user, and acquiring the finger movement data of the user; and performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold.
The method applies the cognitive-science principles of young children's calculation and thus addresses the difficulty of evaluating the computing power of young children. At the same time, the stages of children's brain-function development are comprehensively considered, the accuracy of computing-power quantification is improved through artificial-intelligence and eye-tracking technology, and auxiliary training can be provided for children with low computing power.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
Fig. 1 is a schematic flowchart of a method for quantifying children's computing power and providing auxiliary training according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a device for quantifying children's computing power and providing auxiliary training according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for quantifying children's computing power and providing auxiliary training according to one or more embodiments of the present application. The process may be performed by a computing device in the relevant field, and certain input parameters or intermediate results of the process allow manual adjustment to help improve accuracy.
The analysis method according to the embodiment of the present application may be implemented by a terminal device or a server, which is not particularly limited in this application. For convenience of understanding and description, the following embodiments are described in detail by taking a terminal device as an example. It should be noted that the server may be a single device, or may be a system composed of multiple devices, that is, a distributed server, which is not specifically limited in this application.
When children form a cognitive representation of numerical magnitude, they spontaneously construct a mental number line: smaller numbers are represented toward the left side of space and larger numbers toward the right side. According to this mental number line theory, in an attentional information-processing paradigm, when a smaller number is presented first and a target point then appears on the left, the subject shows an attention-enhancement effect for the left target point; when a larger number is presented first and the target point then appears on the right, the subject shows an attention-enhancement effect for the right target point; otherwise an attention-reduction effect appears. On this basis, the present application provides a method for quantifying children's computing power and providing auxiliary training.
As shown in fig. 1, the present application provides a method for quantifying children's computing power and providing auxiliary training, which is applied to a device for quantifying children's computing power and providing auxiliary training; the device includes a display screen and an eye tracker. When the method is carried out, the user sits in front of the display screen, and the user's eye movement data are collected by the eye tracker. The method for quantifying children's computing power comprises the following steps:
s101: determining a preset first number set and a preset second number set; the numbers in the second set of numbers are larger than the first set of numbers.
First, the preset first number set and second number set need to be determined. Both sets are displayed at the center of the display screen, and every number in the second number set is larger than every number in the first number set.
S102: determining a first target point coordinate and a second target point coordinate preset on a display screen; the first target point coordinate and the second target point coordinate are respectively arranged on the left side and the right side of the display screen.
Next, the first target point coordinate and the second target point coordinate preset on the display screen need to be determined. The two target point coordinates are located on the left and right sides of the display screen respectively: the first target point coordinate is on the left side and the second target point coordinate is on the right side. The target point coordinates are the positions at which the target point can appear.
S103: and alternately displaying the first number set and the second number set to a user, and displaying the target point to the first target point coordinate or the second target point coordinate.
During presentation, the numbers in the first number set and the second number set are shown to the user alternately, and the target point appears at either the first target point coordinate or the second target point coordinate. That is, in each presentation only one number and one target point appear on the display screen; the number may belong to the first number set or to the second number set, and the target point may appear at the first target point coordinate or at the second target point coordinate.
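Purely for illustration, the following Python sketch shows one way such a presentation sequence could be generated; the concrete number sets, pixel coordinates, repetition count and function names are assumptions introduced here and are not specified by the application.

```python
import random

# Illustrative trial generation: each trial shows one number (from the first
# or second number set) at the screen centre and one target point on the
# left or right side of the screen. Sets and coordinates are assumed values.
FIRST_NUMBER_SET = [1, 2, 3, 4]          # assumed "small" numbers
SECOND_NUMBER_SET = [6, 7, 8, 9]         # assumed "large" numbers
FIRST_TARGET_POINT = (200, 540)          # assumed left-side coordinate (px)
SECOND_TARGET_POINT = (1720, 540)        # assumed right-side coordinate (px)

def build_trials(repetitions: int = 5, seed: int = 0) -> list[dict]:
    """Alternate the two number sets and pair each number with a left or
    right target point, yielding one trial record per presentation."""
    rng = random.Random(seed)
    trials = []
    for _ in range(repetitions):
        for small, large in zip(FIRST_NUMBER_SET, SECOND_NUMBER_SET):
            for number, number_set in ((small, "first"), (large, "second")):
                side = rng.choice(["first", "second"])
                target = FIRST_TARGET_POINT if side == "first" else SECOND_TARGET_POINT
                trials.append({"number": number,
                               "number_set": number_set,
                               "target_side": side,
                               "target_coordinate": target})
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    for trial in build_trials(repetitions=1)[:4]:
        print(trial)
```

Each trial record keeps the number set it came from and the side of the target point, which is what the later grouping of saccade durations relies on.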
S104: and acquiring eye movement data of the user in the watching process through an eye movement instrument, and determining the computing power level of the user according to the eye movement data.
While the number and the target point are shown to the user, eye movement data are collected for the process in which the user's gaze point shifts from the middle of the screen to the target point, and the user's computing power level is determined from these eye movement data.
In one embodiment, when determining the user's computing power level from the eye movement data, the user's first, second, third and fourth saccade duration sets are first determined from the eye movement data. The first saccade duration is the saccade duration when the display screen shows a number from the first number set and the target point is located at the first target point coordinate. The second saccade duration is the saccade duration when the display screen shows a number from the first number set and the target point is located at the second target point coordinate. The third saccade duration is the saccade duration when the display screen shows a number from the second number set and the target point is located at the first target point coordinate. The fourth saccade duration is the saccade duration when the display screen shows a number from the second number set and the target point is located at the second target point coordinate. A first difference value between the first saccade duration set and the second saccade duration set and a second difference value between the third saccade duration set and the fourth saccade duration set are then determined, and finally the user's computing power level can be determined from the first difference value and the second difference value.
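As an illustration of this grouping step, the sketch below (building on the trial records from the previous sketch) collects per-trial saccade durations into the four sets and forms the two difference values. Using the difference of set means, and the particular sign convention, are assumptions: the application does not fix how the difference values are computed from the sets.

```python
from statistics import mean

def split_saccade_sets(trials, saccade_durations):
    """Group per-trial saccade durations (ms) into the four sets:
    key = (number set shown, target side)."""
    sets = {("first", "first"): [], ("first", "second"): [],
            ("second", "first"): [], ("second", "second"): []}
    for trial, duration in zip(trials, saccade_durations):
        sets[(trial["number_set"], trial["target_side"])].append(duration)
    return sets

def difference_values(sets):
    """First difference value: first vs. second saccade duration set
    (first number set, left vs. right target); second difference value:
    third vs. fourth set (second number set, left vs. right target).
    Assumes every condition occurred at least once."""
    delta_1 = mean(sets[("first", "second")]) - mean(sets[("first", "first")])
    delta_2 = mean(sets[("second", "first")]) - mean(sets[("second", "second")])
    return delta_1, delta_2
```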
In one embodiment, when determining the computing power level of the user according to the first difference value and the second difference value, the computing power quantized value of the user may be calculated by the following formula:
[Formula image in the original; it gives the computing power quantized value G as a function of the first difference value Δ1 and the second difference value Δ2.]
Here G is the computing power quantized value; Z, a and b are coefficients related to the number of numbers in the number sets; C is a preset constant; Δ1 is the first difference value; and Δ2 is the second difference value. The computing power level of the user is then determined through the computing power quantized value and a preset level-division threshold.
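The exact formula appears only as an image in the source, so the sketch below is merely a hedged stand-in: it assumes, for exposition, that G is a linear combination of the two difference values with coefficients Z, a, b and constant C, and that the level is obtained by comparing G with preset division thresholds. The coefficient and threshold values are invented for the example.

```python
def computing_power_quantized_value(delta_1, delta_2,
                                    z=1.0, a=0.5, b=0.5, c=0.0):
    """Illustrative stand-in for the patented formula (not reproduced in the
    source text): G assumed to be a weighted combination of the two
    difference values plus a constant."""
    return z * (a * delta_1 + b * delta_2) + c

def computing_power_level(g, thresholds=(-10.0, 0.0, 10.0)):
    """Map the quantized value onto a level 1..4 using preset level-division
    thresholds (the threshold values themselves are assumptions)."""
    level = 1
    for t in thresholds:
        if g >= t:
            level += 1
    return level
```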
After the user's computing power has been quantitatively evaluated, and drawing on brain-function research in educational neuroscience showing that the temporal lobe and temporal sulcus region responsible for number sense is, at the functional level, also responsible for the functional representation of finger movement, a set of auxiliary training methods has been designed to train children's number sense through finger movement. Auxiliary computing-power training can thus be carried out for users with lower computing power levels. The auxiliary training method comprises the following steps:
s105: and determining the auxiliary training level of the user according to the computing power level.
First, whether the user needs auxiliary training, and if so at which training level, is determined from the user's computing power level. Different auxiliary training levels correspond to finger actions of different difficulty.
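By way of example only, the mapping from computing power level to auxiliary training level and finger-action difficulty could be held in a simple table; the levels, actions and threshold below are assumptions, not values given by the application.

```python
# Illustrative mapping from computing power level to an auxiliary training
# level and the finger actions practised at that level (assumed content).
TRAINING_PLAN = {
    1: {"training_level": 3, "actions": ["show 1-5 with one hand"]},
    2: {"training_level": 2, "actions": ["show 1-5 with one hand",
                                         "show 6-9 with both hands"]},
    3: {"training_level": 1, "actions": ["show any number 1-9 quickly"]},
}

def auxiliary_training_level(computing_power_level, threshold=4):
    """Users at or above the assumed threshold level need no auxiliary
    training; others receive a plan keyed by their computing power level."""
    if computing_power_level >= threshold:
        return None
    return TRAINING_PLAN.get(computing_power_level, TRAINING_PLAN[1])
```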
S106: and alternately showing the first number set and the second number set to a user, and acquiring the finger data of the user.
During auxiliary training, the numbers in the first number set and the second number set are alternately shown to the user on the display screen, and the user's finger movement data are acquired.
In one embodiment, before the user's finger movement data are acquired, the finger actions corresponding to the different numbers are shown to the user on the display screen; for example, if the screen shows the number '3', the finger action the user should make is 'hold up three fingers of the right hand'. When acquiring the user's finger movement data, a finger image of the user, i.e. an image of the hand action, is captured and input into the pre-trained finger discrimination model to obtain the finger movement data. The finger movement data include at least the user's finger action type, the hand type corresponding to the finger action type, and the completion time.
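A minimal sketch of this acquisition step is given below, assuming the pre-trained finger discrimination model is wrapped so that it exposes a predict(image) method returning the action type and hand type; the data-class fields mirror the finger movement data listed above, and the completion time is simply the elapsed time passed in by the caller.

```python
from dataclasses import dataclass

@dataclass
class FingerMovementData:
    action_type: str        # e.g. "three_fingers" (assumed label scheme)
    hand_type: str          # e.g. "right"
    completion_time: float  # seconds from stimulus onset to completed gesture

def recognise_finger_action(finger_image, model, elapsed_seconds):
    """Run the pre-trained finger discrimination model on one hand image.
    `model` is assumed to expose predict(image) -> (action_type, hand_type);
    any classifier wrapped to provide this interface will do."""
    action_type, hand_type = model.predict(finger_image)
    return FingerMovementData(action_type=action_type,
                              hand_type=hand_type,
                              completion_time=elapsed_seconds)
```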
S107: and performing computing power auxiliary training on the user with the auxiliary training grade lower than a preset threshold value according to the finger data.
After the finger movement data are obtained, auxiliary training can be carried out, according to the finger movement data, for users whose auxiliary training level is lower than the preset threshold.
In one embodiment, when the finger movement data are used, the user's action completion degree, i.e. how completely the user performs the specified finger action, is first determined from the user's finger action type and hand type; the user's completion accuracy is then determined from the action completion degree and the completion time. Finally, whether the auxiliary training needs to be restarted is determined from the completion accuracy and a preset completion threshold.
Further, in determining the completion accuracy of the user, the completion accuracy of the user may be determined by the following formula:
[Formula image in the original; it defines the completion accuracy P from the per-action completion degrees and completion times.]
Here P is the completion accuracy; sign(x) is the sign function; N is the number of numbers in the number set; f_i is the action completion degree of the i-th action; f_0 is a preset completion-degree threshold; t_i is the completion time of the i-th action; and t_0 is a preset time threshold.
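Because the accuracy formula itself is only reproduced as an image, the sketch below uses an indicator-style stand-in that plays the same role as the sign-function terms: an action counts as correct when its completion degree reaches f0 within the time threshold t0, and P is the fraction of correct actions. The threshold values are assumptions.

```python
def completion_accuracy(completion_degrees, completion_times,
                        f0=0.8, t0=3.0):
    """Illustrative stand-in for the accuracy formula: count an action as
    correct when its completion degree f_i >= f0 and its completion time
    t_i <= t0, then return the fraction of correct actions."""
    n = len(completion_degrees)
    correct = sum(1 for f_i, t_i in zip(completion_degrees, completion_times)
                  if f_i >= f0 and t_i <= t0)
    return correct / n if n else 0.0

def needs_restart(accuracy, completion_threshold=0.6):
    """Restart the auxiliary training when the accuracy falls below the
    preset completion threshold (threshold value is an assumption)."""
    return accuracy < completion_threshold
```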
In one embodiment, before the finger image is input into the preset finger discrimination model, a preset initial model needs to be determined and training data acquired, the training data being finger actions collected in advance and the corresponding finger images; the initial model is then trained with the training data to obtain the finger discrimination model.
The constructed finger discrimination model is trained in advance on the training data set; when the set training precision and accuracy are reached, the currently trained finger discrimination model is considered trained and can be used for prediction.
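For illustration, a minimal training sketch is given below; it assumes the pre-collected finger images are equally sized grayscale arrays and uses a scikit-learn random-forest classifier as a stand-in for the initial model, since the application does not fix a particular model architecture or framework here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_finger_discrimination_model(images, labels, target_accuracy=0.95):
    """Train a stand-in finger discrimination model on pre-collected finger
    images and their action labels, and report validation accuracy against
    an assumed target."""
    x = np.asarray([img.reshape(-1) for img in images], dtype=np.float32)
    y = np.asarray(labels)
    x_train, x_val, y_train, y_val = train_test_split(
        x, y, test_size=0.2, random_state=0, stratify=y)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(x_train, y_train)
    val_accuracy = accuracy_score(y_val, model.predict(x_val))
    if val_accuracy < target_accuracy:
        # In the described workflow training would continue (more data or a
        # different model) until the set precision and accuracy are reached.
        print(f"validation accuracy {val_accuracy:.2f} below target")
    return model, val_accuracy
```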
As shown in fig. 2, an embodiment of the present application further provides a device for quantifying computing power of a child and assisting in training, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform:
determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively; alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately displaying the first number set and the second number set to the user, and acquiring the finger movement data of the user; and performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold.
An embodiment of the present application further provides a non-volatile computer storage medium storing computer-executable instructions, where the computer-executable instructions are configured to:
determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set; determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively; alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate; collecting eye movement data of the user during viewing through an eye tracker, and determining the computing power level of the user according to the eye movement data; determining an auxiliary training level of the user according to the computing power level; alternately displaying the first number set and the second number set to the user, and acquiring the finger movement data of the user; and performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device and media embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference may be made to some descriptions of the method embodiments for relevant points.
The device and the medium provided by the embodiment of the application correspond to the method one to one, so the device and the medium also have the similar beneficial technical effects as the corresponding method, and the beneficial technical effects of the method are explained in detail above, so the beneficial technical effects of the device and the medium are not repeated herein.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include a transitory computer-readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (7)

1. A method for quantifying children's computing power and providing auxiliary training, characterized in that the method comprises:
determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set;
determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively;
alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate;
collecting eye movement data of the user during viewing through an eye tracker, and determining a computing power level of the user according to the eye movement data;
determining an auxiliary training level of the user according to the computing power level;
alternately displaying the first number set and the second number set to the user, and acquiring finger movement data of the user; and
performing auxiliary computing-power training, according to the finger movement data, on a user whose auxiliary training level is lower than a preset threshold;
wherein determining the computing power level of the user according to the eye movement data specifically comprises:
determining a first saccade duration set, a second saccade duration set, a third saccade duration set and a fourth saccade duration set of the user from the eye movement data;
determining a first difference value between the first saccade duration set and the second saccade duration set;
determining a second difference value between the third saccade duration set and the fourth saccade duration set; and
determining the computing power level of the user according to the first difference value and the second difference value;
wherein the first saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the first target point coordinate;
the second saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the second target point coordinate;
the third saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the first target point coordinate; and
the fourth saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the second target point coordinate;
wherein alternately displaying the first number set and the second number set to the user and acquiring the finger movement data of the user specifically comprises:
displaying the finger actions corresponding to the different numbers to the user through the display screen;
acquiring a finger image of the user; and
inputting the finger image into a pre-trained finger discrimination model to obtain the finger movement data;
wherein the finger movement data comprise at least a finger action type of the user, a hand type corresponding to the finger action type, and a completion time.
2. The method according to claim 1, wherein determining the computing power level of the user according to the first difference value and the second difference value specifically comprises:
determining a computing power quantized value of the user from the first difference value and the second difference value according to the following formula;
[Formula image in the original; it gives the computing power quantized value G as a function of the first difference value Δ1 and the second difference value Δ2.]
wherein G is the computing power quantized value; Z, a and b are coefficients related to the number of numbers in the number set; C is a preset constant; Δ1 is the first difference value; and Δ2 is the second difference value; and
determining the computing power level of the user according to the computing power quantized value and a preset level-division threshold.
3. The method according to claim 1, wherein performing auxiliary computing-power training, according to the finger movement data, on the user whose auxiliary training level is lower than a preset threshold specifically comprises:
determining an action completion degree of the user according to the finger action type and the hand type of the user;
determining a completion accuracy of the user according to the action completion degree and the completion time; and
determining whether the auxiliary training needs to be restarted according to the completion accuracy and a preset completion threshold.
4. The method according to claim 3, wherein determining the completion accuracy of the user according to the action completion degree and the completion time specifically comprises:
determining the completion accuracy of the user by the following formula:
[Formula image in the original; it defines the completion accuracy P from the per-action completion degrees and completion times.]
wherein P is the completion accuracy; sign(x) is the sign function; N is the number of numbers in the number set; f_i is the action completion degree of the i-th action; f_0 is a preset completion-degree threshold; t_i is the completion time of the i-th action; and t_0 is a preset time threshold.
5. The method according to claim 1, wherein before inputting the finger image into the pre-trained finger discrimination model, the method further comprises:
determining a preset initial model and acquiring training data, wherein the training data are finger actions collected in advance and the corresponding finger images; and
training the initial model with the training data to obtain the finger discrimination model.
6. A device for quantifying children's computing power and providing auxiliary training, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform:
determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set;
determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively;
alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate;
collecting eye movement data of the user during viewing through an eye tracker, and determining a computing power level of the user according to the eye movement data;
determining an auxiliary training level of the user according to the computing power level;
alternately displaying the first number set and the second number set to the user, and acquiring finger movement data of the user; and
performing auxiliary computing-power training, according to the finger movement data, on a user whose auxiliary training level is lower than a preset threshold;
wherein determining the computing power level of the user according to the eye movement data specifically comprises:
determining a first saccade duration set, a second saccade duration set, a third saccade duration set and a fourth saccade duration set of the user from the eye movement data;
determining a first difference value between the first saccade duration set and the second saccade duration set;
determining a second difference value between the third saccade duration set and the fourth saccade duration set; and
determining the computing power level of the user according to the first difference value and the second difference value;
wherein the first saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the first target point coordinate;
the second saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the second target point coordinate;
the third saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the first target point coordinate; and
the fourth saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the second target point coordinate;
wherein alternately displaying the first number set and the second number set to the user and acquiring the finger movement data of the user specifically comprises:
displaying the finger actions corresponding to the different numbers to the user through the display screen;
acquiring a finger image of the user; and
inputting the finger image into a pre-trained finger discrimination model to obtain the finger movement data;
wherein the finger movement data comprise at least a finger action type of the user, a hand type corresponding to the finger action type, and a completion time.
7. A non-transitory computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform:
determining a preset first number set and a preset second number set, wherein the numbers in the second number set are larger than the numbers in the first number set;
determining a first target point coordinate and a second target point coordinate preset on a display screen, wherein the first target point coordinate and the second target point coordinate are located on the left side and the right side of the display screen respectively;
alternately displaying the first number set and the second number set to a user, and displaying a target point at the first target point coordinate or the second target point coordinate;
collecting eye movement data of the user during viewing through an eye tracker, and determining a computing power level of the user according to the eye movement data;
determining an auxiliary training level of the user according to the computing power level;
alternately displaying the first number set and the second number set to the user, and acquiring finger movement data of the user; and
performing auxiliary computing-power training, according to the finger movement data, on a user whose auxiliary training level is lower than a preset threshold;
wherein determining the computing power level of the user according to the eye movement data specifically comprises:
determining a first saccade duration set, a second saccade duration set, a third saccade duration set and a fourth saccade duration set of the user from the eye movement data;
determining a first difference value between the first saccade duration set and the second saccade duration set;
determining a second difference value between the third saccade duration set and the fourth saccade duration set; and
determining the computing power level of the user according to the first difference value and the second difference value;
wherein the first saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the first target point coordinate;
the second saccade duration is the saccade duration when the display screen displays the first number set and the target point is located at the second target point coordinate;
the third saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the first target point coordinate; and
the fourth saccade duration is the saccade duration when the display screen displays the second number set and the target point is located at the second target point coordinate;
wherein alternately displaying the first number set and the second number set to the user and acquiring the finger movement data of the user specifically comprises:
displaying the finger actions corresponding to the different numbers to the user through the display screen;
acquiring a finger image of the user; and
inputting the finger image into a pre-trained finger discrimination model to obtain the finger movement data;
wherein the finger movement data comprise at least a finger action type of the user, a hand type corresponding to the finger action type, and a completion time.
CN202210995597.2A 2022-08-19 2022-08-19 Method, equipment and medium for quantifying and training children computing capacity Active CN115064275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210995597.2A CN115064275B (en) 2022-08-19 2022-08-19 Method, equipment and medium for quantifying and training children computing capacity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210995597.2A CN115064275B (en) 2022-08-19 2022-08-19 Method, equipment and medium for quantifying and training children computing capacity

Publications (2)

Publication Number Publication Date
CN115064275A CN115064275A (en) 2022-09-16
CN115064275B (en) 2022-12-02

Family

ID=83207545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210995597.2A Active CN115064275B (en) 2022-08-19 2022-08-19 Method, equipment and medium for quantifying and training children computing capacity

Country Status (1)

Country Link
CN (1) CN115064275B (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107544660B (en) * 2016-06-24 2020-12-18 联想(北京)有限公司 Information processing method and electronic equipment
US10599682B2 (en) * 2017-08-08 2020-03-24 International Business Machines Corporation User interaction during ground truth curation in a cognitive system
CN116784795A (en) * 2017-11-30 2023-09-22 思维有限公司 Methods for assessing impaired neurological function associated with multiple sclerosis
WO2019210217A1 (en) * 2018-04-27 2019-10-31 C. Light Technologies, Inc. Method of detection, prognostication, and monitoring of neurological disorders
CN114330418A (en) * 2021-11-29 2022-04-12 北京机械设备研究所 Electroencephalogram and eye movement fusion method, medium and equipment for AR target recognition
CN114399709A (en) * 2021-12-30 2022-04-26 北京北大医疗脑健康科技有限公司 Child emotion recognition model training method and child emotion recognition method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106843500A (en) * 2017-02-27 2017-06-13 南通大学 Human-subject test rehabilitation training system based on the dynamic tracer technique of eye
CN107422852A (en) * 2017-06-27 2017-12-01 掣京机器人科技(上海)有限公司 Healing hand function training and estimating method and system
CN109770921A (en) * 2019-02-03 2019-05-21 清华大学 The method and device of autism children early stage language and cognition ability screening
CN112446594A (en) * 2020-11-12 2021-03-05 书丸子(北京)科技有限公司 Multi-level quantifiable computing method for comprehensive ability analysis of preschool children
CN114795104A (en) * 2022-04-27 2022-07-29 郑州大学 Eyeball motion power quantitative evaluation system based on eye tracker
CN114847950A (en) * 2022-04-29 2022-08-05 深圳市云长数字医疗有限公司 Attention assessment and training system and method based on virtual reality and storage medium
CN114724709A (en) * 2022-06-07 2022-07-08 深圳市铱硙医疗科技有限公司 Dementia risk screening system, equipment and medium based on VR eye movement tracking

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Eye movement as a predictor of cognitive ability; Valay Patel et al.; IEEE; 2017-12-14; full text *
Emotion analysis of digital-interface users based on eye tracking; 沈竹琦 et al.; 《设计》; 2018-03-08 (No. 05); full text *
Evaluation of the effect of comprehensive cognitive training of attention and short-term memory in children with behavioural problems; 赵娜 et al.; 《中国学校卫生》; 2012-05-25 (No. 05); full text *

Also Published As

Publication number Publication date
CN115064275A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
Ritchey et al. Neural similarity between encoding and retrieval is related to memory via hippocampal interactions
Stokes et al. Complementary roles of human hippocampal subfields in differentiation and integration of spatial context
Laski et al. Spatial skills as a predictor of first grade girls' use of higher level arithmetic strategies
Itti Quantitative modelling of perceptual salience at human eye position
Bower et al. Aging, perceptual learning, and changes in efficiency of motion processing
Mormann et al. Scene-selective coding by single neurons in the human parahippocampal cortex
van Leeuwen et al. Forget binning and get SMART: Getting more out of the time-course of response data
KR20210108376A (en) Apparatus and method for utilizing brain trait activity map database to characterize content
Gliksman et al. Enumeration and alertness in developmental dyscalculia
CN113871015B (en) Man-machine interaction scheme pushing method and system for improving cognition
Lemaire et al. Age-related changes in children’s strategies for solving two-digit addition problems
Ten Oever et al. Phase-coded oscillatory ordering promotes the separation of closely matched representations to optimize perceptual discrimination
Korkki et al. Hippocampal–cortical encoding activity predicts the precision of episodic memory
West et al. Landmark-dependent navigation strategy declines across the human life-span: evidence from over 37,000 participants
Drascher et al. Long-term memory interference is resolved via repulsion and precision along diagnostic memory dimensions
CN117612712A (en) Method and system for detecting and improving cognition evaluation diagnosis precision
Smith et al. Nonblurred regions show priority for gaze direction over spatial blur
Parra et al. Predictors of performance in real and virtual scenarios across age
CN115064275B (en) Method, equipment and medium for quantifying and training children computing capacity
Cai et al. Duration estimates within a modality are integrated sub-optimally
KR102011422B1 (en) Method for providing study group matching service based on online study
Sanjurjo Search, memory, and choice: an experiment
Skagenholt et al. Connectome-based predictive modeling indicates dissociable neurocognitive mechanisms for numerical order and magnitude processing in children
Miendlarzewska et al. Prior reward conditioning dampens hippocampal and striatal responses during an associative memory task
Shikauchi et al. Decoding the view expectation during learned maze navigation from human fronto-parietal network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant