CN116230169A - Cognitive ability test and training method based on user behaviors - Google Patents
- Publication number
- CN116230169A
- Authority
- CN
- China
- Prior art keywords
- cognitive ability
- gridding
- coordinates
- training method
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Health & Medical Sciences (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Engineering & Computer Science (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Instructional Devices (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
The invention belongs to the technical field of cognitive ability testing and discloses a cognitive ability test and training method based on user behaviors, comprising the following steps: laying out gridded tracking points on the game interface and collecting the user's operation data, where the operation data comprise the grid coordinates of taps on the game interface, the timestamps corresponding to the grid coordinates, and the attitude data of the test terminal corresponding to the grid coordinates; computing the click order, time intervals and sliding speed from these data; combining the click order, time interval, sliding speed and attitude data into the depth data of the current grid coordinate; and inputting the depth data into a neural-network-based discriminator model for scoring. Because the method computes the user's cognition level while the user completes a test level, the user's behavior can be judged quantitatively, which facilitates the difficulty design and content changes of the test levels.
Description
Technical Field
The invention belongs to the technical field of cognitive ability testing, and particularly relates to a cognitive ability testing and training method based on user behaviors.
Background
In cognitive training games for the elderly, an elderly user's cognition level is determined not only from the final game result but also from the user's performance during play. However, because of the diversity of cognitive training games for the elderly, doctors cannot be fully familiar with every existing game and therefore cannot judge all game results against a unified standard.
Disclosure of Invention
The present invention aims to solve the above technical problem at least to some extent. To this end, the invention provides a cognitive ability test and training method based on user behaviors.
The technical scheme adopted by the invention is as follows:
A cognitive ability test and training method based on user behaviors, comprising the following steps:
S1, laying out gridded tracking points on the game interface shown on the screen of a test terminal, and collecting the user's operation data, where the operation data comprise the grid coordinates of taps on the game interface, the timestamps corresponding to the grid coordinates, and the attitude data of the test terminal corresponding to the grid coordinates;
S2, sorting the timestamps to obtain the click order, obtaining the time intervals from the differences between the timestamps corresponding to the grid coordinates, and obtaining the sliding speed as the ratio of the pixel distance corresponding to the grid coordinates to the time interval; the click order, time interval, sliding speed and attitude data form the depth data of the current grid coordinate;
S3, inputting the depth data into a neural-network-based discriminator model for scoring.
Preferably, the discriminator model comprises 5 convolutional layers and 3 fully connected layers; the first convolutional layer comprises 32 convolution kernels of size 4×3; the second convolutional layer comprises 64 convolution kernels of size 32×5; the third convolutional layer comprises 64 convolution kernels of size 64×3; the fourth convolutional layer comprises 32 convolution kernels of size 32×3; the fifth convolutional layer comprises 16 convolution kernels of size 32×1; each fully connected layer has 512 parameters, with MSE as the loss function.
Preferably, in step S3, the discriminator model performs three parallel computations simultaneously, and the loss calculation, gradient solution and parameter updates of the neural network model are performed according to the results of the three computations.
Preferably, in step S1, the game interface is gridded by taking each m×n pixel range as one region.
Preferably, the attitude data of the test terminal are acquired through a gyroscope on the test terminal.
The beneficial effects of the invention are as follows:
according to the cognition capability test and training method based on the user behavior, the cognition degree of the user is calculated in the process of completing the test of the level (game), so that the behavior of the user can be quantitatively judged, and the difficulty design and content change of the test level are facilitated.
Drawings
FIG. 1 is a schematic diagram of the screen coordinate setup of the test terminal of the present invention.
Fig. 2 is a schematic diagram of depth data of the present invention.
Fig. 3 is a schematic diagram of a triple shared network of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and fully with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
It should also be appreciated that, in the embodiments, the functions/acts may occur in an order different from that shown in the figures. For example, two steps shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending on the functionality/acts involved.
As shown in figs. 1 to 3, the cognitive ability test and training method based on user behaviors performs a multidimensional quantitative calculation of the user's behavior in a test level, which facilitates the subsequent calculation of the cognition level. On top of this quantization, a neural-network-based discriminator model computes a score to obtain the final cognitive ability test result. The method specifically comprises the following steps:
s1, performing gridding point burying on a game interface of a touch screen of a mobile phone, and collecting operation data of a user, wherein the operation data comprise gridding coordinates of the click game interface, time stamps corresponding to the gridding coordinates and gesture data of the mobile phone corresponding to the gridding coordinates.
The grid coordinates are collected through page-embedded tracking points that capture the finger's touch behavior on the screen, i.e. the position of each point the finger touches. The position coordinates take the upper-left corner of the screen as the origin of a plane rectangular coordinate system, as shown in fig. 1. When a touch event occurs, the touch coordinates and timestamp are collected. For example, when a finger touches screen position (300, 600) at unix timestamp 1675078158000, this is recorded as ((300, 600), 1675078158000); multi-touch events are recorded sequentially. The coordinate acquisition frequency is 10 times per second, i.e. once every 100 ms.
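For illustration, the collection scheme above can be sketched as follows. This is a minimal example only; the function name `record_touch` and the list-based log are assumptions for illustration, not part of the disclosure:

```python
# Minimal sketch of the touch-event recording described above: pixel
# coordinates with the screen's upper-left corner as origin, paired with
# a unix timestamp in milliseconds, sampled at roughly 10 Hz.

def record_touch(x_px, y_px, timestamp_ms, log):
    """Append one touch sample in the ((x, y), timestamp) format."""
    log.append(((x_px, y_px), timestamp_ms))

log = []
record_touch(300, 600, 1675078158000, log)
record_touch(300, 600, 1675078159000, log)
# → [((300, 600), 1675078158000), ((300, 600), 1675078159000)]
```

Multi-touch samples would simply be appended in the order the events arrive, matching the sequential recording described in the text.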
The attitude data of the mobile phone are acquired through a gyroscope on the phone. The data format records the phone's acceleration in the x, y and z directions of a rectangular xyz coordinate system whose origin is the phone's center of gravity, written as (x, y, z).
In addition to the above data, the touch-screen duration and the test reaction time are acquired synchronously.
When two adjacent taps on the game interface have the same grid coordinates, the touch-screen duration is obtained from the difference of their timestamps; when their grid coordinates differ, a default touch-screen duration is generated. Specifically, if the collected information for point (300, 600) is ((300, 600), 1675078158000) and ((300, 600), 1675078159000), the touch-screen duration is 1000 ms; if point (300, 600) appears only once, the touch duration defaults to less than 100 ms.
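The duration rule above can be sketched as a small helper. The function name and the use of a single numeric default to stand in for "less than 100 ms" are assumptions for illustration:

```python
def touch_duration(first, second=None, default_ms=100):
    """Touch-screen duration for one point in ((x, y), t_ms) format.

    If the same coordinates are sampled twice in a row, the duration is
    the timestamp difference; if the point appears only once, a default
    standing in for "< 100 ms" (one sampling period) is returned.
    """
    if second is not None and first[0] == second[0]:
        return second[1] - first[1]
    return default_ms

# The worked example from the text: two samples of point (300, 600)
# one second apart give a duration of 1000 ms.
d = touch_duration(((300, 600), 1675078158000), ((300, 600), 1675078159000))
```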
After the test starts, when a level reaches a link requiring the user to react, the time is recorded as t0; when the user finishes responding to that link, the current time is recorded as t1; t1 − t0 is the game reaction time.
S2, depth data with a depth of 4 are obtained from the collected data. As shown in fig. 2, the game interface is divided into a 10×18 grid with a depth of 4. For example, with a screen resolution of 1000 horizontal by 1800 vertical, each 100×100 pixel range is one region, so the screen can be divided into 10×18 regions. The gridding scheme is determined by the game interface and the screen size and can be adapted to the size of the game interface; the depth is always 4. Each training action by the user in the test (i.e. each answered question) generates a gridded feature of size a×b with depth 4. Different test levels involve different numbers of training actions; for example, a test with 12 questions to answer generates 12 user feature matrices.
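As a sketch of the feature layout just described (the variable names and the NumPy representation are assumptions; the patent specifies only the grid dimensions and the depth of 4):

```python
import numpy as np

# A 1000x1800 screen gridded into 100x100-pixel regions gives a 10x18 grid;
# each cell carries 4 channels: click order, time interval, sliding speed,
# and phone attitude.
GRID_W, GRID_H, DEPTH = 10, 18, 4
features = np.zeros((GRID_W, GRID_H, DEPTH))

def cell_of(x_px, y_px, cell_size=100):
    """Map a pixel coordinate to its grid cell."""
    return x_px // cell_size, y_px // cell_size

# A tap at pixel (300, 600) falls in grid cell (3, 6); mark it as the
# first click in the click-order channel.
cx, cy = cell_of(300, 600)
features[cx, cy, 0] = 1
```

One such a×b×4 tensor would be produced per answered question, so a 12-question test yields 12 of these feature matrices.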
A record is made whenever the user's finger sweeps through a grid region. The markers 1, 2, 3 in fig. 2 indicate the order of the strokes. The four depth channels are, in order: the click order in which the point was tapped, the time interval from the previous point to this point, the sliding speed when leaving this point, and the current phone attitude data.
The click order is obtained by sorting the timestamps. As shown in fig. 2, the three points labelled 1, 2 and 3 represent the regions the user clicked first, second and third, respectively, and so on; fig. 2 shows the positions of 9 such regions on the screen.
The time interval is obtained from the difference of the timestamps corresponding to the grid coordinates. For region 2, the datum is the time interval from the finger leaving region 1 to reaching region 2; the time interval of region 1 is empty.
The sliding speed is obtained as the ratio of the pixel distance corresponding to each grid coordinate to the time interval, i.e. pixels traversed in the sliding region per unit time, with time measured in ms.
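The three per-tap channels defined above (click order, time interval, sliding speed) can be derived from the recorded samples as follows. This is a hedged sketch: the function name and the Euclidean pixel distance are assumptions, since the text does not fix the distance metric:

```python
import math

def depth_channels(taps):
    """From samples [((x, y), t_ms), ...] derive (click_order, interval_ms,
    speed_px_per_ms) per tap. The first tap has no previous point, so its
    interval and speed are empty (None), as the text states for region 1."""
    taps = sorted(taps, key=lambda s: s[1])  # click order = timestamp order
    out, prev = [], None
    for order, ((x, y), t) in enumerate(taps, start=1):
        if prev is None:
            out.append((order, None, None))
        else:
            (px, py), pt = prev
            dt = t - pt
            dist = math.hypot(x - px, y - py)  # pixel distance slid
            out.append((order, dt, dist / dt if dt else 0.0))
        prev = ((x, y), t)
    return out

# A 500-pixel slide completed in 100 ms gives a speed of 5 px/ms.
channels = depth_channels([((0, 0), 0), ((300, 400), 100)])
```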
S3, after the quantization is complete, the user's behavior data are scored with a neural-network-based discriminator model, whose final output is the player's cognition score. The discriminator model comprises 5 convolutional layers and 3 fully connected layers; the first convolutional layer comprises 32 convolution kernels of size 4×3; the second convolutional layer comprises 64 convolution kernels of size 32×5; the third convolutional layer comprises 64 convolution kernels of size 64×3; the fourth convolutional layer comprises 32 convolution kernels of size 32×3; the fifth convolutional layer comprises 16 convolution kernels of size 32×1; each fully connected layer has 512 parameters, with MSE as the loss function.
To give the discriminator model better fitting capability, a triple shared network as shown in fig. 3 is designed for unsupervised adaptive scoring training. Unsupervised adaptive training requires no manual data labelling and avoids the problem that a user's whole process cannot be fully scored. The user's behavior in a test level is finely adjusted in both the positive and negative directions, and each adjusted group of data is used to train the model, so that the discriminator model learns to distinguish the positive adjustment, the original data and the negative adjustment, and thereby acquires the ability to score the user's behavior.
The specific method is as follows: three parallel computations are run through the discriminator model simultaneously, and the loss calculation, gradient solution and parameter updates of the neural network model are performed according to the three results. As shown in fig. 3, Net is the discriminator model described above, used three times. x is the user's current feature; x− is the operation after an algorithmic transformation assuming the user reacts slightly more slowly; x+ is the operation after an algorithmic transformation assuming the user reacts slightly faster. During training, the score of x− is kept lower than that of x, and the score of x lower than that of x+, so that the discriminator model acquires the ability to identify the cognitive differences of the current operation.
Specifically, if the current feature is x and Net(x) yields a score of 50 while Net(x−) yields 60, then 50 − 60 is used as the criterion for gradient solution and the model parameters are updated with the goal of making Net(x−) < Net(x); this is the model-error scenario, and −10 acts as a penalty.
If the current feature is x and Net(x) yields a score of 50 while Net(x+) yields 70, then 70 − 50 is used as the criterion for gradient solution and the model parameters are updated with the goal of keeping Net(x+) > Net(x); here the model's current prediction is accurate, and the value acts as a reward.
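The ordering objective described above can be expressed as a margin-ranking (hinge-style) loss: the penalty is positive whenever the required ordering Net(x−) < Net(x) < Net(x+) is violated. The patent specifies only the ordering and the worked scores, not the exact loss function, so the hinge formulation below is one plausible realization, not the disclosed one:

```python
def triplet_ranking_loss(score_minus, score, score_plus, margin=0.0):
    """Penalize violations of Net(x-) < Net(x) < Net(x+)."""
    lower = max(0.0, score_minus - score + margin)  # want Net(x-) < Net(x)
    upper = max(0.0, score - score_plus + margin)   # want Net(x)  < Net(x+)
    return lower + upper

# The worked example from the text: Net(x) = 50 but Net(x-) = 60 violates
# the ordering by 10 (the penalty case), while Net(x+) = 70 satisfies it
# and contributes nothing.
loss = triplet_ranking_loss(60, 50, 70)
```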
The cognitive ability test and training method can be applied to any cognitive training game. The cognition level it computes is scored from 0 to 100 points: 100 represents excellent in-game performance, and 0 represents great difficulty for the player. The score serves as a reference for adjusting game difficulty and designing levels, and, combined with game results, can also serve as a reference for relevant professionals.
This cognitive ability test and training method is not a medical method for diagnosing a person's cognitive ability; it is a method for quantitatively computing the cognitive ability of elderly users within professional cognitive training games.
The invention is not limited to the above optional embodiments. Any product derived by anyone in light of the present invention, whatever the changes in its shape or structure, falls within the scope of protection of the present invention as long as it falls within the technical solutions defined by the claims.
Claims (5)
1. A cognitive ability test and training method based on user behaviors, comprising the following steps:
S1, laying out gridded tracking points on the game interface shown on the screen of a test terminal, and collecting the user's operation data, where the operation data comprise the grid coordinates of taps on the game interface, the timestamps corresponding to the grid coordinates, and the attitude data of the test terminal corresponding to the grid coordinates;
S2, sorting the timestamps to obtain the click order, obtaining the time intervals from the differences between the timestamps corresponding to the grid coordinates, and obtaining the sliding speed as the ratio of the pixel distance corresponding to the grid coordinates to the time interval; the click order, time interval, sliding speed and attitude data form the depth data of the current grid coordinate;
S3, inputting the depth data into a neural-network-based discriminator model for scoring.
2. The cognitive ability test and training method of claim 1, wherein: the discriminator model comprises 5 convolutional layers and 3 fully connected layers; the first convolutional layer comprises 32 convolution kernels of size 4×3; the second convolutional layer comprises 64 convolution kernels of size 32×5; the third convolutional layer comprises 64 convolution kernels of size 64×3; the fourth convolutional layer comprises 32 convolution kernels of size 32×3; the fifth convolutional layer comprises 16 convolution kernels of size 32×1; each fully connected layer has 512 parameters, with MSE as the loss function.
3. The cognitive ability test and training method of claim 1, wherein: in step S3, three parallel computations are performed on the discriminator model simultaneously, and the loss calculation, gradient solution and parameter updates of the neural network model are performed according to the results of the three computations.
4. The cognitive ability test and training method of claim 1, wherein: in step S1, the game interface is gridded by taking each m×n pixel range as one region.
5. The cognitive ability testing and training method of claim 1, wherein: the attitude data of the test terminal are acquired through a gyroscope on the test terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310240831.5A CN116230169A (en) | 2023-03-14 | 2023-03-14 | Cognitive ability test and training method based on user behaviors |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310240831.5A CN116230169A (en) | 2023-03-14 | 2023-03-14 | Cognitive ability test and training method based on user behaviors |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116230169A (en) | 2023-06-06 |
Family
ID=86580422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310240831.5A Pending CN116230169A (en) | 2023-03-14 | 2023-03-14 | Cognitive ability test and training method based on user behaviors |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116230169A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117092046A (en) * | 2023-08-03 | 2023-11-21 | 首都医科大学附属北京安定医院 | Method for detecting whether oral cavity of mental patient is hidden with medicine |
CN117092046B (en) * | 2023-08-03 | 2024-03-08 | 首都医科大学附属北京安定医院 | Method for detecting whether oral cavity of mental patient is hidden with medicine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103530540B (en) | User identity attribute detection method based on man-machine interaction behavior characteristics | |
CN104200480B (en) | A kind of image blur evaluation method and system applied to intelligent terminal | |
CN111598081A (en) | Automatic seven-step hand washing method operation normative detection method | |
CN102509088B (en) | Hand motion detecting method, hand motion detecting device and human-computer interaction system | |
CN105468279B (en) | Contact action identification and response method, device and game control method, device | |
CN116230169A (en) | Cognitive ability test and training method based on user behaviors | |
CN103631487B (en) | A kind of method and device of the configuration page | |
CN109432767A (en) | A kind of exchange method and system of game paddle and terminal | |
CN109508429B (en) | Individualized self-adaptive learning recommendation method based on big data analysis of education platform | |
CN106648397B (en) | A kind of the game operation record processing method and system of mobile terminal | |
CN107908300A (en) | A kind of synthesis of user's mouse behavior and analogy method and system | |
CN105045584B (en) | Touch control method and system for screen of vehicle machine | |
CN105068735B (en) | User interface layout adjusting method and device | |
CN111462919A (en) | Method and system for predicting insect-borne diseases based on sliding window time sequence model | |
CN104778387A (en) | Cross-platform identity authentication system and method based on human-computer interaction behaviors | |
CN110135487A (en) | A kind of computer user mouse Behavior modeling method | |
CN108052960A (en) | Method, model training method and the terminal of identification terminal grip state | |
CN105893959A (en) | Gesture identifying method and device | |
CN108433728A (en) | A method of million accidents of danger are fallen based on smart mobile phone and ANN identification construction personnel | |
WO2024140268A1 (en) | Finger interaction trajectory acquisition method and system, and storage medium | |
CN113128693A (en) | Information processing method, device, equipment and storage medium | |
CN105094405A (en) | Method and apparatus for automatically adjusting effective contact | |
CN107391289A (en) | A kind of three-dimensional pen-based interaction Interface Usability appraisal procedure | |
CN108259503A (en) | A kind of is the system and method for website and application division machine and mankind's access | |
CN115576475A (en) | Matching method based on touch point track |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |