CN112396114A - Evaluation system, evaluation method and related product - Google Patents

Evaluation system, evaluation method and related product

Info

Publication number
CN112396114A
Authority
CN
China
Prior art keywords
test
content
result
tester
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011318301.0A
Other languages
Chinese (zh)
Inventor
唐红思
黄艳
于成龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011318301.0A priority Critical patent/CN112396114A/en
Publication of CN112396114A publication Critical patent/CN112396114A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Abstract

An evaluation system, an evaluation method and a related product. The system comprises a processor, a display device and an input device. The display device is used for displaying the test content after receiving the evaluation request; the input device is used for receiving the reaction result of the tester to the test content; the processor is used for extracting the characteristic attribute of the value of the test result, the value of the test result being determined by the accuracy of the reaction result, and inputting the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester. The classifier is obtained by performing classification training on the test results of at least two experimenters. By inputting the test result of a tester for the test content into a classifier trained on the test results of at least two experimenters whose real classification results are known, a classification result is obtained, so that a test result with guiding significance is provided for the tester objectively and effectively.

Description

Evaluation system, evaluation method and related product
Technical Field
The application relates to the field of data analysis, in particular to an evaluation system, an evaluation method and a related product.
Background
There are many methods for evaluating the strengths and weaknesses of a tester in various aspects through psychological tests or cognitive tests. Through such tests, the accuracy of the tester's results in various aspects can be obtained, or the percentile position of the test result within sample data can be obtained, so as to indicate the tester's strengths, weaknesses or comprehensive ability. However, most of these tests do not analyze the data further, nor do they consider whether the test content used can truly evaluate the relevant aspects of the tester. Even when the data is analyzed, only certain pre-established rules are applied, for example, picking out the three question categories with the highest accuracy in the test result and declaring that the tester has an advantage in the corresponding abilities; the sample data is not analyzed further, so a more objective and more instructive result cannot be presented.
For example, when an artist chooses an artistic direction, the judgment of expertise is generally made by professional teachers and experts and therefore depends largely on subjective evaluation. The test methods available on the market, which obtain an individual's strengths and weaknesses in various artistic directions through psychological tests or cognitive ability tests, do not analyze the sample data further, so it cannot be known whether the test content objectively represents the artistic expertise potential of a tester, and a relatively objective and more instructive result cannot be presented.
Therefore, how to provide an evaluation method and a data analysis method to provide more instructive test results for testers more objectively and effectively becomes an important research topic in the technical field.
Disclosure of Invention
Therefore, it is necessary to provide an evaluation system, an evaluation method and related products for the above technical problems, so as to provide more instructive test results for testers more objectively and effectively.
In a first aspect, the present application provides an evaluation system, comprising: a processor, a display device and an input device; the display device is used for displaying the test content after receiving the evaluation request; the input device is used for receiving the reaction result of the tester to the test content; the processor is used for extracting the characteristic attribute of the value of the test result, the value of the test result being determined by the accuracy of the reaction result, and inputting the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
With reference to the first aspect, in some embodiments, the above display device comprises a display and a photometer; the refresh rate of the display is greater than or equal to a refresh rate threshold; the processor is also used for controlling the display brightness of the test content through the photometer.
In combination with the first aspect, in some embodiments, the test content includes at least one of content for testing memory, content for testing tonal discriminating ability, content for testing perception ability of objective stimuli, content for testing mental rotation ability, content for testing executive ability, content for testing multi-object tracking ability, content for testing auditory memory ability, or content for testing observation ability.
With reference to the first aspect, in some embodiments, determining the value of the test result from the correctness of the reaction result includes: the value of the test result is equal to the accuracy of the reaction result.
With reference to the first aspect, in some embodiments, the system further includes a timer for recording the reaction duration for the tester to complete the test content; determining the value of the test result from the correctness of the reaction result includes: the value of the test result is equal to the quotient of the reaction duration and the accuracy of the reaction result. In other embodiments, the timer is a clock device, such as a clock chip. In an alternative implementation, the time precision of the timer is higher than a second threshold, which may be 1 millisecond, 5 milliseconds, 3 seconds, or another value; for example, the timer uses a self-made reaction key box together with programmed recording of the reaction time; the reaction key box is connected to the terminal device through a parallel port and, compared with a keyboard, provides higher timing accuracy, down to the millisecond level.
With reference to the first aspect, in some embodiments, the training of the classifier by classifying the test results of at least two experimenters includes: the classifier is obtained by performing classification training on the test results of at least two experimenters by adopting a Bayesian classification algorithm or a k nearest neighbor classification algorithm.
With reference to the first aspect, in some embodiments, the content for testing the intonation recognition capability includes a first test content; the first test content comprises first prompt information, first test audio and second test audio, and the second test audio comprises audio content with the same tone as or different from the first test audio; the first prompt message is used to prompt the tester to determine whether the tones of the first test audio and the second test audio are the same.
With reference to the first aspect, in some embodiments, the content for testing the execution function includes second test content; the second test content includes second prompt information and at least two test pictures, and the second prompt content is used for prompting the tester of the operation type indicated by each of the at least two test pictures.
With reference to the first aspect, in some embodiments, the content for testing memory includes a third test content; the third test content includes third prompt information, a first test picture and a second test picture, the second test picture includes a distinguishing graph different from the first test picture, and the third prompt information is used for prompting the tester to select the distinguishing graph from the second test picture.
With reference to the first aspect, in some embodiments, the content for testing the perception capability of the objective stimulus comprises fourth test content; the fourth test content includes a fourth prompt message and a third test picture, the third test picture includes a first region and a second region, the second region includes a third region and a fourth region, the third region surrounds the fourth region, the third region and the fourth region are different in color, the first region and the fourth region are the same in color, and the first region and the fourth region are the same or different in color brightness; the fourth prompting message is used for prompting the tester to judge whether the color brightness of the first area is the same as that of the fourth area.
With reference to the first aspect, in some embodiments, the content for testing mental rotation capability includes a fifth test content; the fifth test content includes a fifth prompt message and a fourth test picture, the fourth test picture includes a main figure and at least two auxiliary figures, the at least two auxiliary figures include a rotation figure obtained by rotating the main figure, and the fifth prompt message is used for prompting the tester to select the rotation figure from the at least two auxiliary figures.
With reference to the first aspect, in some embodiments, the content for testing the multi-object tracking capability includes sixth prompt information and sixth test content; the sixth test content includes N patterns with the same shape and color, where N is an integer greater than 1; after the multi-object tracking test starts, K of the N patterns flash, and then the N patterns move randomly for a certain time period and stand still after the movement ends; the sixth prompt information is used for prompting the tester to select the K patterns from the N patterns having the same shape and color.
With reference to the first aspect, in some embodiments, the content for testing auditory memory includes seventh prompt information and seventh test content; the seventh test content comprises M test tones, where M is an integer greater than 2; the seventh prompt information is used to prompt the tester to judge, for every 2 tones separated by one intervening tone, whether the tones are the same.
With reference to the first aspect, in some embodiments, the content for testing observation capability includes eighth prompt content and eighth test content; the eighth test content comprises two similar patterns, the two similar patterns comprising at least one difference; the prompt information is used for prompting the tester to click the left mouse button to select the at least one difference.
In a second aspect, the present application provides an evaluation method, including: displaying the test content after receiving the evaluation request; receiving the reaction result of the tester to the test content; extracting the characteristic attribute of the value of the test result, wherein the value of the test result is determined by the accuracy of the reaction result; inputting the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
In combination with the second aspect, in some embodiments, the test content includes at least one of content for testing memory, content for testing tonal discrimination ability, content for testing perception ability of objective stimuli, content for testing mental rotation ability, content for testing executive ability, content for testing multi-object tracking ability, content for testing auditory memory ability, or content for testing observation ability.
With reference to the second aspect, in some embodiments, the classifying training of the test results of at least two experimenters includes: the classifier is obtained by performing classification training on the test results of at least two experimenters by adopting a Bayesian classification algorithm or a k nearest neighbor classification algorithm.
With reference to the second aspect, in some embodiments, the method further includes: recording, by a timer, the reaction duration for the tester to complete the test content; determining the value of the test result from the correctness of the reaction result includes: the value of the test result is equal to the quotient of the reaction duration and the accuracy of the reaction result.
In a third aspect, the present application provides an evaluation device, comprising:
the display unit is used for displaying the test content after receiving the expertise potential cognition evaluation request;
a receiving unit, for receiving the reaction result of the tester to the test content;
a classification unit for extracting the characteristic attribute of the value of a test result, the value of the test result being determined by the correctness of the reaction result, and inputting the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
In a fourth aspect, the present application provides a terminal device, comprising: the device comprises a memory and a processor, wherein the memory stores program instructions; the program instructions, when executed by the processor, cause the processor to perform the method as described in the second aspect and any possible implementation manner of the second aspect.
In a fifth aspect, the present application provides a computer readable storage medium having a computer program stored therein; the computer program, when run on one or more processors, causes the terminal device to perform the method as described in the second aspect and any possible implementation form of the second aspect.
In a sixth aspect, the present application provides a computer program product containing instructions that, when run on a terminal device, cause the terminal device to perform the method as described in the second aspect and any possible implementation manner of the second aspect.
The embodiment of the invention provides an evaluation system, an evaluation method and a related product, wherein test contents are displayed after an evaluation request is received, a reaction result of a tester to the test contents is received, a characteristic attribute of a value of the test result is extracted, the value of the test result is determined by the accuracy of the reaction result, and the characteristic attribute is input into a classifier for evaluation to obtain a class to which the characteristic attribute belongs in the classifier as a classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters. The experimental data used by the classification training is the objectively existing characteristic attributes of experimenters with known real classification results to the test results of the test contents, and the obtained classifier can objectively express the difference of the characteristic attributes among the experimenters of different classes of the same test contents, so that the test results of the test contents can objectively indicate which class the tester belongs to, and the class of the tester is determined by the probability of the characteristic attributes of the test results of the tester. Thereby more objectively and effectively providing a more instructive test result for the tester.
Drawings
In order to more clearly illustrate the technical solution in the embodiments of the present invention, the drawings required to be used in the embodiments will be briefly described below.
Fig. 1 is a schematic structural diagram of an evaluation system provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of another evaluation system provided in an embodiment of the present application;
FIG. 3A is a schematic diagram of content for testing tone recognition capability provided by an embodiment of the present application;
FIG. 3B is a schematic diagram of content for testing executive capability provided by an embodiment of the present application;
FIG. 3C is a schematic diagram of content for testing memory provided by an embodiment of the present application;
FIG. 3D is a schematic diagram of content for testing the perception capability of objective stimuli provided by an embodiment of the present application;
FIG. 3E is a schematic diagram of content for testing mental rotation capability provided by an embodiment of the present application;
FIG. 3F is a schematic diagram of content for testing multi-object tracking capability provided by an embodiment of the present application;
FIG. 3G is a schematic diagram of content for testing auditory memory capability provided by an embodiment of the present application;
FIG. 3H is a schematic diagram of content for testing observation capability provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an evaluation method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data analysis module according to an embodiment of the present disclosure;
fig. 6 is a schematic view of an evaluation device provided in an embodiment of the present application;
fig. 7A-7B are schematic flow charts of an evaluation system according to an embodiment of the present application;
fig. 8A-8B are schematic flow charts of another evaluation system provided in the embodiments of the present application.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In this application, "at least one" means one or more, "a plurality" means two or more, and "at least two" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A is present, only B is present, or both A and B are present, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or similar expressions refer to any combination of these items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b and c.
In order to describe the scheme of the present application more clearly, some knowledge related to the cognitive testing and classification involved in the scheme is introduced below.
Cognitive ability testing: a cognitive ability test measures a person's ability to learn and perform tasks. Such tests are particularly suitable for selecting among a group of inexperienced candidates, and learning- and work-related abilities can be categorized as linguistic, computational, perceptual, spatial and reasoning abilities. The strengths and weaknesses of an individual's abilities can be judged through a cognitive ability test, and a working or learning direction can be chosen according to those strengths and weaknesses. However, such a test result does not give the tester an objective and direct answer. For example, suppose a tester is found to have strong spatial imagination and strong language ability and hopes to choose a professional direction suited to these strengths: are strong language ability and spatial imagination suited to the liberal arts or to the sciences, to mathematics or to sports? Obviously, such a test result is not intuitive enough, and a more instructive result is not provided to the tester objectively and effectively. Similarly, a psychological test yields results for certain psychological test content, and the emotional tendency or field in which the tester excels is inferred from those results; in essence, the evaluation standard adopted is established artificially, for example by referring only to the accuracy of the test result, with no further sampling and analysis of the test results and no consideration of whether the test content used can truly reflect the ability of the tester. Therefore, how to provide a data analysis method that offers more instructive test results to testers more objectively and effectively has become an important research topic in this technical field.
A classifier: classification is a very important method of data mining. The concept of classification is to learn a classification function or construct a classification model based on existing data. The function or model can map data records in the database to one of a given category and thus can be applied to data prediction. In a word, the classifier is a general term of a method for classifying samples in data mining, and includes algorithms such as decision trees, logistic regression, naive bayes, neural networks and the like.
Bayesian classification algorithm: for a given item to be classified, the probability of each class given the item is computed, and the item is assigned to the class with the largest of these probabilities. The whole Bayesian classification process is divided into three stages: in the first stage, all the data to be classified are input, and characteristic attributes and training samples are output; in the second stage, the characteristic attributes and training samples are input, classification training is performed with the classification algorithm to obtain the characteristic attributes of each class in the classifier, and the classifier is output; in the third stage, the application stage, the classifier is used to classify items to be classified, with the classifier and the items to be classified as input and the mapping between the items and the classes as output.
k-nearest neighbor classification algorithm: that is, given a training data set, for a new input instance, k instances in the training data set that are closest to the instance are found, and a majority of the k instances belong to a class, the input instance is classified into the class. The neighbors selected by the k-nearest neighbor algorithm are all objects that have been correctly classified. The method only determines the category of the sample to be classified according to the category of the nearest sample or a plurality of samples in the classification decision.
Visual search: visual search is a perceived task that requires attention. The visual search ability is a basic cognitive ability of a human, which refers to the ability of an individual to capture a target stimulus from a plurality of visual stimuli, and is also an important way for the individual to acquire external information for processing. Visual search tests can be used to test an individual's ability to control attention and inhibit distractors during the process of attention.
Reaction inhibition: reaction inhibition refers primarily to three interrelated cognitive processes: inhibiting spontaneous reactions to environmental events; preventing the current reaction to ensure a delay in deciding which reaction to take; this delayed period is protected from disruption by nuisance events, allowing self-directed behavior to occur. The reaction inhibition ability enables the brain to have sufficient room for processing information after receiving external stimulation, and realizes working memory and behavior execution process.
Executive ability: executive function is the psychological process by which an individual consciously controls thought and action. The concept of "executive function" stems from the study of damage to the prefrontal cortex, which causes a series of neuropsychological deficits, such as difficulties in planning, concept formation, abstract thinking, decision making, cognitive flexibility, feedback utilization, chronological ordering of events, and monitoring of actions; the corresponding series of abilities is the original meaning of the term "executive function".
Some of the drawings to which this application relates are further described below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an evaluation system according to an embodiment of the present application.
As shown in fig. 1, the evaluation system includes: a processor 101, a display device 102, an input device 103; the processor 101 is connected to the input device 103 and the display device 102, respectively.
The display device 102 is used for displaying the test content after receiving the evaluation request. The test content is content for testing a tester and includes text and pictures. The display device 102 displays the test content to the tester as visual stimulation. In some embodiments, the test content is programmed with MATLAB and the Psychtoolbox (PTB).
In an alternative embodiment, the test content includes content for testing cognitive abilities. It should be noted that, in the research of the present application, it is found that people of different specialties have different advantages in cognitive ability, for example, people of mathematical specialties have stronger spatial ability, gymnastics athletes have stronger spatial imagination, painters have stronger object perception ability, and the like. The method comprises the steps of collecting test result data of a large number of students with different specialties on the same cognitive test content, establishing a speciality database, training an artificial intelligence classification model, applying the trained classification model to potential judgment of the specialties of general students, and providing objective indexes for selection of student speciality culture.
And the input device 103 is used for receiving the reaction result of the tester to the test content. The tester returns the reaction result through the input device 103, and the input device 103 detects and receives the reaction result returned by the tester. In some embodiments, the input device 103 is a keyboard and mouse. In other embodiments, the input device 103 is a key box with a time accuracy higher than a first threshold, which may be 1 millisecond, 4 milliseconds, 1 second, or other values. In an alternative implementation, the time precision of the input device 103 is milliseconds.
The processor 101 is used for extracting the characteristic attribute of the value of the test result, the value of the test result being determined by the correctness of the reaction result, and inputting the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
For example, the test content comprises eight sets of test questions. When the test content is the mental rotation test, the accuracy of the tester in the mental rotation test is calculated and used as the test score of the mental rotation test; when the test content is the visual search test, the accuracy of the tester in the visual search test is calculated, and the quotient of the reaction duration and the accuracy is taken as the tester's test score for the visual search test; the test scores of the remaining sets of test questions are obtained in the same way, and the test score of each set is recorded.
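A minimal sketch of this per-set scoring rule is given below; it is illustrative only, the data structures are assumptions, and the quotient-based score follows the definition given for the first aspect (reaction duration divided by accuracy).

```python
def accuracy_score(responses):
    """Accuracy-scored sets (e.g. mental rotation): fraction of correct responses."""
    return sum(1 for r in responses if r["correct"]) / len(responses)


def timed_score(responses, reaction_duration_s):
    """Speeded sets (e.g. visual search): reaction duration divided by accuracy,
    following the definition given in the first aspect; lower is better."""
    acc = accuracy_score(responses)
    return reaction_duration_s / acc if acc > 0 else float("inf")


# Hypothetical example: one tester's scores across two of the eight test sets.
test_scores = {
    "mental_rotation": accuracy_score([{"correct": True}, {"correct": False}]),
    "visual_search": timed_score([{"correct": True}, {"correct": True}], 12.4),
    # ... the remaining sets are scored in the same way
}
```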
When the algorithm used by the classifier is the k-nearest-neighbor classification algorithm, the test score of each set of questions is extracted as a characteristic attribute, the characteristic attributes form the item to be classified, and the classifier is a sample data set of experimenters whose real classification results are known. The item to be classified is input into the classifier; the Euclidean distance formula is used to calculate the distance from the item to be classified to each sample in the data set, the k samples closest to the item are selected, the number of samples belonging to each class among those k samples is counted, and the item to be classified is assigned to the class containing the largest number of samples. For example, assuming k equals 3, if 2 of the 3 points closest to the item to be classified belong to class A and 1 belongs to class B, the item to be classified belongs to class A.
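A compact sketch of this k-nearest-neighbor step follows; it is a generic illustration under the assumption that each sample is the vector of eight test scores paired with a known class label, not the application's actual code.

```python
import math
from collections import Counter


def knn_classify(item, samples, k=3):
    """item: the tester's vector of eight test scores.
    samples: list of (score_vector, known_class) pairs from the sample data set."""
    # Euclidean distance from the item to every sample in the data set
    distances = [(math.dist(item, vector), label) for vector, label in samples]
    # take the k closest samples and vote by majority class
    nearest = sorted(distances, key=lambda d: d[0])[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With k = 3, two of the three nearest samples in class A and one in class B give class A, matching the example above.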
If the algorithm used by the classifier is a Bayesian classification algorithm, values of the characteristic attributes are set, for example, a test score in the range 0-50% belongs to characteristic a, a test score in the range 50-80% belongs to characteristic b, and a test score in the range 80-90% belongs to characteristic c; the characteristic attributes of the test scores of the eight sets of test content are extracted and input into the classifier to obtain the category to which the characteristic attributes belong as the classification result. The classifier is obtained by performing classification training, with a Bayesian classification algorithm, on the test results of a certain number of experimenters whose real classification results are known: the characteristic attributes of the test result of each experimenter with a known real classification result are recorded as an item to be classified, the probability of each class given the item to be classified is computed, and the item is assigned to the class with the largest probability; in this way the classifier is obtained.
Referring to fig. 2, fig. 2 is a schematic structural diagram of another evaluation system according to an embodiment of the present disclosure. As shown in fig. 2, the evaluation system further comprises a timer 201, the display device 102 comprises a display 202 and a photometer 203.
The timer 201 is used for recording the reaction duration for the tester to complete the test content; determining the value of the test result from the correctness of the reaction result includes: the value of the test result is equal to the quotient of the reaction duration and the accuracy of the reaction result. In other embodiments, the timer 201 is a clock device, such as a clock chip. In an alternative implementation, the time precision of the timer 201 is higher than a second threshold, which may be 1 millisecond, 5 milliseconds, 3 seconds, or another value. For example, in some alternative embodiments, the timer uses a self-made response key box, linked to the host computer through a parallel port, together with programmed recording of the reaction time; compared with a keyboard key, the key box is more accurate and can reach millisecond precision. After receiving the visual stimulation from the display device 102, the subject inputs a feedback signal or selection result through operations such as key presses and clicks; meanwhile, the processor 101 instructs, through program control, the response key box to record the subject's reaction time, i.e. the duration of the response to the visual stimulus. The reaction result fed back by the subject and the reaction duration recorded by the timer 201 serve as the basis for evaluating the subject's cognitive ability.
Display 202 is used to display the test content, and in some embodiments, the refresh rate of display 202 is greater than or equal to the refresh rate threshold to ensure the time accuracy of display 202 displaying the test content, and thus the time accuracy of recording the test duration. In an alternative implementation, the refresh rate threshold is 144 hertz and the refresh rate of display 202 is greater than or equal to 144 hertz.
The processor 101 is further configured to control the display brightness of the test content through the photometer 203. The processor 101 controls the display brightness of the test content through the photometer 203, specifically: the processor 101 obtains the display brightness of the test content detected by the photometer 203, and the processor 101 adjusts the display brightness of the display 202 according to the display brightness of the test content detected by the photometer 203 until the display brightness of the test content detected by the photometer 203 is detected to be within the brightness range. The display brightness of the test content is adjusted through the photometer 203, the display brightness of the test content is ensured to be within a brightness range, the contrast between the test content and the screen background is ensured, and accurate visual stimulation is given to a tester.
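A schematic sketch of this closed-loop brightness adjustment is shown below; the display and photometer objects and their methods are hypothetical placeholders, not a real driver API.

```python
def calibrate_brightness(display, photometer, low, high, step=1.0, max_iters=200):
    """Adjust display brightness until the measured luminance of the test
    content falls within [low, high]. The display/photometer objects and
    their methods are hypothetical placeholders for the real hardware."""
    for _ in range(max_iters):
        measured = photometer.read_luminance()
        if low <= measured <= high:
            return measured                      # luminance is inside the target range
        delta = step if measured < low else -step
        display.set_brightness(display.get_brightness() + delta)
    raise RuntimeError("test-content luminance did not settle in the target range")
```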
In an alternative implementation, the test content displayed by display 202 includes at least one of content for testing memory, content for testing tonal recognition capabilities, content for testing perceptual capabilities of objective stimuli, content for testing mental rotation capabilities, content for testing executive capabilities, content for testing multi-object tracking capabilities, content for testing auditory memory capabilities, or content for testing observation capabilities. In some embodiments, the test content includes content for testing memory, content for testing tonal recognition capabilities, content for testing perceptual capabilities of objective stimuli, content for testing mental rotation capabilities, content for testing executive capabilities, content for testing multi-object tracking capabilities, content for testing auditory memory capabilities, or content for testing observation capabilities. In the embodiment, the tester is tested through a plurality of cognitive ability test contents, the cognitive ability of the tester is evaluated in a multi-dimensional mode, and the evaluation accuracy is improved.
Referring to fig. 3A-3H, fig. 3A-3H are schematic diagrams of visual stimuli of multiple capability tests according to an embodiment of the invention. The content of the plurality of capability tests comprises: content for testing memory, content for testing pitch discrimination ability, content for testing perception ability of objective stimulus, content for testing mental rotation ability, content for testing executive ability, content for testing multi-object tracking ability, content for testing auditory memory ability, and content for testing observation ability.
The first set of test content is used for testing the tone recognition capability and comprises first test content; the first test content comprises first prompt information, first test audio and second test audio, the second test audio comprising audio content whose tone is the same as or different from that of the first test audio; the first prompt information is used to prompt the tester to determine whether the tones of the first test audio and the second test audio are the same. The display device 102 displays a test guide to the subject and guides the subject to start the test by pressing a designated key (e.g., the space bar of a keyboard); after the key is pressed, the display device 102 displays the prompt information and the two audio segments shown in fig. 3A, the tester presses a key to listen to the audio, and the prompt information prompts the tester to judge whether the tones of the two audio segments are the same. Preferably, the content for testing tone recognition capability comprises a total of 10 trials, each presenting a different audio stimulus.
The second set of test content is used for testing the executive function and comprises second test content; the second test content includes second prompt information and at least two test pictures, and the second prompt information is used for prompting the tester about the operation type indicated by each of the at least two test pictures. In some alternative embodiments, the display device 102 displays a test guide to the subject and guides the subject to press a designated key (e.g., the space bar of the keyboard) to start the test; after the key is pressed, the display device 102 displays the prompt information and test content. As shown in fig. 3B, the processor controls the display device to display either a left arrow 301 or a right arrow 302 for a preset time period, or to display an up arrow 303 or a down arrow 304 immediately after a left arrow 301 or a right arrow 302 has been displayed within the preset time period. The prompt information tells the tester that when only a left arrow 301 or a right arrow 302 is displayed within the preset time period, the corresponding left or right key on the keyboard should be pressed in response, and that when an up arrow 303 or a down arrow 304 is displayed immediately after the left arrow 301 or right arrow 302, no key should be pressed. On the basis of testing the tester's executive function, this further tests the tester's reaction inhibition ability. The timer 201 records the subject's reaction time. Preferably, the above content for testing executive ability comprises 20 trials in total.
The third set of test content is content for testing memory, which comprises third test content; the third test content includes third prompt information, a first test picture and a second test picture, the second test picture includes a distinguishing graphic different from the first test picture, and the third prompt information is used for prompting the tester to select the distinguishing graphic from the second test picture. In some alternative embodiments, as shown in fig. 3C, 3, 4 or 5 irregular graphics 401 are displayed on a gray background screen and disappear after being shown for 1 s; after a 1 s interval, the same number of graphics 402 are shown again at the same positions on the screen, with one of the graphics changed in shape, and the tester needs to select the changed graphic with the mouse. Preferably, the above content for testing memory comprises 10 trials in total, each trial presenting a different stimulus picture.
The fourth set of test content is used for testing the perception capability of objective stimuli and comprises fourth test content; the fourth test content includes fourth prompt information and a third test picture, the third test picture includes a first region and a second region, the second region includes a third region and a fourth region, the third region surrounds the fourth region, the third region and the fourth region differ in color, the first region and the fourth region are the same in color, and the first region and the fourth region are the same or different in color brightness; the fourth prompt information is used for prompting the tester to judge whether the color brightness of the first region is the same as that of the fourth region. In some alternative embodiments, as shown in fig. 3D, a visual illusion test is used to test the perception of objective stimuli, and the tester needs to determine whether there is a difference in brightness between the circle on the left and the circle inside the black ring on the right. Preferably, the above content for testing the perception capability of objective stimuli comprises 20 trials in total, half with a brightness difference and half without.
The fifth set of test content is content for testing mental rotation ability and comprises fifth test content; the fifth test content includes fifth prompt information and a fourth test picture, the fourth test picture includes a main figure and at least two auxiliary figures, the at least two auxiliary figures include a rotated figure obtained by rotating the main figure, and the fifth prompt information is used for prompting the tester to select the rotated figure from the at least two auxiliary figures. In some embodiments, as shown in FIG. 3E, the tester judges which of the four options A-D is obtained from the leftmost original figure by rotation. Preferably, the mental rotation ability test comprises 10 trials, each presenting a different visual stimulus picture to the subject.
The sixth set of test content is content for testing the multi-object tracking capability, which comprises sixth prompt information and sixth test content; the sixth test content includes N patterns with the same shape and color, where N is an integer greater than 1; after the multi-object tracking test starts, K of the N patterns flash, and then the N patterns move randomly for a certain time period and stand still after the movement ends; the sixth prompt information is used for prompting the tester to select the K patterns from the N patterns having the same shape and color. In some embodiments, as shown in fig. 3F, the visual stimulus consists of 8 gray beads displayed on a black background (brightness 0 cd/cm2); 4 randomly chosen target beads flash 3 times within 1.2 seconds, then the 8 beads move randomly for 6 seconds and stand still once the movement ends, and the subject needs to select the 4 target beads, inputting the selection result through the input device 103, with feedback on the test result shown on the display device 102. Preferably, the speed of bead movement is 300 pixels/second or 600 pixels/second, the fixation point (the small dot at the center of the display area) is at the same horizontal level as the line connecting the centers of the subject's two eyes, and the viewing distance is 57 cm. Preferably, the above content for testing the multi-object tracking capability comprises 8 trials in total.
The seventh set of test content is content for testing auditory memory ability and comprises seventh prompt information and seventh test content; the seventh test content comprises M test tones, where M is an integer greater than 2; the seventh prompt information is used to prompt the tester to judge, for every 2 tones separated by one intervening tone, whether the tones are the same. In some embodiments, as shown in fig. 3G, auditory stimuli of different tones are presented to the tester in sequence, each lasting 0.5 s, and the tester needs to judge whether every 2 stimuli separated by one intervening stimulus are the same in tone. Preferably, the above content for testing auditory memory ability comprises 5 trials in total.
The eighth set of test content is content for testing observation capability and includes eighth prompt information and eighth test content; the eighth test content comprises two similar patterns containing at least one difference; the eighth prompt information is used for prompting the tester to click the left mouse button to select the at least one difference. As shown in fig. 3H, the visual stimulus consists of two similar target pictures that differ in only one place. The subject needs to find the difference between the two target pictures and input the reaction result through the input device 103, for example by clicking the corresponding position of the picture with the left mouse button, and then press a designated key to start the next trial. The timer 201 records the subject's reaction time. Preferably, the observation capability test comprises a total of 10 trials, each presenting a different stimulus picture.
Optionally, the visual stimuli in the cognitive ability tests are presented on backgrounds of different colors, for example, the ring in the object perception ability test is colored, the background in the mental rotation ability test is white, and the background in the multi-object tracking ability test is black, so as to highlight the visual stimulus picture and optimize the stimulation effect.
Referring to fig. 4, fig. 4 is a schematic view of an evaluation method according to an embodiment of the present application. The method can be applied to the above evaluation system and is executed by the processor. The method comprises the following steps: step 401, displaying the test content after receiving an evaluation request; step 402, receiving the reaction result of the tester to the test content; step 403, extracting the characteristic attributes of the values of the test results; step 404, inputting the characteristic attributes into the classifier for evaluation to obtain the category to which the characteristic attributes belong in the classifier.
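A minimal sketch of steps 401-404 as a single flow is given below; the device objects, classifier interface and extract_features function are illustrative placeholders rather than the application's actual API.

```python
def run_evaluation(display, input_device, classifier, test_contents, extract_features):
    """Sketch of steps 401-404. The device objects, classifier interface and
    extract_features function are illustrative placeholders."""
    reactions = []
    for content in test_contents:                       # step 401: display each test content
        display.show(content)
        reactions.append(input_device.read_reaction())  # step 402: receive the reaction result
    features = extract_features(reactions)              # step 403: characteristic attributes of the scores
    return classifier.classify(features)                # step 404: class to which the attributes belong
```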
Referring to fig. 5, in some embodiments, as shown in fig. 5, the evaluation system further includes a data analysis module comprising a raw database unit 501, a radar map database unit 502 and a classification training unit 503. The raw database unit 501 holds a database constructed from the key-press accuracy and key-press reaction time of each set of cognitive tests recorded by the reaction monitoring system, together with tester data such as specialty, sex, age and handedness. The radar map database unit 502 performs standardized analysis, through MATLAB, on the data in the raw database and forms a cognitive radar map for each tester; this database contains the data of all testers. The database can be stored on a local disk of the computer, exported to storage media such as a flash drive or optical disc, or stored in the cloud via the Internet, enabling remote monitoring and other applications. The classification training unit 503 performs classifier training on the radar maps in the radar map database to form an optimal classification model, and the classification model is used to classify new testers, thereby providing more objective, effective and instructive test results for the testers.
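The exact standardization used to build the radar maps is not specified; the sketch below assumes each test score is z-scored across testers (MATLAB is named in the text, but the same idea is shown here in Python for consistency with the other sketches).

```python
import statistics


def build_radar_maps(raw_db):
    """raw_db: {tester_id: {test_name: raw_score}}.
    Returns one standardized score vector (radar map) per tester.
    Z-scoring is an assumption; the text only says the raw data are 'standardized'."""
    tests = sorted({t for scores in raw_db.values() for t in scores})
    stats = {}
    for t in tests:
        values = [s[t] for s in raw_db.values() if t in s]
        stats[t] = (statistics.mean(values), statistics.pstdev(values) or 1.0)
    return {
        tester: {t: (scores[t] - stats[t][0]) / stats[t][1] for t in scores}
        for tester, scores in raw_db.items()
    }
```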
Referring to fig. 6, fig. 6 is a schematic view of an evaluation device according to an embodiment of the present application. As shown in fig. 6, the apparatus includes:
the display unit 601 is used for displaying the test content after receiving the expertise potential cognition evaluation request;
a receiving unit 602, configured to receive a reaction result of the tester for the test content;
a classification unit 603 configured to extract the characteristic attribute of the value of a test result, where the value of the test result is determined by the correctness of the reaction result, and to input the characteristic attribute into the classifier for evaluation to obtain the class to which the characteristic attribute belongs in the classifier as the classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
The following describes a specific processing flow for implementing data analysis by using the evaluation system provided by the present application, with reference to two different usage scenarios.
Use scenario one: a Bayesian classification algorithm is used to perform classification training on the test result data of experimenters whose artistic expertise is known to obtain a classifier, and the classifier is applied to the expertise potential classification evaluation of a tester whose artistic expertise is to be determined.
In this embodiment, the evaluation system needs to generate a classifier using a bayesian classification algorithm before classifying the test result of the tester. As shown in fig. 7A, step 701, collecting a test result of an experimenter with a known artistic expertise on the test content includes: the display device 102 displays the test content after receiving the evaluation request; the input device 103 receives the reaction result of the experimenter to the test content; the above test contents are the eight sets of capability test contents of fig. 3A-3H; in some optional test contents, the test result further includes a reaction duration of the tester to the test contents recorded by the timer 201. The evaluation system further comprises an original database unit 501, and the original database unit 501 is a database constructed by the data of the key reaction accuracy, the key reaction time, the specialty, the sex, the age, the right and the left handedness and the like of each set of cognitive tests recorded by the reaction monitoring system.
Step 702: extract the characteristic attributes of the values of the test results of the experimenters with known artistic expertise. For example, for the content for testing memory, tone recognition ability, perception of objective stimuli, mental rotation ability, multi-object tracking ability and auditory memory ability, the test result is the accuracy of the reaction result; for the content for testing executive ability or observation ability, the test result is the quotient of the reaction duration and the accuracy of the reaction result. Extracting the characteristic attribute of a test result includes setting value ranges for the characteristic attributes: for example, when the test result is the accuracy of the reaction result, an accuracy in the interval 0-50% corresponds to characteristic attribute a, 50-80% to b and 80-100% to c; when the test result is the quotient of the reaction duration and the accuracy, a quotient in the interval 0.6-0.75 corresponds to characteristic attribute c, 0.75-2.4 to b, and a quotient greater than 2.4 to a. The test results of the eight sets of questions are converted into characteristic attributes in this way, giving the characteristic attribute of each test result. That is, the evaluation system further includes a radar map database unit 502, which performs standardized analysis through MATLAB on the data in the original database, i.e. on the extracted characteristic attributes of the test results in this embodiment, and forms a cognitive radar map for each tester. A sketch of this binning step is given below.
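The following is a small sketch of the attribute binning just described; the label for quotients above 2.4 is inferred, and the field names are illustrative.

```python
def bin_accuracy(accuracy):
    """Accuracy-valued test result -> characteristic attribute a/b/c."""
    if accuracy < 0.5:
        return "a"
    if accuracy < 0.8:
        return "b"
    return "c"


def bin_quotient(quotient):
    """Duration/accuracy quotient -> characteristic attribute.
    The label for quotients above 2.4 is inferred to be 'a'; the text leaves it implicit."""
    if quotient <= 0.75:        # the description starts this bin at 0.6
        return "c"
    if quotient <= 2.4:
        return "b"
    return "a"


def extract_attributes(scores):
    """scores: list of (kind, value) for the eight test sets, kind in {'accuracy', 'quotient'}."""
    return [bin_accuracy(v) if kind == "accuracy" else bin_quotient(v) for kind, v in scores]
```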
Step 703, calculating the conditional probability of the feature attribute for each category, for example, assuming that the categories included in the experimenter include a category a, a category B, a category C, or a category D, and for the feature attribute of each test result, calculating the conditional probability of the occurrence of the category a, the category B, the category C, or the category D when the feature attribute occurs, where the algorithm used in the calculation is a bayesian algorithm. For example, let the experimenter be sample data, which includes characteristic attributes possessed by professional and test results of the experimenter, let characteristic attribute X ═ { k1, k2, k3, k4, k5, k6, k7, k8} be items to be classified, each k be one characteristic attribute of X, let characteristic attribute X appear as event X, let a, B, C or D appear as event M, and calculate, according to a bayesian algorithm, a conditional probability that event M appears in the case of event X is equal to a quotient of a product of the conditional probability that event X appears in the case of event M and the probability that event M appears, and the formula is as follows:
P(M/X) = P(X/M)·P(M)/P(X)
The probability P(X) of event X in the sample data is the quotient of the number of samples whose characteristic attributes are X and the total number of samples; the probability P(M) of event M in the sample data is the quotient of the number of samples whose specialty corresponds to event M and the total number of samples. For the conditional probability P(X/M) of event X given event M, the Bayesian algorithm assumes that the attributes of the individual dimensions of X are conditionally independent, and the specialty of each sample is known, so the conditional probability of event X given event M is the product of the conditional probabilities of the individual dimension attributes of X given event M:
P(X/M)=P(k1/M)·P(k2/M)·…·P(k8/M)
the conditional probability of occurrence of event M in the case of occurrence of event X is obtained as follows:
P(M/X) = P(k1/M)·P(k2/M)·…·P(k8/M)·P(M)/P(X)
As a further example, suppose the characteristic attributes of a sample item are x = {a, a, c, c, c, b, a, a}. The probability of each of category A, category B, category C, and category D occurring given x is calculated as follows:
P(A/x) = P(a/A)·P(a/A)·P(c/A)·P(c/A)·P(c/A)·P(b/A)·P(a/A)·P(a/A)·P(A)/P(x)
P(B/x), P(C/x), and P(D/x) are worked out in the same way.
Step 704, assigning the characteristic attributes to the category with the largest of the conditional probabilities, i.e. the largest of P(A/x), P(B/x), P(C/x), and P(D/x). Continuing the example, if the largest of these probabilities is P(B/x), the characteristic attribute set x is assigned to category B.
Step 705, obtaining the classifier, each category of which contains a plurality of characteristic attributes of different types. The classification training of steps 703-704 is performed on the characteristic attributes of all test results in the sample data to obtain the classifier; a sketch of these training steps is given below. That is, the classification training unit 503 performs classifier training on the radar maps in the radar map database to form an optimal classification model, and the classification model is used to classify new testers, so as to provide testers with more objective, effective and instructive test results.
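A minimal Python sketch of the training in steps 702-705 follows, assuming the sample data are given as lists of eight characteristic attributes paired with a known category; the function names are illustrative only, and no smoothing of zero counts is applied, unlike a production classifier.

from collections import Counter, defaultdict

def train_naive_bayes(samples):
    # samples: list of (attributes, category) pairs,
    # e.g. (['a', 'a', 'c', 'c', 'c', 'b', 'a', 'a'], 'A').
    total = len(samples)
    class_counts = Counter(category for _, category in samples)
    cond_counts = defaultdict(Counter)   # (category, position) -> attribute-value counts
    for attrs, category in samples:
        for i, value in enumerate(attrs):
            cond_counts[(category, i)][value] += 1

    def p_prior(category):               # P(M): frequency of the category in the sample data
        return class_counts[category] / total

    def p_cond(value, category, i):      # P(ki/M): frequency of the attribute value given the category
        return cond_counts[(category, i)][value] / class_counts[category]

    return p_prior, p_cond, list(class_counts)

def classify(attrs, p_prior, p_cond, categories):
    # Step 704: pick the category with the largest posterior probability.
    # P(x) is identical for every category, so it is dropped from the comparison.
    def posterior(category):
        score = p_prior(category)
        for i, value in enumerate(attrs):
            score *= p_cond(value, category, i)
        return score
    return max(categories, key=posterior)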
As shown in fig. 7B, the classifier obtained through the steps of fig. 7A is applied to classify the expertise potential of a tester whose artistic expertise is to be evaluated.
Step 706, collecting the test results of the tester on the test content, includes: the display device 102 displays the test content after receiving the evaluation request; the input device 103 receives the tester's reaction result to the test content; the test content is the eight sets of ability test content of fig. 3A-3H; for some optional test content, the test result further includes the reaction duration of the tester recorded by the timer 201.
Step 707, extracting the characteristic attributes of the values of the tester's test results: the extraction operation described in step 702 is performed on the test results to obtain their characteristic attributes.
Step 708, inputting the characteristic attributes into the classifier obtained by the above training to obtain the category to which the characteristic attributes belong in the classifier; this category is the classification result of the tester's artistic expertise, as in the usage sketch below.
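Continuing the sketch above, steps 706-708 reduce to extracting the tester's characteristic attributes and calling the trained classifier; the training data below are invented purely to make the example runnable and do not come from the embodiment.

# Hypothetical training data: one experimenter per category, for illustration only.
samples = [(['a', 'a', 'c', 'c', 'c', 'b', 'a', 'a'], 'A'),
           (['b', 'c', 'c', 'b', 'c', 'c', 'b', 'b'], 'B'),
           (['c', 'c', 'b', 'a', 'b', 'c', 'c', 'c'], 'C'),
           (['a', 'b', 'a', 'a', 'b', 'b', 'a', 'a'], 'D')]
p_prior, p_cond, categories = train_naive_bayes(samples)

tester_attributes = ['a', 'a', 'c', 'c', 'c', 'b', 'a', 'a']      # from steps 706-707
print(classify(tester_attributes, p_prior, p_cond, categories))   # -> 'A'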
This embodiment provides an artistic expertise potential evaluation system, which receives an evaluation request, displays test content, receives a tester's reaction results to the test content, extracts the characteristic attributes of the values of the test results (the value of a test result being determined from the accuracy of the reaction result), and inputs the characteristic attributes into a classifier for artistic expertise evaluation to obtain the category to which they belong in the classifier; that category is the tester's artistic expertise classification result. The classifier is obtained by classification training on the test results of at least two experimenters whose real artistic specialties are known. The data used for the classification training are the objectively existing characteristic attributes of the test results of experimenters whose true classifications are known, so the obtained classifier can objectively express the differences in characteristic attributes between experimenters of different categories on the same test content. The test results of the test content therefore objectively indicate which category a tester belongs to, and the tester's category is determined by the probabilities of the characteristic attributes of the tester's test results, thereby providing the tester with more objective, effective and instructive test results.
(II) Usage scenario II: classification evaluation of the expertise potential of a tester whose artistic expertise is to be evaluated, using a k-nearest-neighbor classification algorithm.
In this embodiment, the evaluation system generates a classifier with a k-nearest-neighbor classification algorithm before classifying the test results of a tester. As shown in fig. 8A, step 801, collecting the test results of experimenters with known artistic expertise on the test content, includes: the display device 102 displays the test content after receiving the evaluation request; the input device 103 receives each experimenter's reaction result to the test content; the test content is the eight sets of ability test content of fig. 3A-3H; for some optional test content, the test result further includes the reaction duration recorded by the timer 201. The evaluation system further comprises an original database unit 501, which stores, for each set of cognitive tests recorded by the reaction monitoring system, data such as key-press accuracy, key-press reaction time, specialty, sex, age, and left- or right-handedness.
Step 802, extracting the characteristic attributes of the values of the test results and dividing them into positive samples and negative samples, where the positive samples are placed in the classifier and the negative samples are used for training the classifier. The extracted characteristic attributes are, for example: for the content testing memory, pitch recognition ability, perception of objective stimuli, mental rotation ability, multi-object tracking ability, or auditory memory ability, the accuracy of the reaction result serves as the characteristic attribute; for the content testing execution ability or observation ability, the quotient of the reaction duration and the accuracy of the reaction result serves as the characteristic attribute. That is, the evaluation system further includes a radar map database unit 502, which performs a standardized analysis of the data in the original database through MATLAB (in this embodiment, the extraction of the characteristic attributes of the test results) and forms a cognitive radar map for each tester.
Step 803, calculating the Euclidean distance from each negative sample to every positive sample, counting the occurrence frequency of each category among the k positive samples nearest to the negative sample, and taking the category with the largest frequency as the predicted classification of the negative sample. Step 804, calculating the accuracy of the classifier's category predictions on the negative samples and checking whether it reaches 90%. Step 805, adjusting the value of the parameter k and repeating steps 803-804 until the accuracy of the classifier's category predictions on the negative samples is greater than or equal to 90%. Step 806, obtaining the k-nearest-neighbor classifier; a sketch of this procedure is given below.
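The following Python sketch shows one reading of steps 801-806, in which the negative samples act as a held-out set for choosing k; the function names, the candidate k values, and the handling of the 90% threshold are assumptions of this sketch.

import math
from collections import Counter

def euclidean(x, y):
    # Straight-line distance between two 8-dimensional characteristic-attribute vectors.
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def knn_predict(item, reference_samples, k):
    # reference_samples: list of (vector, category). Returns the most frequent
    # category among the k reference samples nearest to item (steps 803 and 810-812).
    nearest = sorted(reference_samples, key=lambda s: euclidean(item, s[0]))[:k]
    return Counter(category for _, category in nearest).most_common(1)[0][0]

def tune_k(positive_samples, negative_samples, candidate_ks, target=0.9):
    # Steps 804-805: keep adjusting k until the prediction accuracy on the
    # negative samples reaches the target (90% in this embodiment).
    for k in candidate_ks:
        correct = sum(knn_predict(vector, positive_samples, k) == label
                      for vector, label in negative_samples)
        if correct / len(negative_samples) >= target:
            return k
    return None   # no candidate k reached the target accuracy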
As shown in fig. 8B, step 807, collecting the test results of the tester on the test content, where the test content is the eight sets of ability test content of fig. 3A-3H; for some optional test content, the test result further includes the reaction duration of the tester recorded by the timer 201. Step 808, extracting the characteristic attributes of the values of the test results and recording them as the item to be classified. The extracted characteristic attributes are, for example: for the content testing memory, pitch recognition ability, perception of objective stimuli, mental rotation ability, multi-object tracking ability, or auditory memory ability, the accuracy of the reaction result serves as the characteristic attribute; for the content testing execution ability or observation ability, the quotient of the reaction duration and the accuracy of the reaction result serves as the characteristic attribute.
Step 809, inputting the item to be classified into the k-nearest-neighbor classifier. Step 810, in some optional implementations, calculating the distance between the item to be classified and each sample in the classifier; the distance may be computed with the Euclidean, Hamming, or Minkowski distance formula. This embodiment is further illustrated with the Euclidean distance formula. Suppose the characteristic attributes x = (x1, x2, x3, x4, x5, x6, x7, x8) form the item to be classified and the classifier contains a sample with characteristic attributes y = (y1, y2, y3, y4, y5, y6, y7, y8); the Euclidean distance between x and y is calculated as follows:
d(x, y) = √((x1-y1)^2 + (x2-y2)^2 + … + (x8-y8)^2)
Similarly, the distance between the item to be classified and each sample in the classifier is calculated, and the samples are sorted by the size of the distance value; ascending order is used in this embodiment. Step 811, counting the occurrence frequency of each category among the k samples nearest to the item to be classified, i.e. among the first k samples in the above ordering. Step 812, taking the category with the largest frequency as the classification result of the item to be classified; a usage example follows below.
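As a usage illustration of the sketch above, classifying a new item reduces to a single call to knn_predict; the feature vectors and the value of k below are invented for the example and are not data from the embodiment.

# Hypothetical positive samples: characteristic-attribute vectors with known categories.
positive_samples = [([0.90, 0.80, 0.70, 0.60, 0.90, 0.80, 1.10, 0.70], 'A'),
                    ([0.50, 0.40, 0.60, 0.50, 0.40, 0.50, 2.00, 1.80], 'B'),
                    ([0.80, 0.90, 0.50, 0.40, 0.80, 0.90, 0.90, 0.80], 'C')]

item_to_classify = [0.85, 0.80, 0.65, 0.55, 0.85, 0.80, 1.00, 0.75]   # from steps 807-808
print(knn_predict(item_to_classify, positive_samples, k=1))           # -> 'A'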
This embodiment provides an artistic expertise potential evaluation system, which receives an evaluation request, displays test content, receives a tester's reaction results to the test content, extracts the characteristic attributes of the values of the test results (the value of a test result being determined from the accuracy of the reaction result), and inputs the characteristic attributes into a classifier for artistic expertise evaluation to obtain the category to which they belong in the classifier; that category is the tester's artistic expertise classification result. The classifier is obtained by classification training on the test results of at least two experimenters whose real artistic specialties are known. The data used for the classification training are the characteristic attributes of the test results of experimenters whose true classifications are objectively known, so the obtained classifier can objectively determine a tester's category from the probabilities of the characteristic attributes of the tester's test results, thereby providing the tester with more objective, effective and instructive test results. That is, the classification training unit 503 performs classifier training on the radar maps in the radar map database to form an optimal classification model, and the classification model is used to classify new testers.
As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to a determination of …" or "in response to a detection of …", depending on the context. Similarly, depending on the context, the phrase "at the time of determination …" or "if (a stated condition or event) is detected" may be interpreted to mean "if the determination …" or "in response to the determination …" or "upon detection (a stated condition or event)" or "in response to detection (a stated condition or event)".
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.

Claims (10)

1. An evaluation system, comprising: a processor, a display device, an input device;
the display equipment is used for displaying the test content after receiving the evaluation request;
the input device is used for receiving the reaction result of the tester to the test content;
the processor is used for extracting the characteristic attribute of the value of the test result, and the value of the test result is determined by the accuracy of the reaction result; inputting the characteristic attribute into an evaluated classifier to obtain a class of the characteristic attribute in the classifier as a classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
2. The evaluation system according to claim 1, wherein the test contents include at least one of contents for testing memory, contents for testing pitch recognition ability, contents for testing perception ability of objective stimulus, contents for testing mental rotation ability, contents for testing execution ability, contents for testing multi-object tracking ability, contents for testing auditory memory ability, or contents for testing observation ability.
3. The evaluation system according to claim 1 or 2, further comprising:
the timer is used for recording the reaction duration of the tester for completing the test content;
the determination of the value of the test result from the correctness of the reaction result comprises: the value of the test result is equal to the quotient of the reaction duration and the accuracy of the reaction result.
4. The evaluation system of claim 3, wherein the classifier is obtained by performing classification training on the test results of at least two experimenters and comprises: the classifier is obtained by performing classification training on the test results of at least two experimenters by adopting a Bayesian classification algorithm or a k nearest neighbor classification algorithm.
5. The evaluation system according to claim 3, wherein the content for testing pitch recognition ability comprises first test content; the first test content comprises first prompt information, first test audio and second test audio, and the second test audio comprises audio content with the same tone as, or a different tone from, the first test audio; the first prompt information is used for prompting the tester to judge whether the tones of the first test audio and the second test audio are the same.
6. The evaluation system of claim 3, wherein the content for testing execution ability comprises second test content; the second test content comprises second prompt information and at least two test pictures, and the second prompt information is used for prompting the tester of the operation type indicated by each of the at least two test pictures.
7. An evaluation method, comprising:
displaying the test content after receiving the evaluation request;
receiving a reaction result of a tester to the test content;
extracting characteristic attributes of values of the test results, wherein the values of the test results are determined by the accuracy of the reaction results; inputting the characteristic attribute into an evaluated classifier to obtain a class of the characteristic attribute in the classifier as a classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
8. An evaluation device, comprising:
the display unit is used for displaying the test content after receiving the expertise potential cognition evaluation request;
the receiving unit is used for receiving the reaction result of the tester to the test content;
the classification unit is used for extracting the characteristic attribute of the value of the test result, and the value of the test result is determined by the accuracy of the reaction result; inputting the characteristic attribute into an evaluated classifier to obtain a class of the characteristic attribute in the classifier as a classification result of the tester; the classifier is obtained by performing classification training on the test results of at least two experimenters.
9. A terminal device, comprising: a memory, a processor, wherein the memory stores program instructions; the program instructions, when executed by the processor, cause the processor to perform the method of claim 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium; the method of claim 7 is performed when the computer program is run on one or more processors.
CN202011318301.0A 2020-11-20 2020-11-20 Evaluation system, evaluation method and related product Pending CN112396114A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011318301.0A CN112396114A (en) 2020-11-20 2020-11-20 Evaluation system, evaluation method and related product

Publications (1)

Publication Number Publication Date
CN112396114A true CN112396114A (en) 2021-02-23

Family

ID=74606813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011318301.0A Pending CN112396114A (en) 2020-11-20 2020-11-20 Evaluation system, evaluation method and related product

Country Status (1)

Country Link
CN (1) CN112396114A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020106617A1 (en) * 1996-03-27 2002-08-08 Techmicro, Inc. Application of multi-media technology to computer administered vocational personnel assessment
US20160239783A1 (en) * 2015-02-13 2016-08-18 Tata Consultancy Services Limited Method and system for employee assesment
CN106934410A (en) * 2015-12-30 2017-07-07 阿里巴巴集团控股有限公司 The sorting technique and system of data
CN109589122A (en) * 2018-12-18 2019-04-09 中国科学院深圳先进技术研究院 A kind of cognitive ability evaluation system and method
CN110974261A (en) * 2019-12-18 2020-04-10 中国科学院深圳先进技术研究院 Talent evaluation system, talent evaluation method and related products
CN111143517A (en) * 2019-12-30 2020-05-12 浙江阿尔法人力资源有限公司 Method, device, equipment and storage medium for predicting human-selected label
CN111428963A (en) * 2020-02-21 2020-07-17 贝壳技术有限公司 Data processing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116172560A (en) * 2023-04-20 2023-05-30 浙江强脑科技有限公司 Reaction speed evaluation method for reaction force training, terminal equipment and storage medium
CN116172560B (en) * 2023-04-20 2023-08-29 浙江强脑科技有限公司 Reaction speed evaluation method for reaction force training, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination