CN111990971B - Experiment and analysis method based on touch screen operation box vision space pairing learning - Google Patents


Info

Publication number
CN111990971B
CN111990971B
Authority
CN
China
Prior art keywords
experimental
correct
stage
stimulus
experimental animal
Prior art date
Legal status
Active
Application number
CN202010912439.7A
Other languages
Chinese (zh)
Other versions
CN111990971A (en)
Inventor
王玮文
张伟
井海洋
王杰思
Current Assignee
Institute of Psychology of CAS
Original Assignee
Institute of Psychology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Psychology of CAS filed Critical Institute of Psychology of CAS
Priority to CN202010912439.7A
Publication of CN111990971A
Application granted
Publication of CN111990971B
Legal status: Active
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/40: Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076: Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088: Diagnosing or monitoring cognitive diseases, e.g. Alzheimer's disease, prion diseases or dementia
    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00: Other apparatus for animal husbandry
    • A01K29/005: Monitoring or measuring activity, e.g. detecting heat or mating
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00: Evaluating a particular growth phase or type of persons or animals
    • A61B2503/40: Animals
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00: Evaluating a particular growth phase or type of persons or animals
    • A61B2503/42: Evaluating a particular growth phase or type of persons or animals for laboratory research
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Environmental Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Neurosurgery (AREA)
  • Physiology (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Developmental Disabilities (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Psychology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an experiment and analysis method based on visual-space pairing learning in a touch screen operation box. The method extracts the raw training-stage data generated during the touch screen operation box paired-associate learning experiment; sequentially extracts the graphic stimulus combination presented in each trial and the corresponding position touched by the experimental animal on the screen; presets the correct touch position and scores each trial as correct or incorrect; calculates the individual animal's total accuracy in the training stage, the accuracy of each single visual-space pairing, and the perseveration rate of each single visual-space pairing; determines which animals advance to the PAL test stage according to the single-pair visual-space pairing perseveration rates on the last day of the training stage; and extracts the raw data generated by the animals that advance to the test stage, calculating the individual's total accuracy and single-pair visual-space pairing accuracy in the test stage. The invention can discriminate differences in learning ability between groups of experimental animals within a shorter test period.

Description

Experiment and analysis method based on touch screen operation box vision space pairing learning
Technical Field
The invention belongs to the technical field of animal behavioural experiments, and particularly relates to an experiment and analysis method based on touch screen operation box vision space pairing learning.
Background
Paired-associate learning (PAL) is an associative memory task with cross-species similarity in the assessment of cognitive ability. Visual-space pairing learning is one mode of paired-associate learning; it measures learning and memory by having a human or animal master, over several stages, the relationship between an object and a position. The technique is widely applied and is often used to detect mild cognitive impairment in humans or rodents. Because water maze and eight-arm maze experiments can only detect and distinguish rodents with severe impairment of spatial cognitive function, and are greatly limited for assessing complex cognitive function, this technique is more often used to evaluate early symptoms such as mild cognitive impairment in humans or in rodents under diseases such as schizophrenia and Alzheimer's disease, and to evaluate the therapeutic potential of candidate drugs or the effect of new therapies in rodent models. The technique is therefore of great significance in clinical diagnosis and scientific research.
The touch screen operation box is a behavioural apparatus designed to evaluate the cognitive ability of rodents; cognitive ability is usually observed through performance on visual-space paired-associate learning within the box. The experimental process is divided into a training stage and a PAL test stage. The training stage is often divided into five to six progressive steps, through which the experimental animal learns all the behaviours required for a complete trial, including: responding to the trial-initiation signal; making a touch response to a stimulus image appearing on the screen; obtaining a food reward after a correct touch; and accepting a 5 s time-out penalty after an incorrect touch. In the subsequent PAL test stage, the experimenter designs three different graphic stimuli matched to corresponding positions, i.e. three correct visual-space pairings. For example, touching the screen is a correct response when a flower appears at the left position, an airplane at the middle position, or a spider at the right position; touching an image at the wrong position is an error. During the test stage, the animal's total daily learning accuracy is usually recorded as the assessment of its cognitive performance.
For example, Andrew J. Roebuck et al. found that in a schizophrenia mouse model constructed with MK-801, both accuracy and completion time were significantly reduced during the PAL test stage compared with the control group. The same team also reported that under acute restraint stress, LE male rats were significantly more efficient and accurate in the PAL test than controls, whereas this effect did not occur in an experimental group injected with cortisol alone, which may relate to differences in processing the catecholamines released in the amygdala. Impaired cholinergic neurotransmission is closely related to aging and Alzheimer's disease; Carola Romberg et al. found that in mice deficient in M2-type muscarinic receptors, object-position association learning was impaired during the PAL test stage, with accuracy significantly lower than in controls, consistent with the results in VAChT-knockout mice associated with Alzheimer's disease. This experimental technique can therefore distinguish animals with normal cognitive ability from impaired animals in scientific research.
Although this experimental method is currently used in scientific research as a means of evaluating visual-space paired-associate learning ability, it suffers from a complex behavioural paradigm, a long detection time, and the limited discriminative validity of a single evaluation index. When the conventional method is used to observe and distinguish the cognitive abilities of experimental animals in different treatment groups, the problems include non-uniform procedural standards, a long modelling period, poor operability, and insufficient evaluation indices. Specifically: first, because the detection procedure is complex and lacks a unified standard, differences in how thoroughly animals learn the task rules during training affect the results of the test stage, making it difficult to evaluate paired-associate learning ability accurately. For example, the correct-response rate of some animals at the start of the test shows a "floor effect", the groups are poorly separated, and the results are strongly affected by error. Second, total learning accuracy as the sole evaluation index cannot reflect the learning process, the establishment of problem-solving strategies, or the learning strategies of animals in a dynamic, complex visual-space pairing task, so the reliability and validity of the method are insufficient, finer analysis is difficult, and the animals' learning ability cannot be evaluated accurately. These are the defects of the existing experimental method.
Disclosure of Invention
In the conventional paired-associate learning experiment, the cognitive ability of the experimental animal is judged by analysing the total learning accuracy; this approach suffers from a lack of uniform standards in the experimental procedure, low validity, and an inability to probe learning strategies and their effectiveness. To solve these problems, the invention provides more effective experimental parameters and a new experimental analysis method; accordingly, the invention provides an experiment and analysis method based on touch screen operation box visual-space pairing learning.
The specific embodiment is as follows:
An experiment and analysis method based on touch screen operation box visual-space pairing learning extracts the raw training-stage data generated during the touch screen operation box paired-associate learning experiment; sequentially extracts the graphic stimulus combination presented in each trial and the corresponding position touched by the experimental animal; presets the correct touch position and automatically scores each trial as correct or incorrect; calculates the individual animal's total accuracy in the training stage, the accuracy of each single visual-space pairing, and the perseveration rate of each single visual-space pairing; determines which animals advance to the PAL test stage according to the single-pair visual-space pairing perseveration rates on the last day of the training stage; and extracts the raw data generated by the animals that advance to the test stage, calculating the individual's total accuracy and single-pair visual-space pairing accuracy in the test stage to evaluate the animal's cognitive ability.
Using the single-pair visual-space pairing perseveration rates obtained by each animal on the last day of the training stage, animals whose perseveration rates are below 40% advance to the PAL test stage, and animals that do not meet this requirement are excluded.
In the training stage, each of the three touch screens in the operation box corresponds to one preset correct graphic stimulus. In each trial, two randomly chosen screens show the same stimulus, one of them at its preset correct position, and the third screen remains blank. When the experimental animal touches the graphic stimulus at the correct position, the trial is scored as correct and the next trial randomly presents a combination with a different graphic or position; when the animal touches the graphic stimulus at the wrong position or the blank screen, the trial is scored as an error and the same graphic-and-position combination is presented again on the next trial, until the animal touches the graphic stimulus at the correct position. The single-pair visual-space pairing perseveration rate is calculated by the following formula:
single-pair visual-space pairing perseveration rate = (number of correction trials for that pairing / number of uncorrected trials in which that pairing appeared) × 100%
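A minimal sketch of the perseveration-rate computation, assuming each trial record is a tuple (pairing, is_correction, correct), where is_correction marks the forced repeats that follow an error in the training stage; the record layout and pairing names are illustrative, not the instrument's actual format.

```python
def perseveration_rate(trials, pairing):
    """Correction trials for one visual-space pairing divided by its
    uncorrected presentations, expressed as a percentage."""
    corrections = sum(1 for p, c, _ in trials if p == pairing and c)
    uncorrected = sum(1 for p, c, _ in trials if p == pairing and not c)
    return 100.0 * corrections / uncorrected if uncorrected else 0.0

log = [
    ("flower-left",  False, True),
    ("plane-middle", False, False),
    ("plane-middle", True,  False),  # correction trial: same graphic and position
    ("plane-middle", True,  True),
    ("flower-left",  False, True),
]
print(perseveration_rate(log, "plane-middle"))  # 2 corrections / 1 uncorrected -> 200.0
print(perseveration_rate(log, "flower-left"))   # no corrections -> 0.0
```

A low rate indicates that few errors had to be corrected, i.e. the animal has largely mastered the pairing.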
The training stage is set to 14 days; each daily session lasts at most 1 hour and stops once 100 trials are completed within the hour. The accuracy of each single visual-space pairing for an individual animal in the training stage is:
training-stage single-pair visual-space pairing accuracy = (number of correct trials for that pairing / total number of trials in which that pairing appeared) × 100%
An experimental animal that completes the training stage enters the PAL test stage. Under program control, in each trial the same graphic stimulus appears on two randomly chosen screens of the three in the touch screen operation box while the remaining screen is blank. Whether or not the individual animal touches the graphic stimulus correctly, the next trial is selected at random from the remaining graphic-stimulus combination types. The individual animal's total accuracy and single-pair visual-space pairing accuracy in the test stage are calculated by the following formulas:
test-stage total accuracy = (number of correct trials / total number of trials) × 100%
test-stage single-pair visual-space pairing accuracy = (number of correct trials for that pairing / total number of trials in which that pairing appeared) × 100%
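A sketch of the two test-stage accuracy formulas, assuming each trial record is a tuple (pairing, correct); the pairing names are the illustrative examples from the description.

```python
def total_accuracy(trials):
    """Test-stage total accuracy: correct trials over all trials."""
    return 100.0 * sum(ok for _, ok in trials) / len(trials)

def pair_accuracy(trials, pairing):
    """Accuracy for one visual-space pairing only."""
    hits = [ok for p, ok in trials if p == pairing]
    return 100.0 * sum(hits) / len(hits) if hits else 0.0

log = [("flower-left", True), ("plane-middle", False),
       ("spider-right", True), ("flower-left", True)]
print(total_accuracy(log))                # 3 of 4 correct -> 75.0
print(pair_accuracy(log, "flower-left"))  # 2 of 2 correct -> 100.0
```

Computing the per-pairing accuracies alongside the total is what allows each sub-item of the task to be tracked separately.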
In the training stage and the test stage, the graphic stimuli used and the correct positions corresponding to them are identical.
Further preferably, the method also comprises parametric analysis of the individual animal's uncorrected accuracy in the training stage, the uncorrected-trial accuracy of each single visual-space pairing in the training stage, and the per-screen accuracy in the training stage, as well as analysis of the per-screen accuracy in the test stage.
The technical scheme of the invention has the following advantages:
A. By clearly defining the criterion for advancing an experimental animal from the training stage to the PAL test stage, the single-pair visual-space pairing perseveration rates obtained in the training stage provide a reliable basis for the transition from training to testing, and at the same time enhance the reliability and stability of the experiment;
B. The experimental analysis method provided by the invention analyses the animals' total accuracy and single-pair visual-space pairing accuracy across both the training stage and the test stage, so that the learning process of each sub-item within the overall task can be evaluated efficiently and in fine detail. This avoids the floor effect of the initial test stage while enhancing the validity and sensitivity of the experiment for detecting animal cognitive ability, and to a great extent allows differences in the learning ability of different experimental animals to be judged accurately within a short experimental period.
C. The analysis data of the invention can provide firm data support for experimenters to further explore the learning strategies of experimental animals in different treatment groups.
Drawings
In order to more clearly illustrate the embodiments of the present invention, the drawings that are required for the embodiments will be briefly described, and it will be apparent that the drawings in the following description are some embodiments of the present invention and that other drawings may be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an experimental and analytical method provided by the present invention;
FIG. 2 shows the perseveration rates of the three visual-space pairings for experimental animal individual X provided by the invention;
FIG. 3 shows the overall accuracy of the individual Y PAL test of the experimental animal provided by the invention;
FIG. 4 shows the accuracy of paired combinations of three pairs of visual spaces of experimental animal individuals Y provided by the invention;
FIG. 5 shows the daily comparison of PAL test total accuracy between the two groups of experimental animals (*P < 0.05, compared with the treated group of experimental animals);
FIG. 6 is a comparison of PAL test total accuracy per blocked stage for the two groups of experimental animals (*P < 0.05, compared with the treated group of experimental animals);
Note: each block is a 5-day average.
FIG. 7 is a comparison of the means of PAL test total accuracy for the two groups of experimental animals (P < 0.001);
FIG. 8 shows, for each blocked stage of the two groups of experimental animals, the mean accuracy of the best single visual-space pairing and the proportion of animals reaching criterion (*P < 0.05, **P < 0.01, ***P < 0.001, compared with the control group of experimental animals);
Note: the left axis corresponds to the histogram, i.e. the mean accuracy of the best single visual-space pairing; the right axis corresponds to the line graph, i.e. the proportion of experimental animals reaching criterion. The criterion is that the accuracy of the best single visual-space pairing reaches 80% in each stage.
FIG. 9 shows, for each blocked stage of the two groups of experimental animals, the mean combined accuracy of the best two visual-space pairings and the proportion of animals reaching criterion (*P < 0.05, compared with the control group of experimental animals);
Note: the criterion is that the combined accuracy of the best two visual-space pairings reaches 70% in each stage.
Unless otherwise stated, comparisons between single-day data and continuous multi-day data use a three-day moving average: each value of the continuous multi-day data is a three-day mean, so 32 days of data yield 30 consecutive values. The best and second-best visual-space pairings are selected by comparing 30-day averages.
Detailed Description
The following description of the embodiments of the present invention will be made apparent and fully in view of the accompanying drawings, in which some, but not all embodiments of the invention are shown. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in FIG. 1, the invention provides an experiment and analysis method based on touch screen operation box visual-space pairing learning, which extracts the raw training-stage data generated during the touch screen operation box paired-associate learning experiment; sequentially extracts the graphic stimulus combination presented in each trial and the corresponding position touched by the experimental animal; automatically scores each trial as correct or incorrect against the preset correct touch position; calculates the individual animal's total accuracy in the training stage, the accuracy of each single visual-space pairing, and the perseveration rate of each single visual-space pairing; determines which animals advance to the PAL test stage according to the single-pair visual-space pairing perseveration rates on the last day of the training stage; and extracts the raw data generated by the animals that advance to the test stage, calculating the individual's total accuracy and single-pair visual-space pairing accuracy in the test stage to evaluate the animal's cognitive ability.
The raw training-stage data generated during the touch screen operation box paired-associate learning experiment can be extracted from the Raw data files produced by the touch screen operation box, beyond the original total accuracy and other built-in measures. A calculation program written for the format and content of the existing Raw data can sequentially extract the graphic stimulus combination of each trial (the graphic stimulus types and the positions at which they appear) and the position the animal touched, automatically score each trial against the preset correct positions, and finally calculate the accuracy and perseveration rate of each of the three visual-space pairings.
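A minimal sketch of such a calculation program. The column names, the CSV layout, and the stimulus-to-position mapping (flower-left, plane-middle, spider-right, the example from the description) are assumptions for illustration; the actual Raw data format of the instrument software may differ.

```python
import csv
import io

# Assumed mapping of each graphic stimulus to its preset correct position.
CORRECT_POS = {"flower": "left", "plane": "middle", "spider": "right"}

def score_trials(raw_csv):
    """Return a list of (pairing, correct) tuples, scoring each trial
    against the preset correct positions."""
    out = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        stim = row["stimulus"]
        correct = row["touched_pos"] == CORRECT_POS[stim]
        out.append((f"{stim}-{CORRECT_POS[stim]}", correct))
    return out

raw = (
    "trial,stimulus,touched_pos\n"
    "1,flower,left\n"
    "2,plane,right\n"
    "3,spider,right\n"
)
print(score_trials(raw))
# [('flower-left', True), ('plane-middle', False), ('spider-right', True)]
```

The scored tuples then feed directly into the accuracy and perseveration-rate formulas.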
The visual-space pairing learning analysis method comprises the following experimental parameters for the training and test stages: the individual animal's total accuracy in the training stage, the individual animal's single-pair visual-space pairing accuracy in the training stage, the individual animal's total accuracy in the test stage, and the individual animal's single-pair visual-space pairing accuracy in the test stage.
The invention also provides the following computable analysis parameters: the individual animal's uncorrected accuracy in the training stage, the individual animal's uncorrected-trial accuracy for each single visual-space pairing in the training stage, the accuracy at each position in the training stage, and the accuracy at each position in the test stage.
training-stage uncorrected accuracy = (number of correct uncorrected trials / total number of uncorrected trials) × 100%
training-stage single-pair uncorrected-trial accuracy = (number of correct uncorrected trials for that pairing / number of uncorrected trials in which that pairing appeared) × 100%
training-stage per-position accuracy = (number of correct trials at that position / number of trials in which that position was correct) × 100%
test-stage per-position accuracy = (number of correct trials at that position / number of trials in which that position was correct) × 100%
The above experimental data can be extracted from the raw operating data of the touch screen operation box instrument. Comparisons between single-day data and continuous multi-day data use a three-day moving average: each value of the continuous multi-day data is a three-day mean, so 32 days of data yield 30 consecutive values.
The experimental procedure in the training phase is as follows:
In the touch screen operation box, each trial begins when the experimental animal explores the food trough, after which the trough light is turned off and graphic stimuli appear at the three positions (left, middle, right) on the screen. Of the two graphic stimuli displayed on the screen, one is at the correct position (S+) and the other at the wrong position (S-). When the animal touches the graphic stimulus at the correct position, the stimulus disappears and the animal obtains a reward; in contrast, if the animal touches the graphic stimulus at the wrong position, the stimulus disappears, the house light of the box comes on for 5 seconds, and no reward is given. Unlike the conventional procedure, in this training stage each of the three touch screens corresponds to one preset correct graphic stimulus, i.e. there are three correct visual-space pairings. In each trial, the same stimulus appears on two randomly chosen screens, one of which is its preset correct position, while the third screen is blank; the correct position is always among the screens showing the stimulus. This stage therefore includes six different graphic-stimulus combination types. When the animal touches the stimulus at the correct position, the trial is scored as correct and a combination with a different graphic or position appears at random on the next trial; when the animal touches the stimulus at the wrong position or the blank screen, the trial is scored as an error and the same graphic-and-position combination appears again on the next trial (a correction trial), until the animal touches the correct graphic stimulus.
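The training-stage scheduling above can be sketched as follows; the pairing set and the function names are illustrative, not the instrument's control software.

```python
import random

# Assumed stimulus-to-correct-position mapping (the example pairings).
PAIRS = {"flower": "left", "plane": "middle", "spider": "right"}
POSITIONS = ["left", "middle", "right"]

def next_trial(rng, prev=None, prev_correct=True):
    """Return (stimulus, S+ position, S- position). After an error the
    identical combination repeats as a correction trial; otherwise a new
    combination is drawn at random (3 stimuli x 2 wrong positions = 6 types).
    The third screen, not listed here, stays blank."""
    if prev is not None and not prev_correct:
        return prev  # correction trial: same graphic and positions
    stim = rng.choice(sorted(PAIRS))
    wrong = rng.choice([p for p in POSITIONS if p != PAIRS[stim]])
    return (stim, PAIRS[stim], wrong)

rng = random.Random(0)
t1 = next_trial(rng)
t2 = next_trial(rng, prev=t1, prev_correct=False)
print(t1 == t2)  # True: an error forces a correction trial
```

Seeding the generator, as here, makes a simulated session reproducible for testing the analysis code.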
The training stage is preferably set to 14 days, with a maximum of 1 hour of training per day, stopping once 100 trials are reached within the hour, so that the experimental animal can master the rule associating stimulus with position. Other numbers of days, daily training durations and trial counts than those of the present invention may of course be used. In this daily training task, the experiment records:
training-stage total accuracy = (number of correct trials / total number of trials) × 100%
training-stage single-pair visual-space pairing accuracy = (number of correct trials for that pairing / total number of trials in which that pairing appeared) × 100%
training-stage single-pair visual-space pairing perseveration rate = (number of correction trials for that pairing / number of uncorrected trials in which that pairing appeared) × 100%
In the formula for the individual animal's perseveration rate in the training stage, if a given visual-space pairing first appears on the 100th trial and the animal responds incorrectly, that appearance is not counted in the number of uncorrected trials for that pairing.
The single-pair visual-space pairing perseveration rate in the above formula reflects whether the individual animal has effectively mastered the pairing rule of the training stage; the purpose of this parameter is to improve experimental stability.
According to the invention, an experimental animal whose single-pair visual-space pairing perseveration rates on the last day of the training stage are below 40% is considered to have mastered the rules of the paired-learning paradigm and enters the test stage.
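A sketch of this advancement criterion, assuming every one of the animal's three single-pair perseveration rates must fall below the threshold on the last training day; the animal IDs and rate values are illustrative.

```python
def animals_advancing(last_day_rates, threshold=40.0):
    """Return the IDs of animals whose worst (highest) single-pair
    perseveration rate on the final training day is below threshold."""
    return sorted(a for a, rates in last_day_rates.items()
                  if max(rates.values()) < threshold)

rates = {
    "X1": {"flower-left": 12.0, "plane-middle": 30.5, "spider-right": 8.0},
    "X2": {"flower-left": 55.0, "plane-middle": 20.0, "spider-right": 10.0},
}
print(animals_advancing(rates))  # ['X1']  (X2 exceeds 40% on one pairing)
```

Animals not returned by the filter would be excluded, as the description requires.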
After training is completed and the criterion is met, the PAL test stage begins. Besides the total accuracy, the invention adopts a new analysis method and evaluation indices to further perform a dynamic, refined analysis of the score changes for each pairing and their correlations. Specifically, the same graphic stimulus appears at two randomly chosen target positions while the third screen is blank, and the correct position is always among the screens showing the stimulus. The graphic stimulus types and their corresponding correct positions are identical to those of the preceding training stage. Whether or not the animal touches the graphic stimulus correctly, the next trial is selected at random from the remaining five graphic-stimulus combination types, and this process is repeated until 100 trials are completed. The test stage lasts 32 days, and in this daily PAL test task the experiment records:
test-stage total accuracy = (number of correct trials / total number of trials) × 100%
test-stage single-pair visual-space pairing accuracy = (number of correct trials for that pairing / total number of trials in which that pairing appeared) × 100%
the total accuracy of experimental animal individuals in the test stage and the accuracy of pairing combination of single pair of visual spaces of experimental animal individuals in the test stage can be analyzed to judge the learning effect of experimental animals in different groups, so that the problem that the existing experimental analysis method is poor in distinguishing effectiveness and always presents a floor effect in a long time is solved.
According to the invention, by analysing the learning results of each single visual-space pairing, the differences between groups in the accuracy of the best visual-space pairing and in the number of animals reaching criterion are observed; the same is done for the combined accuracy of the best two visual-space pairings, so as to observe the learning and cognitive ability of each group of experimental animals.
The invention used 6 experimental animals comprising a control group (Control group) and an experimental group (Treated group). After 14 days of training, animals whose single-pair visual-space pairing perseveration rates on the last day were below 40% were advanced to the PAL test stage, i.e. they had mastered and understood the learning paradigm, as shown in FIG. 2.
As shown in FIG. 3, the total PAL test accuracy of experimental animal individual Y over the 32-day test was obtained; in the PAL test stage, the accuracy of each of the three visual-space pairings was likewise extracted from the Raw data.
FIG. 4 shows the accuracy of each of the three visual-space pairings for experimental animal individual Y; this measure reflects learning ability far better than the total test accuracy does. As shown in FIG. 5, over the 32-day PAL test, the total accuracy of the two groups differed significantly and consistently after day 28. In addition, when every 5 days were averaged into one stage, a significant difference appears at stage 6 in FIG. 6, and the comparison of the 30-day means of total accuracy for the two groups gives P < 0.001, as shown in FIG. 7.
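The two averaging schemes used in this analysis can be sketched as follows: non-overlapping 5-day blocks for the stage-wise comparison, and a 3-day moving average that turns 32 daily values into 30 consecutive values; the sample daily accuracies are illustrative.

```python
def block_means(daily, size=5):
    """Average consecutive days into non-overlapping blocks (size=5
    gives the stage-wise values compared across groups)."""
    return [sum(daily[i:i + size]) / size
            for i in range(0, len(daily) - size + 1, size)]

def moving_average(daily, window=3):
    """Three-day moving average: n daily values yield n - 2 values."""
    return [sum(daily[i:i + window]) / window
            for i in range(len(daily) - window + 1)]

days = [40, 42, 44, 50, 54, 55, 57, 60, 61, 62]  # illustrative daily accuracies (%)
print(block_means(days))          # [46.0, 59.0]
print(len(moving_average(days)))  # 8 values from 10 days
```

With 32 daily values, moving_average returns the 30 consecutive values mentioned in the description.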
In the newly extracted experimental parameters shown in fig. 8, the accuracy of the best single visual-spatial pairing differs significantly between the control group and the experimental group in every stage, and the number of criterion-reaching animals is far higher in the control group than in the experimental group (an accuracy of 80% on the best pairing is the criterion); as shown in fig. 9, for the best two visual-spatial pairings the control group shows a significantly higher accuracy than the experimental group from stage 4, and more control-group animals reach criterion.
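The criterion count used above can be sketched as follows; the 80% threshold is taken from the text, while the animal identifiers and accuracy values are hypothetical.

```python
def n_reaching_criterion(best_pairing_accuracy, threshold=0.80):
    """Count animals whose best visual-spatial pairing accuracy reached
    the 80% criterion; input maps animal id -> best-pairing accuracy."""
    return sum(acc >= threshold for acc in best_pairing_accuracy.values())

# Hypothetical per-animal best-pairing accuracies for one group
group = {"animal1": 0.85, "animal2": 0.72, "animal3": 0.91, "animal4": 0.79}
n_passed = n_reaching_criterion(group)  # 2 animals reach criterion
```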
The experimental results show that, within the same time, the control group mastered more visual-spatial pairings than the experimental group and learned more efficiently. Compared with existing measures, the present analysis can resolve the learning difference between the two groups within a shorter period and with high experimental stability, so the model's power to discriminate learning ability is far better than that of existing experimental analysis methods.
From this experimental analysis it follows that the cognitive ability of the control-group animals is significantly higher than that of the experimental group.
According to the invention, by setting a criterion for advancing from the training stage to the test stage in the original touch screen operation box procedure, the experimental animals are made to master, by the end of training, the basic rules required for the subsequent pairing learning, so that their learning ability and learning outcome can be evaluated more objectively in the test stage. The invention thus provides comprehensive and accurate experimental data and analysis methods for evaluating paired-associates learning in the touch screen operation box, and is of great significance for methods of detecting complex cognitive ability in experimental animals.
It is apparent that the above examples are given by way of illustration only and do not limit the embodiments. Other variations or modifications based on the above teaching will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom fall within the scope of the present invention.

Claims (4)

1. An experiment and analysis method based on touch screen operation box visual space pairing learning, characterized in that raw experimental data generated in the training stage of the touch screen operation box paired-associates learning experiment are extracted; the different pattern-stimulus combinations generated in each trial and the corresponding screen positions touched by the experimental animal are extracted in turn; the correct touch position is preset, and whether each trial is correct is counted automatically; the total accuracy of each experimental animal in the training stage, the accuracy of each single visual-spatial pairing, and the correction rate of each single visual-spatial pairing are calculated; the experimental animals to be advanced to the PAL test stage are determined from the single visual-spatial pairing correction rate on the last day of the training stage; the raw experimental data of the animals advanced to the test stage are extracted, the total accuracy of each experimental animal in the test stage and the accuracy of each single visual-spatial pairing are calculated, and the cognitive ability of the experimental animals is evaluated;
in the training stage, each of the three touch screens arranged in the touch screen operation box corresponds to one preset correct graphic stimulus; in each trial the same stimulus appears on two randomly chosen screens, one of which is the preset correct position, while the third screen remains blank; when the experimental animal touches the graphic stimulus at the correct position, the trial is counted as correct and the next trial randomly presents a stimulus combination differing in pattern or position from the current trial; when the experimental animal touches the graphic stimulus at the wrong position or the blank screen, the trial is counted as wrong and the same pattern-and-position combination is presented again in the next trial until the animal touches the graphic stimulus at the correct position, and the correction rate of a single visual-spatial pairing is calculated by the following formula:
correction rate of a single visual-spatial pairing = (number of correction trials of that pairing / total number of trials of that pairing) × 100%
the training stage is set to 14 days, the daily training time is less than 1 hour, with 100 trials to be completed within that hour, and the accuracy of a single visual-spatial pairing for an experimental animal in the training stage is:
accuracy of a single visual-spatial pairing = (number of correct trials of that pairing / total number of trials of that pairing) × 100%
the experimental animals that complete the training stage enter the PAL test stage; under program control, in each trial the same graphic stimulus appears on two randomly chosen screens of the three screens in the touch screen operation box while the remaining screen is blank; whether or not the experimental animal touches the graphic stimulus correctly, the next trial is drawn randomly from the remaining stimulus-combination types; the total accuracy of an experimental animal in the test stage and the accuracy of each single visual-spatial pairing are calculated by the following formulas:
total accuracy in the test stage = (number of correct trials / total number of trials) × 100%
accuracy of a single visual-spatial pairing in the test stage = (number of correct trials of that pairing / total number of trials of that pairing) × 100%
2. The experiment and analysis method based on touch screen operation box vision space pairing learning according to claim 1, wherein, based on the single visual-spatial pairing correction rate obtained for each experimental animal on the last day of the training stage, the animals whose correction rate is below 40% are advanced to the PAL test stage, and the experimental animals that do not meet this requirement are eliminated.
3. The method according to claim 1, wherein the graphic stimuli used and their corresponding correct positions are identical in the training stage and the test stage.
4. The experiment and analysis method based on touch screen operation box vision space pairing learning according to claim 1, further comprising analysis of the accuracy excluding correction trials, the non-correction-trial accuracy of each single visual-spatial pairing, and the single-screen accuracy of each experimental animal in the training stage, as well as analysis of the single-screen accuracy in the test stage.
CN202010912439.7A 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning Active CN111990971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010912439.7A CN111990971B (en) 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning


Publications (2)

Publication Number Publication Date
CN111990971A CN111990971A (en) 2020-11-27
CN111990971B true CN111990971B (en) 2023-07-07

Family

ID=73465228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010912439.7A Active CN111990971B (en) 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning

Country Status (1)

Country Link
CN (1) CN111990971B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103125406A (en) * 2013-03-19 2013-06-05 郑州大学 Visual cognitive behavioral learning automatic training system of big and small mice
CN104616231A (en) * 2013-11-04 2015-05-13 中国科学院心理研究所 Cloud-based psychological laboratory system and using method thereof
CN106614383A (en) * 2017-02-27 2017-05-10 中国科学院昆明动物研究所 Training method and device for correcting screen contact way of macaque
WO2018112103A1 (en) * 2016-12-13 2018-06-21 Akili Interactive Labs, Inc. Platform for identification of biomarkers using navigation tasks and treatments using navigation tasks
CN109566447A (en) * 2018-12-07 2019-04-05 中国人民解放军军事科学院军事医学研究院 The research system of non-human primate movement and cognitive function based on touch screen
CN110199902A (en) * 2019-07-07 2019-09-06 江苏赛昂斯生物科技有限公司 Toy touch screen conditioned behavior control box

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10524670B2 (en) * 2014-09-02 2020-01-07 Apple Inc. Accurate calorimetry for intermittent exercises
US10334823B2 (en) * 2016-01-31 2019-07-02 Margaret Jeannette Foster Functional communication lexigram device and training method for animal and human




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant