CN111990971A - Experiment and analysis method based on touch screen operation box visual space pairing learning

Experiment and analysis method based on touch screen operation box visual space pairing learning

Info

Publication number
CN111990971A
Authority
CN
China
Prior art keywords
visual space
experiment
experimental animal
experimental
learning
Prior art date
Legal status
Granted
Application number
CN202010912439.7A
Other languages
Chinese (zh)
Other versions
CN111990971B (en)
Inventor
王玮文
张伟
井海洋
王杰思
Current Assignee
Institute of Psychology of CAS
Original Assignee
Institute of Psychology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Psychology of CAS filed Critical Institute of Psychology of CAS
Priority to CN202010912439.7A priority Critical patent/CN111990971B/en
Publication of CN111990971A publication Critical patent/CN111990971A/en
Application granted granted Critical
Publication of CN111990971B publication Critical patent/CN111990971B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088 Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K29/00 Other apparatus for animal husbandry
    • A01K29/005 Monitoring or measuring activity, e.g. detecting heat or mating
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/40 Animals
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2503/00 Evaluating a particular growth phase or type of persons or animals
    • A61B2503/42 Evaluating a particular growth phase or type of persons or animals for laboratory research
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry


Abstract

The invention discloses an experiment and analysis method based on touch screen operation box visual space pairing learning. Raw training-phase data generated during a touch-screen operation box paired-associate learning experiment are extracted; the graphical stimulus combination presented in each trial and the screen position touched by the experimental animal are extracted in sequence; the correct touch position is preset and each trial is scored automatically; the individual animal's training-phase total accuracy, per-pairing accuracy, and per-pairing perseveration rate are then computed. Animals are advanced to the PAL testing phase according to the per-pairing perseveration rate on the last training day. Raw data generated by the advanced animals are extracted, and the individual testing-phase total accuracy and per-pairing accuracy are computed. The invention can distinguish differences in learning ability between groups of experimental animals within a shorter testing period.

Description

Experiment and analysis method based on touch screen operation box visual space pairing learning
Technical Field
The invention belongs to the technical field of animal behavioural experiments, and particularly relates to an experiment and analysis method based on touch screen operation box visual space pairing learning.
Background
Paired-associate learning (PAL) is an associative memory task with cross-species validity in cognitive testing. Visual-spatial paired-associate learning is one form of this task: it probes learning and memory by requiring a human or animal to acquire and retain, over several stages, the association between an object and a position. The technique has a wide range of applications and is often used to detect mild cognitive impairment in humans or rodents. Because water-maze and eight-arm-maze experiments detect only rodents with severe spatial cognitive impairment and are of limited use for probing complex cognitive function, this technique is more commonly used to evaluate early symptoms such as mild cognitive impairment in humans or rodents under conditions such as schizophrenia and Alzheimer's disease, and to evaluate the therapeutic potential of candidate drugs or the efficacy of new therapies in rodent models. The technique is therefore highly significant for both clinical diagnosis and scientific research.
The touch screen operation box is a behavioral apparatus designed to evaluate rodent cognition; cognitive ability is typically assessed through performance on visual-spatial paired-associate learning conducted inside the box. The experiment is divided into a training phase and a PAL testing phase. The training phase usually comprises 5-6 staged steps through which the animal learns all the behaviors required for a complete test trial, including: responding to the trial start signal; touching the stimulus image that appears on the screen; collecting a food reward after a correct touch; and receiving a 5 s timeout after an incorrect touch. In the subsequent PAL testing phase, the experimenter assigns three different graphical stimuli to three corresponding positions, i.e. three correct visual-spatial pairings. For example, touching the screen is a correct response when a flower appears in the left position, an airplane in the middle position, or a spider in the right position; touching an image in the wrong position is an error. Total daily learning accuracy is usually recorded during the testing period as the measure of cognitive ability. For example, Andrew J. Roebuck et al. found that in an MK-801-induced mouse model of schizophrenia, both accuracy and completion time in the PAL testing phase were significantly reduced compared with controls; the same group also reported that the efficiency and accuracy of LE male rats in PAL testing under acute restraint stress were significantly higher than in controls, whereas this effect was absent in animals injected with cortisol alone, possibly reflecting differential handling of catecholamines released in the amygdala. Impairment of cholinergic neurotransmission is closely linked to aging and Alzheimer's disease: Carola Romberg et al. found that M2 muscarinic receptor-deficient mice showed impaired object-location associative learning in the PAL testing phase, with significantly lower accuracy than controls, consistent with findings in VAChT-knockout mice associated with Alzheimer's disease. The experimental technique can therefore distinguish animals with normal cognition from animals with impaired cognition in scientific research.
Although this method is currently used in research to evaluate visual-spatial paired-associate learning ability, it suffers from a relatively complex behavioral paradigm, a long detection time course, and the limited discriminative power of a single evaluation index. When existing protocols are used to observe and distinguish the cognitive abilities of differently treated groups of experimental animals, problems arise including non-uniform protocol standards, long modeling periods, poor operability, and insufficient evaluation indices. Specifically: first, because the procedure is complex and lacks a unified standard, differences in how thoroughly animals have learned the task rules before the testing phase may confound the test results, making it difficult to evaluate paired-associate learning ability accurately. For example, the correct-response rate of some animals shows a floor effect early in testing, the groups are poorly separated, and the results are strongly affected by error. Second, total learning accuracy as the sole evaluation index cannot reflect the animal's learning process in this dynamic and complex visual-spatial pairing task, nor its establishment of problem-solving strategies or its learning strategy; the method's reliability and validity are therefore insufficient, finer-grained analysis is difficult, and the animal's learning ability cannot be evaluated accurately. These are the shortcomings of the existing experimental method.
Disclosure of Invention
The invention aims to solve the problems of the existing analysis method, which judges the cognitive ability of an experimental animal from total learning accuracy alone in the paired-associate learning experiment: the experimental protocol lacks a unified standard, validity is low, and learning strategies and their effectiveness cannot be explored. The invention provides more effective experimental parameters and a new analysis method, namely an experiment and analysis method based on touch screen operation box visual space pairing learning.
The specific embodiment is as follows:
An experiment and analysis method based on touch screen operation box visual space pairing learning: raw training-phase data generated during the touch-screen operation box paired-associate learning experiment are extracted; the graphical stimulus combination presented in each trial and the screen position touched by the animal are extracted in sequence; the correct touch position is preset and each trial is scored automatically; the individual animal's training-phase total accuracy, per-pairing accuracy, and per-pairing perseveration rate are computed; animals advanced to the PAL testing phase are determined according to the per-pairing perseveration rate on the last training day; raw data generated by the advanced animals are extracted, the individual testing-phase total accuracy and per-pairing accuracy are computed, and the animal's cognitive ability is evaluated.
Experimental animals whose per-pairing perseveration rates obtained on the last day of the training phase are below 40% are advanced to the PAL testing phase; individuals that do not meet this requirement are excluded.
In the training phase, each of the three touch screens in the operation box is assigned a preset correct graphical stimulus. In every trial the same stimulus appears on two randomly chosen screens, one of which is that stimulus's preset correct position, while the remaining screen is blank. When the animal touches the stimulus at the correct position, the trial is scored correct and the next trial randomly presents a stimulus combination differing in graphic or position; when the animal touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the same graphic-position combination is presented again in the next trial until the animal touches the stimulus at the correct position. The per-pairing perseveration rate (correction-trial rate) is calculated by the following formula:
[Formula image: per-pairing visual-spatial pairing perseveration rate]
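The formula survives in the original only as an image placeholder. A plausible reconstruction from the surrounding description (correction trials repeat a failed pairing until a correct touch; a lower rate indicates better mastery), for pairing $i$ on a given day, would be the following; the exact expression in the patent image may differ:

$$\mathrm{PerseverationRate}_i=\frac{N_i^{\mathrm{correction}}}{N_i^{\mathrm{correction}}+N_i^{\mathrm{non\text{-}correction}}}\times 100\%$$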
The training period is set to 14 days, with daily sessions of at most 1 hour that end early once 100 trials have been completed. The individual animal's per-pairing accuracy during the training period is:
[Formula image: training-phase per-pairing visual-spatial pairing accuracy]
Animals that pass training enter the PAL testing phase. In each trial the program presents the same graphical stimulus on two randomly chosen screens of the three and leaves one screen blank; regardless of whether the individual animal touches the stimulus correctly, the next trial is drawn at random from the remaining stimulus combination types. The individual animal's testing-phase total accuracy and per-pairing accuracy are calculated by the following formulas:
[Formula image: testing-phase individual total accuracy]
[Formula image: testing-phase per-pairing visual-spatial pairing accuracy]
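These accuracy formulas are likewise images in the original. Reconstructions consistent with the surrounding text, where accuracy is computed over the trials of each pairing and the total pools all pairings, might read as follows (an assumption, not the patent's verbatim formulas):

$$\mathrm{PairAccuracy}_i=\frac{\text{correct trials of pairing } i}{\text{total trials of pairing } i}\times 100\%,\qquad
\mathrm{TotalAccuracy}=\frac{\sum_i \text{correct trials of pairing } i}{\sum_i \text{total trials of pairing } i}\times 100\%$$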
In the training phase and the testing phase, the graphical stimuli and their assigned correct positions are identical.
Further preferably, the method also analyzes the individual animal's training-phase non-correction-trial accuracy, per-pairing non-correction-trial accuracy, and single-screen accuracy, as well as the testing-phase single-screen accuracy.
The technical scheme of the invention has the following advantages:
A. By fixing an evaluation standard for advancing animals from the training phase to the PAL testing phase, the per-pairing perseveration rate obtained in the training phase provides a reliable basis for the transition from training to testing, while strengthening the reliability and stability of the experiment;
B. By combining the training and testing phases and analyzing both the animals' total accuracy and their per-pairing accuracy, the proposed analysis method evaluates the learning process of each component of the overall task efficiently and in fine detail. It avoids the floor effect in the initial phase while increasing the validity and sensitivity of the experiment for detecting animal cognition, and to a large extent allows differences in learning ability between groups of experimental animals to be judged accurately within a shorter experimental period.
C. The analysis data of the invention provide solid data support for experimenters to further explore the learning strategies of differently treated groups of experimental animals.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. The drawings described below represent some embodiments of the invention; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a block diagram of the experiment and analysis method provided by the invention;
FIG. 2 shows the perseveration rates of the three visual-spatial pairings for experimental animal individual X;
FIG. 3 shows the PAL-test total accuracy of experimental animal individual Y;
FIG. 4 shows the accuracies of the three visual-spatial pairings for experimental animal individual Y;
FIG. 5 shows the day-by-day comparison of the PAL-test total accuracy of the two groups of experimental animals (*P < 0.05, compared with the treated group);
FIG. 6 shows the per-stage comparison of the PAL-test total accuracy of the two groups after blocking (*P < 0.05, compared with the treated group);
Note: each block is a 5-day average.
FIG. 7 is a comparison of the 30-day mean PAL-test total accuracy of the two groups of experimental animals (P < 0.001);
FIG. 8 is a comparison of the mean accuracy of the optimal single visual-spatial pairing in each block stage, together with the proportion of animals reaching criterion, between the two groups (*P < 0.05, **P < 0.01, ***P < 0.001, compared with the control group);
Note: the left axis (histogram) gives the mean accuracy of the optimal single visual-spatial pairing; the right axis (line graph) gives the proportion of experimental animals reaching criterion. An animal is counted when the accuracy of its optimal single pairing reaches 80% in the given stage.
FIG. 9 is a comparison of the mean accuracy of the optimal two visual-spatial pairings in each block stage, together with the proportion of animals reaching criterion, between the two groups (*P < 0.05, compared with the control group);
Note: an animal is counted when the accuracy of its optimal two pairings reaches 70% in the given stage.
Unless otherwise specified, the single-day and consecutive multi-day data above are compared using three-day moving averages; consecutive multi-day data are averaged day by day, so the 32 days of data yield 30 consecutive values. The optimal and suboptimal visual-spatial pairings are selected by comparing 30-day averages.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, the invention provides an experiment and analysis method based on touch screen operation box visual space pairing learning. Raw training-phase data generated during the touch-screen operation box paired-associate learning experiment are extracted; the graphical stimulus combination presented in each trial and the screen position touched by the animal are extracted in sequence; each trial is scored automatically against the preset correct touch position; the individual animal's training-phase total accuracy, per-pairing accuracy, and per-pairing perseveration rate are computed; animals advanced to the PAL testing phase are determined according to the per-pairing perseveration rate on the last training day; raw data generated by the advanced animals are extracted, the individual testing-phase total accuracy and per-pairing accuracy are computed, and the animal's cognitive ability is evaluated.
The raw training-phase data can be obtained from the raw data files produced by the touch-screen experiment box. Building on raw quantities such as the original total accuracy, a dedicated program is written around the format and content of the existing raw data; it extracts in sequence the graphical stimulus combination of each trial (the stimulus types and their positions) and the screen position touched by the animal, scores each trial automatically against the preset correct position, and finally computes the accuracy and perseveration rate of each of the three visual-spatial pairings.
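The patent discloses no source code; the following is a hypothetical sketch of the analysis just described, computing total accuracy, per-pairing accuracy, and per-pairing perseveration rate from per-trial records. All field names (`stimulus`, `is_correction`, `correct`) are assumptions for illustration, not the actual raw-data format of the touch-screen chamber software.

```python
from collections import defaultdict

def analyze_training_day(trials):
    """Compute daily metrics from one session's trial records.

    Each trial is a dict with assumed keys:
      'stimulus'      - which graphical stimulus was shown (e.g. 'flower')
      'is_correction' - True if this is a repeat (correction) trial
      'correct'       - True if the animal touched the stimulus at its
                        preset correct position
    """
    per_pair = defaultdict(lambda: {"correct": 0, "total": 0, "corrections": 0})
    for t in trials:
        stats = per_pair[t["stimulus"]]
        if t["is_correction"]:
            stats["corrections"] += 1   # repeats of a failed pairing
            continue                    # correction trials do not count
                                        # toward accuracy
        stats["total"] += 1
        stats["correct"] += t["correct"]

    # per-pairing accuracy over non-correction trials
    pair_accuracy = {s: 100.0 * v["correct"] / v["total"]
                     for s, v in per_pair.items() if v["total"]}
    # assumed definition: share of all presentations of a pairing that
    # were correction trials (lower = better mastery of the rule);
    # the patent's edge case (a pairing first appearing as the session's
    # 100th trial and answered incorrectly) is omitted for brevity
    perseveration = {s: 100.0 * v["corrections"] / (v["corrections"] + v["total"])
                     for s, v in per_pair.items() if v["corrections"] + v["total"]}
    total_correct = sum(v["correct"] for v in per_pair.values())
    total_trials = sum(v["total"] for v in per_pair.values())
    total_accuracy = 100.0 * total_correct / total_trials if total_trials else 0.0
    return total_accuracy, pair_accuracy, perseveration
```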
The visual-spatial pairing learning analysis method uses the following experimental parameters in the training and testing phases: training-phase individual total accuracy, training-phase individual per-pairing accuracy, training-phase individual per-pairing perseveration rate, testing-phase individual total accuracy, and testing-phase individual per-pairing accuracy.
The invention also provides the following additional analyzable parameters: the individual animal's training-phase non-correction-trial accuracy, training-phase per-pairing non-correction-trial accuracy, training-phase single-screen accuracy, and testing-phase single-screen accuracy.
[Formula image: training-phase non-correction-trial accuracy]
[Formula image: training-phase per-pairing non-correction-trial accuracy]
[Formula image: training-phase single-screen accuracy]
[Formula image: testing-phase single-screen accuracy]
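These four supplementary parameters also survive only as image placeholders; based on their names, plausible forms would be the following (assumed reconstructions, not the patent's verbatim formulas):

$$\mathrm{NonCorrectionAccuracy}=\frac{\text{correct non-correction trials}}{\text{total non-correction trials}}\times 100\%,\qquad
\mathrm{SingleScreenAccuracy}_p=\frac{\text{correct trials at screen position } p}{\text{total trials at screen position } p}\times 100\%$$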
These experimental data can be extracted from the raw data of the touch-screen operation box instrument. Single-day and consecutive multi-day data are compared using three-day moving averages: consecutive multi-day data are averaged day by day, so the 32 days of data yield 30 consecutive values.
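As a concrete illustration of this smoothing (a hypothetical helper, not code from the patent): a 3-day moving average over 32 daily values yields 30 values.

```python
def three_day_moving_average(daily_values):
    """3-day moving average; 32 daily values yield 30 smoothed values."""
    return [sum(daily_values[i:i + 3]) / 3.0
            for i in range(len(daily_values) - 2)]

assert len(three_day_moving_average(list(range(32)))) == 30
```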
The experimental procedure in the training phase was as follows:
In the touch-screen box, each trial begins with the animal nose-poking the food magazine, after which the magazine light is extinguished and graphical stimuli appear at random among the three screen positions (left, center, right). Of the two graphical stimuli displayed on the screen, one is at the correct position (S+) and the other at an incorrect position (S-). When the animal touches the stimulus at the correct position, the stimulus disappears and the animal is rewarded; conversely, if it touches the stimulus at the incorrect position, the stimulus disappears, the box light comes on for 5 seconds, and no reward is given. Unlike the conventional procedure, in the training phase each of the three touch screens in the operation box is assigned its own correct graphical stimulus, giving three correct visual-spatial pairings. In each trial the same stimulus appears on two randomly chosen screens, one of them being that stimulus's correct position, while the third screen remains blank; since the correct-position screen is always included, this phase comprises six distinct stimulus combination types. When the animal touches the stimulus at the correct position, the trial is scored correct and the next trial randomly presents a combination differing in graphic or position; when the animal touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the same graphic-position combination is presented again in the next trial (correction trial) until the animal touches the correct stimulus. The training period is preferably 14 days, with at most 1 hour of training per day, stopping once 100 trials are completed within the hour, so that the animal masters the rule associating stimulus with position. Other numbers of days, training durations, and trial counts may of course be used. During this daily training task, the experiment records:
[Formula image: training-phase individual total accuracy]
[Formula image: training-phase individual per-pairing accuracy]
[Formula image: training-phase individual per-pairing perseveration rate]
In calculating the individual training-phase perseveration rate, if a visual-spatial pairing first appears at the 100th trial and the animal responds incorrectly, that trial is not counted among the pairing's non-correction trials.
The training-phase per-pairing perseveration rate reflects whether the individual animal has effectively mastered the pairing-learning rule of the training phase; the purpose of this parameter is to improve the stability of the experiment.
According to the invention, animals whose per-pairing perseveration rates on the last day of the training phase are below 40%, i.e. animals that have mastered the rules of the pairing-learning paradigm, enter the testing phase.
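A minimal sketch of this advancement rule, assuming the criterion applies to each pairing and that `perseveration` maps each pairing to its last-day rate in percent (names illustrative):

```python
def advances_to_pal_test(perseveration, threshold=40.0):
    """Advance only if every pairing's last-day perseveration rate is
    below the 40% criterion; otherwise the animal is excluded."""
    return all(rate < threshold for rate in perseveration.values())
```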
After training is completed and the criterion is met, the PAL testing phase begins. Beyond total accuracy, the invention applies a new analysis method and new evaluation indices to analyze dynamically and in fine detail how the learning outcome of each visual-spatial pairing evolves and how the pairings interrelate. Specifically, the same graphical stimulus appears at random in two of the three target positions, with one screen blank and the stimulus present at its correct position. The stimulus types and their correct positions are identical to those of the final training stage. Regardless of whether the animal touches the stimulus correctly, the next trial is drawn at random from the other five stimulus combination types, and this repeats until 100 trials are completed. The testing phase lasts 32 days, and during the PAL testing task the experiment records:
[Formula image: testing-phase individual total accuracy]
[Formula image: testing-phase individual per-pairing accuracy]
Analyzing the individual animal's testing-phase total accuracy together with its testing-phase per-pairing accuracy makes it possible to judge the learning outcomes of different groups, overcoming the poor discriminative power and the prolonged floor effect of the existing analysis method.
By analyzing the learning outcome of each single visual-spatial pairing, the invention compares the accuracy of each group's optimal pairing and the number of animals reaching criterion; it likewise compares the accuracy and criterion counts of each group's optimal two pairings, so as to observe the learning and cognitive abilities of each group.
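A hedged sketch of this selection and counting, assuming each animal is represented by a dict mapping pairing name to its 30-day mean accuracy (the data layout and the names are assumptions):

```python
def best_pairings(mean_accuracy_by_pair, k=1):
    """Return the k pairings with the highest 30-day mean accuracy."""
    ranked = sorted(mean_accuracy_by_pair.items(),
                    key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

def count_reaching_criterion(animals, criterion=80.0, k=1):
    """Count animals whose best k pairings all meet the accuracy
    criterion (80% for the best single pairing, 70% for the best two)."""
    n = 0
    for mean_accuracy_by_pair in animals:
        best = best_pairings(mean_accuracy_by_pair, k)
        if best and all(acc >= criterion for _, acc in best):
            n += 1
    return n
```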
The invention used six experimental animals comprising a control group (Control) and a treated group (Treated). After the 14-day training phase, animals whose last-day per-pairing perseveration rates were below 40%, i.e. animals that had mastered and understood the learning paradigm, were advanced to the PAL testing phase, as shown in FIG. 2.
As shown in FIG. 3, in the PAL testing phase the accuracy of each of the three visual-spatial pairings was extracted from the raw data, and the PAL-test total accuracy of experimental animal individual Y was tracked over the 32 days of testing.
FIG. 4 shows the per-pairing accuracies of the three visual-spatial pairings for individual Y; as an index of learning ability, this measure is far more discriminative than the total test accuracy. As shown in FIG. 5, in the 32-day PAL test the total accuracy of the two groups differed significantly and consistently after day 28. In addition, when 5-day spans were averaged into blocks, FIG. 6 shows that the significant difference emerged at block 6; the 30-day mean total accuracies of the two groups are compared in FIG. 7 (P < 0.001).
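The 5-day blocking can be sketched in the same style (illustrative only; `daily_accuracy` is an assumed list of daily total accuracies):

```python
def five_day_blocks(daily_accuracy, block=5):
    """Average daily accuracies into consecutive 5-day blocks;
    e.g. 30 test days give 6 block values."""
    return [sum(daily_accuracy[i:i + block]) / block
            for i in range(0, len(daily_accuracy) - block + 1, block)]
```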
From the data extracted with the new experimental parameters, FIG. 8 shows that the accuracy of the optimal single visual-spatial pairing differed significantly between the control and treated groups at every block stage, and that far more control animals reached criterion (defined as the optimal pairing reaching 80% accuracy) than treated animals. As shown in FIG. 9, for learning of the optimal two pairings, the control group's accuracy differed significantly from the treated group's at block 4, and more control animals reached criterion.
These results show that within the same time the control group produced more animals mastering the visual-spatial pairings than the treated group and learned more efficiently. Compared with existing data, the experimental method can resolve learning differences between two groups within a shorter period, the experiment is stable, and the paradigm's discriminative power as an index of learning ability is far better than that of the existing analysis method.
From this analysis it follows that the cognitive ability of the control group of experimental animals was significantly higher than that of the treated group.
The invention sets a training criterion for the original touch-screen operation box at the step from the training phase to the testing phase, so that the animal has mastered, by the end of training, the basic rules required for the subsequent pairing learning, allowing its learning ability and learning outcome to be evaluated more objectively in the testing phase. Finally, through analysis of experimental parameters such as the training-phase individual total accuracy, training-phase individual per-pairing accuracy, training-phase individual per-pairing perseveration rate, testing-phase individual total accuracy, and testing-phase individual per-pairing accuracy, the invention provides comprehensive and accurate experimental data and an analysis method for evaluating paired-associate learning ability in the touch-screen operation box, which is of great significance for methods of detecting complex cognitive abilities in experimental animals.
It should be understood that the above examples are given only for clarity of illustration and do not limit the embodiments. Other variations and modifications will be apparent to those skilled in the art from the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (7)

1. An experiment and analysis method based on touch screen operation box visual space pairing learning, characterized in that raw training-phase data generated during the touch-screen operation box paired-associate learning experiment are extracted; the graphical stimulus combination presented in each trial and the screen position touched by the experimental animal are extracted in sequence; the correct touch position is preset and each trial is scored automatically; the individual animal's training-phase total accuracy, per-pairing accuracy, and per-pairing perseveration rate are computed; animals advanced to the PAL testing phase are determined according to the per-pairing perseveration rate on the last training day; raw data generated by the advanced animals are extracted, the individual testing-phase total accuracy and per-pairing accuracy are computed, and the animal's cognitive ability is evaluated.
2. The experiment and analysis method based on touch screen operation box visual space pairing learning of claim 1, characterized in that animals whose per-pairing perseveration rate obtained on the last day of the training phase is below 40% are advanced to the PAL testing phase, and individuals not meeting this requirement are excluded.
3. The touch screen operation box visual space pairing learning-based experiment and analysis method of claim 1, characterized in that, in the training phase, each of the three touch screens in the operation box is assigned a preset correct graphical stimulus; in every trial the same stimulus appears on two randomly chosen screens, one of which is the preset correct position, while the third screen is blank; when the animal touches the stimulus at the correct position, the trial is scored correct and the next trial randomly presents a stimulus combination differing in graphic or position; when the animal touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the same graphic-position combination is presented again in the next trial until the animal touches the stimulus at the correct position; and the per-pairing perseveration rate is calculated by the following formula:
[Formula image: per-pairing visual-spatial pairing perseveration rate]
4. The touch screen operation box visual space pairing learning-based experiment and analysis method of claim 3, characterized in that the training period is set to 14 days, with daily sessions of at most 1 hour that stop once 100 trials are completed within the hour, wherein the individual animal's per-pairing accuracy during the training period is:
[Formula image: training-phase per-pairing visual-spatial pairing accuracy]
5. The touch screen operation box visual space pairing learning-based experiment and analysis method of claim 3, characterized in that animals passing training enter the PAL testing phase; in each trial the program presents the same graphical stimulus on two randomly chosen screens of the three and leaves one screen blank; regardless of whether the individual animal touches the stimulus correctly, the next trial is drawn at random from the other stimulus combination types; and the individual animal's testing-phase total accuracy and per-pairing accuracy are calculated by the following formulas:
[Formula image: testing-phase individual total accuracy]
[Formula image: testing-phase individual per-pairing accuracy]
6. The touch screen operation box visual space pairing learning-based experiment and analysis method of claim 5, characterized in that in the training phase and the testing phase the graphical stimuli and their assigned correct positions are identical.
7. The touch screen operation box visual space pairing learning-based experiment and analysis method of claim 1, characterized in that the method further comprises analysis of the individual animal's training-phase non-correction-trial accuracy, per-pairing non-correction-trial accuracy, and single-screen accuracy, and of the testing-phase single-screen accuracy.
CN202010912439.7A 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning Active CN111990971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010912439.7A CN111990971B (en) 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010912439.7A CN111990971B (en) 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning

Publications (2)

Publication Number Publication Date
CN111990971A (en) 2020-11-27
CN111990971B (en) 2023-07-07

Family

ID=73465228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010912439.7A Active CN111990971B (en) 2020-09-02 2020-09-02 Experiment and analysis method based on touch screen operation box vision space pairing learning

Country Status (1)

Country Link
CN (1) CN111990971B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103125406A (en) * 2013-03-19 2013-06-05 郑州大学 Visual cognitive behavioral learning automatic training system of big and small mice
CN104616231A (en) * 2013-11-04 2015-05-13 中国科学院心理研究所 Cloud-based psychological laboratory system and using method thereof
US20160058370A1 (en) * 2014-09-02 2016-03-03 Apple, Inc. Accurate calorimetry for intermittent exercises
CN106614383A (en) * 2017-02-27 2017-05-10 中国科学院昆明动物研究所 Training method and device for correcting screen contact way of macaque
US20180132453A1 (en) * 2016-01-31 2018-05-17 Margaret Jeannette Foster Functional Communication Lexigram Device and Training Method for Animal and Human
WO2018112103A1 (en) * 2016-12-13 2018-06-21 Akili Interactive Labs, Inc. Platform for identification of biomarkers using navigation tasks and treatments using navigation tasks
CN109566447A (en) * 2018-12-07 2019-04-05 中国人民解放军军事科学院军事医学研究院 The research system of non-human primate movement and cognitive function based on touch screen
CN110199902A (en) * 2019-07-07 2019-09-06 江苏赛昂斯生物科技有限公司 Toy touch screen conditioned behavior control box


Also Published As

Publication number Publication date
CN111990971B (en) 2023-07-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant