CN116172560A - Reaction speed evaluation method for reaction force training, terminal equipment and storage medium - Google Patents
- Publication number
- CN116172560A CN116172560A CN202310428570.XA CN202310428570A CN116172560A CN 116172560 A CN116172560 A CN 116172560A CN 202310428570 A CN202310428570 A CN 202310428570A CN 116172560 A CN116172560 A CN 116172560A
- Authority
- CN
- China
- Prior art keywords
- training
- reaction
- information
- feature
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/162—Testing reaction times
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4848—Monitoring or testing the effects of treatment, e.g. of medication
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/742—Details of notification to user or communication with user or patient ; user input means using visual displays
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/7475—User input or interface means, e.g. keyboard, pointing device, joystick
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Child & Adolescent Psychology (AREA)
- Educational Technology (AREA)
- Developmental Disabilities (AREA)
- Social Psychology (AREA)
- Psychology (AREA)
- Psychiatry (AREA)
- Hospice & Palliative Care (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a reaction speed evaluation method for reaction force training, a terminal device and a storage medium, wherein the method comprises the following steps: acquiring a target feature for reaction force training, and acquiring display duration information of the target feature; acquiring element information of the target feature, and receiving a training instruction based on the element information of the target feature, wherein the element information comprises a color or an orientation; controlling a training object to execute a training action corresponding to the training instruction according to the training instruction, and acquiring execution time information of the training action, wherein the training action comprises: controlling the training object to switch colors or controlling the training object to adjust its orientation; and determining a reaction speed evaluation result according to the display duration information and the execution time information. The invention can evaluate the reaction speed in reaction force training with greater accuracy, and helps evaluate the effect of the reaction force training more intuitively.
Description
Technical Field
The present invention relates to the field of reaction force training technologies, and in particular, to a reaction speed evaluation method, a terminal device, and a storage medium for reaction force training.
Background
The reaction force is the response capability of a person to an emergency, and although the reaction force depends on the congenital condition of the person, the reaction force of the person can be improved through later training.
The prior art basically trains reaction force through repeated training of a specific action, and evaluates the training result through human observation. Moreover, the reaction speed cannot be accurately estimated in the prior art.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The invention aims to solve the technical problem that the reaction speed in reaction force training cannot be accurately estimated in the prior art, by providing a reaction speed evaluation method for reaction force training, a terminal device, and a storage medium.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect, the present invention provides a reaction rate assessment method for reaction training, wherein the method comprises:
acquiring target characteristics for reaction training, and acquiring display duration information of the target characteristics;
acquiring element information of the target feature, and receiving a training instruction based on the element information of the target feature, wherein the element information comprises color or orientation;
and controlling a training object to execute training actions corresponding to the training instructions according to the training instructions, and acquiring execution time information of the training actions, wherein the training actions comprise: controlling the training object to switch colors or controlling the training object to adjust the orientation;
and determining a reaction speed evaluation result according to the display duration information and the execution time information.
In one implementation, the acquiring the target feature for the reaction training and acquiring the display duration information of the target feature includes:
acquiring display information in a preset area on a display screen, and determining the feature quantity of feature objects and the feature types of the feature objects according to the display information;
determining layout information according to the feature quantity and the feature type;
and determining target features in the feature objects according to the layout information.
In one implementation, the determining the target feature in the feature object according to the layout information includes:
determining a combination form of the feature objects according to the layout information, wherein the combination form comprises a single combination form formed by combining the same feature types and a mixed combination form formed by combining the feature objects with different feature types;
determining an arrangement mode of the feature objects based on the combination form and the feature quantity, and determining a target position corresponding to the arrangement mode;
and determining the target characteristics according to the target positions.
In one implementation manner, the obtaining the display duration information of the target feature includes:
acquiring initial display time and final display time of the target feature, wherein the initial display time is the time when the target feature appears in the preset area, and the final display time is the time when the target feature disappears in the preset area;
and determining the display duration information according to the initial display time and the ending display time.
In one implementation manner, the determining the reaction speed evaluation result according to the display duration information and the execution time information includes:
determining the time position of the execution time information in the display time information according to the display time information and the execution time information, and determining the reaction time according to the time position;
and determining a reaction speed evaluation result according to the reaction duration.
In one implementation manner, the determining the reaction speed evaluation result according to the reaction duration includes:
acquiring an interval range corresponding to the reaction duration according to the reaction duration;
and obtaining the response speed corresponding to the interval range, calculating a training score of the reaction training based on the response speed, and taking the training score as the response speed evaluation result, wherein the response speed is in direct proportion to the training score.
In one implementation, the method further comprises:
and according to the reaction speed evaluation result, acquiring an animation corresponding to the reaction speed evaluation result, and playing the animation.
In a second aspect, an embodiment of the present invention further provides a reaction rate assessment device for reaction training, where the device includes:
the display duration determining module is used for acquiring target characteristics for reaction training and acquiring display duration information of the target characteristics;
the training instruction receiving module is used for acquiring element information of the target feature and receiving a training instruction based on the element information of the target feature, wherein the element information comprises color or orientation;
the execution time determining module is used for controlling a training object to execute training actions corresponding to the training instructions according to the training instructions and obtaining execution time information of the training actions, wherein the training actions comprise: controlling the training object to switch colors or controlling the training object to adjust the orientation;
and the reaction speed evaluation module is used for determining a reaction speed evaluation result according to the display duration information and the execution time information.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a reaction speed evaluation program for reaction force training stored in the memory and capable of running on the processor, and when the processor executes the program, the steps of the reaction speed evaluation method for reaction force training in any one of the above schemes are implemented.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a reaction rate evaluation program for reaction training, where the reaction rate evaluation program for reaction training, when executed by a processor, implements the steps of the reaction rate evaluation method for reaction training according to any one of the above schemes.
The beneficial effects are that: compared with the prior art, the invention provides a reaction speed evaluation method for reaction force training, which first acquires a target feature for the reaction force training and acquires display duration information of the target feature. Then, element information of the target feature is acquired, and a training instruction is received based on the element information of the target feature, wherein the element information comprises a color or an orientation. Next, a training object is controlled to execute a training action corresponding to the training instruction, and execution time information of the training action is acquired, wherein the training action comprises: controlling the training object to switch colors or controlling the training object to adjust its orientation. Finally, a reaction speed evaluation result is determined according to the display duration information and the execution time information. The invention can evaluate the reaction speed in reaction force training with greater accuracy, and helps evaluate the effect of the reaction force training more intuitively.
Drawings
FIG. 1 is a flowchart of a specific embodiment of a reaction rate assessment method for reaction training according to an embodiment of the present invention.
Fig. 2 is a functional schematic diagram of a reaction rate assessment device for reaction training according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment provides a reaction speed assessment method for reaction training, and based on the method of the embodiment, the reaction speed in the reaction training can be assessed, so that the assessment is more accurate, and the effect of the reaction training can be assessed more intuitively. In specific application, the embodiment first obtains a target feature for reaction training, and obtains display duration information of the target feature. Then, element information of the target feature is acquired, and a training instruction is received based on the element information of the target feature, wherein the element information comprises a color or an orientation. And then, controlling a training object to execute a training action corresponding to the training instruction according to the training instruction, and acquiring execution time information of the training action, wherein the training action comprises: and controlling the training object to switch colors or controlling the training object to adjust the orientation. And finally, determining a reaction speed evaluation result according to the display duration information and the execution time information.
For example, the target features are animal images displayed on a display screen; the animal images are displayed for a fixed period, and during the display of an animal image the user performs a certain reaction action on a training object to carry out the reaction force training. Therefore, the present embodiment first acquires the display duration information and the element information of the animal image, where the element information may be the color of the animal image or the orientation of the animal image. Then, when performing the reaction force training, the present embodiment receives a training instruction based on the element information, and the training instruction may be used to adjust the color or orientation of the training object to match the color or orientation of the animal image. Since there is a certain delay before the training instruction is generated, this embodiment determines the execution time information of the training instruction, that is, the time node at which the color switch or orientation adjustment of the training object is operated. From the display duration information and the execution time information, the corresponding reaction duration can be determined, and the reaction duration is inversely proportional to the reaction speed. Therefore, the reaction speed evaluation result can be accurately obtained in the present embodiment.
Exemplary method
The reaction speed evaluation method for reaction force training of this embodiment can be applied to a terminal device, where the terminal device includes smart products such as computers, smart televisions, and mobile phones. As shown in fig. 1, the reaction speed evaluation method of the reaction force training of the present embodiment includes the following steps:
step S100, obtaining target characteristics for reaction training, and obtaining display duration information of the target characteristics.
The target feature in the present embodiment serves as a reference for the reaction force training, and the action of the reaction force training is formulated based on the target feature. To train the user's reaction force, the target feature of the present embodiment is displayed periodically, or the target feature is moved, for example, from the left side of the screen to the right side of the screen. When performing the reaction force training, the present embodiment requires the user to perform a corresponding action on the training object during the display of the target feature, and the reaction force is evaluated on this basis. To better measure the training effect and the rationality of the reaction force training, the present embodiment needs to obtain the display duration information of the target feature.
In one implementation, the method in this embodiment includes the following steps when acquiring the target feature:
step S101, obtaining display information in a preset area on a display screen, and determining the feature quantity of feature objects and the feature types of the feature objects according to the display information;
step S102, determining the layout information according to the feature quantity and the feature types;
step S103, determining target features in the feature objects according to the layout information.
In this embodiment, the feature object is used for the reaction force training; the feature object includes a plurality of features, and the plurality of features exhibit a certain arrangement rule that forms the layout information of the feature object. After the layout information is acquired, the embodiment can determine the target feature of the feature object based on the layout information. Specifically, the embodiment first obtains display information in a preset area on a display screen, and determines the feature quantity of the feature objects and the feature types of the feature objects according to the display information. In this embodiment, the feature quantity refers to the number of features in the feature object, and the feature type refers to whether the respective features in the feature object share the same display pattern. Since the feature objects of the present embodiment are displayed in the display area, the display patterns corresponding to different feature objects may also differ. For this reason, the present embodiment can determine layout information according to the feature quantity and the feature type, and the layout information can reflect how the feature objects are arranged. Next, the embodiment can determine the combination form of the feature objects according to the layout information, where the combination form includes a single combination form formed by combining the same feature types and a mixed combination form formed by combining feature objects of different feature types. For example, when the feature types are all patterns of small animals, the combination form of the feature objects is determined to be a single combination form. When the feature types include a pattern of food and a pattern of small animals, the combination form of the feature objects can be determined to be a mixed combination form.
Then, the embodiment determines the arrangement mode of the feature objects according to the combination form and the feature quantity. For example, when the number of features is less than 6 and the combination form is a single combination form, the arrangement mode is a horizontal arrangement. For another example, when the number of features is 6 or more and the combination form is a mixed combination form, the arrangement mode is a V-shaped arrangement. The present embodiment may then determine the corresponding target position based on the arrangement mode, where the target position is the position of the target feature. In one implementation, if the combination form of the present embodiment is symmetrical, the feature object located at the exact middle position is taken as the target feature. For example, when the feature object is 5 small animal patterns arranged horizontally, the target feature is the small animal pattern at the middle position.
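A minimal Python sketch of the layout rules above. The function names and the fallback default are illustrative assumptions; only the "fewer than 6 / single form" and "6 or more / mixed form" rules and the middle-position selection come from the text:

```python
def combination_form(feature_types):
    # Single form: all features share one type; mixed form otherwise.
    return "single" if len(set(feature_types)) == 1 else "mixed"

def arrangement_mode(feature_count, form):
    # Rules given in the worked example of the text.
    if feature_count < 6 and form == "single":
        return "horizontal"
    if feature_count >= 6 and form == "mixed":
        return "v_shaped"
    return "horizontal"  # assumed default; the patent does not specify other cases

def target_position(feature_count):
    # For a symmetric arrangement, take the exact middle position (0-based index).
    return feature_count // 2
```

With 5 horizontally arranged small-animal patterns, `target_position(5)` yields index 2, the middle pattern, matching the example in the text.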
When the target feature is determined, the embodiment may acquire its display duration information. First, the initial display time and the final display time of the target feature are acquired, where the initial display time is the time when the target feature appears in the preset area, and the final display time is the time when the target feature disappears from the preset area. Then, the embodiment determines the display duration information according to the initial display time and the final display time.
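The display duration can be computed from the two timestamps as follows (a minimal illustration; the `"%H:%M:%S"` time format is an assumption for the sketch, not prescribed by the patent):

```python
from datetime import datetime

def display_duration(initial_time, final_time, fmt="%H:%M:%S"):
    """Seconds between the target feature appearing and disappearing."""
    start = datetime.strptime(initial_time, fmt)
    end = datetime.strptime(final_time, fmt)
    return (end - start).total_seconds()
```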
In the present embodiment, the element information of the target feature reflects the color or orientation of the target feature. Since the target feature is a reference for the reactive training, the present embodiment can perform the reactive training according to the element information of the target feature. The embodiment can receive a training instruction, the training instruction is generated based on element information, and when the reaction force training is performed, the embodiment can control a training object to execute a training action corresponding to the training instruction according to the training instruction, so as to obtain a training result.
In this embodiment, the training object is the operation object of the reaction force training, and may be a plurality of buttons disposed at the lower right of the display screen. These buttons may be used to switch the color of the training object or to adjust the orientation of the training object. For example, when the element information of the target feature is yellow, a training instruction for switching the color of the training object to yellow may be generated, and the action corresponding to the training instruction may be executed to complete the reaction force training. When the training action corresponding to the training instruction is executed, the embodiment obtains the execution time information of the training action, where the execution time information is the time node information of the user pressing the button of the training object.
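One way to map element information to a training instruction could look like the sketch below. The concrete color and orientation sets are invented for illustration; the patent only states that the element information comprises a color or an orientation:

```python
# Hypothetical sets; the patent does not enumerate the supported values.
COLORS = {"yellow", "red", "blue", "green"}
ORIENTATIONS = {"left", "right", "up", "down"}

def make_training_instruction(element_info):
    """Translate the target feature's element information into a training
    instruction: either a color switch or an orientation adjustment."""
    if element_info in COLORS:
        return ("switch_color", element_info)
    if element_info in ORIENTATIONS:
        return ("adjust_orientation", element_info)
    raise ValueError(f"unknown element information: {element_info}")
```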
Step S400, determining a reaction speed evaluation result according to the display duration information and the execution time information.
In this embodiment, after the display duration information is determined, the reaction speed may be evaluated based on the display duration information and the execution time information, and the reaction speed evaluation result may be determined. Because the execution time information is the time node at which the user executes the training instruction, the time between the user seeing the target feature and executing the training instruction is the reaction duration, and the reaction duration is inversely proportional to the reaction speed; therefore, the embodiment can determine the reaction speed after obtaining the execution time information.
In one implementation, step S400 of the present embodiment specifically includes the following steps:
step S401, determining a time position of the execution time information in the display duration information according to the display duration information and the execution time information, and determining a reaction duration according to the time position;
and step S402, determining a reaction speed evaluation result according to the reaction duration.
Specifically, the embodiment determines, based on the display duration information and the execution time information, the time position of the execution time information within the display duration information, that is, the position of the time node corresponding to the execution time information within the whole display duration, and determines the reaction duration according to the time position. For example, if the display duration is 5 seconds, the display start time is 10:25:03, the display end time is 10:25:08, and the time node corresponding to the execution time information is 10:25:05, then the reaction duration is determined to be 2 seconds. After determining the reaction duration, the embodiment can determine the reaction speed evaluation result corresponding to the reaction duration.
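The worked example above can be sketched as follows (times expressed as plain seconds for simplicity; the handling of a press outside the display window is an assumption not specified in the text):

```python
def reaction_duration(display_start_s, display_end_s, execution_s):
    """Offset of the execution time node inside the display window.

    Returns None when the button press falls outside the window
    (assumed behavior; the patent does not specify this case)."""
    if not (display_start_s <= execution_s <= display_end_s):
        return None
    return execution_s - display_start_s
```

For the example in the text (window 10:25:03 to 10:25:08, press at 10:25:05), expressing the times as offsets 3, 8 and 5 seconds gives a reaction duration of 2 seconds.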
In one implementation manner, the embodiment of the present invention may segment the reaction duration. For example, if the reaction duration is within 1 second (inclusive), the reaction speed evaluation result is determined as: the reaction speed is fast. If the reaction duration is between 1 and 2 seconds, the reaction speed evaluation result is determined as: the reaction speed meets the standard. And if the reaction duration exceeds 2 seconds, the reaction speed evaluation result is determined as: the reaction speed does not meet the standard.
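The segmentation in this paragraph maps directly to a small function (the boundary handling follows the text's "within 1 second, inclusive" example; the grade labels are illustrative):

```python
def speed_grade(reaction_s):
    """Map a reaction duration in seconds to the three grades from the text."""
    if reaction_s <= 1.0:
        return "fast"
    if reaction_s <= 2.0:
        return "meets the standard"
    return "does not meet the standard"
```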
In another implementation manner, the present embodiment may further obtain an interval range corresponding to the reaction duration according to the reaction duration. The interval range is a preset duration interval for corresponding to different reaction speeds. After determining the interval range, the embodiment may obtain a response speed corresponding to the interval range, calculate a training score of the reaction training based on the response speed, and use the training score as the response speed evaluation result, where the response speed is proportional to the training score. In this embodiment, the training score is calculated based on the reaction speed by a preset proportional function, and the independent variable is the reaction speed and the dependent variable is the training score. The training score of this example can be used as the reaction rate evaluation result, and thus the reaction rate can be evaluated. According to the reaction speed evaluation result, the embodiment obtains the animation corresponding to the reaction speed evaluation result, and plays the animation so as to remind the user of the reaction speed at the moment.
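A sketch of the interval-range lookup and the preset proportional function described above. Both the interval boundaries and the proportionality constant are assumed for illustration; the patent only specifies that the training score is proportional to the response speed:

```python
# Hypothetical (upper bound in seconds, response speed) pairs; the last
# entry acts as a catch-all for long reaction durations.
INTERVALS = [(1.0, 10.0), (2.0, 5.0), (float("inf"), 2.0)]

SCORE_PER_SPEED = 10  # assumed constant of the preset proportional function

def response_speed(reaction_s):
    """Look up the response speed for the interval containing reaction_s."""
    for upper, speed in INTERVALS:
        if reaction_s <= upper:
            return speed

def training_score(reaction_s):
    # Score is directly proportional to the response speed.
    return SCORE_PER_SPEED * response_speed(reaction_s)
```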
In addition, when performing the reaction force training, the present embodiment first acquires the reference color of the target feature, that is, the reference color for switching the color of the training object. Then, the embodiment receives a corresponding color switching instruction; for example, if the reference color is yellow, the corresponding color switching instruction is a yellow switching instruction, which may be issued by the user by clicking a color button of the training object. After the color switching instruction is issued, the embodiment controls the training object to adjust its color according to the color switching instruction, then compares the adjusted color with the reference color, determines whether the adjusted color is consistent with the reference color, and obtains the training result on that basis. Similarly for orientation, the present embodiment first acquires the reference orientation of the target feature, which likewise serves as the reference for adjusting the orientation of the training object. Then, the embodiment receives a corresponding orientation adjustment instruction; for example, if the reference orientation is leftward, the corresponding instruction is a leftward adjustment instruction, which may likewise be issued by the user by clicking an orientation button of the training object. After the orientation adjustment instruction is issued, the embodiment controls the training object to adjust its orientation according to the instruction, then compares the adjusted orientation with the reference orientation, determines whether they are consistent, and obtains the training result on that basis.
Of course, in order to achieve better reaction force training, the present embodiment may require the color switching instruction or the orientation adjustment instruction to be executed within a predetermined time period (for example, 0.5 s), to ensure that the user reacts within that period. In this embodiment, the training result reflects whether the adjusted color is consistent with the reference color, or whether the adjusted orientation is consistent with the reference orientation; when they are consistent, the training is satisfactory. On this basis, the embodiment can evaluate the training result to obtain the evaluation result of the reaction force training, so as to judge whether the reaction force training meets the standard.
In a specific implementation, if the training result is that the adjusted color is the same as the reference color, or the adjusted orientation is the same as the reference orientation, the training is satisfactory and the training score is increased, for example, by 1 point. If the adjusted color differs from the reference color, or the adjusted orientation differs from the reference orientation, the training is not satisfactory and the training score is reduced. Then, the embodiment counts the training results of a preset number of consecutive repetitions of the reaction force training, determines how many of those repetitions meet the requirements, and calculates the total training score. If the total training score exceeds a preset score value, the evaluation result is determined to be that the reaction force training meets the standard.
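The scoring scheme in this paragraph might be tallied as follows (the +1 increment comes from the text; the -1 decrement and the strict "exceeds the threshold" comparison are assumptions where the text is not specific):

```python
def trial_score(matched, within_window, per_point=1):
    """+1 when the adjusted color/orientation matches the reference within
    the allowed time window, otherwise -1 (decrement value assumed)."""
    return per_point if (matched and within_window) else -per_point

def evaluate_session(trials, pass_threshold):
    """trials: list of (matched, within_window) booleans for a preset number
    of consecutive repetitions. Training meets the standard when the total
    score exceeds the preset score value."""
    total = sum(trial_score(m, w) for m, w in trials)
    return total, total > pass_threshold
```

For example, three satisfactory repetitions followed by one miss total 2 points, which exceeds a hypothetical threshold of 1 and so meets the standard.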
In summary, the present embodiment first obtains a target feature for reaction training, and obtains display duration information of the target feature. Then, element information of the target feature is acquired, and a training instruction is received based on the element information of the target feature, wherein the element information comprises a color or an orientation. And then, controlling a training object to execute a training action corresponding to the training instruction according to the training instruction, and acquiring execution time information of the training action, wherein the training action comprises: and controlling the training object to switch colors or controlling the training object to adjust the orientation. And finally, determining a reaction speed evaluation result according to the display duration information and the execution time information.
Exemplary apparatus
Based on the above embodiment, the present invention further provides a reaction speed evaluation device for reaction force training. As shown in fig. 2, the device includes: a display duration determining module 10, a training instruction receiving module 20, an execution time determining module 30, and a reaction speed evaluation module 40. Specifically, the display duration determining module 10 is configured to obtain a target feature for reaction force training and obtain display duration information of the target feature. The training instruction receiving module 20 is configured to obtain element information of the target feature and receive a training instruction based on the element information, where the element information includes a color or an orientation. The execution time determining module 30 is configured to control, according to the training instruction, a training object to execute a training action corresponding to the training instruction, and to obtain execution time information of the training action, where the training action includes controlling the training object to switch colors or controlling the training object to adjust its orientation. The reaction speed evaluation module 40 is configured to determine a reaction speed evaluation result according to the display duration information and the execution time information.
In one implementation, the display duration determining module 10 includes:
the feature analysis unit is used for acquiring display information in a preset area on the display screen and determining the feature quantity of the feature objects and the feature types of the feature objects according to the display information;
a layout determining unit configured to determine the layout information according to the feature quantity and the feature type;
and the feature determining unit is used for determining the target feature in the feature objects according to the layout information.
In one implementation, the feature determining unit includes:
a form analysis subunit, configured to determine, according to the layout information, a combination form of the feature objects, where the combination form includes a single combination form formed by combining the same feature types and a mixed combination form formed by combining feature objects of different feature types;
a position determining subunit, configured to determine an arrangement manner of the feature objects based on the combination form and the feature quantity, and determine a target position corresponding to the arrangement manner;
and the feature determining subunit is used for determining the target feature according to the target position.
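Purely as an illustration of how these subunits might fit together, the sketch below derives the combination form from the feature types and then picks a target position; all names and the random selection rule are assumptions, since the patent does not specify how the target position is chosen.

```python
import random

def pick_target_feature(feature_objects):
    """feature_objects: list of (feature_type, position) tuples laid out in
    the preset area. Determine the combination form (single if all feature
    types match, mixed otherwise), then select a target position among the
    arranged objects."""
    types = {feature_type for feature_type, _ in feature_objects}
    combination_form = "single" if len(types) == 1 else "mixed"
    # Assumed rule: the target is a randomly chosen object's position.
    _, target_position = random.choice(feature_objects)
    return combination_form, target_position

form, pos = pick_target_feature([("circle", 0), ("square", 1), ("circle", 2)])
```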
In one implementation, the display duration determining module 10 includes:
the time acquisition unit is used for acquiring the initial display time and the final display time of the target feature, wherein the initial display time is the time when the target feature appears in the preset area, and the final display time is the time when the target feature disappears from the preset area;
and the duration analysis unit is used for determining the display duration information according to the initial display time and the final display time.
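The two units above reduce to a subtraction of timestamps; a brief sketch follows, where the function name and the validation check are assumptions for illustration.

```python
def display_duration_info(initial_display_time, final_display_time):
    """Display duration = the moment the target feature disappears from the
    preset area minus the moment it appears there (both in seconds)."""
    if final_display_time < initial_display_time:
        raise ValueError("final display time precedes initial display time")
    return final_display_time - initial_display_time

# A feature shown at t = 2.0 s and hidden at t = 3.5 s was displayed for 1.5 s.
print(display_duration_info(2.0, 3.5))  # 1.5
```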
In one implementation, the reaction speed evaluation module 40 includes:
the reaction duration determining unit is used for determining the time position of the execution time information within the display duration information according to the display duration information and the execution time information, and determining the reaction duration according to the time position;
and the evaluation result determining unit is used for determining a reaction speed evaluation result according to the reaction duration.
In one implementation, the evaluation result determining unit includes:
an interval range obtaining subunit, configured to obtain an interval range corresponding to the reaction duration according to the reaction duration;
and the training score determining subunit is used for obtaining the response speed corresponding to the interval range, calculating the training score of the reaction force training based on the response speed, and taking the training score as the reaction speed evaluation result, wherein the response speed is directly proportional to the training score.
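One way to realize the interval-range lookup and proportional score is sketched below, purely as an assumed illustration: the patent states only that the response speed is directly proportional to the training score, so the speed bands and the constant `k` are invented for the example.

```python
# Assumed bands: (upper bound of reaction duration in seconds, response speed).
SPEED_BANDS = [
    (0.3, 3.0),  # fast reactions
    (0.6, 2.0),  # medium reactions
    (1.0, 1.0),  # slow reactions
]

def training_score(reaction_duration, k=10.0):
    """Map the reaction duration to its interval range, look up the response
    speed for that range, and return a score proportional to that speed."""
    for upper_bound, response_speed in SPEED_BANDS:
        if reaction_duration <= upper_bound:
            return k * response_speed  # score is proportional to speed
    return 0.0                         # no timely reaction: no score

print(training_score(0.25))  # 30.0 (fast band)
print(training_score(0.5))   # 20.0 (medium band)
```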
In one implementation, the apparatus further comprises:
and the animation reminding unit is used for acquiring the animation corresponding to the reaction speed evaluation result according to the reaction speed evaluation result and playing the animation.
The working principle of each module in the reaction speed evaluation device for reaction force training in this embodiment is the same as that of the corresponding step in the above method embodiment, and will not be described again here.
Based on the above embodiments, the present invention further provides a terminal device; a schematic block diagram of the terminal device may be as shown in fig. 3. The terminal device may include one or more processors 100 (only one is shown in fig. 3), a memory 101, and a computer program 102 stored in the memory 101 and executable on the one or more processors 100, for example a program for reaction speed evaluation for reaction force training. When executing the computer program 102, the one or more processors 100 may implement the steps in the embodiments of the reaction speed evaluation method for reaction force training; alternatively, they may implement the functions of the modules/units in the above apparatus embodiments. This is not limited herein.
In one embodiment, the processor 100 may be a central processing unit (Central Processing Unit, CPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In one embodiment, the memory 101 may be an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device. The memory 101 may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card provided on the electronic device. Further, the memory 101 may include both an internal storage unit and an external storage device of the electronic device. The memory 101 is used to store the computer program and other programs and data required by the terminal device. The memory 101 may also be used to temporarily store data that has been output or is to be output.
It will be appreciated by persons skilled in the art that the functional block diagram shown in fig. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the terminal device to which the present inventive arrangements are applied, and that a particular terminal device may include more or fewer components than shown, or may combine some of the components, or may have a different arrangement of components.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program, which may be stored in a non-transitory computer-readable storage medium and which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the invention discloses a reaction speed evaluation method for reaction force training, a terminal device, and a storage medium, wherein the method comprises: acquiring a target feature for reaction force training, and acquiring display duration information of the target feature; acquiring element information of the target feature, and receiving a training instruction based on the element information, wherein the element information comprises a color or an orientation; controlling a training object to execute a training action corresponding to the training instruction according to the training instruction, and acquiring execution time information of the training action, wherein the training action comprises controlling the training object to switch colors or controlling the training object to adjust its orientation; and determining a reaction speed evaluation result according to the display duration information and the execution time information. The invention can evaluate the reaction speed in reaction force training more accurately, helping to assess the effect of the training more intuitively.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A reaction speed evaluation method for reaction force training, the method comprising:
acquiring target characteristics for reaction training, and acquiring display duration information of the target characteristics;
acquiring element information of the target feature, and receiving a training instruction based on the element information of the target feature, wherein the element information comprises color or orientation;
and controlling a training object to execute training actions corresponding to the training instructions according to the training instructions, and acquiring execution time information of the training actions, wherein the training actions comprise: controlling the training object to switch colors or controlling the training object to adjust the orientation;
and determining a reaction speed evaluation result according to the display duration information and the execution time information.
2. The reaction speed evaluation method for reaction force training according to claim 1, wherein acquiring the target feature for reaction force training and acquiring the display duration information of the target feature comprises:
acquiring display information in a preset area on a display screen, and determining the feature quantity of feature objects and the feature types of the feature objects according to the display information;
determining layout information according to the feature quantity and the feature type;
and determining target features in the feature objects according to the layout information.
3. The reaction speed evaluation method for reaction force training according to claim 2, wherein determining the target feature in the feature objects according to the layout information comprises:
determining a combination form of the feature objects according to the layout information, wherein the combination form comprises a single combination form formed by combining the same feature types and a mixed combination form formed by combining the feature objects with different feature types;
determining an arrangement mode of the feature objects based on the combination form and the feature quantity, and determining a target position corresponding to the arrangement mode;
and determining the target characteristics according to the target positions.
4. The reaction speed evaluation method for reaction force training according to claim 3, wherein acquiring the display duration information of the target feature comprises:
acquiring an initial display time and a final display time of the target feature, wherein the initial display time is the time when the target feature appears in the preset area, and the final display time is the time when the target feature disappears from the preset area;
and determining the display duration information according to the initial display time and the final display time.
5. The reaction speed evaluation method for reaction force training according to claim 4, wherein determining the reaction speed evaluation result according to the display duration information and the execution time information comprises:
determining the time position of the execution time information within the display duration information according to the display duration information and the execution time information, and determining the reaction duration according to the time position;
and determining a reaction speed evaluation result according to the reaction duration.
6. The reaction speed evaluation method for reaction force training according to claim 5, wherein determining the reaction speed evaluation result according to the reaction duration comprises:
acquiring an interval range corresponding to the reaction duration according to the reaction duration;
and obtaining the response speed corresponding to the interval range, calculating a training score of the reaction force training based on the response speed, and taking the training score as the reaction speed evaluation result, wherein the response speed is directly proportional to the training score.
7. The reaction speed evaluation method for reaction force training according to claim 1, characterized in that the method further comprises:
and according to the reaction speed evaluation result, acquiring an animation corresponding to the reaction speed evaluation result, and playing the animation.
8. A reaction speed evaluation device for reaction force training, the device comprising:
the display duration determining module is used for acquiring target characteristics for reaction training and acquiring display duration information of the target characteristics;
the training instruction receiving module is used for acquiring element information of the target feature and receiving a training instruction based on the element information of the target feature, wherein the element information comprises color or orientation;
the execution time determining module is used for controlling a training object to execute training actions corresponding to the training instructions according to the training instructions and obtaining execution time information of the training actions, wherein the training actions comprise: controlling the training object to switch colors or controlling the training object to adjust the orientation;
and the reaction speed evaluation module is used for determining a reaction speed evaluation result according to the display duration information and the execution time information.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a reaction speed evaluation program for reaction force training stored in the memory and executable on the processor, wherein the processor, when executing the reaction speed evaluation program for reaction force training, implements the steps of the reaction speed evaluation method for reaction force training according to any one of claims 1-7.
10. A computer-readable storage medium, wherein a reaction speed evaluation program for reaction force training is stored on the computer-readable storage medium, and the program, when executed by a processor, implements the steps of the reaction speed evaluation method for reaction force training according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310428570.XA CN116172560B (en) | 2023-04-20 | 2023-04-20 | Reaction speed evaluation method for reaction force training, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116172560A true CN116172560A (en) | 2023-05-30 |
CN116172560B CN116172560B (en) | 2023-08-29 |
Family
ID=86449166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310428570.XA Active CN116172560B (en) | 2023-04-20 | 2023-04-20 | Reaction speed evaluation method for reaction force training, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116172560B (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090210786A1 (en) * | 2008-02-19 | 2009-08-20 | Kabushiki Kaisha Toshiba | Image processing apparatus and image processing method |
CN101657846A (en) * | 2007-04-13 | 2010-02-24 | 耐克国际有限公司 | Visual cognition and coordination testing and training |
CN101779960A (en) * | 2010-02-24 | 2010-07-21 | 沃建中 | Test system and method of stimulus information cognition ability value |
CN106361357A (en) * | 2016-08-30 | 2017-02-01 | 西南交通大学 | Testing method and system for driving ability |
CN107847226A (en) * | 2015-06-05 | 2018-03-27 | 视空间工房株式会社 | The early detection prophylactic procedures and system of mild dementia |
CN108471991A (en) * | 2015-08-28 | 2018-08-31 | 艾腾媞乌有限责任公司 | cognitive skill training system and program |
CN109106329A (en) * | 2018-09-20 | 2019-01-01 | 南方科技大学 | Visual fatigue stimulation method, device, storage medium and terminal |
CN110974261A (en) * | 2019-12-18 | 2020-04-10 | 中国科学院深圳先进技术研究院 | Talent evaluation system, talent evaluation method and related products |
US20200342648A1 (en) * | 2017-10-27 | 2020-10-29 | Sony Corporation | Information processing apparatus, information processing method, program, and information processing system |
WO2020262973A2 (en) * | 2019-06-25 | 2020-12-30 | 고려대학교 산학협력단 | Virtual reality game- and biosignal sensor-based vestibulo-ocular reflex assessment and rehabilitation apparatus |
CN112396114A (en) * | 2020-11-20 | 2021-02-23 | 中国科学院深圳先进技术研究院 | Evaluation system, evaluation method and related product |
CN113284052A (en) * | 2020-02-19 | 2021-08-20 | 阿里巴巴集团控股有限公司 | Image processing method and apparatus |
CN114392563A (en) * | 2022-01-19 | 2022-04-26 | 福建中科多特健康科技有限公司 | Color block generation method and storage device |
CN114788918A (en) * | 2022-06-23 | 2022-07-26 | 深圳市心流科技有限公司 | Method, device, equipment and storage medium for formulating reaction force training scheme |
CN115192023A (en) * | 2022-06-21 | 2022-10-18 | 王岩韬 | Fatigue response testing and evaluating method for civil aviation mental practitioner |
CN115373519A (en) * | 2022-10-21 | 2022-11-22 | 北京脑陆科技有限公司 | Electroencephalogram data interactive display method, device and system and computer equipment |
CN115444423A (en) * | 2022-10-18 | 2022-12-09 | 上海耐欣科技有限公司 | Prediction system, prediction method, prediction device, prediction equipment and storage medium |
CN115845214A (en) * | 2023-02-27 | 2023-03-28 | 深圳市心流科技有限公司 | Concentration and reaction dual-training method and device and terminal equipment |
CN115845397A (en) * | 2023-02-15 | 2023-03-28 | 深圳市心流科技有限公司 | Training difficulty control method and device for attention and reaction force and terminal equipment |
CN115887859A (en) * | 2023-02-17 | 2023-04-04 | 深圳市心流科技有限公司 | Concentration and reaction training control method and device and terminal equipment |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117116428A (en) * | 2023-10-23 | 2023-11-24 | 深圳市心流科技有限公司 | Training strategy adjustment method and device for concentration training |
CN117116428B (en) * | 2023-10-23 | 2024-04-09 | 深圳市心流科技有限公司 | Training strategy adjustment method and device for concentration training |
CN118248292A (en) * | 2024-05-27 | 2024-06-25 | 浙江强脑科技有限公司 | Comprehensive training method, device, terminal and medium for memory and concentration |
Also Published As
Publication number | Publication date |
---|---|
CN116172560B (en) | 2023-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116172560B (en) | Reaction speed evaluation method for reaction force training, terminal equipment and storage medium | |
CN115845214B (en) | Concentration force and reaction force dual training method and device and terminal equipment | |
CN116721739B (en) | Method and device for evaluating concentration training based on movement duration | |
CN115845397B (en) | Method and device for controlling training difficulty of concentration force and reaction force and terminal equipment | |
JPWO2019064743A1 (en) | Authentication device, authentication system, authentication method, and program | |
EP2457499A1 (en) | Line-of-sight estimation device | |
CN116392700B (en) | Method, device and storage medium for training concentration force based on electroencephalogram signals | |
JP2011115450A (en) | Apparatus, method, and program for determining aimless state | |
CN116370788A (en) | Training effect real-time feedback method and device for concentration training and terminal equipment | |
US9959635B2 (en) | State determination device, eye closure determination device, state determination method, and storage medium | |
US12039879B2 (en) | Electronic device and method for eye-contact training | |
CN116139387B (en) | Training control method for reaction force training, terminal equipment and storage medium | |
CN116172561B (en) | Reactive training evaluation method and device, terminal equipment and storage medium | |
CN112883947B (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN114120190A (en) | Video switching method, display terminal and storage medium | |
JP6671263B2 (en) | Congestion prediction device, congestion prediction method, and congestion prediction program | |
JPWO2022145294A5 (en) | ||
CN109218623B (en) | Light supplementing method and device, computer device and readable storage medium | |
CN113386779A (en) | Driving style recognition method, device and storage medium | |
CN109614878B (en) | Model training and information prediction method and device | |
CN111723609B (en) | Model optimization method, device, electronic equipment and storage medium | |
CN113900526A (en) | Three-dimensional human body image display control method and device, storage medium and display equipment | |
CN116721738B (en) | Method and device for controlling movement of target object based on concentration force | |
CN115661935B (en) | Human body action accuracy determining method and device | |
CN115793861B (en) | APP theme scene setting method and device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||