CN115953930B - Concentration training method, device, terminal and storage medium based on vision tracking - Google Patents
Concentration training method, device, terminal and storage medium based on vision tracking
- Publication number
- CN115953930B (application number CN202310253392.1A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- time length
- throwing
- user
- visual tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Rehabilitation Tools (AREA)
Abstract
The invention discloses a concentration training method, device, terminal and storage medium based on visual tracking, wherein the method comprises the following steps: generating a virtual obstacle in a video scene and acquiring the user's visual tracking duration; extracting a virtual throwing object from a virtual throwing object pool according to the visual tracking duration; acquiring the user's throwing selection information, and updating the obstacle's existence duration according to the throwing selection information and the extracted virtual throwing object; judging whether the updated existence duration has reached zero and, if not, randomly transforming the position of the virtual obstacle and continuing to execute the step of acquiring the user's visual tracking duration until the existence duration reaches zero; and determining the shape and size of the next virtual obstacle according to the overall duration consumed in zeroing the existence duration. This solves the problem in the prior art that eye-tracking-based concentration training methods usually use visual tracking cards to train children's eye tracking ability and concentration, but such cards cannot dynamically adjust the training difficulty, so the training effect is limited.
Description
Technical Field
The invention relates to the field of concentration training, in particular to a vision tracking-based concentration training method, a device, a terminal and a storage medium.
Background
Visual tracking capability refers to the ability to follow and track objects with coordinated eye movements, which is a fundamental prerequisite for learning and reading text materials. Children with poor visual coordination and tracking capability are prone to problems such as skipped reading, miswriting, incorrect writing posture, slow reading speed and poor concentration. Existing eye-tracking-based concentration training methods generally use a visual tracking card to train children's eye tracking ability and concentration; however, the visual tracking card cannot dynamically adjust the training difficulty, so the training effect is limited.
Accordingly, there is a need for improvement and development in the art.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a concentration training method, device, terminal and storage medium based on visual tracking, which aim to solve the problem that existing vision-tracking-based concentration training methods usually use a visual tracking card to train children's eye tracking ability and concentration, but the visual tracking card cannot dynamically adjust the training difficulty, so the training effect is limited.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a method for training concentration based on visual tracking, where the method includes:
playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking duration generated by the user based on the virtual obstacle;
extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking duration, wherein different virtual throwing objects have different effects on the existence duration of the virtual obstacle;
acquiring throwing selection information of the user, and updating the existence duration according to the throwing selection information and the extracted virtual throwing object;
judging whether the updated existence duration has reached zero, and if not, randomly transforming the position of the virtual obstacle and continuing to execute the step of acquiring the visual tracking duration of the user until the existence duration reaches zero;
and acquiring the overall duration consumed in zeroing the existence duration, and determining the shape and size of the new virtual obstacle to be generated next according to the overall duration.
In one embodiment, when the screen area is less than or equal to the preset area value, the obtaining the visual tracking duration of the user includes:
acquiring a facial image of the user, and determining the pupil position of the user according to the facial image;
determining the viewpoint position corresponding to the user on a screen according to the pupil position;
and taking the time length spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time length.
In one embodiment, when the screen area is greater than a preset area value, the obtaining the visual tracking duration of the user includes:
acquiring the rotation direction and the rotation angle of the head of the user;
according to the rotation direction and the rotation angle, determining the viewpoint position corresponding to the user on the screen;
and taking the time length spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time length.
In one embodiment, the virtual throwing object pool includes an attack object pool and a healing object pool, and the extracting of the virtual throwing object from the preset virtual throwing object pool according to the visual tracking duration includes:
determining extraction probabilities corresponding respectively to the attack object pool and the healing object pool according to the visual tracking duration, wherein the attack object pool comprises a plurality of attack objects for reducing the existence duration, and the healing object pool comprises a plurality of healing objects for increasing the existence duration;
and randomly extracting the virtual throwing object from the virtual throwing object pool according to the extraction probabilities corresponding respectively to the attack object pool and the healing object pool.
In one embodiment, the updating the present duration according to the throwing selection information and the extracted virtual throwing object includes:
when the throwing selection information is throwing, updating the existing time length according to the extracted virtual throwing object;
and when the throwing selection information is not throwing, maintaining the current time duration of the virtual obstacle.
In one embodiment, the randomly transforming the position of the virtual obstacle comprises:
determining a transformation distance according to the visual tracking time length, wherein the visual tracking time length is in inverse relation with the transformation distance;
randomly transforming the position of the virtual obstacle on the circumference taking the current position of the virtual obstacle as the circle center and the transformation distance as the radius.
In one embodiment, the overall duration is proportional to the size of the shape of the new virtual obstacle to be generated next.
In a second aspect, an embodiment of the present invention further provides a concentration training device based on vision tracking, where the device includes:
the playing module is used for playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring the visual tracking duration generated by a user based on the virtual obstacle;
the extraction module is used for extracting virtual throwing objects from a preset virtual throwing object pool according to the visual tracking time length, wherein different virtual throwing objects have different influences on the existence time length of the virtual obstacle respectively;
the updating module is used for acquiring throwing selection information of the user and updating the existing time length according to the throwing selection information and the extracted virtual throwing object;
the transformation module is used for judging whether the updated time length is zero or not, if not, randomly transforming the position of the virtual obstacle, and continuously executing the step of obtaining the visual tracking time length of the user until the time length is zero;
the determining module is used for obtaining the whole duration consumed by zeroing the existing duration and determining the shape and the size of the new virtual obstacle generated next time according to the whole duration.
In one embodiment, when the screen area is less than or equal to the preset area value, the playing module includes:
a pupil positioning unit, configured to acquire a face image of the user, and determine a pupil position of the user according to the face image;
the pupil analysis unit is used for determining the viewpoint position corresponding to the user on the screen according to the pupil position;
and the tracking timing unit is used for taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
In one embodiment, when the screen area is greater than the preset area value, the playing module includes:
the head detection unit is used for acquiring the rotation direction and the rotation angle of the head of the user;
the head analysis unit is used for determining the viewpoint position corresponding to the user on the screen according to the rotation direction and the rotation angle;
and the tracking timing unit is used for taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
In one embodiment, the virtual throwing object pool includes an attack object pool and a healing object pool, and the extraction module includes:
the probability distribution unit is used for determining the extraction probabilities corresponding respectively to the attack object pool and the healing object pool according to the visual tracking duration, wherein the attack object pool comprises a plurality of attack objects for reducing the existence duration, and the healing object pool comprises a plurality of healing objects for increasing the existence duration;
and the object extraction unit is used for randomly extracting the virtual throwing object from the virtual throwing object pool according to the extraction probabilities corresponding respectively to the attack object pool and the healing object pool.
In one embodiment, the update module includes:
the throwing unit is used for updating the existing time length according to the extracted virtual throwing object when the throwing selection information is throwing;
and the throwing-free unit is used for keeping the current time duration of the virtual obstacle when the throwing selection information is throwing-free.
In one embodiment, the transformation module comprises:
a distance determining unit, configured to determine a transformation distance according to the visual tracking duration, where the visual tracking duration is inversely related to the transformation distance;
and the position transformation unit is used for randomly transforming the position of the virtual obstacle on the circumference taking the current position of the virtual obstacle as the center of a circle and taking the transformation distance as the radius.
In one embodiment, the overall duration is proportional to the size of the shape of the new virtual obstacle to be generated next.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and one or more processors; the memory stores more than one program; the program comprising instructions for performing a vision tracking-based concentration training method as described in any one of the above; the processor is configured to execute the program.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the steps of any of the above-described vision tracking-based concentration training methods.
The invention has the beneficial effects that: in the embodiment of the invention, a virtual obstacle is generated in the video scene and the user's visual tracking duration is acquired; a virtual throwing object is extracted from the virtual throwing object pool according to the visual tracking duration; the user's throwing selection information is acquired, and the existence duration is updated according to the throwing selection information and the extracted virtual throwing object; whether the updated existence duration has reached zero is judged, and if not, the position of the virtual obstacle is randomly transformed and the step of acquiring the user's visual tracking duration continues to be executed until the existence duration reaches zero; and the shape and size of the next virtual obstacle are determined according to the overall duration consumed in zeroing the existence duration. This solves the problem in the prior art that eye-tracking-based concentration training methods usually use visual tracking cards to train children's eye tracking ability and concentration, but such cards cannot dynamically adjust the training difficulty, so the training effect is limited.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a concentration training method based on vision tracking according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of a concentration training device based on visual tracking according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The invention discloses a concentration training method, device, terminal and storage medium based on visual tracking. In order to make the purposes, technical solutions and effects of the invention clearer and more explicit, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In view of the above-mentioned drawbacks of the prior art, the present invention provides a concentration training method based on visual tracking, the method comprising: playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking duration generated by the user based on the virtual obstacle; extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking duration, wherein different virtual throwing objects have different effects on the existence duration of the virtual obstacle; acquiring throwing selection information of the user, and updating the existence duration according to the throwing selection information and the extracted virtual throwing object; judging whether the updated existence duration has reached zero, and if not, randomly transforming the position of the virtual obstacle and continuing to execute the step of acquiring the visual tracking duration of the user until the existence duration reaches zero; and acquiring the overall duration consumed in zeroing the existence duration, and determining the shape and size of the new virtual obstacle to be generated next according to the overall duration. This solves the problem in the prior art that eye-tracking-based concentration training methods usually use visual tracking cards to train children's eye tracking ability and concentration, but such cards cannot dynamically adjust the training difficulty, so the training effect is limited.
As depicted in fig. 1, the method comprises:
step S100, playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking duration generated by the user based on the virtual obstacle;
step S200, extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking duration, wherein different virtual throwing objects have different effects on the existence duration of the virtual obstacle;
step S300, acquiring throwing selection information of the user, and updating the existence duration according to the throwing selection information and the extracted virtual throwing object;
step S400, judging whether the updated existence duration has reached zero, and if not, randomly transforming the position of the virtual obstacle and continuing to execute the step of acquiring the visual tracking duration of the user until the existence duration reaches zero;
step S500, acquiring the overall duration consumed in zeroing the existence duration, and determining the shape and size of the new virtual obstacle to be generated next according to the overall duration.
Specifically, the user first watches a video, which shows a virtual character corresponding to the user in a specific video scene. While playing the video, the terminal generates virtual obstacles in the video scene at a preset time interval or at random. When a virtual obstacle is generated, the terminal measures the time the user's viewpoint takes to track to the virtual obstacle, which gives the visual tracking duration. The user's concentration training task is to eliminate the virtual obstacle, i.e. to reduce its existence duration to zero. Because the visual tracking duration reflects the user's reaction speed, the terminal extracts a virtual throwing object from the virtual throwing object pool according to a preset reward and punishment scheme based on the visual tracking duration. A virtual throwing object may be an attack object, such as a hammer or a stone, which reduces the existence duration of the virtual obstacle; or it may be a healing object, such as a band-aid or a heart, which increases the existence duration of the virtual obstacle. The user therefore has to judge correctly whether to throw the extracted virtual throwing object. After receiving the user's selection, the terminal updates the existence duration of the virtual obstacle according to the selection information. If the updated existence duration is not zero, the current virtual obstacle has not been eliminated and training continues: the position of the virtual obstacle is randomly transformed and the above steps are repeated until the existence duration of the current virtual obstacle reaches zero, i.e. the obstacle is eliminated. The terminal can then evaluate the user's current concentration level and reaction speed from the overall duration consumed in zeroing the existence duration: the shorter the overall duration, the higher the user's current concentration and the faster the reaction. The training difficulty of the concentration training is then dynamically adjusted according to the overall duration, mainly by adjusting the shape and size of the virtual obstacle: the larger the virtual obstacle, the lower the training difficulty, and vice versa. Dynamically adjusting the training difficulty during concentration training effectively improves the user's training effect.
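For illustration, the following is a minimal Python sketch of the round described above (steps S100 to S500). The class and function names (Obstacle, Projectile, run_round), the pool contents, the probability formula and the position-jump formula are assumptions introduced for demonstration only; the patent does not prescribe concrete values, and the tracking-time measurement and the user's throw choice are supplied as callables.

```python
import math
import random
import time
from dataclasses import dataclass

# Illustrative sketch only: all names and numeric values below are assumptions
# made for demonstration, not taken from the patent text.

@dataclass
class Obstacle:
    existence_duration: float        # remaining "life" of the obstacle, in seconds
    position: tuple                  # (x, y) position on the screen, in pixels

@dataclass
class Projectile:
    name: str
    duration_effect: float           # negative = attack object, positive = healing object

ATTACK_POOL = [Projectile("hammer", -3.0), Projectile("stone", -2.0)]
HEALING_POOL = [Projectile("band-aid", +2.0), Projectile("heart", +3.0)]

def run_round(obstacle, get_tracking_time, get_throw_choice):
    """One training round: the round ends when the obstacle's duration hits zero."""
    start = time.monotonic()
    while obstacle.existence_duration > 0:
        tracking_time = get_tracking_time(obstacle)            # step S100
        # Shorter tracking time -> higher chance of drawing from the attack pool (S200).
        p_attack = max(0.1, min(0.9, 1.0 - tracking_time / 5.0))
        pool = ATTACK_POOL if random.random() < p_attack else HEALING_POOL
        projectile = random.choice(pool)

        if get_throw_choice(projectile):                       # step S300
            obstacle.existence_duration = max(
                0.0, obstacle.existence_duration + projectile.duration_effect)

        if obstacle.existence_duration > 0:                    # step S400
            radius = max(50.0, 400.0 / (tracking_time + 0.5))  # inverse relation
            angle = random.uniform(0.0, 2.0 * math.pi)
            x, y = obstacle.position
            obstacle.position = (x + radius * math.cos(angle),
                                 y + radius * math.sin(angle))
    return time.monotonic() - start                            # overall duration for S500

if __name__ == "__main__":
    demo = Obstacle(existence_duration=6.0, position=(960.0, 540.0))
    # Stub inputs: a fixed 2-second tracking time and a user who always throws.
    overall = run_round(demo, get_tracking_time=lambda o: 2.0,
                        get_throw_choice=lambda p: True)
    print(f"round finished in {overall:.3f} s")
```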
In one implementation manner, when the screen area is smaller than or equal to the preset area value, the acquiring the visual tracking duration of the user specifically includes:
step S101, acquiring a facial image of the user, and determining the pupil position of the user according to the facial image;
step S102, determining the viewpoint position corresponding to the user on a screen according to the pupil position;
and step S103, taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
For a small screen, the user's tracking range is small, so the viewpoint position can be determined by detecting the user's pupil position. Specifically, a photographing device is provided on the screen in advance, and the user's facial image is acquired through it. The facial image is analyzed and the user's pupil position is determined by gray level detection. Since the relative positional relationship between the user and the screen is known, the pupil position can be converted into the user's viewpoint position on the screen. When the viewpoint position falls within the region where the virtual obstacle is located, it is judged that the user's viewpoint has reached the virtual obstacle, and the time spent in this process is calculated to obtain the visual tracking duration.
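As an illustration only, the following sketch assumes that "gray level detection" amounts to locating the darkest pixel of a grayscale eye patch and that a simple linear mapping converts the pupil's relative position into screen coordinates; a real implementation would typically use a calibrated mapping.

```python
import numpy as np

# A minimal sketch, assuming the pupil is the darkest region of a grayscale eye
# patch and that the relative pupil position maps linearly to screen pixels.
# Neither assumption is prescribed by the patent.

def pupil_position(eye_gray: np.ndarray):
    """Return the (row, col) of the darkest pixel in a grayscale eye patch."""
    return np.unravel_index(np.argmin(eye_gray), eye_gray.shape)

def viewpoint_on_screen(pupil_rc, eye_shape, screen_wh):
    """Map the pupil's relative position inside the eye patch to screen pixels."""
    row, col = pupil_rc
    patch_h, patch_w = eye_shape
    screen_w, screen_h = screen_wh
    return (col / patch_w * screen_w, row / patch_h * screen_h)

# Synthetic example: a 40x60 eye patch whose darkest pixel sits at (18, 30).
eye = np.full((40, 60), 200, dtype=np.uint8)
eye[18, 30] = 10
print(viewpoint_on_screen(pupil_position(eye), eye.shape, (1920, 1080)))
```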
In another implementation manner, when the screen area is greater than the preset area value, the acquiring the visual tracking duration of the user specifically includes:
step S104, acquiring the rotation direction and the rotation angle of the head of the user;
step S105, determining the viewpoint position corresponding to the user on the screen according to the rotation direction and the rotation angle;
and step S106, taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
For a large screen, the user's tracking range is large, so the viewpoint position needs to be determined from the movement of the user's head. Specifically, the user wears a dedicated detection device on the head in advance, which acquires the rotation direction and rotation angle of the head. The user's face orientation is then determined from the rotation direction and rotation angle, and the user's viewpoint position on the screen is determined from the face orientation and the predetermined relative positional relationship between the user and the screen. When the viewpoint position falls within the region where the virtual obstacle is located, it is judged that the user's viewpoint has reached the virtual obstacle, and the time spent in this process is calculated to obtain the visual tracking duration.
In this embodiment, different viewpoint positioning methods are assigned to screens of different sizes, which effectively improves the accuracy of viewpoint positioning and thus yields a reliable visual tracking duration.
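For the large-screen case, a rough sketch of converting head rotation into a viewpoint is given below; the flat-screen geometry, the yaw/pitch parameterisation in degrees and the fixed viewer distance are assumptions made for illustration only.

```python
import math

# Rough sketch under simple assumptions: a flat screen facing the user, head
# orientation given as yaw/pitch in degrees, and a known viewer-to-screen
# distance. None of these parameter choices are specified by the patent.

def viewpoint_from_head(yaw_deg, pitch_deg, distance_mm, screen_center=(0.0, 0.0)):
    """Intersect the facing direction with a screen `distance_mm` in front of
    the user; returns (x, y) offsets from the screen centre, in millimetres."""
    x = screen_center[0] + distance_mm * math.tan(math.radians(yaw_deg))
    y = screen_center[1] + distance_mm * math.tan(math.radians(pitch_deg))
    return x, y

# Example: head turned 10 degrees to the right and 5 degrees up, 1.5 m away.
print(viewpoint_from_head(10.0, 5.0, 1500.0))
```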
In one implementation, the virtual throwing object pool includes an attack object pool and a healing object pool, and step S200 specifically includes:
step S201, determining extraction probabilities corresponding respectively to the attack object pool and the healing object pool according to the visual tracking duration, wherein the attack object pool includes a plurality of attack objects for reducing the existence duration, and the healing object pool includes a plurality of healing objects for increasing the existence duration;
step S202, randomly extracting the virtual throwing object from the virtual throwing object pool according to the extraction probabilities corresponding respectively to the attack object pool and the healing object pool.
Because the visual tracking duration reflects the user's reaction speed, this embodiment sets a reward and punishment scheme for the visual tracking duration in advance. Specifically, the shorter the visual tracking duration, the faster the user's reaction, the greater the extraction probability of the attack object pool and the smaller the extraction probability of the healing object pool; the longer the visual tracking duration, the slower the user's reaction, the smaller the extraction probability of the attack object pool and the greater the extraction probability of the healing object pool. Since the user's task is to eliminate the virtual obstacle, only attack objects reduce the existence duration of the virtual obstacle, while healing objects instead increase it. Adjusting the extraction probabilities of the two pools through the visual tracking duration is therefore equivalent to rewarding or punishing the user according to reaction speed, which integrates reaction training into the concentration training and yields a better training effect.
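One possible formulation of this reward-and-punishment mapping is sketched below; the linear interpolation and the 1-second and 5-second thresholds are assumed values, since the patent only requires that a shorter tracking duration yields a larger attack-pool probability.

```python
import random

# Illustrative probability mapping: linear interpolation between an assumed
# "fast" threshold of 1 s and a "slow" threshold of 5 s.

def pool_probabilities(tracking_seconds, fast=1.0, slow=5.0):
    """Return (p_attack, p_healing); shorter tracking times favour attacks."""
    t = min(max(tracking_seconds, fast), slow)
    p_attack = 1.0 - (t - fast) / (slow - fast)
    return p_attack, 1.0 - p_attack

def draw_projectile(tracking_seconds, attack_pool, healing_pool):
    """Pick a pool according to the probabilities, then draw uniformly from it."""
    p_attack, _ = pool_probabilities(tracking_seconds)
    pool = attack_pool if random.random() < p_attack else healing_pool
    return random.choice(pool)

print(pool_probabilities(1.5))   # fast reaction: mostly attack objects
print(pool_probabilities(4.5))   # slow reaction: mostly healing objects
print(draw_projectile(2.0, ["hammer", "stone"], ["band-aid", "heart"]))
```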
In one implementation, the updating the existing duration according to the throwing selection information and the extracted virtual throwing object specifically includes:
step S301, when the throwing selection information is throwing, updating the existing time length according to the extracted virtual throwing object;
and step S302, when the throwing selection information is not throwing, the current time duration of the virtual obstacle is maintained.
Specifically, after the terminal receives the selection information input by the user, if the selection information is "throw", the existence duration (life value) of the virtual obstacle is updated based on the currently extracted virtual throwing object; if the selection information is "no throw", the currently extracted virtual throwing object is discarded and the current existence duration of the virtual obstacle is maintained.
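This branch can be expressed compactly as follows; representing the projectile's effect as a signed duration_effect value is an assumption made for the sketch, not a term from the patent.

```python
# Compact sketch of step S300, assuming duration_effect is negative for attack
# objects and positive for healing objects.

def update_existence_duration(existence_duration, duration_effect, user_throws):
    """Apply the projectile only if the user chooses to throw it."""
    if not user_throws:                     # "no throw": projectile is discarded
        return existence_duration
    return max(0.0, existence_duration + duration_effect)

print(update_existence_duration(10.0, -3.0, user_throws=True))    # attack thrown -> 7.0
print(update_existence_duration(10.0, +2.0, user_throws=False))   # healing refused -> 10.0
```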
In one implementation, the randomly transforming the position of the virtual obstacle includes:
step S401, determining a transformation distance according to the visual tracking time length, wherein the visual tracking time length and the transformation distance are in inverse proportion;
step S402, randomly transforming the position of the virtual obstacle on the circumference taking the current position of the virtual obstacle as the center and the transformation distance as the radius.
Specifically, the longer the visual tracking duration, the slower the user's current reaction; to reduce the training difficulty, a position near the original appearance position of the virtual obstacle is randomly selected as the new appearance position. The shorter the visual tracking duration, the faster the user's current reaction; to increase the training difficulty, a position farther from the original appearance position of the virtual obstacle is randomly selected as the new appearance position. By dynamically adjusting the random appearance area of the virtual obstacle through the visual tracking duration, a poor training effect caused by an unsuitable training difficulty can be avoided.
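A minimal sketch of this transformation, assuming a reciprocal mapping between tracking time and radius with assumed tuning constants, is:

```python
import math
import random

# Minimal sketch: a random point on a circle around the current position, with
# a radius that shrinks as the visual tracking time grows (r = k / t). The
# constants k, r_min and r_max are assumed tuning values, not from the patent.

def transform_position(current_xy, tracking_seconds, k=600.0, r_min=50.0, r_max=800.0):
    """Return a random point on the circle of radius k / tracking_seconds."""
    radius = min(max(k / max(tracking_seconds, 1e-3), r_min), r_max)
    angle = random.uniform(0.0, 2.0 * math.pi)
    x, y = current_xy
    return x + radius * math.cos(angle), y + radius * math.sin(angle)

print(transform_position((960, 540), tracking_seconds=0.8))  # fast user: far jump
print(transform_position((960, 540), tracking_seconds=4.0))  # slow user: small jump
```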
In one implementation, the overall duration is proportional to the size of the shape of the new virtual obstacle that is generated next.
Specifically, the overall duration objectively reflects the user's current degree of concentration, so the shape and size of the next virtual obstacle are dynamically adjusted according to the overall duration. The longer the overall duration, the lower the user's current concentration, and the larger the next virtual obstacle is made, so that the training difficulty is reduced; the shorter the overall duration, the higher the user's current concentration, and the smaller the next virtual obstacle is made, so that the training difficulty is increased. This avoids the poor training effect caused by an unsuitable training difficulty.
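A simple way to realise this relation is sketched below; the base size, scale factor and clamping bounds are assumed values for illustration only.

```python
# Sketch of the difficulty adjustment: the next obstacle's size grows with the
# overall duration of the previous round (longer round -> larger, easier target).

def next_obstacle_size(overall_seconds, base_px=40.0, scale=4.0,
                       min_px=30.0, max_px=300.0):
    """Longer overall duration -> lower concentration -> larger (easier) obstacle."""
    return min(max(base_px + scale * overall_seconds, min_px), max_px)

print(next_obstacle_size(12.0))   # user struggled: next obstacle is larger (88 px)
print(next_obstacle_size(4.0))    # user focused: next obstacle is smaller (56 px)
```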
Based on the above embodiment, the present invention further provides a concentration training device based on visual tracking, as shown in fig. 2, where the device includes:
the playing module 01 is used for playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring the visual tracking duration generated by a user based on the virtual obstacle;
the extraction module 02 is configured to extract virtual throws from a preset virtual throws pool according to the visual tracking duration, where different virtual throws have different effects on the existence duration of the virtual obstacle respectively;
the updating module 03 is configured to obtain throwing selection information of the user, and update the duration of existence according to the throwing selection information and the extracted virtual throwing object;
the conversion module 04 is configured to determine whether the updated duration returns to zero, if not, randomly convert the position of the virtual obstacle, and continuously perform the step of obtaining the visual tracking duration of the user until the duration returns to zero;
the determining module 05 is configured to obtain an overall duration consumed for zeroing the existing duration, and determine a shape and a size of the virtual obstacle that is generated next according to the overall duration.
In one embodiment, when the screen area is less than or equal to the preset area value, the playing module 01 includes:
a pupil positioning unit, configured to acquire a face image of the user, and determine a pupil position of the user according to the face image;
the pupil analysis unit is used for determining the viewpoint position corresponding to the user on the screen according to the pupil position;
and the tracking timing unit is used for taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
In one embodiment, when the screen area is greater than the preset area value, the playing module 01 includes:
the head detection unit is used for acquiring the rotation direction and the rotation angle of the head of the user;
the head analysis unit is used for determining the viewpoint position corresponding to the user on the screen according to the rotation direction and the rotation angle;
and the tracking timing unit is used for taking the time spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time.
In one embodiment, the virtual throwing object pool includes an attack object pool and a healing object pool, and the extraction module 02 includes:
the probability distribution unit is used for determining the extraction probabilities corresponding respectively to the attack object pool and the healing object pool according to the visual tracking duration, wherein the attack object pool comprises a plurality of attack objects for reducing the existence duration, and the healing object pool comprises a plurality of healing objects for increasing the existence duration;
and the object extraction unit is used for randomly extracting the virtual throwing object from the virtual throwing object pool according to the extraction probabilities corresponding respectively to the attack object pool and the healing object pool.
In one embodiment, the updating module 03 includes:
the throwing unit is used for updating the existing time length according to the extracted virtual throwing object when the throwing selection information is throwing;
and the throwing-free unit is used for keeping the current time duration of the virtual obstacle when the throwing selection information is throwing-free.
In one embodiment, the transformation module 04 includes:
a distance determining unit, configured to determine a transformation distance according to the visual tracking duration, where the visual tracking duration is inversely related to the transformation distance;
and the position transformation unit is used for randomly transforming the position of the virtual obstacle on the circumference taking the current position of the virtual obstacle as the center of a circle and taking the transformation distance as the radius.
In one embodiment, the overall duration is proportional to the size of the shape of the new virtual obstacle to be generated next.
Based on the above embodiment, the present invention also provides a terminal, and a functional block diagram thereof may be shown in fig. 3. The terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein the processor of the terminal is adapted to provide computing and control capabilities. The memory of the terminal includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the terminal is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a vision tracking based concentration training method. The display screen of the terminal may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the functional block diagram shown in fig. 3 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the terminal to which the present inventive arrangements may be applied, and that a particular terminal may include more or less components than those shown, or may combine some of the components, or have a different arrangement of components.
In one implementation, the memory of the terminal has stored therein one or more programs, and the execution of the one or more programs by one or more processors includes instructions for performing a vision tracking based concentration training method.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
In summary, the invention discloses a concentration training method, device, terminal and storage medium based on visual tracking, wherein the method comprises: playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking duration generated by the user based on the virtual obstacle; extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking duration, wherein different virtual throwing objects have different effects on the existence duration of the virtual obstacle; acquiring throwing selection information of the user, and updating the existence duration according to the throwing selection information and the extracted virtual throwing object; judging whether the updated existence duration has reached zero, and if not, randomly transforming the position of the virtual obstacle and continuing to execute the step of acquiring the visual tracking duration of the user until the existence duration reaches zero; and acquiring the overall duration consumed in zeroing the existence duration, and determining the shape and size of the new virtual obstacle to be generated next according to the overall duration. This solves the problem in the prior art that eye-tracking-based concentration training methods usually use visual tracking cards to train children's eye tracking ability and concentration, but such cards cannot dynamically adjust the training difficulty, so the training effect is limited.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.
Claims (10)
1. A method of concentration training based on visual tracking, the method comprising:
playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking time length generated by a user based on the virtual obstacle;
extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking time length, wherein different virtual throwing objects respectively have different influences on the existence time length of the virtual obstacle, the virtual throwing object is an attack object or a healing object, the attack object is used for reducing the existence time length of the virtual obstacle, and the healing object is used for increasing the existence time length of the virtual obstacle;
acquiring throwing selection information of the user, and updating the existing time length according to the throwing selection information and the extracted virtual throwing object;
judging whether the updated time length is zero, if not, randomly transforming the position of the virtual obstacle, and continuously executing the step of obtaining the visual tracking time length of the user until the time length is zero;
and acquiring the whole duration consumed by zeroing the existing duration, and determining the shape and the size of the new virtual obstacle generated next time according to the whole duration.
2. The vision tracking-based concentration training method according to claim 1, wherein the acquiring the vision tracking duration of the user when the screen area is less than or equal to a preset area value includes:
acquiring a facial image of the user, and determining the pupil position of the user according to the facial image;
determining the viewpoint position corresponding to the user on a screen according to the pupil position;
and taking the time length spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time length.
3. The vision tracking-based concentration training method according to claim 1, wherein the acquiring the vision tracking duration of the user when the screen area is greater than a preset area value includes:
acquiring the rotation direction and the rotation angle of the head of the user;
according to the rotation direction and the rotation angle, determining the viewpoint position corresponding to the user on the screen;
and taking the time length spent by the viewpoint position reaching the position of the virtual obstacle as the visual tracking time length.
4. The vision tracking-based concentration training method of claim 1, wherein the virtual throwing object pool includes an attack object pool and a healing object pool, and wherein the extracting of the virtual throwing object from a preset virtual throwing object pool according to the visual tracking time length includes:
determining extraction probabilities respectively corresponding to the attack object pool and the healing object pool according to the visual tracking time length, wherein the attack object pool comprises a plurality of attack objects for reducing the existence time length, and the healing object pool comprises a plurality of healing objects for increasing the existence time length;
and randomly extracting the virtual throwing object from the virtual throwing object pool according to the extraction probabilities respectively corresponding to the attack object pool and the healing object pool.
5. The vision tracking-based concentration training method according to claim 1, wherein the updating the presence duration according to the throwing selection information and the extracted virtual throwing object includes:
when the throwing selection information is throwing, updating the existing time length according to the extracted virtual throwing object;
and when the throwing selection information is not throwing, maintaining the current time duration of the virtual obstacle.
6. The vision tracking-based concentration training method of claim 1 wherein the randomly transforming the position of the virtual obstacle comprises:
determining a transformation distance according to the visual tracking time length, wherein the visual tracking time length is in inverse relation with the transformation distance;
randomly transforming the position of the virtual obstacle on the circumference taking the current position of the virtual obstacle as the circle center and the transformation distance as the radius.
7. The vision tracking-based concentration training method of claim 1 wherein the overall duration is proportional to the size of the shape of the new virtual obstacle that is generated next.
8. A concentration training device based on visual tracking, the device comprising:
the playing module is used for playing a video scene for concentration training, generating a virtual obstacle in the video scene, and acquiring a visual tracking time length generated by a user based on the virtual obstacle;
the extraction module is used for extracting a virtual throwing object from a preset virtual throwing object pool according to the visual tracking time length, wherein different virtual throwing objects respectively have different influences on the existence time length of the virtual obstacle, the virtual throwing object is an attack object or a healing object, the attack object is used for reducing the existence time length of the virtual obstacle, and the healing object is used for increasing the existence time length of the virtual obstacle;
the updating module is used for acquiring throwing selection information of the user and updating the existing time length according to the throwing selection information and the extracted virtual throwing object;
the transformation module is used for judging whether the updated time length is zero or not, if not, randomly transforming the position of the virtual obstacle, and continuously executing the step of obtaining the visual tracking time length of the user until the time length is zero;
the determining module is used for obtaining the whole duration consumed by zeroing the existing duration and determining the shape and the size of the new virtual obstacle generated next time according to the whole duration.
9. A terminal for performing concentration training, the terminal comprising a memory and one or more processors; the memory stores more than one program; the program comprising instructions for performing the vision tracking-based concentration training method of any one of claims 1-7; the processor is configured to execute the program.
10. A computer readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the steps of the vision tracking based concentration training method of any of the preceding claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310253392.1A CN115953930B (en) | 2023-03-16 | 2023-03-16 | Concentration training method, device, terminal and storage medium based on vision tracking |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310253392.1A CN115953930B (en) | 2023-03-16 | 2023-03-16 | Concentration training method, device, terminal and storage medium based on vision tracking |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115953930A CN115953930A (en) | 2023-04-11 |
CN115953930B (en) | 2023-06-06
Family
ID=85896270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310253392.1A Active CN115953930B (en) | 2023-03-16 | 2023-03-16 | Concentration training method, device, terminal and storage medium based on vision tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115953930B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116503696B (en) * | 2023-06-29 | 2023-10-17 | 浙江强脑科技有限公司 | Concentration training method and device based on virtual defense mechanism and terminal equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060287137A1 (en) * | 2005-05-20 | 2006-12-21 | Jeffrey Chu | Virtual Batting Range |
US20080102991A1 (en) * | 2006-10-27 | 2008-05-01 | Thomas Clark Hawkins | Athlete Reaction Training System |
US9873031B2 (en) * | 2012-06-20 | 2018-01-23 | Cellpoint Systems, Inc. | Smart target system for combat fitness and competition training |
CN111282275B (en) * | 2020-03-06 | 2022-03-11 | 腾讯科技(深圳)有限公司 | Method, device, equipment and storage medium for displaying collision traces in virtual scene |
CA3182568A1 (en) * | 2020-05-08 | 2021-11-11 | Sumitomo Pharma Co., Ltd. | Three-dimensional cognitive ability evaluation system |
CN111672108A (en) * | 2020-05-29 | 2020-09-18 | 腾讯科技(深圳)有限公司 | Virtual object display method, device, terminal and storage medium |
CN115517679A (en) * | 2022-09-13 | 2022-12-27 | 浙江强脑科技有限公司 | Concentration assessment method, device, equipment and storage medium |
- 2023-03-16: CN application CN202310253392.1A granted as patent CN115953930B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN115953930A (en) | 2023-04-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||