CN116650789B - Concentration training method based on touch data and gyroscope data - Google Patents

Concentration training method based on touch data and gyroscope data

Info

Publication number
CN116650789B
CN116650789B
Authority
CN
China
Prior art keywords
virtual character
determining
scene
training
scene element
Prior art date
Legal status
Active
Application number
CN202310966149.4A
Other languages
Chinese (zh)
Other versions
CN116650789A (en)
Inventor
韩璧丞 (Han Bicheng)
杨锦陈 (Yang Jinchen)
张蕙琳 (Zhang Huilin)
Current Assignee
Shenzhen Mental Flow Technology Co Ltd
Original Assignee
Shenzhen Mental Flow Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mental Flow Technology Co Ltd
Priority to CN202310966149.4A
Publication of CN116650789A
Application granted
Publication of CN116650789B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48: Other medical applications
    • A61B 5/4848: Monitoring or testing the effects of treatment, e.g. of medication
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M: DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00: Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0414: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M: DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00: Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 2021/0005: Other devices or methods to cause a change in the state of consciousness by the use of a particular sense, or stimulus
    • A61M 2021/0044: Other devices or methods to cause a change in the state of consciousness by the use of the sight sense
    • A61M 2021/005: Other devices or methods to cause a change in the state of consciousness by the use of the sight sense, using images, e.g. video

Abstract

The invention discloses a concentration training method based on touch data and gyroscope data, and relates to the field of human-computer interaction. A playback terminal plays a video to the user, and the user controls a virtual character in the video by touching and tilting the terminal, realizing an interactive form of concentration training and making the training more engaging. By analyzing the virtual character's total moving route and the scene elements covered by that route, the method further evaluates the user's training effect and displays it quantitatively as a score; this effect-feedback step further improves the user's training experience. The method thus solves the lack of engagement of prior-art concentration training.

Description

Concentration training method based on touch data and gyroscope data
Technical Field
The invention relates to the field of human-computer interaction, and in particular to a concentration training method based on touch data and gyroscope data.
Background
Concentration, also known as attention, refers to the psychological state of a person focusing on a task or activity; good concentration helps improve efficiency in work and study. With the rapid development of society, people must take in large amounts of information and knowledge, which places high demands on their concentration and responsiveness. Existing concentration training mainly takes the form of exercises, such as scratch-out (cancellation) training, in which the trainee crosses out specified targets according to the test requirements. Such exercises are relatively fixed and monotonous, and lack engagement.
Accordingly, there is a need for improvement and development in the art.
Disclosure of Invention
The invention aims to overcome the above shortcomings of the prior art by providing a concentration training method based on touch data and gyroscope data, so as to solve the problem that existing concentration training methods lack engagement.
The technical solution adopted by the invention to solve this problem is as follows:
In a first aspect, an embodiment of the present invention provides a concentration training method implemented based on touch data and gyroscope data, the method comprising:
acquiring touch data and gyroscope data corresponding to a playback device, wherein the video picture played by the playback device comprises a plurality of scene elements and a virtual character corresponding to a target user, and the scene elements and the virtual character each have a preset moving direction;
adjusting the moving route of the virtual character according to the touch data and the gyroscope data;
when the position of the virtual character overlaps with the position of any scene element, updating the position of the virtual character according to that scene element, and after updating, continuing to acquire the touch data and gyroscope data corresponding to the playback device until a preset training time is reached;
acquiring the total moving route of the virtual character over the training period, and determining a number of target scene elements according to the total moving route, wherein the target scene elements are the scene elements covered by the total moving route;
and determining a concentration training score corresponding to the target user according to the total moving route and each target scene element.
In one embodiment, the adjusting the movement route of the virtual character according to the touch data and the gyroscope data includes:
determining touch pressure according to the touch data, and determining the advancing speed of the virtual character according to the touch pressure;
determining the inclination direction and the inclination angle of the playing device according to the gyroscope data, determining the vertical movement direction of the virtual character according to the inclination direction, and determining the vertical movement speed of the virtual character according to the inclination angle;
and determining the moving route according to the advancing speed, the vertical moving direction and the vertical moving speed.
In one embodiment, the category of the scene element includes an obstacle, and the updating the position of the virtual character according to the scene element includes:
when the scene element is the obstacle, acquiring a shape parameter of the scene element and the color difference degree of the scene element and the picture background;
determining a retreating distance according to the shape parameter and the color difference degree;
and updating the position of the virtual character according to the backward distance.
In one embodiment, the category of the scene element includes a bonus, the updating the position of the virtual character based on the scene element includes:
when the scene element is the bonus, acquiring category information of the scene element;
determining a forward distance according to the category information;
and updating the position of the virtual character according to the advancing distance.
In one embodiment, the category of the scene element includes a bonus, the updating the position of the virtual character based on the scene element includes:
when the scene element is the bonus, acquiring category information of the scene element and a distance value between the scene element and the nearest barrier;
determining a forward distance according to the category information and the distance value;
and updating the position of the virtual character according to the advancing distance.
In one embodiment, the determining the concentration training score corresponding to the target user according to the total moving route and each target scene element includes:
determining element category proportions and element total numbers according to the target scene elements;
determining a weight coefficient according to the element category proportion and the element total number;
determining the total advancing distance corresponding to the virtual character according to the total moving route;
determining an initial concentration training score corresponding to the target user according to the total advancing distance;
and determining the concentration training score according to the weight coefficient and the initial concentration training score.
In one embodiment, the categories of the scene elements include obstacles and rewards, the element category ratio is a number ratio of the rewards to the obstacles, and the determining the weight coefficient according to the element category ratio and the total number of elements includes:
obtaining the product of the element category proportion and the element total quantity;
and determining the weight coefficient according to the product, wherein the product is in direct proportion to the weight coefficient.
In a second aspect, an embodiment of the present invention further provides a concentration training device implemented based on touch data and gyroscope data, the device comprising:
an acquisition module, configured to acquire touch data and gyroscope data corresponding to a playback device, wherein the video picture played by the playback device comprises a plurality of scene elements and a virtual character corresponding to a target user, and the scene elements and the virtual character each have a preset moving direction;
an adjustment module, configured to adjust the moving route of the virtual character according to the touch data and the gyroscope data;
an updating module, configured to update the position of the virtual character according to a scene element when the position of the virtual character overlaps with the position of that scene element, and after updating, to continue acquiring the touch data and gyroscope data corresponding to the playback device until a preset training time is reached;
a recording module, configured to acquire the total moving route of the virtual character over the training period and determine a number of target scene elements according to the total moving route, wherein the target scene elements are the scene elements covered by the total moving route;
and a scoring module, configured to determine a concentration training score corresponding to the target user according to the total moving route and each target scene element.
In a third aspect, an embodiment of the present invention further provides a terminal, the terminal comprising a memory and one or more processors; the memory stores one or more programs, the programs comprising instructions for performing any of the concentration training methods implemented based on touch data and gyroscope data described above; the processor is configured to execute the programs.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium having a plurality of instructions stored thereon, where the instructions are adapted to be loaded and executed by a processor to implement the steps of any of the above-described concentration training methods implemented based on touch data and gyroscope data.
The beneficial effects of the invention are as follows: according to the embodiments of the invention, a playback terminal plays a video to the user, and the user controls a virtual character in the video by touching and tilting the terminal, realizing an interactive form of concentration training and making the training more engaging. By analyzing the virtual character's total moving route and the scene elements covered by that route, the method further evaluates the user's training effect and displays it quantitatively as a score; this effect-feedback step further improves the user's training experience. The method thus solves the lack of engagement of prior-art concentration training.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a flow chart of a concentration training method implemented based on touch data and gyroscope data according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a playback device according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a concentration training device implemented based on touch data and gyroscope data according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The invention discloses a concentration training method based on touch data and gyroscope data. To make the objectives, technical solutions, and effects of the invention clearer and more definite, the invention is described in further detail below. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In view of the above-mentioned drawbacks of the prior art, the present invention provides a concentration training method implemented based on touch data and gyroscope data, as shown in fig. 1, the method includes:
step S100, acquiring touch data and gyroscope data corresponding to a playback device, wherein the video picture played by the playback device comprises a plurality of scene elements and a virtual character corresponding to a target user, and the scene elements and the virtual character each have a preset moving direction;
step S200, adjusting the moving route of the virtual character according to the touch data and the gyroscope data;
step S300, when the position of the virtual character overlaps with the position of any scene element, updating the position of the virtual character according to that scene element, and after updating, continuing to acquire the touch data and gyroscope data corresponding to the playback device until a preset training time is reached;
step S400, acquiring the total moving route of the virtual character over the training period, and determining a number of target scene elements according to the total moving route, wherein the target scene elements are the scene elements covered by the total moving route;
step S500, determining a concentration training score corresponding to the target user according to the total moving route and each target scene element.
Specifically, the target user is the user undergoing concentration training, and the method of this embodiment requires a playback device with touch capability, such as a touch-enabled tablet. During training, the playback device plays the video data corresponding to the current training mode; scene elements and a virtual character corresponding to the target user appear in the video picture, and each scene element and the virtual character have preset moving directions at the initial time. For example, obstacles or rewards may appear at different positions in the video picture and move toward the virtual character. The target user controls the virtual character by touching and tilting the playback device, thereby adjusting the positional relationship between the virtual character and each scene element. The scene elements come in several categories, each with a preset effect; once the virtual character overlaps with the position of a scene element, the position of the virtual character is adjusted according to that element's effect. For example, when the virtual character collides with an obstacle, the character is moved backward from its current position; when the character touches a reward, it is moved forward. After the position adjustment, the virtual character continues moving from the adjusted position in the preset moving direction, and the target user continues to control it by touching and tilting the playback device until the training duration is reached and the training ends.
The total moving route of the virtual character during training is recorded, and all scene elements covered by the total moving route are recorded as target scene elements. After training ends, the recorded data is analyzed to evaluate the target user's training effect and calculate the concentration training score. The playback terminal has a touch function and a built-in gyroscope sensor, so the user can control the virtual character in the video by touching and tilting the terminal; this realizes human-computer interaction and makes concentration training more engaging. By further analyzing the total moving route and the scene elements it covers, the method evaluates the training effect, displays it quantitatively as a score, and thereby adds an effect-feedback step that further improves the user's training experience.
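The overall flow of steps S100 to S500 can be sketched as a generic driver loop. The sketch below is an illustrative assumption: the four callables and the timing mechanism are invented stand-ins for the device-specific logic, not an implementation taken from the patent.

```python
import time

def run_training_session(read_sensors, update_route, handle_overlap,
                         score_session, training_seconds):
    """Generic driver for one training session (steps S100-S500).

    The four callables stand in for the device-specific logic: sensor
    acquisition (S100), route adjustment (S200), overlap handling (S300),
    and scoring (S400-S500).
    """
    total_route = []                                    # recorded for S400
    position = (0.0, 0.0)
    deadline = time.monotonic() + training_seconds
    while time.monotonic() < deadline:                  # loop until the preset training time
        touch, gyro = read_sensors()                    # S100: read touch + gyroscope data
        position = update_route(position, touch, gyro)  # S200: adjust the moving route
        position = handle_overlap(position)             # S300: obstacle/reward position update
        total_route.append(position)                    # record the total moving route
    return score_session(total_route)                   # S400-S500: score the recorded route
```

A caller would supply the concrete sensor readers and scoring function for the target playback device.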
In one implementation, the adjusting the movement route of the virtual character according to the touch data and the gyroscope data includes:
step S201, determining touch pressure according to the touch data, and determining the advancing speed of the virtual character according to the touch pressure;
step S202, determining the inclination direction and the inclination angle of the playing device according to the gyroscope data, determining the vertical movement direction of the virtual character according to the inclination direction, and determining the vertical movement speed of the virtual character according to the inclination angle;
step S203, determining the moving route according to the advancing speed, the vertical moving direction and the vertical moving speed.
Specifically, the virtual character has a fixed default moving direction, for example continuously moving forward. As shown in fig. 2, the scene elements appear at random positions in the video scene and move opposite to the virtual character, so the virtual character and a scene element may come to overlap. The target user must concentrate, adjusting the virtual character's moving route by touching and tilting the playback terminal as scene elements keep appearing, and deciding, according to each element's category, whether the character should coincide with its position. The touch data received by the terminal is mainly touch pressure; the terminal adjusts the character's current forward speed according to the touch pressure, with greater pressure producing a faster advance. The terminal also adjusts the character's vertical movement according to the gyroscope data: the vertical movement direction is determined by the tilt direction of the playback terminal, and the vertical movement speed by its tilt angle. For example, to move the character upward, the target user rotates the playback terminal counterclockwise about its horizontal axis; to move it upward faster, the user increases the rotation angle.
Because the target user must coordinate both touch and tilt to adjust the virtual character's moving route, the method more thoroughly tests the user's attention and hand-eye coordination, and thus improves concentration more effectively.
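As a concrete illustration of steps S201 to S203, the sketch below maps touch pressure to forward speed and gyroscope tilt to vertical movement. The linear mappings and the constants K_FORWARD and K_VERTICAL are assumptions chosen for illustration; the patent does not specify the mapping.

```python
K_FORWARD = 2.0    # assumed: forward speed gained per unit of touch pressure
K_VERTICAL = 0.5   # assumed: vertical speed gained per degree of tilt

def movement_velocity(touch_pressure, tilt_angle_deg, tilt_direction):
    """Return (forward_speed, vertical_speed) for the virtual character.

    Per the example in the description, a counterclockwise tilt about the
    horizontal axis moves the character up; clockwise moves it down.
    """
    forward_speed = K_FORWARD * touch_pressure  # greater pressure -> faster advance
    vertical_sign = 1 if tilt_direction == "counterclockwise" else -1
    vertical_speed = vertical_sign * K_VERTICAL * tilt_angle_deg
    return forward_speed, vertical_speed

def step_position(pos, touch_pressure, tilt_angle_deg, tilt_direction, dt=1.0):
    """Advance the character's (x, y) position by one time step of length dt."""
    vx, vy = movement_velocity(touch_pressure, tilt_angle_deg, tilt_direction)
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```

Integrating these velocities over successive time steps yields the moving route of step S203.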
In one implementation, the category of the scene element includes an obstacle, and the updating the position of the virtual character according to the scene element includes:
step S301 (a), when the scene element is the obstacle, acquiring a shape parameter of the scene element and a color difference degree of the scene element and a picture background;
step S302 (a), determining a retreating distance according to the shape parameter and the color difference degree;
step S303 (a), updating the position of the virtual character according to the backward distance.
Specifically, the scene element categories in this embodiment include obstacles, whose role, as the name implies, is to block the virtual character's progress: when the target user fails to steer the virtual character around an obstacle, the system generates a corresponding penalty scenario. Since the total moving route affects the value of the concentration training score, the penalty in this embodiment is to move the virtual character backward. Obstacles come in multiple types that differ in shape parameters and/or colors. The larger an obstacle's shape, the easier it is for the target user to notice and the easier to avoid; likewise, the greater the color difference between the obstacle and the picture background, the easier it is to notice and avoid. For each obstacle, this embodiment therefore combines its shape parameter and color difference degree to determine the difficulty of avoiding it, and from that determines the virtual character's retreat distance. Grading the penalty scenarios of different obstacles by difficulty in this way enriches the training and supports accurate calculation of the target user's concentration training score.
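The obstacle penalty of steps S301(a) to S303(a) might look like the sketch below. The patent does not give the formula, so the normalization, the BASE_RETREAT constant, and the choice to penalize collisions with highly visible obstacles more heavily are all illustrative assumptions.

```python
BASE_RETREAT = 10.0  # assumed base penalty distance

def retreat_distance(shape_size, color_difference):
    """Combine shape and color contrast into a retreat distance.

    shape_size and color_difference are assumed normalised to [0, 1].
    A larger, higher-contrast obstacle is easier to notice, so colliding
    with one is penalised more heavily (an assumption, not stated in the
    patent).
    """
    visibility = (shape_size + color_difference) / 2.0
    return BASE_RETREAT * (1.0 + visibility)

def apply_obstacle_penalty(x_position, shape_size, color_difference):
    """Move the virtual character backward along its travel axis (S303(a))."""
    return x_position - retreat_distance(shape_size, color_difference)
```

Any monotone combination of the two visibility cues would serve equally well here.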
In one implementation, the category of the scene element includes a bonus, the updating the position of the virtual character based on the scene element includes:
step S301 (b), when the scene element is the bonus, obtaining the category information of the scene element;
step S302 (b), determining a forward distance according to the category information;
step S303 (b), updating the position of the virtual character according to the advancing distance.
Specifically, the scene element categories in this embodiment further include rewards, whose role, as the name implies, is to advance the virtual character: when the target user successfully steers the virtual character into contact with a reward, the system generates a corresponding reward scenario. Since the total moving route affects the value of the concentration training score, the reward in this embodiment is to move the virtual character forward. Rewards come in multiple types that differ in prize level; for each reward, this embodiment determines the virtual character's forward distance according to the reward's category. Grading the reward scenarios of different rewards in this way enriches the training and supports accurate calculation of the target user's concentration training score.
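Steps S301(b) to S303(b) amount to a category-to-distance lookup. The category names and distances below are invented for illustration; the patent only states that different reward categories map to different forward distances.

```python
# Hypothetical reward categories and their forward distances (assumed values).
REWARD_DISTANCES = {
    "bronze": 5.0,
    "silver": 10.0,
    "gold": 20.0,
}

def apply_reward(x_position, category):
    """Move the virtual character forward by its category's distance (S303(b))."""
    return x_position + REWARD_DISTANCES[category]
```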
In another implementation, the category of the scene element includes a bonus, and the updating the position of the virtual character based on the scene element includes:
step S301 (c), when the scene element is the bonus, obtaining category information of the scene element and a distance value between the scene element and the nearest obstacle;
step S302 (c), determining a forward distance according to the category information and the distance value;
step S303 (c), updating the position of the virtual character according to the advancing distance.
Specifically, this embodiment provides another, more precise difficulty grading for rewards. For each reward, the system acquires its category information and its distance value to the nearest obstacle, uses them to judge accurately how difficult it is for the target user to steer the virtual character into contact with the reward, and generates the corresponding forward distance. It will be appreciated that the closer a reward lies to an obstacle, the harder it is for the target user to steer the virtual character to touch it, and the more it tests the hand-eye coordination of the target user's attention. Therefore, among rewards of the same category, the closer a reward is to an obstacle, the greater its corresponding forward distance.
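The graded forward distance described here could be sketched as follows; the per-category base distances and the hyperbolic proximity factor are hypothetical choices, since the embodiment only fixes that, for the same category, the forward distance grows as the reward-to-obstacle gap shrinks.

```python
# Hypothetical base forward distances per reward category (not from the patent).
BASE_FORWARD = {"bronze": 10.0, "silver": 20.0, "gold": 30.0}


def forward_distance(category, dist_to_obstacle, proximity_bonus=1.0, scale=50.0):
    """Forward distance for a touched reward.

    Among rewards of the same category, the closer the reward sits to
    its nearest obstacle, the larger the forward distance, since
    steering into it demands finer hand-eye coordination.
    """
    base = BASE_FORWARD[category]
    # proximity factor decays from (1 + proximity_bonus) at distance 0
    # toward 1 as the gap to the nearest obstacle grows
    factor = 1.0 + proximity_bonus * scale / (scale + dist_to_obstacle)
    return base * factor
```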
In one implementation, determining the concentration training score corresponding to the target user according to the total moving route and each target scene element includes:
step S401, determining element category proportions and element total numbers according to the target scene elements;
step S402, determining a weight coefficient according to the element category proportion and the element total number;
step S403, determining the total advancing distance corresponding to the virtual character according to the total moving route;
step S404, determining an initial concentration training score corresponding to the target user according to the total advancing distance;
step S405, determining the concentration training score according to the weight coefficient and the initial concentration training score.
Specifically, the more often the target user avoids obstacles or touches rewards, the higher the target user's concentration, and the greater the forward distance of the corresponding total moving route tends to be; this embodiment therefore sets an initial concentration training score based on the forward distance corresponding to the total moving route. Besides the forward distance, however, the proportion of scene elements of different categories among the target scene elements and the total number of covered scene elements also reflect the target user's concentration to a certain extent. For example, if the target scene elements contain more obstacles than rewards and the total number of target scene elements is large, the target user may have been distracted during the concentration training; if they contain more rewards than obstacles and the total number is large, the target user was likely focused during the concentration training. Therefore, the influence of the element category proportion and the element total number on the concentration score is folded into the initial concentration training score as a weight, further improving the accuracy of the concentration training score.
In one implementation, the categories of the scene elements include obstacles and rewards, the element category proportion is the number ratio of rewards to obstacles, and determining the weight coefficient according to the element category proportion and the element total number includes:
step S4021, obtaining the product of the element category proportion and the element total number;
step S4022, determining the weight coefficient according to the product, where the product is directly proportional to the weight coefficient.
Specifically, this embodiment sets the element category proportion to the number ratio of rewards to obstacles, so the larger the value of the element category proportion, the greater its positive influence on the concentration score; the smaller its value, the smaller that positive influence. The element total number in turn reflects the reliability of the information carried by the element category proportion: the larger the element total number, the less likely that information is coincidental and the higher its reliability; the smaller the element total number, the more likely it is coincidental and the lower its reliability. The weight coefficient is therefore determined by the product of the element category proportion and the element total number: the larger the product, the larger the weight coefficient and the greater its positive influence on the concentration score.
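A minimal sketch of this weighting scheme follows; the proportionality constant `k` and the distance-to-score conversion are assumptions, as the embodiment only states that the weight coefficient is directly proportional to the product of the element category proportion and the element total number.

```python
def weight_coefficient(n_rewards, n_obstacles, k=0.01):
    """Weight coefficient proportional to (reward/obstacle ratio) x total count."""
    ratio = n_rewards / n_obstacles      # element category proportion
    total = n_rewards + n_obstacles      # element total number
    return 1.0 + k * ratio * total       # grows with the product


def concentration_score(total_forward_distance, n_rewards, n_obstacles,
                        points_per_unit=1.0):
    """Initial score from the total forward distance, then weighted."""
    initial_score = total_forward_distance * points_per_unit
    return initial_score * weight_coefficient(n_rewards, n_obstacles)
```

With these assumptions, a user who covers the same distance but touches proportionally more rewards receives a higher final score, matching the weighting rationale above.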
In one implementation, the method further comprises:
adjusting the initial advancing speed of the virtual character in the next training time period according to the concentration training score;
when the total training time length reaches a preset training target, determining a target concentration training score of the target user according to the concentration training score corresponding to each training time length.
In short, the training duration may be a preset duration within one complete concentration training session; in this embodiment the target user's concentration training score is calculated once per preset duration, and this score indirectly reflects the target user's degree of concentration. Because a complete concentration training session takes a long time, the target user's concentration may fluctuate over its course, so keeping the training difficulty constant throughout makes a good training effect hard to achieve. This embodiment therefore dynamically adjusts the training difficulty within one complete session: for each preset duration, the forward speed of the virtual character in the current duration is adjusted according to the concentration training score of the previous duration. It will be appreciated that adjusting the forward speed also changes the training difficulty: the faster the forward speed, the higher the difficulty, and vice versa. After one complete concentration training session ends, the training score for that session is judged comprehensively from the concentration training scores of all its preset durations, yielding the target concentration training score. Through segmented score calculation and segmented dynamic adjustment of the training difficulty, this method effectively improves the user's concentration training experience and feeds the training effect back to the user more accurately.
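The per-period speed adjustment and the final aggregation might look like the following; the linear feedback rule, the speed clamps, and the plain average are assumptions, since the embodiment does not fix the adjustment formula.

```python
def adjust_speed(base_speed, period_score, target_score=100.0,
                 gain=0.2, min_speed=0.5, max_speed=3.0):
    """Initial forward speed for the next training period: raise the
    difficulty when the previous period's score beats the target,
    lower it otherwise, clamped to a sane range."""
    new_speed = base_speed * (1.0 + gain * (period_score - target_score) / target_score)
    return max(min_speed, min(max_speed, new_speed))


def target_concentration_score(period_scores):
    """Aggregate the per-period scores into the target concentration
    training score once the total training target is reached."""
    return sum(period_scores) / len(period_scores)
```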
In one implementation, each frame of the playing device is divided into a normal playing area and a visual interference area: the normal playing area is where the virtual character and the scene elements are displayed and move, while the visual interference area is used for displaying interference elements, which may include flashing objects, cyclically changing color information, and the like.
Based on the above embodiment, the present invention further provides a concentration training device implemented based on touch data and gyroscope data, as shown in fig. 3, where the device includes:
the device comprises an acquisition module 01, a display module and a display module, wherein the acquisition module is used for acquiring touch data and gyroscope data corresponding to a playing device, wherein a video picture played by the playing device comprises a plurality of scene elements and virtual characters corresponding to a target user, and each scene element and each virtual character respectively have a preset moving direction;
an adjustment module 02, configured to adjust a movement route of the virtual character according to the touch data and the gyroscope data;
an updating module 03, configured to update the position of the virtual character according to a scene element when the position of the virtual character overlaps with the position of that scene element, and, after updating, to continue executing the step of acquiring the touch data and gyroscope data corresponding to the playing device until the preset training duration is reached;
a recording module 04, configured to acquire the total moving route of the virtual character within the training duration and determine a plurality of target scene elements according to the total moving route, where the target scene elements are the scene elements covered by the total moving route;
and a scoring module 05, configured to determine the concentration training score corresponding to the target user according to the total moving route and each target scene element.
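The cooperation of these modules over one training period can be condensed into a loop like the one below; the data representation (per-tick route offsets already derived from the touch and gyroscope data, and a position-to-update map for scene elements) is a deliberate simplification of the device described above, not an interface taken from the patent.

```python
def run_training_period(route_offsets, scene_elements, start=0.0):
    """One training period in miniature.

    route_offsets: per-tick movement already derived from the touch and
        gyroscope data (acquisition + adjustment modules).
    scene_elements: maps an element's position to the position change it
        applies on overlap -- negative for obstacles, positive for
        rewards (updating module).
    Returns the total moving route and the covered target scene elements
    (recording module), ready for the scoring module.
    """
    position, route, covered = start, [start], []
    for offset in route_offsets:
        position += offset
        if position in scene_elements:        # positions overlap
            covered.append(position)
            position += scene_elements[position]
        route.append(position)
    return route, covered
```

For example, an obstacle at position 2.0 that pushes the character back one unit keeps a steadily advancing character oscillating, so its total moving route, and hence its score, stays short.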
Based on the above embodiments, the present invention further provides a terminal whose functional block diagram may be as shown in fig. 4. The terminal includes a processor, a memory, a network interface, and a display screen connected through a system bus. The processor of the terminal provides computing and control capabilities. The memory of the terminal includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the terminal is used for communicating with external terminals through a network connection. The computer program, when executed by the processor, implements the concentration training method implemented based on touch data and gyroscope data. The display screen of the terminal may be a liquid crystal display or an electronic ink display.
It will be appreciated by those skilled in the art that the functional block diagram shown in fig. 4 is merely a block diagram of part of the structure associated with the present invention and does not limit the terminals to which the present invention may be applied; a particular terminal may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one implementation, the memory of the terminal stores one or more programs, executed by one or more processors, that include instructions for performing the concentration training method implemented based on touch data and gyroscope data.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the invention discloses a concentration training method based on touch data and gyroscope data, relating to the field of human-computer interaction. In the invention, a video is played to the user through a playing terminal, and the user controls the movement of a virtual character in the video by touching and tilting the playing terminal, realizing an interactive concentration training method and increasing the interest of concentration training. By analyzing the total moving route of the virtual character and the scene elements covered along it, the method further judges the user's concentration training effect, displays that effect to the user quantitatively as a score, and, through this added effect-feedback step, further improves the user's training experience. The invention solves the problem that concentration training methods in the prior art lack interest.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (7)

1. A concentration training method implemented based on touch data and gyroscope data, characterized by comprising the following steps:
acquiring touch data and gyroscope data corresponding to a playing device, wherein a video picture played by the playing device comprises a plurality of scene elements and virtual characters corresponding to a target user, and each scene element and each virtual character respectively have a preset moving direction; each frame of picture of the playing equipment is divided into a normal playing area and a visual interference area, wherein the normal playing area is a playing and moving area of the virtual character and each scene element, the visual interference area is used for displaying interference elements, and the interference elements comprise flash objects and circularly changed color information;
adjusting the moving route of the virtual character according to the touch data and the gyroscope data;
when the position of the virtual character overlaps with the position of any scene element, updating the position of the virtual character according to that scene element; after updating, continuing to execute the step of acquiring the touch data and gyroscope data corresponding to the playing device until a preset training duration is reached;
acquiring a total moving route of the virtual character in the training time period, and determining a plurality of target scene elements according to the total moving route, wherein the target scene elements are the scene elements covered by the total moving route;
determining concentration training scores corresponding to the target users according to the total moving route and each target scene element;
adjusting the initial advancing speed of the virtual character in the next training time period according to the concentration training score; when the total training time length reaches a preset training target, determining a target concentration training score of the target user according to the concentration training score corresponding to each training time length respectively;
the category of the scene element includes an obstacle, and the updating the position of the virtual character according to the scene element includes:
when the scene element is the obstacle, acquiring a shape parameter of the scene element and the color difference degree of the scene element and the picture background;
determining a retreating distance according to the shape parameter and the color difference degree;
updating the position of the virtual character according to the backward distance;
the category of the scene element comprises rewards, and the updating of the position of the virtual character according to the scene element comprises the following steps:
when the scene element is the reward, acquiring category information of the scene element and a distance value between the scene element and the nearest obstacle;
determining a forward distance according to the category information and the distance value;
and updating the position of the virtual character according to the advancing distance.
2. The concentration training method implemented based on touch data and gyroscope data according to claim 1, wherein the adjusting the movement route of the virtual character according to the touch data and gyroscope data comprises:
determining touch pressure according to the touch data, and determining the advancing speed of the virtual character according to the touch pressure;
determining the inclination direction and the inclination angle of the playing device according to the gyroscope data, determining the vertical movement direction of the virtual character according to the inclination direction, and determining the vertical movement speed of the virtual character according to the inclination angle;
and determining the moving route according to the advancing speed, the vertical moving direction and the vertical moving speed.
3. The concentration training method implemented based on touch data and gyroscope data according to claim 1, wherein the determining the concentration training score corresponding to the target user according to the total movement route and each target scene element includes:
determining element category proportions and element total numbers according to the target scene elements;
determining a weight coefficient according to the element category proportion and the element total number;
determining the total advancing distance corresponding to the virtual character according to the total moving route;
determining an initial concentration training score corresponding to the target user according to the total advancing distance;
and determining the concentration training score according to the weight coefficient and the initial concentration training score.
4. The concentration training method implemented based on touch data and gyroscope data according to claim 3, wherein the categories of scene elements include obstacles and rewards, the element category ratio is a number ratio of the rewards to the obstacles, and the determining the weight coefficient according to the element category ratio and the total number of elements includes:
obtaining the product of the element category proportion and the element total quantity;
and determining the weight coefficient according to the product, wherein the product is in direct proportion to the weight coefficient.
5. An apparatus for implementing the concentration training method implemented based on touch data and gyroscope data as described in any one of claims 1-4, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring touch data and gyroscope data corresponding to a playing device, wherein a video picture played by the playing device comprises a plurality of scene elements and virtual characters corresponding to a target user, and each scene element and each virtual character respectively have a preset moving direction; each frame of picture of the playing equipment is divided into a normal playing area and a visual interference area, wherein the normal playing area is a playing and moving area of the virtual character and each scene element, the visual interference area is used for displaying interference elements, and the interference elements comprise flash objects and circularly changed color information;
the adjustment module is used for adjusting the moving route of the virtual character according to the touch data and the gyroscope data;
the updating module is used for updating the position of the virtual character according to the scene element when the position of the virtual character is overlapped with the position of any scene element; after updating, continuing to execute the step of acquiring the touch data and the gyroscope data corresponding to the playing equipment until the preset training time is reached;
the recording module is used for acquiring the total moving route of the virtual character in the training time period, and determining a plurality of target scene elements according to the total moving route, wherein the target scene elements are the scene elements covered by the total moving route;
the scoring module is used for determining concentration training scores corresponding to the target users according to the total moving route and each target scene element;
adjusting the initial advancing speed of the virtual character in the next training time period according to the concentration training score; when the total training time length reaches a preset training target, determining a target concentration training score of the target user according to the concentration training score corresponding to each training time length respectively;
the category of the scene element includes an obstacle, and the updating the position of the virtual character according to the scene element includes:
when the scene element is the obstacle, acquiring a shape parameter of the scene element and the color difference degree of the scene element and the picture background;
determining a retreating distance according to the shape parameter and the color difference degree;
updating the position of the virtual character according to the backward distance;
the category of the scene element comprises rewards, and the updating of the position of the virtual character according to the scene element comprises the following steps:
when the scene element is the reward, acquiring category information of the scene element and a distance value between the scene element and the nearest obstacle;
determining a forward distance according to the category information and the distance value;
and updating the position of the virtual character according to the advancing distance.
6. A terminal, comprising a memory and one or more processors; the memory stores one or more programs; the programs comprise instructions for performing the concentration training method implemented based on touch data and gyroscope data according to any one of claims 1-4; the processor is configured to execute the programs.
7. A computer readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the steps of the concentration training method implemented on the basis of touch data and gyroscope data as claimed in any of the preceding claims 1-4.
CN202310966149.4A 2023-08-02 2023-08-02 Concentration training method based on touch data and gyroscope data Active CN116650789B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310966149.4A CN116650789B (en) 2023-08-02 2023-08-02 Concentration training method based on touch data and gyroscope data

Publications (2)

Publication Number Publication Date
CN116650789A CN116650789A (en) 2023-08-29
CN116650789B (en) 2023-11-17

Family

ID=87714040

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020068824A (en) * 2001-02-23 2002-08-28 학교법인 한양학원 System and method for enhancing attention on internet
CA2843670A1 (en) * 2014-02-24 2015-08-24 Chris Argiro Video-game console for allied touchscreen devices
CN110448912A (en) * 2019-07-31 2019-11-15 维沃移动通信有限公司 Terminal control method and terminal device
WO2021232229A1 (en) * 2020-05-19 2021-11-25 深圳元戎启行科技有限公司 Virtual scene generation method and apparatus, computer device and storage medium
CN115167689A (en) * 2022-09-08 2022-10-11 深圳市心流科技有限公司 Human-computer interaction method, device, terminal and storage medium for concentration training
CN116312077A (en) * 2023-03-13 2023-06-23 深圳市心流科技有限公司 Concentration training method, device, terminal and storage medium
CN116403676A (en) * 2023-06-05 2023-07-07 浙江强脑科技有限公司 Obstacle avoidance-based concentration training method and device, terminal equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11660419B2 (en) * 2021-07-16 2023-05-30 Psyber, Inc. Systems, devices, and methods for generating and manipulating objects in a virtual reality or multi-sensory environment to maintain a positive state of a user


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant