CN115167689B - Human-computer interaction method, device, terminal and storage medium for concentration training - Google Patents

Human-computer interaction method, device, terminal and storage medium for concentration training

Info

Publication number
CN115167689B
CN115167689B (granted from application CN202211096166.9A)
Authority
CN
China
Prior art keywords
action
determining
data
target
completion degree
Prior art date
Legal status
Active
Application number
CN202211096166.9A
Other languages
Chinese (zh)
Other versions
CN115167689A
Inventor
韩璧丞
周超前
丁小玉
Current Assignee
Shenzhen Mental Flow Technology Co Ltd
Original Assignee
Shenzhen Mental Flow Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mental Flow Technology Co Ltd filed Critical Shenzhen Mental Flow Technology Co Ltd
Priority to CN202211096166.9A
Publication of CN115167689A
Application granted
Publication of CN115167689B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a human-computer interaction method, device, terminal and storage medium for concentration training. Action data and electroencephalogram data of a target user are acquired, and a target concentration value of the user is determined from the electroencephalogram data; actual action data for the virtual character on the terminal screen are then determined from the action data and the target concentration value, and the character's action is updated accordingly. The interaction between the virtual character and the user supplies the interaction link missing from the concentration training process. Moreover, because the character's action is determined jointly from the user's current action data and concentration value, the user can obtain a feedback result for the concentration training simply by comparing their own action with the character's action, supplying the feedback link as well. This solves the problem that existing attention training methods, which lack any interaction or feedback process, give users a poor training experience and make it difficult to persist in training.

Description

Human-computer interaction method, device, terminal and storage medium for concentration training
Technical Field
The invention relates to the field of human-computer interaction, and in particular to a human-computer interaction method, device, terminal and storage medium for concentration training.
Background
Common concentration training methods include the staring method, i.e., staring at a dot while maintaining dantian (abdominal) breathing and prolonging the time between blinks as much as possible, and meditation training, i.e., relaxing through dantian breathing and inhaling slowly through the nose while visualizing, until the whole body is calm. However, existing attention training methods are basically performed by the user alone, with no interaction or feedback during the process, so the training experience is poor and it is difficult for users to persist in training.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a human-computer interaction method, device, terminal and storage medium for concentration training, aiming at solving the problems of poor training experience and difficulty in persisting with training caused by the lack of any interaction or feedback process in existing attention training methods.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a human-computer interaction method for concentration training, where the method includes:
acquiring action data and electroencephalogram data corresponding to a target user, and determining a target concentration value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data correspond to the same acquisition time;
determining actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value;
and updating the action of the virtual character according to the actual action data.
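The three steps above can be sketched as a single update loop. The following Python sketch is purely illustrative: every function name, the 0-10 concentration scale, and the amplitude-scaling rule are assumptions for illustration, not details fixed by the patent.

```python
def estimate_concentration(eeg_data):
    # Placeholder for the curve-comparison step described later: a real
    # implementation compares the EEG signal-intensity curve against a
    # stored highest-concentration reference curve. Here we simply clamp
    # the mean signal strength to a 0-10 score.
    return min(10.0, max(0.0, sum(eeg_data) / len(eeg_data)))

def derive_actual_action(action_data, concentration, full_score=10.0):
    # Scale the user's action amplitudes by normalized concentration, so a
    # distracted user sees an attenuated version of their own action.
    scale = concentration / full_score
    return {name: value * scale for name, value in action_data.items()}

def update_virtual_character(action_data, eeg_data):
    """One interaction step: EEG -> concentration -> actual action data."""
    concentration = estimate_concentration(eeg_data)           # step 1
    actual = derive_actual_action(action_data, concentration)  # step 2
    return actual  # step 3: the renderer updates the character with this
```

With action data `{"squat_depth": 1.0}` and a concentration of 5 out of 10, the character performs a half-depth squat, matching the worked example given later in the description.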
In one embodiment, the determining a target concentration value corresponding to the target user according to the electroencephalogram data includes:
determining an actual signal intensity change curve according to the electroencephalogram data;
acquiring a standard signal intensity change curve, wherein the standard signal intensity change curve is used for reflecting the signal intensity change condition of standard electroencephalogram data with the highest concentration value;
determining a similarity value according to the actual signal intensity change curve and the standard signal intensity change curve;
and determining the target concentration value according to the similarity value.
In one embodiment, the determining, according to the motion data and the target concentration value, actual motion data corresponding to a virtual character in a terminal screen includes:
determining an action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data;
and determining the actual action data according to the action completion degree and the action data.
In one embodiment, the determining the degree of completion of the action based on the target concentration value comprises:
determining an initial action completion degree according to the target concentration value;
acquiring an action amplitude corresponding to the action data, and determining an interference value according to the action amplitude, wherein the interference value is directly proportional to the action amplitude;
and determining the action completion degree according to the interference value and the initial action completion degree.
In one embodiment, the determining the action completion degree according to the interference value and the initial action completion degree comprises:
when the interference value is greater than a preset interference threshold, determining a compensation completion degree according to the interference value, and determining the action completion degree by combining the initial action completion degree and the compensation completion degree;
and when the interference value is less than or equal to the interference threshold, determining the action completion degree according to the initial action completion degree.
In one embodiment, the terminal screen further includes a moving virtual obstacle, and the method further includes:
acquiring the degree of overlap between the virtual character and the virtual obstacle;
when the degree of overlap is greater than a preset overlap threshold, displaying a penalty scene on the terminal screen;
and when the degree of overlap is less than or equal to the preset overlap threshold, displaying a reward scene on the terminal screen.
In one embodiment, the method further comprises:
determining an interaction score corresponding to the target user according to the respective numbers of occurrences of the penalty scene and the reward scene;
and adjusting the moving speed of the virtual obstacle in the next round according to the interaction score of the previous round.
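The obstacle-overlap judgment and score-based difficulty adjustment in the last two embodiments can be illustrated as follows. The threshold value, the scoring rule, and the speed-update formula are all assumptions for this sketch, since the patent states only the qualitative behaviour.

```python
def scene_for_overlap(overlap, threshold=0.3):
    # Penalty scene when the character/obstacle overlap exceeds the preset
    # threshold, reward scene otherwise.
    return "penalty" if overlap > threshold else "reward"

def interaction_score(reward_count, penalty_count):
    # One simple scoring rule: each reward adds a point, each penalty
    # removes one.
    return reward_count - penalty_count

def next_round_obstacle_speed(base_speed, previous_score, step=0.1):
    # Raise the obstacle speed after a high-scoring round and lower it
    # after a poor one, with a floor at half the base speed.
    return max(base_speed * 0.5, base_speed * (1.0 + step * previous_score))
```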
In a second aspect, an embodiment of the present invention further provides a human-computer interaction device for concentration training, where the device includes:
the concentration determination module is used for acquiring action data and electroencephalogram data corresponding to a target user and determining a target concentration value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data correspond to the same acquisition time;
the action determining module is used for determining actual action data corresponding to the virtual character in the terminal screen according to the action data and the target concentration value;
and the action updating module is used for updating the action of the virtual character according to the actual action data.
In one embodiment, the concentration determination module includes:
the curve generation unit is used for determining an actual signal intensity change curve according to the electroencephalogram data;
the standard acquisition unit is used for acquiring a standard signal intensity change curve, wherein the standard signal intensity change curve is used for reflecting the signal intensity change condition of the standard electroencephalogram data with the highest concentration value;
the similarity calculation unit is used for determining a similarity value according to the actual signal intensity change curve and the standard signal intensity change curve;
a concentration calculation unit for determining the target concentration value according to the similarity value.
In one embodiment, the action determining module comprises:
the completion degree determining unit is used for determining the action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data;
and the action determining unit is used for determining the actual action data according to the action completion degree and the action data.
In one embodiment, the completion determination unit includes:
the initial determination unit is used for determining an initial action completion degree according to the target concentration value;
the interference determining unit is used for acquiring an action amplitude corresponding to the action data and determining an interference value according to the action amplitude, wherein the interference value is directly proportional to the action amplitude;
and the comprehensive determining unit is used for determining the action completion degree according to the interference value and the initial action completion degree.
In one embodiment, the comprehensive determination unit is configured to:
when the interference value is greater than a preset interference threshold, determine a compensation completion degree according to the interference value, and determine the action completion degree by combining the initial action completion degree and the compensation completion degree;
and when the interference value is less than or equal to the interference threshold, determine the action completion degree according to the initial action completion degree.
In one embodiment, the terminal screen further includes a moving virtual obstacle, and the apparatus further includes:
the overlap determination module is used for acquiring the degree of overlap between the virtual character and the virtual obstacle;
the scene generation module is used for displaying a penalty scene on the terminal screen when the degree of overlap is greater than a preset overlap threshold, and displaying a reward scene when the degree of overlap is less than or equal to the preset overlap threshold.
In one embodiment, the apparatus further comprises:
the score calculation module is used for determining an interaction score corresponding to the target user according to the respective numbers of occurrences of the penalty scene and the reward scene;
and the speed adjustment module is used for adjusting the moving speed of the virtual obstacle in the next round according to the interaction score of the previous round.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and one or more processors; the memory stores one or more programs containing instructions for performing any of the human-computer interaction methods for concentration training described above, and the processors are configured to execute the programs.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a plurality of instructions are stored, wherein the instructions are adapted to be loaded and executed by a processor to implement any of the above-mentioned human-computer interaction methods for concentration training.
The invention has the beneficial effects that: according to the embodiment of the invention, action data and electroencephalogram data of a target user are acquired, and a target concentration value of the target user is determined according to the electroencephalogram data; actual action data of the virtual character in the terminal screen are determined according to the action data and the target concentration value; and the action of the virtual character is updated according to the actual action data. The interaction between the virtual character and the user supplies the interaction link missing from the concentration training process. In addition, because the action of the virtual character is determined jointly from the user's current action data and concentration value, a feedback result for the concentration training can be obtained by comparing the user's action with the virtual character's action, supplying the feedback link as well. This solves the problem that existing attention training methods, lacking any interaction or feedback process, give users a poor training experience and make it difficult to persist in training.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a human-computer interaction method for concentration training according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of internal modules of a human-computer interaction device for concentration training according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The invention discloses a human-computer interaction method, device, terminal and storage medium for concentration training. To make the purposes, technical schemes and effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Concentration, also called attention, refers to the psychological state of a person when focusing on one thing or one activity. Under normal circumstances, concentration directs a person's mental activity toward an object, selectively accepting certain information while suppressing other activities and other information, and concentrating all mental energy on the object in question. Good concentration therefore helps to improve the efficiency of work and learning. Common concentration training methods include the staring method, i.e., staring at a dot while maintaining dantian (abdominal) breathing and prolonging the time between blinks as much as possible, and meditation training, i.e., relaxing through dantian breathing and inhaling slowly through the nose while visualizing, until the whole body is calm. However, existing attention training methods basically rely on the user practising alone, with no interaction or feedback during the process, so the training experience is poor and users find it difficult to persist.
In view of the above-mentioned drawbacks of the prior art, the present invention provides a human-computer interaction method for concentration training, the method comprising: acquiring action data and electroencephalogram data corresponding to a target user, and determining a target concentration value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data correspond to the same acquisition time; determining actual action data corresponding to the virtual character in the terminal screen according to the action data and the target concentration value; and updating the action of the virtual character according to the actual action data. The interaction between the virtual character and the user supplies the interaction link missing from the concentration training process. In addition, because the action of the virtual character is determined jointly from the user's current action data and concentration value, a feedback result for the concentration training can be obtained by comparing the user's action with the virtual character's action, supplying the feedback link as well. This solves the problem that existing attention training methods, lacking any interaction or feedback process, give users a poor training experience and make it difficult to persist in training.
For example, the terminal acquires squatting action data and electroencephalogram data produced by user A at the same moment, and determines from the electroencephalogram data that user A's current target concentration value is 5 (out of a full score of 10). Because this concentration value is low, the actual action data of the virtual character are determined from the squatting action data and the target concentration value to be a half squat. The terminal updates the character's current action according to the half-squat action data, so user A sees the character in a half-squatting state. Knowing that their own action was a full squat while the character only displayed a half squat, user A can intuitively recognize, by comparing the two actions, that they are in a state of low concentration, and will try to raise their concentration value in the next round of interaction so that the character performs the same action as they do.
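The worked example above can be written out as a small decision rule. The discrete action labels and the 1/3 and 2/3 cut-offs are illustrative assumptions; the patent only requires that concentration 5 out of 10 yields a half squat.

```python
def character_squat(user_action, concentration, full_score=10.0):
    """Map the user's squat plus a concentration value to the character's
    displayed action."""
    if user_action != "squat":
        return user_action            # non-squat actions pass through here
    ratio = concentration / full_score
    if ratio >= 2.0 / 3.0:
        return "squat"                # high concentration: full squat shown
    if ratio >= 1.0 / 3.0:
        return "half_squat"           # medium concentration, as in the example
    return "stand"                    # very low concentration: barely any action
```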
Exemplary method
As shown in fig. 1, the method includes:
s100, obtaining action data and electroencephalogram data corresponding to a target user, and determining a target concentration value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data respectively correspond to the same acquisition time.
Briefly, in this embodiment the action data and the electroencephalogram data of the target user are collected at the same time, so that concentration training is fused into the human-computer interaction. Specifically, to implement the interaction, the current action data of the target user are acquired to determine the action that the target user expects the virtual character on the terminal screen to perform. To implement the concentration training, the electroencephalogram data generated while the target user performs the action are acquired at the same time; because electroencephalogram data exhibit different characteristics at different degrees of concentration, the target concentration value of the target user can be determined from the current electroencephalogram data.
In one implementation, the motion data is head motion data, which may be acquired by a preset head ring equipped with a gyroscope. For example, before concentration training the target user puts on the head ring; during training, any head movement is detected by the gyroscope, yielding the head motion data. The head ring also detects the target user's brain waves, yielding the electroencephalogram data.
In one implementation, the determining, according to the electroencephalogram data, a target concentration value corresponding to the target user specifically includes:
s101, determining an actual signal intensity change curve according to the electroencephalogram data;
s102, obtaining a standard signal intensity change curve, wherein the standard signal intensity change curve is used for reflecting the signal intensity change condition of standard electroencephalogram data with the highest concentration value;
step S103, determining a similarity value according to the actual signal intensity change curve and the standard signal intensity change curve;
and S104, determining the target concentration value according to the similarity value.
Specifically, to determine the current concentration value of the target user from the electroencephalogram data, the signal intensity features of the electroencephalogram data are first extracted and plotted as a curve, giving the actual signal intensity change curve. A pre-stored standard signal intensity change curve is then retrieved and compared with the actual curve to calculate a similarity value reflecting how alike the two curves are. Since the standard curve reflects the signal intensity variation of standard electroencephalogram data at the highest concentration value, a higher similarity value means the target user's electroencephalogram data are closer to the standard data and the user's current target concentration value is higher, and vice versa; that is, the similarity value is directly proportional to the target concentration value. Comparing curves in this way allows the user's current concentration value to be determined quickly, accurately and objectively.
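Steps S101-S104 can be sketched with cosine similarity as the curve-comparison metric. The choice of metric and the 0-10 scaling are assumptions: the patent specifies only that the similarity value is directly proportional to the target concentration value, not which metric to use.

```python
import math

def curve_similarity(actual_curve, standard_curve):
    # Cosine similarity between the actual and standard signal-intensity
    # curves, sampled at the same time points.
    dot = sum(a * s for a, s in zip(actual_curve, standard_curve))
    norm_a = math.sqrt(sum(a * a for a in actual_curve))
    norm_s = math.sqrt(sum(s * s for s in standard_curve))
    return dot / (norm_a * norm_s) if norm_a and norm_s else 0.0

def target_concentration(similarity, full_score=10.0):
    # Direct proportionality, as stated in the text (step S104).
    return max(0.0, similarity) * full_score
```

An actual curve identical to the standard curve yields similarity 1.0 and hence the full concentration score.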
As shown in fig. 1, the method further comprises the steps of:
and S200, determining actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value.
Specifically, to implement human-computer interaction, a virtual character corresponding to the target user is set up in the terminal in advance and displayed to the user on the terminal screen. Human-computer interaction in this embodiment means that the target user controls the virtual character's action through their own actions. Since the goal of this embodiment is to let the target user intuitively perceive their degree of concentration through the actions of the virtual character, the user's actions and the character's actions are not in a one-to-one mapping; instead, the character's actual action data are determined jointly from the user's current action data and concentration value. In other words, for the same user action, the virtual character exhibits different actions at different concentration values.
For example, assume that the terminal screen shows an endless-runner game (such as Temple Run or Cool Run Every Day), and the virtual character is the player character corresponding to the target user. When the action data are the target user's head action data, mappings between the user's head actions and the character's actions are preset: raising the head corresponds to the character jumping, lowering the head to the character squatting, turning the head left to the character turning left, and turning the head right to the character turning right. During the game, the user's head action data and electroencephalogram data are acquired, and the target concentration value is determined from the electroencephalogram data; only when the target concentration value reaches the target level does the character's actual action fully match the user's head action.
In an implementation manner, the step S200 specifically includes:
step S201, determining an action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data;
step S202, determining the actual action data according to the action completion degree and the action data.
Specifically, the action completion degree of the current interaction is first determined from the obtained target concentration value: the higher the completion degree, the closer the character's displayed action is to the user's executed action; the lower the completion degree, the more the two deviate. The actual action data of the virtual character are then determined from the completion degree together with the action data. By converting concentration into an action completion degree, this embodiment lets the virtual character's displayed action intuitively express the target user's current degree of concentration; the user obtains feedback on the concentration training simply by observing the character's action, which improves the experience of concentration training.
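One concrete way to realize step S202 is to blend the character's neutral pose toward the pose mapped from the user's action, weighted by the completion degree. The linear blend below is an assumption for illustration, since the patent leaves the exact combination rule open.

```python
def blend_pose(neutral_pose, target_pose, completion):
    # completion = 1.0 reproduces the user's action exactly;
    # completion = 0.0 leaves the character in its neutral pose.
    # Poses are lists of joint angles sampled in the same order.
    c = min(1.0, max(0.0, completion))
    return [n + (t - n) * c for n, t in zip(neutral_pose, target_pose)]
```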
In an implementation manner, the step S201 specifically includes:
step S2011, determining an initial action completion degree according to the target concentration value;
step S2012, acquiring an action amplitude corresponding to the action data, and determining an interference value according to the action amplitude, wherein the interference value is directly proportional to the action amplitude;
and step S2013, determining the action completion degree according to the interference value and the initial action completion degree.
In short, when the target user is moving, performing the motion itself disturbs the user's concentration, and the larger the motion amplitude, the greater the disturbance; the difficulty of focusing attention therefore differs across motions of different amplitudes. To avoid the situation in which, at the same concentration level, motions of different amplitudes produce very different final action completion degrees simply because they cause different degrees of interference, this embodiment also accounts for the interference caused by the motion amplitude when determining the action completion degree. Specifically, the initial action completion degree is first determined from the target concentration value: the higher the target concentration value, the higher the initial action completion degree. An interference value is then determined from the target user's motion amplitude, and the final action completion degree is determined from the interference value together with the initial action completion degree, so that the resulting action completion degree accurately reflects the target user's current degree of concentration.
In one implementation, the step S2013 specifically includes:
step S20131, when the interference value is greater than a preset interference threshold, determining a compensation completion degree according to the interference value, and determining the action completion degree by combining the initial action completion degree and the compensation completion degree;
step S20132, when the interference value is less than or equal to the interference threshold, determining the action completion degree according to the initial action completion degree.
In short, because the difficulty of focusing differs under different degrees of interference, this embodiment adds a compensation step so that, at the same concentration level, different interference values do not produce wildly different final action completion degrees. Specifically, an interference threshold is preset to classify the interference level the target user is currently experiencing. When the interference value is greater than the interference threshold, the target user is performing a complex, large-amplitude action. Such an action disturbs concentration strongly, so the measured target concentration value deviates substantially from the user's true concentration; determining the action completion degree directly from the target concentration value would make it too low and fail to reflect the user's true concentration. A compensation completion degree is therefore determined from the interference value and added to the initial action completion degree to obtain the final action completion degree. Conversely, when the interference value is less than or equal to the interference threshold, the target user is performing a simple, small-amplitude action. Such an action disturbs concentration only slightly, so the target concentration value closely matches the user's true concentration, and the initial action completion degree is used directly as the final action completion degree.
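Steps S2011-S20132 can be sketched as one function. The proportionality constant, the 0-100 concentration scale, the interference threshold, and the compensation formula are all hypothetical placeholders; the disclosure specifies only the qualitative relationships (interference proportional to amplitude, compensation added above the threshold):

```python
def completion_degree(concentration, amplitude, k=0.5, interference_threshold=20.0):
    """Combine the initial completion degree (from concentration) with a
    compensation term when amplitude-induced interference is large.

    concentration: target concentration value on an assumed 0-100 scale.
    amplitude:     action amplitude; interference is proportional to it.
    """
    initial = concentration / 100.0       # S2011: higher concentration -> higher completion
    interference = k * amplitude          # S2012: interference proportional to amplitude
    if interference > interference_threshold:
        # S20131: large interference -> add a compensation completion degree
        compensation = 0.01 * (interference - interference_threshold)  # hypothetical formula
        return min(1.0, initial + compensation)
    # S20132: small interference -> use the initial completion degree directly
    return initial
```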
As shown in fig. 1, the method further comprises the steps of:
and step S300, updating the action of the virtual character according to the actual action data.
Specifically, the terminal updates the action of the virtual character according to the actual action data, so that the virtual character shows the action corresponding to the actual action data. The target user can know the concentration condition of the target user by observing the action of the virtual role.
In one implementation, the terminal screen further includes a moving virtual obstacle, and the method further includes:
step S400, acquiring the coincidence degree between the virtual character and the virtual obstacle;
step S401, when the coincidence degree is greater than a preset coincidence threshold, displaying a penalty scene on the terminal screen;
and step S402, when the coincidence degree is less than or equal to the preset coincidence threshold, displaying a reward scene on the terminal screen.
Specifically, to make the concentration training process more engaging, a virtual obstacle is also displayed on the terminal screen in this embodiment, and the target user must focus attention and perform the corresponding action to steer the virtual character around the virtual obstacle. When the coincidence degree between the virtual character and the virtual obstacle is greater than a preset coincidence threshold, the target user has failed to avoid the obstacle and the user's current concentration value is likely low; a penalty scene is displayed on the terminal screen so that the user learns from it that his or her concentration is currently low. When the coincidence degree is less than or equal to the preset coincidence threshold, the target user has successfully avoided the obstacle and the user's current concentration value is likely high; a reward scene is displayed on the terminal screen so that the user learns from it that his or her concentration is currently high. Adding a moving virtual obstacle makes the interaction during concentration training more interesting, and displaying penalty and reward scenes motivates the user to improve his or her concentration.
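One way to compute the coincidence degree is as the fraction of the character's bounding box covered by the obstacle. The axis-aligned box representation and the threshold value are assumptions; the disclosure does not specify how coincidence is measured:

```python
def overlap_area(a, b):
    """Overlap area of two axis-aligned boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0.0, w) * max(0.0, h)

def scene_for(char_box, obstacle_box, coincide_threshold=0.2):
    """Show a penalty scene when the coincidence degree exceeds the threshold,
    otherwise a reward scene (threshold of 0.2 is a hypothetical value)."""
    char_area = char_box[2] * char_box[3]
    degree = overlap_area(char_box, obstacle_box) / char_area
    return "penalty" if degree > coincide_threshold else "reward"
```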
In one implementation, the initial movement speed of the virtual obstacle may be determined based on the occupation type of the target user, to match the physical fitness and training requirements of different users. For example, if the target user's occupation type is clerical, the initial movement speed may be set to a medium speed; if the occupation type is military, the initial movement speed may be set to a high speed.
In one implementation, the method further comprises:
step S403, determining the interaction score of the target user according to the numbers of occurrences of the penalty scenes and the reward scenes;
and step S404, adjusting the moving speed of the virtual obstacle in the next round according to the interaction score of the previous round.
Specifically, the numbers of occurrences of the penalty scene and the reward scene reflect the target user's concentration level within a preset time period. If the penalty scene occurred few times and the reward scene occurred many times, the target user's concentration during this period was high. To push the target user's concentration further, the movement speed of the virtual obstacle in the next round can be increased, prompting the user to raise his or her concentration in order to keep avoiding the obstacle. If the penalty scene occurred many times and the reward scene occurred few times, the target user's concentration during this period was low and the user has difficulty keeping up with the current obstacle speed. To avoid discouraging the target user from continuing concentration training, the obstacle's movement speed in the next round can be reduced so that the user gradually adapts to it, after which the speed can be increased again. By dynamically adjusting the obstacle's movement speed each round according to each user's training performance, this embodiment lets every user obtain the best training effect.
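A sketch of the per-round speed adjustment, with a hypothetical interaction score defined as the fraction of interactions that ended in a reward scene. The score definition, the thresholds, and the step size are all assumptions:

```python
def interaction_score(reward_count, penalty_count):
    """Hypothetical score: fraction of interactions that ended in a reward scene."""
    total = reward_count + penalty_count
    return reward_count / total if total else 0.0

def next_round_speed(current_speed, score, step=0.5, low=0.4, high=0.7):
    """Raise the obstacle speed after a good round, lower it after a poor one,
    and keep it unchanged in between."""
    if score >= high:
        return current_speed + step            # user is coping well: challenge more
    if score <= low:
        return max(step, current_speed - step)  # user is struggling: ease off
    return current_speed
```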
In one implementation, the method further comprises:
for each penalty scene in the current round, acquiring the action analysis information and the concentration analysis information corresponding to that penalty scene;
determining the error cause corresponding to each penalty scene according to its action analysis information and concentration analysis information;
determining the frequency-increased training actions for the next round according to the error causes corresponding to the penalty scenes;
and, for each reward scene in the current round, acquiring the correct action type corresponding to that reward scene, and determining the frequency-reduced training actions for the next round according to the correct action types corresponding to the reward scenes.
Specifically, by analyzing each penalty scene and each reward scene, this embodiment identifies the action types on which the target user performed poorly in the current round and those on which the user performed well. Frequency-increased training actions are set for the poorly performed action types, so that the next round of concentration training drills them more often; frequency-reduced training actions are set for the well performed action types, so that the next round presents them less often. This further improves the effect of the target user's concentration training.
Specifically, the action analysis information for each penalty scene is generated as follows: acquire the correct action type for the penalty scene, i.e., the action type that would have successfully avoided the virtual obstacle. For example, if the virtual obstacle is below the character, the correct action type is jumping; if the virtual obstacle is above, the correct action type is squatting. Then determine whether the action type of the actual action data corresponding to the penalty scene matches the correct action type: if so, the action analysis information is "action correct"; if not, it is "action error".
Specifically, the concentration analysis information for each penalty scene is generated as follows: determine whether the target concentration value corresponding to the penalty scene is greater than a preset concentration threshold. If so, the concentration analysis information is "concentration up to standard"; if not, it is "concentration below standard".
Specifically, the frequency-increased training actions are generated as follows. The error cause of each penalty scene is determined from its action analysis information and concentration analysis information. For each penalty scene: when the action analysis information is "action error" and the concentration analysis information is "concentration up to standard", the error cause is judged to be an action error; when the action analysis information is "action correct" and the concentration analysis information is "concentration below standard", the error cause is judged to be low concentration; when the action analysis information is "action error" and the concentration analysis information is "concentration below standard", the order of the action execution time and the time at which concentration fell below standard is compared. If the action was executed before concentration fell below standard, the error cause is judged to be an action error; if concentration fell below standard before the action was executed, the error cause is judged to be low concentration. The frequency-increased action types are then determined from the correct action types of the penalty scenes whose error cause is an action error.
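The decision logic above can be sketched as a small decision table. This is a reconstruction of the garbled original under the assumption that, when both the action and the concentration failed, the earlier of the two failures is taken as the cause:

```python
def error_cause(action_ok, concentration_ok, action_time=None, lapse_time=None):
    """Determine the error cause of a penalty scene.

    action_ok:        True if the action analysis information is "action correct".
    concentration_ok: True if concentration was up to standard.
    action_time / lapse_time: timestamps used only when both checks failed;
    the earlier failure is taken as the cause (an assumed tie-break rule).
    """
    if not action_ok and concentration_ok:
        return "action_error"
    if action_ok and not concentration_ok:
        return "low_concentration"
    if not action_ok and not concentration_ok:
        return "action_error" if action_time <= lapse_time else "low_concentration"
    return None  # both checks passed: no error, should not occur for a penalty scene
```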
In one implementation, the method can also be applied to combined training: the target user is a group of users, and the action data and electroencephalogram data of all users in the group are used to control the action of the virtual character on the terminal screen.
Specifically, action data and electroencephalogram data are acquired for each of the multiple users; a target concentration value is determined for each user from that user's electroencephalogram data; it is then determined whether every user's target concentration value is greater than a preset target value and whether all users' action data are the same; and only when both conditions hold is the action of the virtual character updated according to the action data. In other words, when this embodiment is applied to combined training, the virtual character performs the corresponding action only if every member of the group reaches the target concentration value and all members' actions are consistent. This improves not only each member's concentration but also the team's ability to cooperate.
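The combined-training gate can be sketched as follows; the field names and the 0-100 concentration scale are assumptions:

```python
def combo_update(users, target_value=60):
    """Return the action the character should perform, or None.

    The character is updated only when every user's concentration exceeds the
    target value AND all users performed the same action.
    """
    if not users:
        return None
    actions = [u["action"] for u in users]
    all_focused = all(u["concentration"] > target_value for u in users)
    same_action = len(set(actions)) == 1
    return actions[0] if (all_focused and same_action) else None
```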
Exemplary devices
Based on the above embodiments, the present invention further provides a human-computer interaction device for concentration training, as shown in fig. 2, the device comprising:
the concentration determination module 01, configured to acquire action data and electroencephalogram data corresponding to a target user, and to determine a target concentration value for the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data correspond to the same acquisition time;
the action determining module 02 is used for determining actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value;
and the action updating module 03, configured to update the action of the virtual character according to the actual action data.
In one implementation, the concentration determination module 01 includes:
the curve generating unit is used for determining an actual signal intensity change curve according to the electroencephalogram data;
the standard acquisition unit is used for acquiring a standard signal intensity change curve, wherein the standard signal intensity change curve is used for reflecting the signal intensity change condition of the standard electroencephalogram data with the highest concentration value;
the similarity calculation unit is used for determining a similarity value according to the actual signal intensity change curve and the standard signal intensity change curve;
a concentration calculation unit for determining the target concentration value according to the similarity value.
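A sketch of the curve-similarity step performed by these units. Cosine similarity over sampled signal strengths, mapped to a 0-100 concentration value, is an assumed stand-in for whatever similarity measure the disclosure intends:

```python
import math

def concentration_from_curve(actual, standard):
    """Map the similarity between the actual and standard signal-strength
    curves (given as equal-length samples) to a 0-100 concentration value."""
    dot = sum(a * s for a, s in zip(actual, standard))
    norm_actual = math.sqrt(sum(a * a for a in actual))
    norm_standard = math.sqrt(sum(s * s for s in standard))
    if norm_actual == 0 or norm_standard == 0:
        return 0.0  # degenerate curve: no similarity signal
    return 100.0 * dot / (norm_actual * norm_standard)
```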
In one implementation, the action determining module 02 includes:
the completion degree determining unit is used for determining the action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data;
and the action determining unit is used for determining the actual action data according to the action completion degree and the action data.
In one implementation, the completion determination unit includes:
the initial determination unit is used for determining the initial action completion degree according to the target concentration value;
the interference determining unit, used for acquiring the action amplitude corresponding to the action data and determining an interference value according to the action amplitude, wherein the interference value is directly proportional to the action amplitude;
and the comprehensive determining unit is used for determining the action completion degree according to the interference value and the initial action completion degree.
In one implementation, the comprehensive determination unit includes:
when the interference value is greater than a preset interference threshold, determining a compensation completion degree according to the interference value, and determining the action completion degree by combining the initial action completion degree and the compensation completion degree;
and when the interference value is less than or equal to the interference threshold, determining the action completion degree according to the initial action completion degree.
In one implementation, the terminal screen further includes a moving virtual obstacle, and the apparatus further includes:
the contact ratio judging module is used for acquiring the contact ratio of the virtual character and the virtual obstacle;
the scene generation module is used for displaying punishment scenes through the terminal pictures when the coincidence degree is greater than a preset coincidence threshold value; and when the coincidence degree is smaller than or equal to a preset coincidence threshold value, displaying the reward scene through the terminal picture.
In one implementation, the apparatus further comprises:
the score calculating module, used for determining the interaction score of the target user according to the numbers of occurrences of the penalty scenes and the reward scenes;
and the speed adjusting module, used for adjusting the moving speed of the virtual obstacle in the next round according to the interaction score of the previous round.
Based on the above embodiments, the present invention further provides a terminal, and a schematic block diagram thereof may be as shown in fig. 3. The terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein the processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a human-computer interaction method for concentration training. The display screen of the terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be understood by those skilled in the art that the block diagram shown in fig. 3 shows only part of the structure associated with the inventive arrangements and does not limit the terminals to which the inventive arrangements may be applied; a particular terminal may include more or fewer components than those shown, combine some components, or arrange the components differently.
In one implementation, one or more programs are stored in the memory of the terminal and configured to be executed by one or more processors; the one or more programs include instructions for performing the human-computer interaction method for concentration training.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a human-computer interaction method, apparatus, terminal and storage medium for concentration training. The method comprises: acquiring action data and electroencephalogram data corresponding to a target user, and determining a target concentration value for the target user from the electroencephalogram data, wherein the action data and the electroencephalogram data correspond to the same acquisition time; determining actual action data for the virtual character on the terminal screen from the action data and the target concentration value; and updating the action of the virtual character according to the actual action data. The invention provides an interaction link in the concentration training process through the interaction between the virtual character and the user. In addition, because the virtual character's action is determined from both the user's current action data and the user's concentration value, the user can obtain feedback on the concentration training by comparing his or her own action with the virtual character's action, which provides a feedback link in the training process. The method thus solves the problems that conventional concentration training methods, lacking interaction or feedback, give users a poor training experience and make it difficult for them to persist in training.
It will be understood that the invention is not limited to the examples described above, but that modifications and variations will occur to those skilled in the art in light of the above teachings, and that all such modifications and variations are considered to be within the scope of the invention as defined by the appended claims.

Claims (8)

1. A human-computer interaction method for concentration training, the method comprising:
acquiring action data and electroencephalogram data corresponding to a target user, and determining a target concentration value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data respectively correspond to the same acquisition time;
according to the action data and the target concentration value, determining actual action data corresponding to the virtual character in the terminal picture, wherein even if the target user performs the same action under the condition of different concentration values, the action displayed by the virtual character is different, and only when the target concentration value reaches the target level, the actual action performed by the virtual character can be consistent with the head action data of the target user;
updating the action of the virtual role according to the actual action data;
the determining the actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value comprises the following steps:
determining an action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data, and the higher the action completion degree is, the closer the display action of the current virtual character and the execution action of the target user are, the lower the action completion degree is, and the more deviated the display action of the current virtual character and the execution action of the target user are; the higher the target concentration value is, the higher the initial action completion degree is;
determining the action completion degree according to the target concentration value, including:
determining the completion degree of the initial action according to the target concentration value;
acquiring action amplitude corresponding to the action data, and determining an interference value according to the action amplitude, wherein the action amplitude is in a direct proportion relation with the interference value;
and determining the action completion degree according to the interference value and the initial action completion degree.
2. The human-computer interaction method for attention training as claimed in claim 1, wherein the determining a target attention value corresponding to the target user from the electroencephalogram data comprises:
determining an actual signal intensity change curve according to the electroencephalogram data;
acquiring a standard signal intensity change curve, wherein the standard signal intensity change curve is used for reflecting the signal intensity change condition of standard electroencephalogram data with the highest concentration value;
determining a similarity value according to the actual signal intensity change curve and the standard signal intensity change curve;
and determining the target concentration value according to the similarity value.
3. The human-computer interaction method for attention training according to claim 1, wherein the determining the action completion degree according to the interference value and the initial action completion degree comprises:
when the interference value is larger than a preset interference threshold value, determining a compensation completion degree according to the interference value, and determining the action completion degree according to the combination of the initial action completion degree and the compensation completion degree;
and when the interference value is smaller than or equal to the interference threshold, determining the action completion degree according to the initial action completion degree.
4. The human-computer interaction method for attention training according to claim 1, wherein the terminal screen further includes a moving virtual obstacle, the method further comprising:
acquiring the contact ratio of the virtual character and the virtual obstacle;
when the coincidence degree is larger than a preset coincidence threshold value, displaying a punished scene through the terminal picture;
and when the coincidence degree is smaller than or equal to a preset coincidence threshold value, displaying the reward scene through the terminal picture.
5. The human-computer interaction method for concentration training of claim 4, wherein the method further comprises:
determining the interaction scores corresponding to the target users according to the occurrence times corresponding to the punishment scenes and the reward scenes respectively;
and adjusting the moving speed of the virtual barrier in the next round according to the interactive score corresponding to the previous round.
6. A human-computer interaction device for attentive training, the device comprising:
the attention power determining module is used for acquiring action data and electroencephalogram data corresponding to a target user and determining a target attention power value corresponding to the target user according to the electroencephalogram data, wherein the action data and the electroencephalogram data respectively correspond to the same acquisition time;
the action determining module is used for determining actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value, wherein under the condition that the target user is at different concentration values, even if the target user makes the same action, the action shown by the virtual character is different, and only when the target concentration value reaches a target level, the actual action made by the virtual character is consistent with the head action data of the target user;
the action updating module is used for updating the action of the virtual role according to the actual action data;
the determining the actual action data corresponding to the virtual character in the terminal picture according to the action data and the target concentration value comprises the following steps:
determining action completion degree according to the target concentration value, wherein the action completion degree is used for reflecting the similarity degree between the action data and the actual action data, and the higher the action completion degree is, the closer the display action of the current virtual character and the execution action of the target user are, the lower the action completion degree is, and the more deviated the display action of the current virtual character and the execution action of the target user are; the higher the target concentration value is, the higher the initial action completion degree is;
determining the action completion degree according to the target concentration value, including:
determining the completion degree of the initial action according to the target concentration value;
acquiring action amplitude corresponding to the action data, and determining an interference value according to the action amplitude, wherein the action amplitude is in a direct proportion relation with the interference value;
and determining the action completion degree according to the interference value and the initial action completion degree.
7. A terminal, comprising a memory and one or more processors; the memory stores more than one program; the program comprises instructions for performing the human-machine interaction method for concentration training of any one of claims 1-5; the processor is configured to execute the program.
8. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the method for human-computer interaction for concentration training of any of the above claims 1-5.
CN202211096166.9A 2022-09-08 2022-09-08 Human-computer interaction method, device, terminal and storage medium for concentration training Active CN115167689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211096166.9A CN115167689B (en) 2022-09-08 2022-09-08 Human-computer interaction method, device, terminal and storage medium for concentration training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211096166.9A CN115167689B (en) 2022-09-08 2022-09-08 Human-computer interaction method, device, terminal and storage medium for concentration training

Publications (2)

Publication Number Publication Date
CN115167689A CN115167689A (en) 2022-10-11
CN115167689B true CN115167689B (en) 2022-12-09

Family

ID=83482392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211096166.9A Active CN115167689B (en) 2022-09-08 2022-09-08 Human-computer interaction method, device, terminal and storage medium for concentration training

Country Status (1)

Country Link
CN (1) CN115167689B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329901B (en) * 2022-10-12 2023-04-07 深圳市心流科技有限公司 Cognitive training method, device, equipment and storage terminal
CN115845214B (en) * 2023-02-27 2023-06-06 深圳市心流科技有限公司 Concentration force and reaction force dual training method and device and terminal equipment
CN116370788B (en) * 2023-06-05 2023-10-17 浙江强脑科技有限公司 Training effect real-time feedback method and device for concentration training and terminal equipment
CN116665846B (en) * 2023-08-02 2024-03-08 深圳市心流科技有限公司 Concentration training method and device based on touch control, terminal and storage medium
CN116650789B (en) * 2023-08-02 2023-11-17 深圳市心流科技有限公司 Concentration training method based on touch data and gyroscope data
CN117012071B (en) * 2023-09-27 2024-01-30 深圳市心流科技有限公司 Training excitation method, device and storage medium for concentration training

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014217704A (en) * 2013-05-10 2014-11-20 ソニー株式会社 Image display apparatus and image display method
US10120413B2 (en) * 2014-09-11 2018-11-06 Interaxon Inc. System and method for enhanced training using a virtual reality environment and bio-signal data
US20160196765A1 (en) * 2014-12-24 2016-07-07 NeuroSpire, Inc. System and method for attention training using electroencephalography (EEG) based neurofeedback and motion-based feedback
CN106691441A (en) * 2016-12-22 2017-05-24 蓝色传感(北京)科技有限公司 Attention training system based on brain electricity and movement state feedback and method thereof
CN113192601A (en) * 2021-04-15 2021-07-30 杭州国辰迈联机器人科技有限公司 Attention deficit hyperactivity disorder rehabilitation training method and training task based on brain-computer interface
CN113546395A (en) * 2021-07-28 2021-10-26 西安领跑网络传媒科技股份有限公司 Intelligent exercise training system and training method
CN114159064B (en) * 2022-02-11 2022-05-17 深圳市心流科技有限公司 Electroencephalogram signal based concentration assessment method, device, equipment and storage medium
CN114642432A (en) * 2022-02-14 2022-06-21 浙江强脑科技有限公司 Attention assessment method, device, equipment and storage medium
CN114176611B (en) * 2022-02-14 2022-07-05 深圳市心流科技有限公司 Method and device for training meditation state based on brain wave signals and storage medium
CN114847950A (en) * 2022-04-29 2022-08-05 深圳市云长数字医疗有限公司 Attention assessment and training system and method based on virtual reality and storage medium
CN114694448B (en) * 2022-06-01 2022-08-30 深圳市心流科技有限公司 Concentration training method and device, intelligent terminal and storage medium

Also Published As

Publication number Publication date
CN115167689A (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN115167689B (en) Human-computer interaction method, device, terminal and storage medium for concentration training
CN110890140B (en) Virtual reality-based autism rehabilitation training and capability assessment system and method
US10643741B2 (en) Systems and methods for a web platform hosting multiple assessments of human visual performance
Lenhardt et al. An adaptive P300-based online brain–computer interface
Heiser et al. Spatial updating in area LIP is independent of saccade direction
Zeyl et al. Adding real-time Bayesian ranks to error-related potential scores improves error detection and auto-correction in a P300 speller
CN114756137B (en) Training mode adjusting method and device for electromyographic signals and electroencephalographic signals
US20210346689A1 (en) System and method for individualizing neuromodulation
CN109102859A (en) A kind of motion control method and system
CN115268718A (en) Image display method, device, terminal and storage medium for concentration training
KR102425481B1 (en) Virtual reality communication system for rehabilitation treatment
CN116687411B (en) Game comprehensive score acquisition method and device, intelligent terminal and storage medium
Schez-Sobrino et al. Automatic recognition of physical exercises performed by stroke survivors to improve remote rehabilitation
JP2023530624A (en) Systems and methods for the treatment of post-traumatic stress disorder (PTSD) and phobias
CN112465139A (en) Cognitive training method, system and storage medium
Zhang Virtual reality games based on brain computer interface
CN115953930A (en) Concentration training method, device, terminal and storage medium based on visual tracking
CN103297546A (en) Method and system for visual perception training and server
CN115944298A (en) Human-computer interaction concentration assessment method, device, terminal and storage medium
CN113662822B (en) Optotype adjusting method based on eye movement, visual training method and visual training device
CN113160968B (en) Personalized diagnosis system based on mobile internet and application method
CN115591077A (en) Self-control force training method, device, equipment and storage medium
US20220184405A1 (en) Systems and methods for labeling data in active implantable medical device systems
KR101890374B1 (en) Brain training method using eeg and problem and brain training apparatus using the same
CN112402767A (en) Eye movement desensitization reprocessing intervention system and eye movement desensitization reprocessing intervention method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant