CN113539488A - Human-computer interaction method and device for evaluating attention persistence
- Publication number: CN113539488A
- Application number: CN202011267310.1A
- Authority: CN (China)
- Prior art keywords: evaluation, image, user, selection, display screen
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Abstract
Embodiments of the application disclose a human-computer interaction method and device for evaluating attention persistence. One embodiment of the method comprises: sequentially extracting images from a candidate image set or an interference image set; dynamically displaying each image on a display screen; in response to detecting that the user selects an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information; determining whether an evaluation end condition is currently met; and if so, generating and outputting an evaluation result. The embodiment uses human-computer interaction to record accurately how long the user sustains attention, improving the efficiency of attention-persistence evaluation. Embodiments of the application can be applied to attention-persistence evaluation and screening for specific populations; the dynamic display of graphics can be precisely controlled through human-computer interaction, which enriches the evaluation process, makes the evaluation data more accurate, and allows the current evaluation status to be displayed visually.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a human-computer interaction method and device for evaluating attention persistence.
Background
Human-computer interaction (HCI) technology refers to technology that realizes interaction between humans and computers in an efficient manner through computer input and output devices. It includes the machine providing large amounts of relevant information and prompts to the user through output or display devices, and the user entering relevant information, answers to questions, and responses to prompts into the machine through input devices. Human-computer interaction technology is one of the important elements of computer user interface design.
Currently, when cognitive assessment is performed on specific populations such as the elderly or patients with mental illness, it is usually necessary to know how long the evaluated person can sustain attention on something, so that a corresponding assessment or training can be carried out. To evaluate a user's attention persistence accurately and efficiently, an evaluation method based on human-computer interaction with rich content and varied interaction modes is needed.
Disclosure of Invention
An object of the embodiments of the present application is to provide an improved human-computer interaction method and apparatus for assessing attention persistence.
In a first aspect, an embodiment of the present application provides a human-computer interaction method for evaluating attention persistence, including: sequentially extracting images from a candidate image set or an interference image set according to a preset extraction period; dynamically displaying each extracted image on a display screen of a target device; in response to detecting that the user selects an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information, wherein selecting a candidate image is a correct selection, and selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error; determining whether an evaluation end condition is currently met; and if so, generating and outputting an evaluation result based on the user operation information.
In some embodiments, extracting images from the candidate image set or the interference image set according to the preset extraction period includes: extracting images from the candidate image set or the interference image set according to the extraction period, following a rule that every first preset number of extracted images includes a second preset number of candidate images.
In some embodiments, dynamically displaying the extracted image on the display screen of the target device includes: moving the extracted image across the display screen along a preset trajectory.
In some embodiments, determining whether the evaluation end condition is currently met comprises: in response to a selection error by the user, determining that the evaluation end condition is currently met; or, in response to determining that the current time has reached a preset evaluation end time and the user has made no selection error within the evaluation period, determining that the evaluation end condition is currently met.
In some embodiments, dynamically displaying the extracted image on the display screen of the target device includes: sequentially displaying the extracted images at a fixed position on the display screen.
In some embodiments, the user operation information includes a cumulative number of selection errors; and determining whether the user's selection is correct and recording user operation information includes: in response to a selection error by the current user, incrementing the cumulative number of selection errors by one; and in response to a correct selection by the current user while the cumulative number of selection errors is smaller than a preset number, resetting the cumulative number of selection errors to zero. Determining whether the evaluation end condition is currently met includes: in response to the cumulative number of selection errors reaching the preset number, determining that the evaluation end condition is currently met; or, in response to the cumulative number of selection errors not reaching the preset number and the number of images displayed at the fixed position reaching a preset count, determining that the evaluation end condition is currently met.
In some embodiments, after determining whether the user's selection is correct, the method further comprises: outputting prompt information indicating whether the user's selection is correct.
In some embodiments, before images are sequentially extracted from the candidate image set or the interference image set according to the preset extraction period, the method further includes: in response to entering an animation demonstration interface, sequentially extracting images from a demonstration candidate image set or a demonstration interference image set according to a preset extraction period; dynamically displaying the extracted images on the display screen, and displaying a click prompt icon on the animation demonstration interface; moving the click prompt icon to a demonstration candidate image, virtually clicking that image with the click prompt icon, and outputting prompt information indicating a correct selection; and/or moving the click prompt icon to a demonstration interference image, virtually clicking that image with the click prompt icon, and outputting prompt information indicating a selection error; and exiting the animation demonstration interface in response to an animation demonstration end condition currently being met.
In some embodiments, the evaluation result includes an attention-sustaining time, which is the period from the time the evaluation starts to the time the evaluation end condition is met.
In a second aspect, an embodiment of the present application provides a human-computer interaction device for assessing attention persistence, the device including: a first extraction module for sequentially extracting images from a candidate image set or an interference image set according to a preset extraction period; a first display module for dynamically displaying each extracted image on a display screen of a target device; a first determination module for, in response to detecting that the user selects an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information, wherein selecting a candidate image is a correct selection, and selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error; a second determination module for determining whether an evaluation end condition is currently met; and a generation module for, if so, generating and outputting an evaluation result based on the user operation information.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage means storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the method described in any implementation of the first aspect.
According to the human-computer interaction method and device for evaluating attention persistence provided by the embodiments of the application, candidate images and interference images are dynamically displayed on the display screen of the target device for the user to select, user operation information indicating whether each selection is correct is recorded, and when the evaluation end condition is met an evaluation result is generated and output from that information. Because the evaluation result accurately characterizes the user's attention persistence, the method records precisely how long the user sustains attention and improves the efficiency of attention-persistence evaluation. Embodiments of the application can be applied to attention-persistence evaluation and screening for specific populations; the dynamic display of graphics can be precisely controlled through human-computer interaction, which enriches the evaluation process, makes the evaluation data more accurate and the evaluation more efficient, and allows the current evaluation status to be displayed visually.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a human-computer interaction method for assessing attention persistence according to the application;
FIG. 3 is a flow diagram of yet another embodiment of a human-computer interaction method for assessing attention persistence according to the application;
FIG. 4 is an exemplary diagram of an evaluation interface for a human-computer interaction method for evaluating attention persistence according to the application;
FIG. 5 is a flow diagram of yet another embodiment of a human-computer interaction method for assessing attention persistence according to the application;
FIG. 6 is an exemplary diagram of an evaluation interface for a human-computer interaction method for evaluating attention persistence according to the application;
FIG. 7 is a schematic structural diagram of one embodiment of a human-computer interaction device for assessing attention persistence according to the present application;
FIG. 8 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the relevant invention and do not restrict it. It should also be noted that, for ease of description, only the portions related to the invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
FIG. 1 illustrates an exemplary system architecture 100 to which the human-computer interaction method for assessing attention persistence of embodiments of the present application may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. Various communication client applications, such as a visual attention assessment application, a web browser application, an instant messaging tool, etc., may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices including, but not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
The server 103 may be a server providing various services, such as a background server providing support for a visual attention assessment application on the terminal device 101. The background server may send a software installation program, various data required by the software, and the like to the terminal apparatus 101, and may also generate an evaluation result according to an online operation of the user.
It should be noted that, the human-computer interaction method for evaluating attention persistence provided in the embodiment of the present application may be executed by the terminal device 101 or the server 103, and accordingly, the human-computer interaction device for evaluating attention persistence may be disposed in the terminal device 101 or the server 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. It should be noted that, in the case that the material required for evaluation does not need to be acquired from a remote location, the system architecture may include only a server or a terminal device without a network.
With continued reference to FIG. 2, a flow 200 of one embodiment of the human-computer interaction method for assessing attention persistence according to the present application is illustrated. The method comprises the following steps:
Step 201: sequentially extracting images from the candidate image set or the interference image set according to a preset extraction period.
In this embodiment, the execution subject of the human-computer interaction method for evaluating attention persistence (e.g., the terminal device or the server shown in FIG. 1) may sequentially extract images from the candidate image set or the interference image set according to a preset extraction period. The interference image set and the candidate image set may each contain one or more images.
The interference images and the candidate images may be images with similar appearance, color, or other characteristics. A candidate image is an image the user is expected to select from among the images shown in the target area. Both kinds of image may be of various types, for example images containing various graphics, symbols, numbers, or characters.
The preset period may be fixed in advance or set arbitrarily by the user. Note that each time an interference image or a candidate image is extracted, step 202 is executed immediately.
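As an illustration only, the periodic extraction could be driven by a timer in a browser-based implementation. The following TypeScript sketch assumes image pools of URLs, a display callback, and browser APIs; none of these names come from the patent.

```typescript
// Minimal sketch, assuming two URL pools and a display callback; which pool
// each tick draws from is governed by the extraction rule (e.g. one candidate
// per five images), so the random choice here is a stand-in only.
type ImageKind = "candidate" | "interference";

interface StimulusImage {
  url: string;
  kind: ImageKind;
}

function startExtraction(
  candidatePool: string[],
  interferencePool: string[],
  periodMs: number,                      // the preset extraction period
  display: (img: StimulusImage) => void, // step 202 runs immediately per image
): number {
  return window.setInterval(() => {
    const useCandidate = Math.random() < 0.2;
    const pool = useCandidate ? candidatePool : interferencePool;
    const url = pool[Math.floor(Math.random() * pool.length)];
    display({ url, kind: useCandidate ? "candidate" : "interference" });
  }, periodMs);
}
```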
Step 202: dynamically displaying the extracted image on a display screen of the target device.
In this embodiment, the execution subject may dynamically display the extracted image on the display screen of the target device. The target device is the device used for evaluating the user, for example the terminal device shown in FIG. 1.
The dynamic display can take various forms. For example, the extracted interference image or candidate image may be moved across the display screen along a set trajectory, or the displayed images may be switched periodically at a fixed position.
Step 203: in response to detecting that the user selects an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information.
In this embodiment, the execution subject may, in response to detecting that the user selects an image displayed on the display screen, determine whether the user's selection is correct and record user operation information. Selecting a candidate image is a correct selection; selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error.
The user can select a displayed image by clicking with a mouse, touching with a finger, and so on. After the user makes a selection, user operation information is generated to record whether the selection was correct. The user operation information may also include other information, such as the time at which the user selected an image and the number of images selected so far.
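For illustration, the user operation information could be kept as a simple log of selection records. The shape below is an assumption; the patent does not prescribe any data structure.

```typescript
// Hypothetical record shape for the user operation information; all field
// names are assumptions, not taken from the patent.
interface UserOperation {
  timestamp: number;   // time point at which the user selected an image
  imageIndex: number;  // sequence number of the selected image
  correct: boolean;    // whether the selection counted as correct
}

const operationLog: UserOperation[] = [];

function recordSelection(imageIndex: number, wasCandidate: boolean): void {
  operationLog.push({
    timestamp: Date.now(),
    imageIndex,
    correct: wasCandidate, // selecting a candidate image is a correct selection
  });
}
```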
In some optional implementations of this embodiment, after determining in step 203 whether the user's selection is correct, the execution subject may further output prompt information indicating whether the selection was correct. The prompt information may take various forms, including but not limited to at least one of: text, icons, alert tones, and the like. As an example, when the image selected by the user is a candidate image, the icon "√" is displayed on it, and when the selected image is an interference image, the icon "×" is displayed on it.
By outputting this prompt information, the implementation shows the user visually whether the selected image was correct, which improves the flexibility of the evaluation and the richness of the human-computer interaction.
Step 204: determining whether the evaluation end condition is currently met.
In this embodiment, the execution subject may determine whether the evaluation end condition is currently met. The evaluation end condition may include, but is not limited to, at least one of the following: the number of user selection errors reaches a preset number; the current time reaches a preset evaluation end time; the user triggers an evaluation-finished signal, for example by clicking a button; or the number of evaluation operations performed by the user reaches a preset number.
Step 205: if so, generating and outputting an evaluation result based on the user operation information.
In this embodiment, if the evaluation end condition is met, the execution subject may generate and output an evaluation result based on the user operation information. The evaluation result characterizes the persistence of the user's attention. It may be output in any preset manner, for example displayed on the display screen, output as audio, compiled into a table, or sent to other electronic devices.
In some optional implementations of this embodiment, the evaluation result includes an attention-sustaining time, which is the period from the time the evaluation starts to the time the evaluation end condition is met. As an example, if the user made no error throughout the evaluation, the entire evaluation time is the attention-sustaining time; if the user's number of selection errors reached a preset number, the period from the start of the evaluation to the moment that number was reached is taken as the attention-sustaining time.
Recording the attention-sustaining time accurately characterizes the user's attention persistence, provides a temporal reference for other evaluations of the user, and improves the accuracy and efficiency of the evaluation.
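A minimal sketch of this computation, assuming selections are logged with millisecond timestamps; the function and field names are illustrative only.

```typescript
// If the preset number of selection errors is reached, attention counts as
// sustained only up to the error that reached it; otherwise the whole
// evaluation period counts as sustained attention.
interface LoggedSelection {
  timestamp: number;
  correct: boolean;
}

function attentionSustainingTimeMs(
  startTime: number,
  log: LoggedSelection[],
  presetErrorCount: number,
  evaluationEndTime: number,
): number {
  let errors = 0;
  for (const op of log) {
    if (!op.correct && ++errors === presetErrorCount) {
      return op.timestamp - startTime;
    }
  }
  return evaluationEndTime - startTime;
}
```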
In some optional implementations of this embodiment, the method may include the following steps before step 201:
Step one: in response to entering an animation demonstration interface, sequentially extracting images from a demonstration candidate image set or a demonstration interference image set according to a preset extraction period.
The animation demonstration interface may be entered through a user action, for example by the user clicking a button to enter the evaluation.
Step two: dynamically displaying the extracted images on the display screen, and displaying a click prompt icon on the animation demonstration interface.
The click prompt icon demonstrates a click action to show the user how to operate. It may take various forms, such as a hand icon.
Step three: moving the click prompt icon to a demonstration candidate image, virtually clicking that image with the click prompt icon, and outputting prompt information indicating a correct selection; and/or,
moving the click prompt icon to a demonstration interference image, virtually clicking that image with the click prompt icon, and outputting prompt information indicating a selection error.
In general, CSS can be used to obtain the left and top offsets of the image to be clicked, and the gesture icon can then be moved onto it with jQuery's animate(). The virtual click demonstrates the action of clicking an image and may be implemented in various ways, such as playing a sound effect or showing a dynamic image. For example, after the movement completes, the click prompt icon is enlarged and then restored by applying a CSS transform scale, which produces the click effect.
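A sketch of this demonstration gesture in TypeScript with jQuery, following the animate() and transform-scale approach named above; the selector, icon id, CSS class, and timings are assumptions.

```typescript
import $ from "jquery";

// Move the (absolutely positioned) hand icon onto the target image with
// jQuery's animate(), then pulse it via a CSS transform scale to simulate a
// click. A CSS rule such as `.pressed { transform: scale(1.3); }` is assumed.
function demonstrateClick(targetSelector: string, onDone?: () => void): void {
  const offset = $(targetSelector).offset(); // left/top offsets of the image
  if (!offset) return;

  $("#click-hint-icon").animate(
    { left: offset.left, top: offset.top },  // move the gesture icon onto it
    800,                                     // assumed movement duration (ms)
    () => {
      $("#click-hint-icon").addClass("pressed");      // enlarge: virtual click
      setTimeout(() => {
        $("#click-hint-icon").removeClass("pressed"); // restore original size
        onDone?.();
      }, 200);
    },
  );
}
```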
Step four: exiting the animation demonstration interface in response to the animation demonstration end condition currently being met.
As examples, the animation demonstration end condition may include, but is not limited to, at least one of: the animation demonstration time reaches a preset duration; or the user triggers a signal to end the animation demonstration (e.g., clicks an end-demonstration button).
By adding an animation demonstration step before the formal evaluation, the method shows the user clearly how the evaluation operations are performed, so the user can quickly learn them by watching the demonstration, which improves evaluation efficiency. Note that after the animation demonstration and before the formal evaluation, the method may further include a practice step, letting the user rehearse the evaluation operations by hand, which also helps improve efficiency. The practice procedure is essentially identical to the formal evaluation procedure and is not described again here.
In the method provided by this embodiment of the application, candidate images and interference images are dynamically displayed on the display screen of the target device for the user to select, user operation information indicating whether each selection is correct is recorded, and when the evaluation end condition is met an evaluation result is generated and output from that information. Because the evaluation result accurately characterizes the persistence of the user's attention, the method records precisely how long the user sustains attention and improves the efficiency of attention-persistence evaluation. Embodiments of the application can be applied to attention-persistence evaluation and screening for specific populations; the dynamic display of graphics can be precisely controlled through human-computer interaction, which enriches the evaluation process, makes the evaluation data more accurate and the evaluation more efficient, and allows the current evaluation status to be displayed visually.
With further reference to FIG. 3, a flow 300 of another embodiment of the human-computer interaction method for assessing attention persistence according to the present application is shown. The method comprises the following steps:
Step 301: extracting images from the candidate image set or the interference image set according to the extraction period, following the rule that every first preset number of extracted images includes a second preset number of candidate images.
In this embodiment, the execution subject may extract images from the candidate image set or the interference image set at the extraction period, such that every first preset number of extracted images includes a second preset number of candidate images.
As an example, let the first preset number be 5 and the second preset number be 1, with one image in the candidate image set and one in the interference image set. Every 5 extracted images then include exactly 1 candidate image; if the first 4 of a group of 5 images are all interference images, the 5th is a candidate image.
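One way to realize this rule, sketched under the assumption that the candidate's position within each group is randomized so it does not always appear in the same slot; the helper name is illustrative.

```typescript
// Plan one group of firstPresetNumber images containing exactly
// secondPresetNumber candidate images at random slots.
type Kind = "candidate" | "interference";

function planGroup(firstPresetNumber: number, secondPresetNumber: number): Kind[] {
  const group: Kind[] = new Array(firstPresetNumber).fill("interference");
  let placed = 0;
  while (placed < secondPresetNumber) {
    const slot = Math.floor(Math.random() * firstPresetNumber);
    if (group[slot] === "interference") {
      group[slot] = "candidate";
      placed += 1;
    }
  }
  return group;
}

// With the numbers from the example, planGroup(5, 1) always yields exactly one
// "candidate" among five kinds: when the first four slots are all interference
// images, the fifth is necessarily the candidate.
```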
Step 302: moving the extracted image across the display screen along a preset trajectory.
In this embodiment, the execution subject may move the extracted image across the display screen along a preset trajectory.
The preset trajectory can be arbitrary, for example a straight movement from top to bottom, a straight movement from left to right, or a spiral. In general, the first preset number in step 301 is tied to step 302: the maximum number of images that can be displayed on the screen simultaneously while moving along the trajectory of step 302 equals the first preset number. For example, if the first preset number is 5 and the images move linearly from top to bottom, the movement parameters can be set so that at most 5 images are on the screen at the same time.
As an example, as shown in FIG. 4, image 401 is a candidate image and the rest are interference images. One image, either a candidate image or an interference image, is generated per second; exactly one candidate image appears in every five images, and the images move from top to bottom.
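The relation between the extraction period and the number of simultaneously visible images can be made explicit; the helper below is an illustrative assumption, not part of the patent.

```typescript
// One image enters per extraction period and stays visible while it travels,
// so the count on screen at once is travel time / period. This choice of
// travel time caps simultaneous images at the first preset number.
function travelDurationMs(periodMs: number, firstPresetNumber: number): number {
  return periodMs * firstPresetNumber;
}

// Matching FIG. 4's example: one image per second, groups of five, so each
// image should take travelDurationMs(1000, 5) === 5000 ms to cross the screen.
```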
Step 303: in response to detecting that the user selects an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information.
In this embodiment, step 303 is substantially the same as step 203 in the embodiment corresponding to FIG. 2 and is not described again here. As shown in FIG. 4, if the user clicks candidate image 401, the selection is correct.
Step 304: determining whether the evaluation end condition is currently met. In some optional implementations of this embodiment, step 304 may include either of the following:
In response to a selection error by the user, determining that the evaluation end condition is currently met. That is, the evaluation ends when the user clicks an interference image or fails to click a candidate image before it moves off the screen.
In response to determining that the current time has reached the preset evaluation end time and the user has made no selection error within the evaluation period, determining that the evaluation end condition is currently met. That is, the evaluation ends once the user has made no error within the specified time.
Setting the evaluation end condition in this way makes it possible to determine accurately how long the user operated correctly during the evaluation, so that the user's attention persistence can be assessed accurately.
Step 305: if the evaluation end condition is met, generating and outputting an evaluation result based on the user operation information.
In this embodiment, the evaluation result may include at least one of: the total evaluation duration, the sustaining time, the accuracy (the number of candidate images selected during the evaluation divided by the total number of candidate images that appeared during the complete evaluation period), and so on.
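The accuracy definition above translates directly into code; this small helper is illustrative only.

```typescript
// Accuracy as defined above: candidate images the user actually selected,
// divided by all candidate images shown during the complete evaluation period.
function accuracy(candidatesSelected: number, candidatesShown: number): number {
  return candidatesShown === 0 ? 0 : candidatesSelected / candidatesShown;
}
```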
In the method of the embodiment corresponding to FIG. 3, step 301 makes it possible to set the frequency with which candidate images appear on the display screen, so that they are distributed more uniformly over the whole evaluation, which helps improve the accuracy of the attention-persistence evaluation. Step 302 allows several images to be displayed in motion on the screen at the same time, enriching the evaluation modes.
With further reference to FIG. 5, a flow 500 of yet another embodiment of the human-computer interaction method for assessing attention persistence according to the present application is shown. The method comprises the following steps:
Step 501: sequentially extracting images from the candidate image set or the interference image set according to a preset extraction period.
In this embodiment, step 501 is substantially the same as step 201 in the embodiment corresponding to FIG. 2 and is not described again here.
Step 502: sequentially displaying the extracted images at a fixed position on the display screen.
As an example, as shown in FIG. 6, the interference image set contains one image, showing the digit 3, and the candidate image set contains 9 images, showing the digits other than 3. The execution subject randomly extracts images from the interference image set and the candidate image set and displays them in a fixed area 601 on the display screen. Each newly extracted image overlays the previously displayed one, so the digit shown in area 601 appears to change constantly.
As an example, as shown in FIG. 6, when the digit 3 appears in area 601, clicking it is a selection error. When any other digit appears, clicking it is a correct selection, and failing to click it while it is displayed is a selection error.
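A minimal sketch of this fixed-position variant, assuming a DOM element for area 601 and a data attribute marking whether the current digit is a candidate; all identifiers are assumptions.

```typescript
// A single display area whose digit is replaced every period, overlaying the
// previous one. The digit 3 is the interference image; any other digit is a
// candidate, following FIG. 6's example.
const INTERFERENCE_DIGIT = 3;

function startFixedPositionDisplay(periodMs: number): number {
  const area = document.getElementById("display-area")!;
  return window.setInterval(() => {
    const digit = Math.floor(Math.random() * 10); // 0..9
    area.textContent = String(digit);             // overlays the previous digit
    area.dataset.isCandidate = String(digit !== INTERFERENCE_DIGIT);
  }, periodMs);
}
```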
In some optional implementations of this embodiment, the user operation information includes a cumulative number of selection errors. On this basis, step 503 (determining whether the user's selection is correct and recording user operation information) includes:
In response to a selection error by the current user, incrementing the cumulative number of selection errors by one.
In response to a correct selection by the current user while the cumulative number of selection errors is smaller than the preset number, resetting the cumulative number of selection errors to zero. That is, as long as the cumulative count has not reached the preset number, any single correct selection clears it.
In some optional implementations of this embodiment, building on the implementation above, step 504 (determining whether the evaluation end condition is currently met) may include:
In response to the cumulative number of selection errors reaching the preset number, determining that the evaluation end condition is currently met. As an example, at the start of the evaluation the cumulative number of selection errors n is 0 and each selection error increments n by 1; if the preset number is 10, the evaluation end condition is met once the user makes 10 consecutive selection errors.
In response to the cumulative number of selection errors not reaching the preset number and the number of images displayed at the fixed position reaching a preset count, determining that the evaluation end condition is currently met. That is, if the user never accumulates the preset number of consecutive selection errors during the whole evaluation, the evaluation ends when the set number of displayed images is reached.
The evaluation end condition provided by this implementation makes it possible to stop the evaluation and record the sustaining time precisely when the user produces a longer run of consecutive errors, which improves the flexibility of evaluating the user's attention persistence; a sketch of this bookkeeping follows.
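The following sketch tracks the consecutive-error count and both end conditions; the class and method names are assumptions for illustration.

```typescript
// A correct selection clears the counter; reaching the preset count (10 in the
// example above), or exhausting the preset number of displayed images, ends
// the evaluation.
class ErrorTracker {
  private consecutiveErrors = 0;

  constructor(
    private presetErrorCount: number, // e.g. 10 in the example above
    private presetImageCount: number, // preset number of displayed images
  ) {}

  recordSelection(correct: boolean): void {
    if (correct) {
      this.consecutiveErrors = 0;     // any correct selection clears the count
    } else {
      this.consecutiveErrors += 1;
    }
  }

  evaluationShouldEnd(imagesDisplayed: number): boolean {
    return (
      this.consecutiveErrors >= this.presetErrorCount ||
      imagesDisplayed >= this.presetImageCount
    );
  }
}
```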
Step 505: if the evaluation end condition is met, generating and outputting an evaluation result based on the user operation information.
In this embodiment, the evaluation result may include at least one of: the total evaluation duration, the sustaining time, the accuracy, the total number of items (i.e., the number of times the displayed image is switched on the screen), the number of selections, the number of correct selections, the number of selection errors, and so on.
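For illustration, these fields could be gathered into a single result record; the shape below is an assumption, not taken from the patent.

```typescript
// Hypothetical shape for the evaluation result fields listed above.
interface EvaluationResult {
  totalDurationMs: number;  // total evaluation duration
  sustainingTimeMs: number; // attention-sustaining time
  accuracy: number;         // correct candidate selections / candidates shown
  totalItems: number;       // number of image switches on the screen
  selections: number;       // number of selections the user made
  correctCount: number;     // number of correct selections
  errorCount: number;       // number of selection errors
}
```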
The method of the embodiment corresponding to FIG. 5 enriches the attention-persistence evaluation modes by switching the displayed image sequentially at a fixed position on the display screen and determining from the user's selections whether a selection error has occurred.
With further reference to fig. 7, as an implementation of the method shown in the above figures, the present application provides an embodiment of a human-computer interaction device for evaluating attention persistence, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied in various electronic devices.
As shown in FIG. 7, the human-computer interaction device 700 for evaluating attention persistence of this embodiment includes: a first extraction module 701 configured to sequentially extract images from the candidate image set or the interference image set according to a preset extraction period; a first display module 702 configured to dynamically display the extracted image on a display screen of the target device; a first determination module 703 configured to, in response to detecting that the user selects an image displayed on the display screen, determine whether the user's selection is correct and record user operation information, wherein selecting a candidate image is a correct selection, and selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error; a second determination module 704 configured to determine whether the evaluation end condition is currently met; and a generation module 705 configured to, if the condition is met, generate and output an evaluation result based on the user operation information.
In this embodiment, the first extraction module 701 may sequentially extract images from the candidate image set or the interference image set according to a preset extraction period. The interference image set and the candidate image set may each contain one or more images.
The interference images and the candidate images may be images with similar appearance, color, or other characteristics. A candidate image is an image the user is expected to select from among the images shown in the target area. Both kinds of image may be of various types, for example images containing various graphics, symbols, numbers, or characters.
In this embodiment, the first display module 702 may dynamically display the extracted image on the display screen of the target device. The target device is the device used for evaluating the user, for example the terminal device shown in FIG. 1.
The dynamic display can take various forms. For example, the extracted interference image or candidate image may be moved across the display screen along a set trajectory, or the displayed images may be switched periodically at a fixed position.
In this embodiment, the first determination module 703 may, in response to detecting that the user selects an image displayed on the display screen, determine whether the user's selection is correct and record user operation information. Selecting a candidate image is a correct selection; selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error.
The user can select a displayed image by clicking with a mouse, touching with a finger, and so on. After the user makes a selection, user operation information is generated to record whether the selection was correct. The user operation information may also include other information, such as the time at which the user selected an image and the number of images selected so far.
In this embodiment, the second determination module 704 may determine whether the evaluation end condition is currently met. The evaluation end condition may include, but is not limited to, at least one of the following: the number of user selection errors reaches a preset number; the current time reaches a preset evaluation end time; the user triggers an evaluation-finished signal, for example by clicking a button; or the number of evaluation operations performed by the user reaches a preset number.
In this embodiment, if the evaluation end condition is met, the generation module 705 may generate and output an evaluation result based on the user operation information. The evaluation result characterizes the persistence of the user's attention. It may be output in any preset manner, for example displayed on the display screen, output as audio, compiled into a table, or sent to other electronic devices.
In some optional implementations of this embodiment, the first extraction module 701 may be further configured to extract images from the candidate image set or the interference image set according to the extraction period, following the rule that every first preset number of extracted images includes a second preset number of candidate images.
In some optional implementations of this embodiment, the first display module 702 may be further configured to move the extracted image across the display screen along a preset trajectory.
In some optional implementations of this embodiment, the second determination module 704 may include: a first determination unit (not shown) configured to determine that the evaluation end condition is currently met in response to a selection error by the user; or a second determination unit (not shown) configured to determine that the evaluation end condition is currently met in response to determining that the current time has reached the preset evaluation end time and the user has made no selection error within the evaluation period.
In some optional implementations of this embodiment, the first display module may be further configured to sequentially display the extracted images at a fixed position on the display screen.
In some optional implementations of this embodiment, the user operation information includes a cumulative number of selection errors, and the first determination module 703 may include: an accumulation unit (not shown) configured to increment the cumulative number of selection errors by one in response to a selection error by the current user; and a clearing unit (not shown) configured to reset the cumulative number of selection errors to zero in response to a correct selection by the current user while the cumulative number is smaller than the preset number. The second determination module 704 may include: a third determination unit (not shown) configured to determine that the evaluation end condition is currently met in response to the cumulative number of selection errors reaching the preset number; or a fourth determination unit (not shown) configured to determine that the evaluation end condition is currently met in response to the cumulative number of selection errors not reaching the preset number and the number of images displayed at the fixed position reaching the preset count.
In some optional implementations of this embodiment, the first determination module 703 may be further configured to output prompt information indicating whether the user's selection is correct.
In some optional implementations of this embodiment, the device 700 may further include: a second extraction module (not shown) configured to sequentially extract images from the demonstration candidate image set or the demonstration interference image set according to a preset extraction period in response to entering the animation demonstration interface; a second display module (not shown) configured to dynamically display the extracted images on the display screen and display a click prompt icon on the animation demonstration interface; a first movement module (not shown) configured to move the click prompt icon to a demonstration candidate image, virtually click that image with the click prompt icon, and output prompt information indicating a correct selection; and/or a second movement module (not shown) configured to move the click prompt icon to a demonstration interference image, virtually click that image with the click prompt icon, and output prompt information indicating a selection error; and an exit module (not shown) configured to exit the animation demonstration interface in response to the animation demonstration end condition currently being met.
In some optional implementations of this embodiment, the evaluation result includes an attention-sustaining time, which is the period from the time the evaluation starts to the time the evaluation end condition is met.
According to the device provided by this embodiment of the application, candidate images and interference images are dynamically displayed on the display screen of the target device for the user to select, user operation information indicating whether each selection is correct is recorded, and when the evaluation end condition is met an evaluation result is generated and output from that information. Because the evaluation result accurately characterizes the persistence of the user's attention, the device records precisely how long the user sustains attention and improves the efficiency of attention-persistence evaluation. Embodiments of the application can be applied to attention-persistence evaluation and screening for specific populations; the dynamic display of graphics can be precisely controlled through human-computer interaction, which enriches the evaluation process, makes the evaluation data more accurate and the evaluation more efficient, and allows the current evaluation status to be displayed visually.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is mounted on the storage section 808 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented in software or in hardware. The described modules may also be provided in a processor, which may be described as: a processor including a first extraction module, a first display module, a first determination module, a second determination module, and a generation module. The names of these modules do not, in some cases, limit the modules themselves; for example, the first extraction module may also be described as "a module for sequentially extracting images from the candidate image set or the interference image set according to a preset extraction period".
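Purely as an illustration of this module decomposition, and not the patented implementation, the five modules provided in one processor might be grouped as in the following sketch; every name and signature here is an assumption introduced for the example:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class AttentionPersistenceEvaluator:
    """Illustrative grouping of the five modules set in one processor."""
    first_extraction: Callable[[], Any]                  # next image, per period
    first_display: Callable[[Any], None]                 # dynamic display
    first_determination: Callable[[Any], Dict]           # judge the selection
    second_determination: Callable[[List[Dict]], bool]   # end condition met?
    generation: Callable[[List[Dict]], Dict]             # evaluation result
```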
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments, or which may exist separately without being assembled into the electronic device. The computer-readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: sequentially extract images from a candidate image set or an interference image set according to a preset extraction period; dynamically display each extracted image on a display screen of a target device; in response to detecting that the user has selected an image displayed on the display screen, determine whether the user's selection is correct and record user operation information, where selecting a candidate image is correct, while selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error; determine whether an evaluation ending condition is currently met; and, if so, generate and output an evaluation result based on the user operation information.
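For illustration only, the stored program's steps can be sketched as a simple loop. This is a minimal sketch, not the patented implementation: the period and duration constants, the fifty-fifty draw between the two image sets, and the `display` and `detect_selection` callables are all assumptions introduced for the example.

```python
import random
import time

EXTRACTION_PERIOD_S = 1.5    # assumed value for the preset extraction period
EVALUATION_DURATION_S = 120  # assumed value for the preset evaluation end time

def run_evaluation(candidate_images, interference_images, display, detect_selection):
    """Sketch of the stored program's steps: extract, display, judge, finish."""
    operations = []  # recorded user operation information
    start = time.monotonic()
    while time.monotonic() - start < EVALUATION_DURATION_S:
        # Each period, extract the next image from one of the two sets.
        is_candidate = random.random() < 0.5
        image = random.choice(candidate_images if is_candidate
                              else interference_images)
        display(image)  # dynamic display on the target device's screen
        # Wait up to one period for a tap on the displayed image (assumed API).
        selected = detect_selection(timeout=EXTRACTION_PERIOD_S)
        # Selecting a candidate image is correct; selecting an interference
        # image, or letting a candidate image disappear unselected, is an error.
        correct = (selected == is_candidate)
        operations.append({"correct": correct,
                           "elapsed": time.monotonic() - start})
        if not correct:
            break  # one evaluation ending condition (the claim 4 variant)
    return operations  # basis for generating and outputting the result
```

In the fixed-position variant of claims 5 and 6, the single-error break above is replaced by a cumulative error counter; a sketch of that variant appears after the claims.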
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (12)
1. A human-computer interaction method for evaluating attention persistence, the method comprising:
sequentially extracting images from a candidate image set or an interference image set according to a preset extraction period;
dynamically displaying the extracted image on a display screen of a target device;
in response to detecting that the user has selected an image displayed on the display screen, determining whether the user's selection is correct and recording user operation information, wherein selecting a candidate image is correct, while selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error;
determining whether an evaluation ending condition is currently met;
and if so, generating and outputting an evaluation result based on the user operation information.
2. The method according to claim 1, wherein the sequentially extracting images from the candidate image set or the interference image set according to the preset extraction period comprises:
extracting images from the candidate image set or the interference image set at the extraction period, following the rule that every first preset number of extracted images includes a second preset number of candidate images.
3. The method of claim 2, wherein the dynamically displaying the extracted image on the display screen of the target device comprises:
moving the extracted image across the display screen along a preset trajectory.
4. The method of claim 2, wherein the determining whether the evaluation ending condition is currently met comprises:
in response to the user making a selection error, determining that the evaluation ending condition is currently met; or
determining that the evaluation ending condition is currently met in response to determining that the current time has reached a preset evaluation end time and the user has made no selection error within the evaluation period.
5. The method of claim 1, wherein the dynamically displaying the extracted image on the display screen of the target device comprises:
sequentially displaying the extracted images at a fixed position on the display screen.
6. The method of claim 5, wherein the user operation information includes a cumulative number of selection errors; and
the determining whether the user's selection is correct and recording user operation information includes:
in response to the user's current selection being an error, incrementing the cumulative number of selection errors by one;
in response to the user's current selection being correct and the cumulative number of selection errors being less than a preset number, resetting the cumulative number of selection errors to zero;
the determining whether the evaluation ending condition is currently met comprises:
in response to the cumulative number of selection errors reaching the preset number, determining that the evaluation ending condition is currently met; or
in response to the cumulative number of selection errors not reaching the preset number and the number of images displayed at the fixed position reaching a preset count, determining that the evaluation ending condition is currently met.
7. The method according to any of claims 1-6, wherein after said determining whether the user's selection is correct, the method further comprises:
outputting prompt information indicating whether the user's selection is correct.
8. The method according to any one of claims 1 to 6, wherein before the sequentially extracting the images from the candidate image set or the interference image set according to the preset extraction period, the method further comprises:
in response to entering an animation demonstration interface, sequentially extracting images from a demonstration candidate image set or a demonstration interference image set according to a preset extraction period;
dynamically displaying the extracted image on the display screen, and displaying a click prompt icon on the animation demonstration interface;
moving the click prompt icon to a demonstration candidate image, virtually clicking the demonstration candidate image with the click prompt icon, and outputting prompt information indicating a correct selection; and/or
moving the click prompt icon to a demonstration interference image, virtually clicking the demonstration interference image with the click prompt icon, and outputting prompt information indicating a selection error;
and exiting the animation demonstration interface in response to an animation demonstration ending condition being currently met.
9. The method according to any one of claims 1 to 6, wherein the evaluation result includes an attention persistence time, which is the period from the time the evaluation starts to the time the evaluation ending condition is met.
10. A human-computer interaction device for evaluating attention persistence, the device comprising:
a first extraction module, configured to sequentially extract images from the candidate image set or the interference image set according to a preset extraction period;
a first display module, configured to dynamically display the extracted image on a display screen of the target device;
a first determination module, configured to determine, in response to detecting that the user has selected an image displayed on the display screen, whether the user's selection is correct and to record user operation information, wherein selecting a candidate image is correct, while selecting an interference image, or failing to select a candidate image before it disappears from the display screen, is a selection error;
a second determination module, configured to determine whether an evaluation ending condition is currently met;
and a generation module, configured to generate and output an evaluation result based on the user operation information if the evaluation ending condition is met.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-9.
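For readers tracing the logic of claims 5, 6, and 9, a hedged sketch of the fixed-position variant follows. The error limit and image count are assumed values, and `trials` stands in for a hypothetical real-time stream of (image, correctness) pairs; none of these names appear in the claims themselves.

```python
import time

PRESET_ERROR_LIMIT = 3    # assumed "preset number" of cumulative errors
PRESET_IMAGE_COUNT = 50   # assumed count of fixed-position images per run

def evaluate_fixed_position(trials):
    """`trials` yields (image, selection_correct) pairs, one per displayed image."""
    cumulative_errors = 0
    start = time.monotonic()
    for shown_count, (_, correct) in enumerate(trials, start=1):
        if correct:
            # A correct selection clears the counter only while it is still
            # below the limit, matching the wording of claim 6.
            if cumulative_errors < PRESET_ERROR_LIMIT:
                cumulative_errors = 0
        else:
            cumulative_errors += 1
        # Ending condition A: the cumulative errors reached the preset limit.
        if cumulative_errors >= PRESET_ERROR_LIMIT:
            break
        # Ending condition B: the preset number of images was displayed
        # without the error limit being reached.
        if shown_count >= PRESET_IMAGE_COUNT:
            break
    # Claim 9's attention persistence time: from the start of the evaluation
    # to the moment the ending condition is met.
    return time.monotonic() - start
```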
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011267310.1A CN113539488A (en) | 2020-11-13 | 2020-11-13 | Human-computer interaction method and device for evaluating attention persistence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113539488A (en) | 2021-10-22 |
Family ID: 78094458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011267310.1A (CN113539488A, pending) | Human-computer interaction method and device for evaluating attention persistence | 2020-11-13 | 2020-11-13 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113539488A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6020886A (en) * | 1996-09-04 | 2000-02-01 | International Business Machines Corporation | Method and apparatus for generating animated help demonstrations |
US20170224210A1 (en) * | 2014-09-30 | 2017-08-10 | National University Corporation Hamamatsu University School Of Medicine | Inattention measurement device, system, and method |
US20180317831A1 (en) * | 2016-01-19 | 2018-11-08 | Murdoch Childrens Research Institute | Diagnostic tool for assessing neurodevelopmental conditions or disorders |
CN109190097A (en) * | 2018-08-08 | 2019-01-11 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and apparatus for output information |
CN110400636A (en) * | 2019-06-21 | 2019-11-01 | Beijing Tiantan Hospital, Capital Medical University | Cognitive assessment method and apparatus, computer device, and storage medium |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN111324252B (en) | Display control method and device in live broadcast platform, storage medium and electronic equipment | |
JP2023507068A (en) | Information interaction method, device, equipment, storage medium and program product | |
CN111459586B (en) | Remote assistance method, device, storage medium and terminal | |
CN113194349B (en) | Video playing method, comment device, equipment and storage medium | |
CN113539489A (en) | Human-computer interaction method and device for assessing visual attention | |
CN113849258B (en) | Content display method, device, equipment and storage medium | |
CN112116212B (en) | Application evaluation method and device, storage medium and electronic equipment | |
CN115079876A (en) | Interactive method, device, storage medium and computer program product | |
WO2020155915A1 (en) | Method and apparatus for playing back audio | |
CN113555085A (en) | Working memory training method and device for cognitive disorder | |
EP4398083A1 (en) | Virtual resource transfer method and apparatus, and device, readable storage medium and product | |
CN112218130A (en) | Control method and device for interactive video, storage medium and terminal | |
CN113253885A (en) | Target content display method, device, equipment, readable storage medium and product | |
CN111857482B (en) | Interaction method, device, equipment and readable medium | |
CN113688341A (en) | Dynamic picture decomposition method and device, electronic equipment and readable storage medium | |
CN113420135A (en) | Note processing method and device in online teaching, electronic equipment and storage medium | |
KR102551531B1 (en) | Context-based interactive service providing system and method | |
CN112492399B (en) | Information display method and device and electronic equipment | |
CN113535018B (en) | Human-computer interaction method and device for evaluating cognitive speed | |
WO2023056850A1 (en) | Page display method and apparatus, and device and storage medium | |
CN110765326A (en) | Recommendation method, device, equipment and computer readable storage medium | |
CN113539488A (en) | Human-computer interaction method and device for evaluating attention persistence | |
CN112541493B (en) | Topic explaining method and device and electronic equipment | |
KR101891754B1 (en) | Device and method of providing for learning application, and server of providing for learning content | |
Li et al. | The impact of progress indicators and information density on users' temporal perception and user experience in mobile pedestrian navigation applications |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||