CN113535018A - Human-computer interaction method and device for evaluating cognitive speed - Google Patents


Info

Publication number
CN113535018A
CN113535018A (application CN202011280348.2A; granted as CN113535018B)
Authority
CN
China
Prior art keywords
image
target
evaluation
user
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011280348.2A
Other languages
Chinese (zh)
Other versions
CN113535018B (en)
Inventor
李湄珍
陈智轩
雷彪
陶静
杨珊莉
吴劲松
姚凌翔
余滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Hejia Jiannao Intelligent Technology Co ltd
Original Assignee
Xiamen Hejia Jiannao Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Hejia Jiannao Intelligent Technology Co ltd
Publication of CN113535018A
Application granted; publication of CN113535018B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G06F 3/0483 — Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour

Abstract

The embodiments of the present application disclose a human-computer interaction method and device for evaluating cognitive speed. In one embodiment, the method comprises: displaying a preset first image set and a preset second image set in a first area and a second area, respectively, on a display screen of a target device; determining a target first image from the first image set; and repeatedly executing a user cognitive evaluation step until the evaluation ends. This embodiment evaluates the user's cognitive speed automatically from the user's operations, improving both the efficiency and the accuracy of the evaluation. The embodiments can be applied to cognitive speed assessment and the screening of specific populations: the user directly recognizes and compares the figures in sequence, the human-computer interaction method lowers the difficulty of data collection, the accuracy of the evaluation data is improved, and the current evaluation status is displayed visually.

Description

Human-computer interaction method and device for evaluating cognitive speed
The present application claims priority from the Chinese patent application entitled "Human-machine interaction method and apparatus for assessing cognitive speed", filed with the China National Intellectual Property Administration on November 13, 2020, application number 202011269463X, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a man-machine interaction method and device for evaluating cognitive speed.
Background
Human-computer interaction (HCI) technology refers to technology that realizes efficient interaction between people and computers through computer input and output devices. With it, the machine provides the person with large amounts of relevant information and prompts through output or display devices, while the person supplies the machine with relevant information and answers to questions and prompts through input devices. Human-computer interaction technology is one of the important elements of computer user-interface design.
For example, in the field of cognitive screening, processing speed reflects a person's ability to recognize figures and the speed of their response. Processing speed is usually evaluated with a paper scale: numbers are printed below the figures in one-to-one correspondence, the person being evaluated writes the number corresponding to each figure in order, and scoring is based on whether each number correctly matches its figure. This method requires preparing paper materials in advance, meeting the evaluated person face to face, and setting up the number-figure correspondence, which is inconvenient for the user and makes the evaluation inefficient.
Disclosure of Invention
The embodiment of the application aims to provide a human-computer interaction method and a human-computer interaction device for evaluating cognitive speed.
In a first aspect, an embodiment of the present application provides a human-computer interaction method for evaluating cognitive speed. The method includes: in response to a user operation triggering a formal evaluation, displaying a preset first image set and a preset second image set in a first area and a second area, respectively, on a display screen of a target device; determining a target first image from the first image set; and performing the following user cognitive evaluation steps based on the target first image: outputting to-be-operated prompt information corresponding to the target first image; in response to the user selecting a target second image from the second image set, generating recording information that records the user's operation; determining whether an evaluation end condition is currently met, and if so, outputting an evaluation result for evaluating the user's cognitive speed; if the evaluation end condition is not met, re-determining the target first image from the not-yet-operated first images in the first image set and continuing to execute the user cognitive evaluation steps.
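The first-aspect steps above can be sketched, purely as an illustration, in a few lines of JavaScript. The image objects, the `pickFn` callback standing in for the user's selection, and the record fields are all assumptions of this sketch, not the patent's actual data structures:

```javascript
// Hypothetical sketch of the assessment loop from the first aspect.
// pickFn simulates the user choosing a second image for a given target;
// endCheckFn is the evaluation end condition.
function runAssessment(firstImages, secondImages, pickFn, endCheckFn) {
  const records = [];
  let remaining = firstImages.slice();
  while (remaining.length > 0 && !endCheckFn(records)) {
    const target = remaining[0];                  // target first image
    const chosen = pickFn(target, secondImages);  // user selects a second image
    records.push({ target: target.id, chosen: chosen.id,
                   correct: chosen.shape === target.shape });
    remaining = remaining.slice(1);               // mark as operated
  }
  return records;
}
```

A caller would supply real user input in place of `pickFn`; here it only exists to make the control flow concrete.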
In some embodiments, before determining whether the evaluation end condition is currently met, the method further comprises: outputting already-operated prompt information corresponding to the target first image.
In some embodiments, displaying the preset first image set and the preset second image set in the first area and the second area on the display screen of the target device, respectively, comprises: displaying the first image set in the first area in a paged manner. After generating the recording information that records the user's operation, the method further includes: switching the display in the first area to the next page in response to all first images in the current page having been operated.
In some embodiments, determining whether the evaluation end condition is currently met comprises: determining that the evaluation end condition is currently met in response to determining that all first images included in the first image set have been operated; or determining that the evaluation end condition is currently met in response to determining that a preset duration has elapsed since the user triggered the formal evaluation.
In some embodiments, the evaluation result comprises at least one of: average reaction time, answering time, number of correct operations, number of wrong operations, total number of operations, and operation accuracy rate.
In some embodiments, before displaying the preset first image set and the preset second image set in the first area and the second area on the display screen of the target device in response to the user triggering the formal evaluation, the method further comprises: in response to entering an animation demonstration interface, displaying a preset third image set and a preset fourth image set in the first area and the second area, respectively, and displaying a click prompt icon in the animation demonstration interface; determining a target third image from the third image set; performing the following animation demonstration steps based on the target third image: outputting to-be-operated prompt information corresponding to the target third image; moving the click prompt icon to the target fourth image in the fourth image set corresponding to the target third image; performing a virtual click on the target fourth image; determining whether an animation demonstration end condition is currently met, and if so, exiting the animation demonstration interface; if the animation demonstration end condition is not met, re-determining the target third image from the not-yet-demonstrated third images in the third image set and continuing to execute the animation demonstration steps.
In some embodiments, performing the virtual click on the target fourth image comprises: changing the appearance of the click prompt icon to represent the virtual click on the target fourth image.
In some embodiments, prior to determining whether the animation presentation end condition is currently met, the method further comprises: and outputting the operated prompt information corresponding to the target third image.
In some embodiments, prior to determining the target third image from the third set of images, the method further comprises: and playing animation demonstration prompt tones.
In a second aspect, an embodiment of the present application provides a human-computer interaction device for evaluating cognitive speed, the device including: a first display module for displaying, in response to a user operation triggering a formal evaluation, a preset first image set and a preset second image set in a first area and a second area, respectively, on a display screen of a target device; a first determining module for determining a target first image from the first image set; an evaluation module for performing the following user cognitive evaluation steps based on the target first image: outputting to-be-operated prompt information corresponding to the target first image; in response to the user selecting a target second image from the second image set, generating recording information that records the user's operation; determining whether an evaluation end condition is currently met, and if so, outputting an evaluation result for evaluating the user's cognitive speed; and a second determining module for re-determining, if the evaluation end condition is not met, the target first image from the not-yet-operated first images in the first image set and continuing to execute the user cognitive evaluation steps.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by one or more processors, cause the one or more processors to carry out a method as described in any one of the implementations of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
In the human-computer interaction method and device for evaluating cognitive speed provided by the embodiments of the present application, the method is deployed on an electronic device. During the evaluation, the first image set and the second image set are displayed in the first area and the second area of the screen, respectively; the user selects the corresponding image from the second image set according to the prompt of the first image; and the evaluation result is finally output. The user and the device thus interact flexibly, the modes of cognitive speed evaluation are enriched, the user's cognitive speed is evaluated automatically from the user's operations, and the efficiency and accuracy of the evaluation are improved. The embodiments can be applied to cognitive speed assessment and the screening of specific populations: the user directly recognizes and compares the figures in sequence, the human-computer interaction method lowers the difficulty of data collection, the accuracy of the evaluation data is improved, and the current evaluation status is displayed visually.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a human-machine interaction method for assessing cognitive speed according to the present application;
FIG. 3 is an exemplary illustration of a formal assessment interface of a human-computer interaction method for assessing cognitive speed according to the present application.
FIG. 4 is a flow diagram of yet another embodiment of a human-machine interaction method for assessing cognitive speed according to the present application;
FIG. 5 is an exemplary diagram of an animation demonstration interface for a human-machine interaction method for assessing cognitive speed according to the present application;
FIG. 6 is a schematic diagram illustrating an embodiment of a human-computer interaction device for assessing cognitive speed in accordance with the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which the human-computer interaction method for assessing cognitive speed of the embodiments of the present application may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. Various communication client applications, such as a cognitive assessment application, a web browser application, an instant messaging tool, and the like, may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices including, but not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
The server 103 may be a server that provides various services, such as a background server that supports the cognitive assessment application on the terminal device 101. The background server may send the software installation program and various data required by the software to the terminal device 101, and may also generate an evaluation result according to the user's online operations.
It should be noted that the human-computer interaction method for evaluating the cognitive speed provided in the embodiment of the present application may be executed by the terminal device 101 or the server 103, and accordingly, the human-computer interaction device for evaluating the cognitive speed may be disposed in the terminal device 101 or the server 103.
It should be understood that the numbers of terminal devices, networks, and servers in fig. 1 are merely illustrative; there may be any number of each, as required by the implementation. Note that when the material required for the evaluation does not need to be acquired remotely, the system architecture may include only a terminal device or only a server, without a network.
With continued reference to FIG. 2, a flow 200 of one embodiment of a human-computer interaction method for assessing cognitive speed according to the present application is shown. The method comprises the following steps:
step 201, in response to an operation of triggering formal evaluation by a user, displaying a preset first image set and a preset second image set in a first area and a second area on a display screen of a target device respectively.
In this embodiment, the execution body of the human-computer interaction method for evaluating cognitive speed (e.g., the terminal device or server shown in fig. 1) may, in response to a user operation triggering a formal evaluation, display a preset first image set and a preset second image set in a first area and a second area, respectively, on a display screen of the target device. The first image set provides directions for the user during the evaluation, and the second image set is what the user selects from during the evaluation. The target device is the device used while evaluating the user, such as the terminal device shown in fig. 1.
Typically, the user may trigger a formal evaluation by clicking a start evaluation button displayed on the display screen, at which point the first set of images and the second set of images are displayed on the display screen.
In general, the layout of the evaluation interface may be set using CSS rules. For example, three div tags are created, denoted A, B, and C: the A tag displays the title text at the top of the screen; the B tag corresponds to the first area and includes a plurality of divs, each corresponding to one first image; the C tag corresponds to the second area and includes a plurality of divs, each corresponding to one second image. Fig. 3 shows an exemplary formal evaluation interface, in which 301 is the title text area, 302 is the first area, and 303 is the second area.
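As a hedged illustration of the A/B/C structure just described, the markup could be generated as follows; the class names (`title`, `first-area`, `second-area`) are invented for this sketch and are not taken from the patent:

```javascript
// Illustrative only: builds the A/B/C div structure described above as an
// HTML string (A: title text, B: first area, C: second area).
function buildLayout(title, firstImages, secondImages) {
  const imgDivs = (imgs, cls) =>
    imgs.map(src => `<div class="${cls}"><img src="${src}"></div>`).join('');
  return [
    `<div class="title">${title}</div>`,                               // A
    `<div class="first-area">${imgDivs(firstImages, 'first')}</div>`,  // B
    `<div class="second-area">${imgDivs(secondImages, 'second')}</div>` // C
  ].join('\n');
}
```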
In step 202, a target first image is determined from a first set of images.
In this embodiment, the execution body may determine the target first image from the first image set. The target first image is the first image currently used to prompt the user to select the corresponding image from the second image set. Typically, when step 202 is performed for the first time, the target first image is the first image in the first image set. As shown in fig. 3, the image indicated by reference numeral 3021 is the target first image.
Step 203, based on the target first image, executing the following user cognitive assessment steps: outputting prompt information to be operated corresponding to the target first image; generating recording information for recording an operation of the user in response to the user selecting a target second image from the second image set; and determining whether the evaluation ending condition is met or not at present, and if so, outputting an evaluation result for evaluating the cognitive speed of the user.
In this embodiment, the executing agent may execute the following user cognition evaluation steps (including steps 2031 to 2034) based on the target first image:
step 2031, outputting a prompt message to be operated corresponding to the target first image.
The to-be-operated prompt information prompts the user to select the image corresponding to the target first image from the second image set. For example, the prompt may highlight the target first image or display a mark on it.
Step 2032, in response to the user selecting the target second image from the second set of images, generating recording information for recording the user's operation.
The target second image is the second image corresponding to the target first image, generally the second image with the same shape as the target first image. As shown in fig. 3, the target first image 3021 corresponds to the target second image 3031. The user may select the target second image by clicking with a mouse, tapping the screen, and so on. The recorded information may include, but is not limited to, at least one of the following: information indicating whether the selection is correct, the reaction time of the current operation (i.e., the time from the output of the to-be-operated prompt information until the user selects the target second image), and the like.
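A minimal sketch of the per-operation recording information described above, assuming a `shape` field for matching and millisecond timestamps (both assumptions of this sketch):

```javascript
// Sketch of one operation record: correctness of the selection plus the
// reaction time from prompt output to the user's selection.
function makeRecord(targetFirst, chosenSecond, promptTimeMs, clickTimeMs) {
  return {
    correct: chosenSecond.shape === targetFirst.shape,
    reactionMs: clickTimeMs - promptTimeMs  // prompt shown -> user selection
  };
}
```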
In some optional implementations of this embodiment, in step 201 the execution body may display the first image set in the first area in a paged manner. Specifically, when the first image set contains many images, they cannot all be displayed in the first area at once, so they can be displayed in pages. For example, with 250 first images in total, 50 may be displayed per page, and the pages may be switched manually or automatically.
Based on this, after generating the recording information that records the user's operation, the execution body may further perform the following step:
Switching the display in the first area to the next page in response to all first images in the current page having been operated.
According to the implementation mode, when the number of the first images included in the first image set is large, the first images are displayed in a paging mode and the paging mode is switched during evaluation, the displayed first images can be clearer and easier to distinguish, and the evaluation efficiency is improved.
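The paging behaviour above (250 images, 50 per page, advancing once a page is fully operated) can be sketched as a small pure function; the numbers come from the example in the text:

```javascript
// Sketch of the paging state: which page to show and whether the current
// page has just been completed, given how many images have been operated.
function pageState(operatedCount, perPage = 50, total = 250) {
  const page = Math.min(Math.floor(operatedCount / perPage),
                        Math.ceil(total / perPage) - 1);
  return { page, pageDone: operatedCount > 0 && operatedCount % perPage === 0 };
}
```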
Step 2033, it is determined whether the evaluation end condition is currently met.
Wherein, the evaluation ending condition is a trigger condition for ending the formal evaluation.
In some optional implementations of this embodiment, the evaluation end condition may include, but is not limited to, at least one of:
and in response to determining that all the first images included in the first image set are operated, determining that the evaluation ending condition is met currently. I.e. the user reacts to each first image comprised by the set of first images.
And determining that the current condition is met with the evaluation ending condition in response to the fact that the time from the time when the user triggers the operation of the formal evaluation to the time when the current time reaches the preset time. As an example, the preset time period may be 90 seconds.
The evaluation finishing condition provided by the implementation mode can make necessary limitation for the user evaluation so as to improve the accuracy of evaluation and the consistency of the evaluation conditions of different users.
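Both end conditions can be combined into one check, sketched here with an assumed 90-second limit taken from the example above:

```javascript
// Sketch of the evaluation end condition: all first images operated,
// or the preset duration elapsed since the formal evaluation started.
function shouldEnd(operatedCount, totalImages, startMs, nowMs, limitMs = 90000) {
  return operatedCount >= totalImages || nowMs - startMs >= limitMs;
}
```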
In some optional implementations of this embodiment, before step 2033, the execution body may further output already-operated prompt information corresponding to the target first image. This prompt may take various forms, such as an icon like a diagonal line or a dot. As shown in fig. 3, after the operation on the target first image 3021 is completed, a diagonal line is displayed on 3021 to indicate that the operation is finished. In general, an invisible diagonal line with an opacity of 0 may be added to each first image when the first image set is displayed in the first area; after the user selects the corresponding second image, the opacity is changed to 1 and the diagonal line becomes visible.
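The opacity trick above can be sketched as a tiny style helper; the `operated` flag is an assumed property name:

```javascript
// Sketch of the operated marker: each first image carries a slash overlay
// rendered at opacity 0, flipped to 1 once the image has been operated.
function slashOpacity(image) {
  return { opacity: image.operated ? 1 : 0 };  // invisible until operated
}
```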
According to the implementation mode, after the operation on the target first image is completed, the operated prompt information is output, so that the user can be timely reminded of which first images are operated and completed, and the evaluation efficiency is improved.
Step 2034, if yes, outputting an evaluation result for evaluating the cognitive speed of the user.
The manner of outputting the evaluation result may include various manners, such as displaying on a display screen, outputting to a printer for printing, sending to other electronic devices, and the like.
In some optional implementations of this embodiment, the evaluation result may include, but is not limited to, at least one of: average reaction time, answering time, number of correct operations, number of wrong operations, total number of operations, operation accuracy rate, and the like. The average reaction time is the mean of the user's reaction times over multiple operations in the evaluation (which may be the operations on all first images or on only some of them). The answering time may be the total duration of the evaluation process.
The evaluation result listed by the implementation mode can accurately and comprehensively reflect the cognitive speed of the user, and provides a sufficient basis for subsequent diagnosis of the user.
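A sketch of computing the listed metrics from per-operation records; the record shape `{correct, reactionMs}` and the field names of the result are assumptions of this sketch:

```javascript
// Sketch: derive the evaluation-result metrics listed above from records.
function summarize(records, totalMs) {
  const correct = records.filter(r => r.correct).length;
  const total = records.length;
  return {
    totalOperations: total,
    correctOperations: correct,
    wrongOperations: total - correct,
    accuracy: total ? correct / total : 0,
    avgReactionMs: total
      ? records.reduce((s, r) => s + r.reactionMs, 0) / total : 0,
    answeringMs: totalMs  // total duration of the evaluation process
  };
}
```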
And 204, if the evaluation ending condition is not met, re-determining the target first image from the unoperated first images in the first image set, and continuing to execute the cognitive evaluation step of the user.
In this embodiment, if the evaluation end condition is not met, the execution body may re-determine the target first image from the not-yet-operated first images in the first image set and continue to execute the user cognitive evaluation steps (i.e., steps 2031 to 2034). As an example, as shown in fig. 3, after the operation on 3021 is completed, 3022 is selected as the new target first image and steps 2031 to 2034 are executed again.
The execution body may determine the target first image according to the display order of the first images, or may determine it randomly.
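Re-determining the target first image, in display order or at random as described above, could look like this sketch (the `id` field and `operatedIds` set are assumed bookkeeping):

```javascript
// Sketch: pick the next target first image from the not-yet-operated ones,
// either in display order or at random.
function nextTarget(firstImages, operatedIds, random = false) {
  const remaining = firstImages.filter(img => !operatedIds.has(img.id));
  if (remaining.length === 0) return null;  // nothing left: end condition met
  return random
    ? remaining[Math.floor(Math.random() * remaining.length)]
    : remaining[0];
}
```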
In the method provided by the above embodiment of the present application, the human-computer interaction method is deployed on an electronic device. During the evaluation, the first image set and the second image set are displayed in the first area and the second area of the screen, respectively; the user selects the corresponding image from the second image set according to the prompt of the first image; and the evaluation result is finally output. The user and the device thus interact flexibly, the modes of cognitive speed evaluation are enriched, the user's cognitive speed is evaluated automatically from the user's operations, and the efficiency and accuracy of the evaluation are improved. The embodiments can be applied to cognitive speed assessment and the screening of specific populations: the user directly recognizes and compares the figures in sequence, the human-computer interaction method lowers the difficulty of data collection, the accuracy of the evaluation data is improved, and the current evaluation status is displayed visually.
With further reference to fig. 4, a flow 400 of yet another embodiment of a human-machine interaction method for assessing cognitive speed according to the present application is shown. Before the above step 201, the method comprises the following steps:
step 401, in response to entering the animation demonstration interface, displaying a preset third image set and a preset fourth image set in the first area and the second area respectively, and displaying a click prompt icon in the animation demonstration interface.
Typically, the animation demonstration interface is entered through a user operation; for example, the user enters it by clicking a button to enter the evaluation. The third image set and the fourth image set each generally include fewer images than the first image set and the second image set described above. As shown in fig. 5, the third image set includes 5 third images and the fourth image set includes 3 fourth images. The click prompt icon demonstrates the clicking action to show the user how to operate; it may take various forms, such as the hand icon 501 shown in fig. 5.
Step 402, a target third image is determined from the third set of images.
The method for determining the target third image is substantially the same as step 202 in the embodiment corresponding to fig. 2, and is not repeated here.
Step 403, based on the target third image, performing the following animation demonstration steps: outputting prompt information to be operated corresponding to the target third image; moving the click prompt icon to the target fourth image corresponding to the target third image in the fourth image set; virtually clicking the target fourth image; and determining whether the animation demonstration ending condition is currently met, and if so, exiting the animation demonstration interface.
In this embodiment, the executing body may execute the following animation demonstration steps (including steps 4031-4035) based on the target third image:
and step 4031, outputting to-be-operated prompt information corresponding to the target third image.
The prompt information to be operated is the same as the prompt information to be operated in step 2031 and is not described again here. As shown in fig. 5, 502 is the target third image, which may be highlighted.
Step 4032, the click prompt icon is moved to a target fourth image corresponding to the target third image in the fourth image set.
As shown in fig. 5, the click prompt icon 501 moves to the target fourth image 503. In general, the left-offset and top-offset values of the target fourth image may be obtained within the C tag (i.e., the second region) from its CSS layout, and the gesture icon may then be given the movement effect using jQuery's animate() method.
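As a framework-agnostic sketch of this movement step (the text reads the target's left/top offsets and animates the hand icon toward them; all function and variable names here are illustrative, not from the patent), the icon position can be interpolated between animation frames:

```python
def move_icon(start, target, frames):
    """Linearly interpolate the click-prompt icon from its current
    position to the (left, top) offset of the target fourth image,
    returning one (x, y) position per animation frame."""
    sx, sy = start
    tx, ty = target
    path = []
    for i in range(1, frames + 1):
        t = i / frames  # fraction of the movement completed
        path.append((sx + (tx - sx) * t, sy + (ty - sy) * t))
    return path

# Move the hand icon from the origin to a target offset over 4 frames;
# the final frame lands exactly on the target image's offset.
path = move_icon((0, 0), (120, 80), 4)
```

A real implementation would hand these positions to the UI toolkit's animation facility (e.g., jQuery's animate()) rather than computing them by hand; the sketch only shows the geometry involved.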
Step 4033, the target fourth image is virtually clicked.
And the virtual click is used for demonstrating the action of clicking the fourth image of the target. The virtual click may be implemented in various ways, such as playing sound effects, displaying dynamic images, and the like.
In some optional implementations of this embodiment, the execution subject may represent the virtual click on the target fourth image by changing an appearance characteristic of the click prompt icon. The virtual click effect can be realized by changing the shape, size, and so on of the click prompt icon; after the movement is completed, the click prompt icon is typically enlarged or reduced, for example by applying a CSS transform: scale(), to produce a click effect. With this implementation, virtual clicking is performed by changing the appearance of the click prompt icon, so that the user can intuitively see how to click the fourth image during evaluation, which improves evaluation efficiency.
Step 4034, determine whether the animation demonstration ending condition is currently met.
As an example, the animation presentation end condition may include: all images in the third set of images have been demonstrated how to operate; or the animation demonstration time reaches the preset time; alternatively, the user triggers a signal to end the animation presentation (e.g., clicks an end animation presentation button).
Step 4035, if yes, exit the animation demonstration interface.
And step 404, if the animation demonstration ending condition is not met, re-determining a target third image from the third images which are not demonstrated in the third image set, and continuing to execute the animation demonstration step.
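The example ending conditions listed above reduce to a single disjunctive check that the demonstration loop evaluates after each target third image. A minimal sketch, with illustrative names (none of these identifiers come from the patent):

```python
def demo_should_end(undemonstrated_images, elapsed_seconds,
                    preset_seconds, user_clicked_end):
    """End the animation demonstration when all third images have been
    demonstrated, OR the preset demonstration time has elapsed, OR the
    user triggered an end-demonstration signal (e.g., clicked a button)."""
    return (not undemonstrated_images
            or elapsed_seconds >= preset_seconds
            or user_clicked_end)

# Two images still undemonstrated, time not yet up, no user signal:
# the demonstration continues with the next target third image.
keep_demo = demo_should_end(["img4", "img5"], 12.0, 60.0, False)
```

If the predicate is false, step 404 re-determines a target third image from the undemonstrated ones and repeats the demonstration steps.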
In some optional implementations of this embodiment, before step 4034, the execution body may further output operated prompt information corresponding to the target third image. The operated prompt information is the same as the operated prompt information described in the above embodiments and is not repeated here. As shown in fig. 5, the operated prompt information is a diagonal line. With this implementation, operated prompt information is output after the operation demonstration of the target third image is completed, which reminds the user which third images have already been operated, makes the demonstration process more complete, and lets the user understand more clearly how to perform the evaluation.
In some optional implementations of this embodiment, before step 402, the execution body may further play an animation demonstration prompt tone, which makes the demonstration process clearer and helps the user learn how to perform the evaluation, thereby improving evaluation efficiency.
The method provided by the embodiment corresponding to fig. 4 clearly shows the user how to perform the evaluation operation by adding the animation demonstration steps of fig. 4 before the formal evaluation; by watching the demonstration, the user can efficiently learn how to evaluate, which improves evaluation efficiency. It should be noted that between the animation demonstration and the formal evaluation, the method may further include an example-operation step, so that the user can practice the evaluation manually before the formal evaluation, which also helps to improve evaluation efficiency. The process of the example operation is substantially identical to that of the formal evaluation and is not described in detail here.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a human-computer interaction device for evaluating cognitive speed. The device embodiment corresponds to the method embodiment shown in fig. 2, and the device may be applied to various electronic devices.
As shown in fig. 6, the human-computer interaction device 600 for evaluating cognitive speed according to the present embodiment includes: the first display module 601 is configured to display a preset first image set and a preset second image set in a first area and a second area on a display screen of the target device, respectively, in response to an operation that a user triggers formal evaluation; a first determining module 602 for determining a target first image from a first set of images; an evaluation module 603 configured to perform the following user cognitive evaluation steps based on the target first image: outputting prompt information to be operated corresponding to the target first image; generating recording information for recording an operation of the user in response to the user selecting a target second image from the second image set; determining whether the current condition is met with an evaluation ending condition, and if so, outputting an evaluation result for evaluating the cognitive speed of the user; and a second determining module 604, configured to, if the evaluation ending condition is not met, re-determine the target first image from the non-operated first images in the first image set, and continue to perform the user cognitive evaluation step.
In this embodiment, the first display module 601 may display a preset first image set and a preset second image set in a first area and a second area on a display screen of the target device, respectively, in response to an operation of triggering formal evaluation by a user. Wherein the first set of images is used to provide directions for a user to evaluate and the second set of images is used to be selected by the user at the time of evaluation. The target device is a device used in evaluating a user, such as a terminal device shown in fig. 1.
Typically, the user may trigger a formal evaluation by clicking a start evaluation button displayed on the display screen, at which point the first set of images and the second set of images are displayed on the display screen.
In this embodiment, the first determination module 602 may determine the target first image from the first image set. The target first image is the first image currently used to prompt the user's selection from the second image set. Typically, when step 202 is first performed, the target first image is the first image in the first image set. As shown in fig. 3, the image indicated by reference numeral 3021 is the target first image.
In this embodiment, the evaluation module 603 may perform the following user cognitive evaluation steps (including steps 6031-6034) based on the target first image:
and step 6031, outputting to-be-operated prompt information corresponding to the target first image.
The prompt information to be operated is used to prompt the user to select the image corresponding to the target first image from the second image set. For example, the prompt information to be operated may highlight the target first image or display a mark on the target first image.
In step 6032, in response to the user selecting the target second image from the second image set, recording information for recording the operation of the user is generated.
The target second image is a second image corresponding to the target first image, and is generally a second image having the same shape as the target first image. As shown in fig. 3, the target first image 3021 corresponds to the target second image 3031. The user may select the target second image by clicking a mouse, clicking a screen, or the like. The recorded information may include, but is not limited to, at least one of the following: information indicating whether the selection is correct, the reaction time of the current operation (i.e., the time from the output of the prompt information to be operated to the selection of the target second image by the user), and the like.
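The recording information described above can be sketched as a small record per operation. This is a minimal illustration, not the patent's implementation: the correspondence mapping, field names, and clock choice are all assumptions.

```python
import time

def make_record(target_first, selected_second, prompt_time, correspondence):
    """Record one user operation: whether the selected second image is the
    one corresponding to the target first image, and the reaction time from
    the prompt being output to the selection being made."""
    return {
        "target": target_first,
        "selected": selected_second,
        "correct": correspondence[target_first] == selected_second,
        "reaction_time": time.monotonic() - prompt_time,
    }

# A correct selection: first image "3021" corresponds to second image "3031"
# (reference numerals borrowed from fig. 3 for illustration).
correspondence = {"3021": "3031"}
prompt_time = time.monotonic()  # captured when the prompt is output
record = make_record("3021", "3031", prompt_time, correspondence)
```

A monotonic clock is used so the reaction time cannot go negative if the wall clock is adjusted mid-evaluation.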
In step 6033, it is determined whether the evaluation end condition is currently met.
Wherein, the evaluation ending condition is a trigger condition for ending the formal evaluation.
In step 6034, if yes, an evaluation result for evaluating the cognitive speed of the user is output.
The manner of outputting the evaluation result may include various manners, such as displaying on a display screen, outputting to a printer for printing, sending to other electronic devices, and the like.
In this embodiment, if the evaluation end condition is not met, the second determination module 604 may re-determine the target first image from the non-operated first images in the first image set, and continue to perform the user cognitive evaluation steps (i.e., steps 6031 to 6034). As an example, as shown in fig. 3, after the operation on 3021 is completed, the selection 3022 as the target first image is continued, and steps 6031 to 6034 are re-executed.
The apparatus 600 may determine the target first image according to the arrangement order of the first images, or may randomly determine the target first image.
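Either selection policy can be sketched as a single helper over the not-yet-operated first images (illustrative names; the patent does not give an implementation):

```python
import random

def next_target_first_image(first_images, operated, randomize=False):
    """Return the next target first image from those not yet operated,
    either in display order or at random; returns None when all first
    images have been operated (an evaluation ending condition)."""
    remaining = [img for img in first_images if img not in operated]
    if not remaining:
        return None
    return random.choice(remaining) if randomize else remaining[0]

# Sequential policy: "3021" has been operated, so "3022" comes next.
target = next_target_first_image(["3021", "3022", "3023"], {"3021"})
```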
In some optional implementations of this embodiment, the evaluation module 603 may be further configured to: and outputting the operated prompt information corresponding to the target first image.
In some optional implementations of this embodiment, the first display module 601 may be further configured to: displaying the first image set in a first area in a paging mode; and the evaluation module 603 may be further configured to: and switching the next page to be displayed in the first area in response to the first images in the current page being operated.
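A minimal sketch of this paging behavior, under the assumption of a fixed page size (names and the page-size choice are illustrative):

```python
def paginate(first_images, page_size):
    """Split the first image set into fixed-size pages for display
    in the first area."""
    return [first_images[i:i + page_size]
            for i in range(0, len(first_images), page_size)]

def next_page_index(current_page, operated, page_index):
    """Advance to the next page only once every first image on the
    current page has been operated."""
    if all(img in operated for img in current_page):
        return page_index + 1
    return page_index

# Five first images split into pages of two.
pages = paginate(["a", "b", "c", "d", "e"], 2)
```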
In some optional implementations of this embodiment, the evaluation module 603 may include: a first determining unit (not shown in the figure), configured to determine that the first images included in the first image set are all operated, and that the evaluation ending condition is currently met; or a second determining unit (not shown in the figure) for determining that the evaluation ending condition is currently met in response to determining that the time for the user to trigger the operation of the formal evaluation reaches a preset time length from the current time.
In some optional implementations of this embodiment, the evaluation result includes at least one of: average reaction time, answering time, correct operation number, wrong operation number, total operation number and correct operation rate.
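These metrics follow directly from the per-operation records. A hedged sketch (the record field names and the separately measured answering time are assumptions for illustration):

```python
def evaluation_result(records, answering_seconds):
    """Aggregate the listed metrics from operation records shaped like
    {"correct": bool, "reaction_time": float}."""
    total = len(records)
    correct = sum(1 for r in records if r["correct"])
    avg_rt = (sum(r["reaction_time"] for r in records) / total) if total else 0.0
    return {
        "average_reaction_time": avg_rt,
        "answering_time": answering_seconds,   # total time spent answering
        "correct_count": correct,
        "wrong_count": total - correct,
        "total_count": total,
        "correct_rate": (correct / total) if total else 0.0,
    }

# One correct and one wrong operation, 30 s of total answering time.
result = evaluation_result(
    [{"correct": True, "reaction_time": 1.0},
     {"correct": False, "reaction_time": 3.0}],
    answering_seconds=30.0,
)
```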
In some optional implementations of this embodiment, the apparatus 600 may further include: a second display module (not shown in the figure) for displaying a preset third image set and a preset fourth image set in the first area and the second area respectively in response to entering the animation demonstration interface, and displaying a click prompt icon in the animation demonstration interface; a third determining module (not shown in the figure) for determining a target third image from the third set of images; a presentation module (not shown in the figures) for performing the following animation presentation steps based on the target third image: outputting prompt information to be operated corresponding to the target third image; moving the click prompt icon to a target fourth image corresponding to the target third image in the fourth image set; virtual clicking is carried out on the target fourth image; determining whether the animation demonstration ending condition is met or not at present, and if yes, quitting the animation demonstration interface; and a fourth determining module (not shown in the figure) for re-determining the target third image from the undepicted third images in the third image set if the animation demonstration ending condition is not met, and continuing to perform the animation demonstration step.
In some optional implementations of this embodiment, the presentation module may be further configured to: and changing the appearance characteristic of the click prompt icon to represent the virtual click target fourth image.
In some optional implementations of this embodiment, the presentation module may be further configured to: and outputting the operated prompt information corresponding to the target third image.
In some optional implementations of this embodiment, the apparatus 600 may further include: and the sound playing module (not shown in the figure) is used for playing the animation demonstration prompt sound.
According to the device provided by this embodiment of the application, the human-computer interaction method runs on an electronic device: during evaluation, the first image set and the second image set are displayed in a first area and a second area of the screen respectively, the user selects the corresponding image in the second image set according to the prompt of the first image, and the evaluation result is finally output. The user and the device thus interact flexibly, the modes of cognitive speed evaluation are enriched, the user's cognitive speed is evaluated automatically from the user's operations, and the efficiency and accuracy of cognitive speed evaluation are improved. This embodiment can be applied to cognitive speed assessment and screening of specific populations: the user directly identifies and sequentially compares the figures, the human-computer interaction method reduces the difficulty of data acquisition, improves the accuracy of the assessment data, and visually displays the current assessment status.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present application may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a first display module, a first determination module, an evaluation module, and a second determination module. The names of these modules do not, in some cases, constitute a limitation of the module itself; for example, the first display module may also be described as "a module for displaying a preset first image set and a preset second image set in a first area and a second area, respectively, on the display screen of the target device in response to a user triggering an operation of a formal evaluation".
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to an operation of triggering formal evaluation by a user, and respectively displaying a preset first image set and a preset second image set in a first area and a second area on a display screen of the target device; determining a target first image from a first set of images; performing the following user cognitive assessment steps based on the target first image: outputting prompt information to be operated corresponding to the target first image; generating recording information for recording an operation of the user in response to the user selecting a target second image from the second image set; determining whether the current condition is met with an evaluation ending condition, and if so, outputting an evaluation result for evaluating the cognitive speed of the user; and if the evaluation ending condition is not met, re-determining the target first image from the unoperated first images in the first image set, and continuing to execute the user cognitive evaluation step.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A human-computer interaction method for assessing cognitive speed, the method comprising:
responding to an operation of triggering formal evaluation by a user, and respectively displaying a preset first image set and a preset second image set in a first area and a second area on a display screen of the target device;
determining a target first image from the first set of images;
performing the following user cognitive assessment steps based on the target first image: outputting prompt information to be operated corresponding to the target first image; generating recording information for recording an operation of a user in response to the user selecting a target second image from the second image set; determining whether an evaluation ending condition is currently met, and if so, outputting an evaluation result for representing the cognitive speed of the user;
and if the evaluation ending condition is not met, re-determining a target first image from the unoperated first images in the first image set, and continuing to execute the user cognitive evaluation step.
2. The method of claim 1, wherein prior to said determining whether an evaluation termination condition is currently met, the method further comprises:
and outputting the operated prompt information corresponding to the target first image.
3. The method of claim 1, wherein the displaying the preset first image set and the second image set in the first area and the second area on the display screen of the target device respectively comprises:
displaying the first image set in a paging mode in the first area; and
after the generating of the recording information for recording the user's operation, the method further includes:
and switching the next page to be displayed in the first area in response to the first images in the current page being operated.
4. The method of claim 1, wherein the determining whether the evaluation termination condition is currently met comprises:
determining that the evaluation ending condition is currently met in response to determining that all first images included in the first image set have been operated; or
determining that the evaluation ending condition is currently met in response to determining that the time elapsed from the user triggering the operation of the formal evaluation to the current time reaches a preset duration.
5. The method according to one of claims 1 to 4, wherein the evaluation result comprises at least one of: average reaction time, answering time, correct operation number, wrong operation number, total operation number and correct operation rate.
6. The method of claim 1, wherein prior to the operation of triggering formal evaluation in response to a user, displaying a preset first set of images and a preset second set of images in a first area and a second area, respectively, on a display screen of a target device, the method further comprises:
responding to an animation demonstration interface, respectively displaying a preset third image set and a preset fourth image set in the first area and the second area, and displaying a click prompt icon in the animation demonstration interface;
determining a target third image from the third set of images;
performing the following animation demonstration steps based on the target third image: outputting prompt information to be operated corresponding to the target third image; moving the click prompt icon to the target fourth image corresponding to the target third image in the fourth image set; virtually clicking the target fourth image; determining whether an animation demonstration ending condition is currently met, and if so, exiting the animation demonstration interface;
if the animation demonstration ending condition is not met, the target third image is determined again from the third images which are not demonstrated in the third image set, and the animation demonstration step is continuously executed.
7. The method of claim 6, wherein virtually clicking on the target fourth image comprises:
changing an appearance characteristic of the click prompt icon to represent a virtual click on the target fourth image.
8. The method of claim 6, wherein prior to said determining whether an animation presentation end condition is currently met, said method further comprises:
and outputting the operated prompt information corresponding to the target third image.
9. The method of claim 6, wherein prior to said determining a target third image from the set of third images, the method further comprises:
and playing animation demonstration prompt tones.
10. A human-computer interaction device for assessing cognitive speed, the device comprising:
the first display module is used for responding to the operation that a user triggers formal evaluation, and respectively displaying a preset first image set and a preset second image set in a first area and a second area on a display screen of the target device;
a first determining module for determining a target first image from the first set of images;
an evaluation module for performing the following user cognitive evaluation steps based on the target first image: outputting prompt information to be operated corresponding to the target first image; generating recording information for recording an operation of a user in response to the user selecting a target second image from the second image set; determining whether an evaluation ending condition is met or not at present, and if so, outputting an evaluation result for evaluating the cognitive speed of the user;
and a second determining module, configured to, if the evaluation end condition is not met, re-determine a target first image from the non-operated first images in the first image set, and continue to perform the user cognitive evaluation step.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN202011280348.2A 2020-11-13 2020-11-16 Human-computer interaction method and device for evaluating cognitive speed Active CN113535018B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011269463 2020-11-13
CN202011269463X 2020-11-13

Publications (2)

Publication Number Publication Date
CN113535018A true CN113535018A (en) 2021-10-22
CN113535018B CN113535018B (en) 2024-03-22

Family

ID=78094466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011280348.2A Active CN113535018B (en) 2020-11-13 2020-11-16 Human-computer interaction method and device for evaluating cognitive speed

Country Status (1)

Country Link
CN (1) CN113535018B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114822853A (en) * 2022-06-16 2022-07-29 成都中医药大学 Rehabilitation assessment doctor end, operation method and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200951897A (en) * 2008-06-13 2009-12-16 Univ Ishou Cognitive ability interactive digital learning system
US20150294590A1 (en) * 2014-04-11 2015-10-15 Aspen Performance Technologies Neuroperformance
CN107563181A (en) * 2017-10-24 2018-01-09 百望电子发票数据服务有限公司 A kind of verification method and system for clicking graphical verification code
CN108471991A (en) * 2015-08-28 2018-08-31 艾腾媞乌有限责任公司 cognitive skill training system and program
CN110942812A (en) * 2019-04-24 2020-03-31 上海大学 Automatic cognitive function assessment system
CN111627556A (en) * 2020-07-23 2020-09-04 厦门市和家健脑智能科技有限公司 System and method for rapidly screening cognitive impairment of old people

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114822853A (en) * 2022-06-16 2022-07-29 成都中医药大学 Rehabilitation assessment doctor end, operation method and storage medium
CN114822853B (en) * 2022-06-16 2022-09-13 成都中医药大学 Rehabilitation assessment doctor end, operation method and storage medium

Similar Documents

Publication Publication Date Title
US11838251B2 (en) Information interaction method, apparatus, device, storage medium and program product
CN113031842B (en) Video-based interaction method and device, storage medium and electronic equipment
CN106294770A (en) Information sharing method, device and terminal
CN111324252B (en) Display control method and device in live broadcast platform, storage medium and electronic equipment
CN114003326A (en) Message processing method, device, equipment and storage medium
US10606618B2 (en) Contextual assistance system
CN110837334B (en) Method, device, terminal and storage medium for interactive control
CN113555085A (en) Working memory training method and device for cognitive disorder
CN111930453A (en) Dictation interaction method and device and electronic equipment
CN113688341B (en) Dynamic picture decomposition method and device, electronic equipment and readable storage medium
CN113539489A (en) Human-computer interaction method and device for assessing visual attention
CN113535018B (en) Human-computer interaction method and device for evaluating cognitive speed
US20230412723A1 (en) Method and apparatus for generating imagery record, electronic device, and storage medium
CN109951380B (en) Method, electronic device, and computer-readable medium for finding conversation messages
CN112492399B (en) Information display method and device and electronic equipment
CN110377192B (en) Method, device, medium and electronic equipment for realizing interactive effect
CN112308745A (en) Method and apparatus for generating information
CN113539488A (en) Human-computer interaction method and device for evaluating attention persistence
CN113362802A (en) Voice generation method and device and electronic equipment
US20190196669A1 (en) Interactive user interface improved by presentation of appropriate informative content
CN113126863A (en) Object selection implementation method and device, storage medium and electronic equipment
CN113672317A (en) Method and device for rendering title page
CN113535042A (en) Method and device for generating image based on old people cognitive recognition
KR101891754B1 (en) Device and method of providing for learning application, and server of providing for learning content
CN110990528A (en) Question answering method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant