CN111142819A - Visual space attention detection method and related product - Google Patents

Visual space attention detection method and related product

Info

Publication number
CN111142819A
Authority
CN
China
Prior art keywords
attention
sub
area
display screen
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911285087.0A
Other languages
Chinese (zh)
Inventor
胡月妍
黄艳
唐红思
胡立平
王立平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201911285087.0A priority Critical patent/CN111142819A/en
Publication of CN111142819A publication Critical patent/CN111142819A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application discloses a visual space attention detection method and a related product. The method comprises the following steps: displaying a first display object in a first area of a display screen according to a control mode, wherein the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1; obtaining the attention result of the tested individual to the first display object; and obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result. Detection of the visual spatial attention of an individual may be achieved.

Description

Visual space attention detection method and related product
Technical Field
The present application relates to the field of computer technologies, and in particular, to a visual space attention detection method and a related product.
Background
Visuospatial attention refers to the ability of an individual to perceive visual stimuli at different spatial locations. An individual's visuospatial attention has a great influence on daily life. For example, when a person watches several fish swimming in a pool at the same time, the person may more easily notice that the fish at the center of the visual field has changed its movement; when an individual watches the movements of three children at the same time, the attention resources the individual allocates to each child may differ, and the individual may be more likely to notice an accident happening to the child on the left side of the visual field. In real life, the arrangement of objects within an individual's field of view usually does not take the individual's visuospatial attention into account. For example, video pictures acquired by a plurality of cameras are displayed simultaneously in a monitoring picture, and the video pictures may be arranged randomly, which may place important video pictures in a region of the visual field that receives little attention from the individual and thus cause the individual to miss important information.
By detecting an individual's visuospatial attention, the degree of attention the individual pays to each region of the visual field can be determined, so that the arrangement of objects within the individual's field of view can be adjusted and important objects can be placed at the positions the individual attends to most. To this end, the present application proposes a visual space attention detection scheme.
Disclosure of Invention
The application provides a visual space attention detection method and a related product, which are used to detect the attention distribution of an individual over visual spatial positions.
In a first aspect, a visual space attention detection method is provided, including: displaying a first display object in a first area of a display screen according to a control mode, wherein the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1; obtaining the attention result of the tested individual to the first display object; and obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result.
In one possible implementation manner, during the displaying of the first display object in the first area of the display screen according to the control mode, the method further includes: displaying a second display object in a second area of the display screen according to the control mode, where the second area is any one of the M areas other than the first area.
In another possible implementation manner, the control mode includes K sub-control modes, where K is a natural number greater than 0; the obtaining of the attention result of the tested individual to the first display object comprises: obtaining K attention scores of the tested individual to the first display object displayed according to the K sub-control modes as the attention result.
In another possible implementation manner, the K sub-control modes are all the same sub-control mode, or the K sub-control modes include different sub-control modes.
In another possible implementation manner, any one of the K sub-control modes includes L slave sub-control modes, where L is a natural number greater than 1.
In yet another possible implementation manner, the L slave sub-control modes are all the same slave sub-control mode, or the L slave sub-control modes include mutually different slave sub-control modes.
In yet another possible implementation manner, the slave sub-control mode includes controlling any one or more of the color, the rotation direction, and the rotation speed of a display object in the display screen.
In a second aspect, there is provided a visual space attention detection apparatus comprising: the display device comprises a display unit, a control unit and a display unit, wherein the display unit is used for displaying a first display object in a first area of a display screen according to a control mode, the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1; an obtaining unit, configured to obtain a result of attention of the subject to the first display object; the obtaining unit is further configured to obtain an attention distribution result of the subject in the first area of the display screen according to the attention result.
In a possible implementation manner, in the process of displaying the first display object in the first area of the display screen according to the control mode, the display unit is further configured to display a second display object in a second area of the display screen according to the control mode, where the second area is any one of the M areas and is not the first area.
In another possible implementation manner, the control mode includes K sub-control modes, where K is a natural number greater than 0; the obtaining unit is specifically configured to obtain, as the attention result, K attention scores of the subject for the first display object displayed in the K sub-control modes.
In another possible implementation manner, the K sub-control modes are all the same sub-control mode, or the K sub-control modes include different sub-control modes.
In another possible implementation manner, any one of the K sub-control modes includes L slave sub-control modes, where L is a natural number greater than 1.
In yet another possible implementation manner, the L slave sub-control modes are all the same slave sub-control mode, or the L slave sub-control modes include mutually different slave sub-control modes.
In yet another possible implementation manner, the slave sub-control mode includes controlling any one or more of the color, the rotation direction, and the rotation speed of a display object in the display screen.
In a third aspect, an electronic device is provided, including: a processor, a memory; the processor is configured to support the electronic device to perform corresponding functions in the method of the first aspect and any possible implementation manner thereof. The memory stores programs (instructions) and data necessary for the electronic device. Optionally, the electronic device may further include an input/output interface for supporting communication between the electronic device and other devices.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the first aspect and any possible implementation thereof.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the first aspect and any of its possible implementations.
According to the method, a first display object is displayed in a first area of a display screen according to a control mode, the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1; then, obtaining the attention result of the tested individual to the first display object; and finally, obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result. Thereby realizing the detection of the visual space attention of the individual.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a schematic flowchart of a method for detecting attention in a visual space according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a display screen area division according to an embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating another method for detecting visual spatial attention according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of six visual stimuli provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a visual space attention detection apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic hardware structure diagram of a visual space attention detection apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements that are not listed or that are inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the present application will be described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flowchart of a visual space attention detection method according to an embodiment of the present disclosure.
101. A first display object is displayed in a first area of a display screen in accordance with a control mode.
The display screen comprises M areas, the first area is any one of the M areas, and M is an integer larger than 1.
In this embodiment of the application, the display screen may be a display device or a screen used to present content on a device such as a computer, a mobile phone, a smart wearable device, or a projector. The display screen includes M regions, where M is an integer greater than 1; that is, the display screen may be divided into a plurality of regions, for example M regions. As shown in a in fig. 2, the entire display screen is divided into 4 regions: an M1 region, an M2 region, an M3 region, and an M4 region. Alternatively, a large area may first be determined on the display screen and then divided into M regions; as shown in b in fig. 2, 4 regions are divided in the central area of the display screen: an M1 region, an M2 region, an M3 region, and an M4 region. The M regions may all have the same shape or may include regions of different shapes, and the shape of any one region may be a regular shape such as a rectangle, a circle, or a triangle, or may be any irregular polygon. The M regions correspond to M visual spatial positions of the tested individual or the tester when viewing or gazing at the display screen. The first region is any one of the M regions; for example, as shown in b in fig. 2, the first region may be any one of the M1 region, the M2 region, the M3 region, and the M4 region of the display screen. The first display object may include one or more display sub-objects, and a display sub-object may be any object or character object; for example, the first display object may be one small sphere, two small spheres, or one small sphere and one cube.
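As an illustration of the region division described above, the following is a minimal sketch (not taken from the patent; the rectangular 2 x 2 grid, the pixel sizes, and the Region and divide_screen names are assumptions for illustration) showing how a display screen, or a central area of it, might be split into the four regions M1 to M4 of fig. 2:

    # Minimal illustrative sketch: splitting a display area into M rectangular regions,
    # here a 2 x 2 grid corresponding to the M1..M4 regions shown in fig. 2.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Region:
        name: str
        x: int       # left edge, in pixels
        y: int       # top edge, in pixels
        width: int
        height: int

    def divide_screen(width: int, height: int, rows: int = 2, cols: int = 2) -> List[Region]:
        """Split a width x height area into rows * cols equally sized regions."""
        cell_w, cell_h = width // cols, height // rows
        return [Region(f"M{r * cols + c + 1}", c * cell_w, r * cell_h, cell_w, cell_h)
                for r in range(rows) for c in range(cols)]

    # Example: a 1920 x 1080 screen split into four regions M1..M4.
    for region in divide_screen(1920, 1080):
        print(region)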
In the embodiment of the present application, the first display object is displayed in the first area of the display screen according to a control mode. The control mode determines how the first display object is displayed in the first area of the display screen, so that displaying the first display object according to the control mode is equivalent to generating a visual stimulation signal for the tested individual or the tester.
As described above, the first area is any one of the M areas. When the first display object is displayed in the first area of the display screen according to the control mode, whether a display object is also displayed in an area other than the first area, and, if so, in which control mode it is displayed, is not specifically limited in the present application.
One possible implementation is that, in the process of displaying the first display object in the first area of the display screen according to the control mode, a second display object is displayed in a second area of the display screen according to the control mode, where the second area is any one of the M areas other than the first area. For example, as shown in b in fig. 2, when the first region is the M1 region, the second region may be any one of the M2 region, the M3 region, and the M4 region. That is, while the first display object is displayed in the M1 area of the display screen according to the control mode, the second display object may be displayed in the M2 area, the M3 area, or the M4 area according to the control mode, or corresponding display objects may be displayed in all of the M2, M3, and M4 areas according to the control mode. In addition, if the second display object is displayed in the M2 area while the first display object is displayed in the M1 area, it should be noted that the first display object and the second display object may be the same or different; for example, the first display object is two beads and the second display object is also two beads, or the first display object is two beads and the second display object is two cubes. Similarly, if corresponding display objects are displayed in all of the M2, M3, and M4 areas while the first display object is displayed in the M1 area, the objects displayed in the M2, M3, and M4 areas may all be the same as the first display object, or the object displayed in any one of these areas may be different from the first display object.
102. The attention result of the tested individual for the first display object is obtained.
As described above, displaying the first display object according to the control mode is equivalent to generating a visual stimulation signal for the tested individual or the tester. After the first display object is displayed according to the control mode, the tested individual responds to the visual stimulation signal; for example, the tested individual needs to make a judgment about the display content or the motion change information of the first display object under the control mode. The attention result of the tested individual for the first display object can then be obtained from the judgment made by the tested individual. Specifically, the attention result may be an attention score: when the tested individual makes a correct judgment, the attention score for the first display object is 1 point, and when the tested individual makes a wrong judgment, the attention score for the first display object is 0.
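The 1-point/0-point rule above can be expressed, for instance, as in the following minimal sketch (an assumption for illustration, not taken from the patent; the function name and parameter are hypothetical):

    # Minimal illustrative sketch: one attention score for one judgment, as in step 102.
    def attention_score(judgment_is_correct: bool) -> int:
        """Return 1 point for a correct judgment about the first display object, else 0."""
        return 1 if judgment_is_correct else 0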
103. The attention distribution result of the tested individual in the first area of the display screen is obtained according to the attention result.
As described in step 102, after the attention result of the tested individual for the first display object is obtained, the attention result reflects the degree of attention the tested individual pays to the first area of the display screen, and the attention distribution result of the tested individual in the first area of the display screen can therefore be obtained from the attention result. In addition, it can be understood that the embodiment method shown in fig. 1 can detect the attention distribution result of the tested individual in the first area of the display screen, where the display screen includes M areas, the first area is any one of the M areas, and M is an integer greater than 1; that is, the embodiment method shown in fig. 1 can correspondingly detect the attention distribution result of the tested individual in each of the M areas of the display screen. According to the method, a first display object is displayed in a first area of a display screen according to a control mode, where the display screen includes M areas, the first area is any one of the M areas, and M is an integer greater than 1; then, the attention result of the tested individual for the first display object is obtained; and finally, the attention distribution result of the tested individual in the first area of the display screen is obtained according to the attention result, thereby realizing detection of the visual spatial attention of the individual.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another method for detecting attention in visual space according to an embodiment of the present disclosure.
301. The first display object is displayed in a first area of the display screen according to a control mode, where the control mode includes K sub-control modes and K is a natural number greater than 0.
The display screen comprises M areas, the first area is any one of the M areas, and M is an integer larger than 1.
As described above, the embodiment shown in fig. 1 merely states that the first display object is displayed in the first area of the display screen according to the control mode, without limiting the control mode. In this embodiment, the control mode is defined to include K sub-control modes; that is, the first display object may be displayed in the first area of the display screen according to the K sub-control modes. After the first display object is displayed in the first area of the display screen according to any one of the sub-control modes, the tested individual responds to the visual stimulation signal generated by the first display object under that sub-control mode; when the first display object is displayed according to the K sub-control modes, the tested individual responds K times to the K visual stimulation signals generated by the first display object under the K sub-control modes. Further, the K sub-control modes may all be the same sub-control mode, or the K sub-control modes may include mutually different sub-control modes.
One possible implementation manner is that the K sub-control modes are all the same sub-control mode. It should be understood that, in this case, the tested individual responds K times to the visual stimulation signal generated by the first display object under that sub-control mode, which corresponds to displaying the first display object in the first area of the display screen K times according to the same sub-control mode. This can be understood as performing the same test on the tested individual K times, where each test displays the first display object in the first area of the display screen according to that sub-control mode.
Another possible implementation manner is that the K sub-control modes include mutually different sub-control modes. It should be understood that, in this case, displaying the first display object in the first area of the display screen is still repeated K times, but because the K sub-control modes include mutually different sub-control modes, the K display modes of the first display object include mutually different display modes. For example, one sub-control mode among the K sub-control modes specifically includes controlling the first display object to perform curvilinear motion, and another sub-control mode among the K sub-control modes specifically includes controlling the first display object to perform linear motion; the K display modes of the first display object then include a display mode in which the first display object performs curvilinear motion and a display mode in which it performs linear motion, that is, they include mutually different display modes. This can be understood as performing K tests on the tested individual, where the K tests include tests with mutually different control modes.
In addition, any one of the K sub-control modes may include L slave sub-control modes, where L is a natural number greater than 1. Specifically, the L slave sub-control modes may all be the same slave sub-control mode, or the L slave sub-control modes may include mutually different slave sub-control modes. That is to say, displaying the first display object in the first area of the display screen according to the K sub-control modes may be understood as performing K tests on the tested individual according to the K sub-control modes, where any one of the K tests may include L slave sub-control modes, and the L slave sub-control modes are all the same slave sub-control mode or include mutually different slave sub-control modes. Further, the slave sub-control mode includes controlling any one or more of the color, the rotation direction, and the rotation speed of a display object in the display screen. For example, when one test is performed on the tested individual according to L slave sub-control modes and each slave sub-control mode is controlling the first display object to display red for 3 seconds and then yellow for 5 seconds, repeating this L times causes the first display object to be displayed in a 'flickering' manner. For another example, when one test is performed on the tested individual according to L slave sub-control modes with L being 2, one slave sub-control mode may be controlling the first display object to display red for 3 seconds, and the other slave sub-control mode may be controlling the first display object to perform clockwise circular motion for 5 seconds, where the motion speed can be preset.
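To make the nesting of control mode, sub-control modes, and slave sub-control modes concrete, the following is a minimal sketch (an assumption for illustration, not the patent's data model; all class and field names are hypothetical) in which each slave sub-control mode sets any of the color, rotation direction, and rotation speed of a display object for a given duration:

    # Minimal illustrative sketch of the control-mode hierarchy described above.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SlaveSubControlMode:
        duration_s: float                          # how long this step lasts
        color: Optional[str] = None                # e.g. "red", "yellow", "gray"
        rotation_direction: Optional[str] = None   # e.g. "clockwise"
        rotation_speed: Optional[float] = None     # e.g. degrees per second

    @dataclass
    class SubControlMode:
        # One test on the tested individual, made of L slave sub-control modes.
        steps: List[SlaveSubControlMode] = field(default_factory=list)

    @dataclass
    class ControlMode:
        # K sub-control modes, i.e. K tests.
        tests: List[SubControlMode] = field(default_factory=list)

    # Example: a "flickering" test built from repeated red/yellow steps, and a test
    # with two different slave sub-control modes (a color step, then a rotation step).
    flicker = SubControlMode([SlaveSubControlMode(3.0, color="red"),
                              SlaveSubControlMode(5.0, color="yellow")] * 3)
    color_then_spin = SubControlMode([SlaveSubControlMode(3.0, color="red"),
                                      SlaveSubControlMode(5.0, rotation_direction="clockwise",
                                                          rotation_speed=60.0)])
    control_mode = ControlMode(tests=[flicker, color_then_spin])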
302. K attention scores of the tested individual for the first display object displayed according to the K sub-control modes are obtained as the attention result.
As described above, the first display object is displayed in the first area of the display screen according to the K sub-control modes, and the tested individual responds K times to the K visual stimulation signals generated by the first display object under the K sub-control modes; these K responses are expressed as K attention scores of the tested individual for the first display object displayed according to the K sub-control modes. For example, each attention score is specifically 1 point or 0 points; if K is 5, the K attention scores may specifically be a sequence such as 1 point, 0 points, 1 point, ..., and these 5 scores are taken as the above attention result.
303. The attention distribution result of the tested individual in the first area of the display screen is obtained according to the attention result.
In the embodiment of the present application, the attention result reflects the degree of attention the tested individual pays to the first area of the display screen and specifically includes the K attention scores described in step 302. The attention distribution result of the tested individual in the first area of the display screen is then obtained from the K attention scores. One possible implementation manner is to sum the K attention scores to obtain a total attention score and use the total attention score to describe the attention distribution of the tested individual in the first area of the display screen. Another possible implementation manner is to average the K attention scores to obtain a mean attention score and use the mean attention score to describe the attention distribution of the tested individual in the first area of the display screen. A further possible implementation is to calculate the variance of the K attention scores, where the variance describes the fluctuation of the tested individual's attention to the first area of the display screen over the K tests. It can be understood that various statistical calculations can be performed on the K attention scores and that the attention distribution of the tested individual in the first area of the display screen is described by the resulting statistic; the present application does not limit the specific calculation manner used to process the K attention scores. In addition, it can be understood that the embodiment method shown in fig. 3 can detect the attention distribution result of the tested individual in the first area of the display screen, where the display screen includes M areas, the first area is any one of the M areas, and M is an integer greater than 1; that is, the embodiment method shown in fig. 3 can correspondingly detect the attention distribution result of the tested individual in each of the M areas of the display screen. In a practical application scenario, the attention distribution over each visual space region within an individual's field of view can be detected according to the method shown in fig. 3, so as to adjust the arrangement of objects within the individual's field of view.
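The total, mean, and variance statistics mentioned above could be computed, for instance, as in the following minimal sketch (an assumption for illustration, not the patent's implementation; the function name and the example scores are hypothetical):

    # Minimal illustrative sketch: summarising the K attention scores of one region.
    from statistics import mean, pvariance
    from typing import Dict, List

    def summarise_scores(scores: List[int]) -> Dict[str, float]:
        """Describe the attention distribution in one region from K 0/1 attention scores."""
        return {
            "total": float(sum(scores)),       # total attention score for the region
            "mean": mean(scores),              # average attention across the K tests
            "variance": pvariance(scores),     # fluctuation of attention across the K tests
        }

    # Example with K = 5 hypothetical scores for the first area of the display screen.
    print(summarise_scores([1, 0, 1, 1, 0]))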
The method includes the steps that a first display object is displayed in a first area of a display screen according to a control mode, the control mode comprises K sub-control modes, K is a natural number larger than 0, the display screen comprises M areas, the first area is any one of the M areas, and M is an integer larger than 1; then, obtaining K attention scores of the tested individual to the first display object displayed according to the K sub-control modes as an attention result; and finally, obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result. Thereby realizing the detection of the visual space attention of the individual.
An application scenario is also provided in the present application; please refer to fig. 4, which is a schematic diagram of six kinds of visual stimuli provided in an embodiment of the present application. The six visual stimulus diagrams show six different spatial arrangements: a in fig. 4 represents a longitudinally distributed arrangement, b in fig. 4 represents a transversely distributed arrangement, c in fig. 4 represents a central diamond arrangement, d in fig. 4 represents a central square arrangement, e in fig. 4 represents a peripheral diamond arrangement, and f in fig. 4 represents a peripheral square arrangement. The display objects in the six diagrams have the same composition: 8 gray small balls, grouped two by two into four groups, with the four groups displayed in the four areas of the display screen. The background of each diagram is black, and key information is presented at its center. The tested individual starts each trial by pressing the space key and identifies the target stimuli after the trial ends. Taking a in fig. 4 as an example, when the tested individual presses the space key to start a trial, the display screen presents the visual stimulation diagram shown in a in fig. 4; after 1 second, one ball in each of the four groups is controlled to turn red while the remaining four balls stay gray, and the red state lasts 2.4 seconds; after 2.4 seconds, all the balls are restored to gray, and the four groups of balls rotate randomly around the center of their respective group for 6 seconds; after 6 seconds, the balls stop moving and remain static. The tested individual then selects, in each group, the target ball that was red before the movement, and presses the enter key to confirm. If the 4 groups of balls in a in fig. 4 are numbered from top to bottom as the 1st, 2nd, 3rd, and 4th groups, and the ball selected by the tested individual in the 1st group is the target ball that was displayed in red, the selection for the 1st group is correct, and the attention score of the tested individual for the area of the display screen where the 1st group is located is 1 point; if the ball selected in the 1st group is not the target ball that was displayed in red, the selection for the 1st group is wrong, and the attention score of the tested individual for that area is 0.
Then, according to the tested individual's selections for the 4 groups of balls, the attention scores of the tested individual for the 4 areas of the display screen where the 4 groups of balls are located can be obtained accordingly.
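A single trial of this application scenario could be organised, for instance, as in the following minimal sketch (an assumption for illustration, not taken from the patent; the function names, the simulated subject response, and the 0/1 target indices are hypothetical, and the actual display and key handling are omitted):

    # Minimal illustrative sketch of one trial of the fig. 4 scenario: in each of the
    # four groups one ball is cued in red, the balls then rotate, and the subject's
    # choice per group yields a 0/1 attention score for the corresponding screen area.
    import random
    from typing import Dict, List

    def run_trial(groups: List[str]) -> Dict[str, int]:
        """Return an attention score (1 correct / 0 wrong) for each screen region."""
        # Phase 1 (about 1 s after the space key): pick the target ball (index 0 or 1)
        # in each group; it would be shown in red for 2.4 s while the other balls stay gray.
        targets = {group: random.randint(0, 1) for group in groups}
        # Phase 2 (6 s): all balls return to gray and rotate randomly around the
        # center of their group (display handling omitted in this sketch).
        # Phase 3: the subject selects, in each group, the ball that was red.
        scores = {}
        for group in groups:
            choice = get_subject_choice(group)   # hypothetical response routine
            scores[group] = 1 if choice == targets[group] else 0
        return scores

    def get_subject_choice(group: str) -> int:
        """Placeholder for reading the subject's selection; here a random guess."""
        return random.randint(0, 1)

    # Example: scores for the four areas M1..M4 after one simulated trial.
    print(run_trial(["M1", "M2", "M3", "M4"]))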
The method of the embodiments of the present application is set forth above in detail and the apparatus of the embodiments of the present application is provided below.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a visual space attention detection apparatus according to an embodiment of the present application, where the apparatus 1 includes: display unit 11, obtaining unit 12. Wherein:
a display unit 11, configured to display a first display object in a first area of a display screen according to a control mode, where the display screen includes M areas, the first area is any one of the M areas, and M is an integer greater than 1;
an obtaining unit 12, configured to obtain a result of attention of the subject to the first display object;
the obtaining unit 12 is further configured to obtain an attention distribution result of the subject in the first area of the display screen according to the attention result.
In a possible implementation manner, in the process of displaying the first display object in the first area of the display screen according to the control mode, the display unit 11 is further configured to display a second display object in a second area of the display screen according to the control mode, where the second area is any one of the M areas and is not the first area.
In another possible implementation manner, the control mode includes K sub-control modes, where K is a natural number greater than 0; the obtaining unit 12 is specifically configured to obtain, as the attention result, K attention scores of the subject for the first display object displayed according to the K sub-control modes.
In another possible implementation manner, the K sub-control modes are all the same sub-control mode, or the K sub-control modes include different sub-control modes.
In another possible implementation manner, any one of the K sub-control modes includes L slave sub-control modes, where L is a natural number greater than 1.
In yet another possible implementation manner, the L slave sub-control modes are all the same slave sub-control mode, or the L slave sub-control modes include mutually different slave sub-control modes.
In yet another possible implementation manner, the slave sub-control mode includes controlling any one or more of the color, the rotation direction, and the rotation speed of a display object in the display screen.
Fig. 6 is a schematic hardware structure diagram of a visual space attention detection apparatus according to an embodiment of the present application. The visual space attention detection device 2 comprises a processor 21 and may further comprise an input device 22, an output device 23 and a memory 24. The input device 22, the output device 23, the memory 24 and the processor 21 are connected to each other via a bus.
The memory includes, but is not limited to, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a portable read-only memory (CD-ROM), which is used for storing instructions and data.
The input means are for inputting data and/or signals and the output means are for outputting data and/or signals. The output means and the input means may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The memory is used to store program codes and data of the device.
The processor is used for calling the program codes and data in the memory and executing the following steps: displaying a first display object in a first area of a display screen according to a control mode, wherein the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1; obtaining the attention result of the tested individual to the first display object; and obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result.
In one implementation, the processor is configured to perform the following step: in the process of displaying the first display object in the first area of the display screen according to the control mode, displaying a second display object in a second area of the display screen according to the control mode, where the second area is any one of the M areas other than the first area.
In another implementation, the processor is configured to perform the steps of: the control mode comprises K sub-control modes, and K is a natural number greater than 0; the obtaining of the attention result of the tested individual to the first display object comprises: obtaining K attention scores of the tested individual to the first display object displayed according to the K sub-control modes as the attention result.
In yet another implementation manner, the K sub-control modes are all the same sub-control mode, or the K sub-control modes include mutually different sub-control modes.
In yet another implementation manner, any one of the K sub-control modes includes L slave sub-control modes, where L is a natural number greater than 1.
In yet another implementation, the L slave sub-control modes are all the same slave sub-control mode, or the L slave sub-control modes include mutually different slave sub-control modes.
In yet another implementation, the slave sub-control mode includes controlling any one or more of the color, the rotation direction, and the rotation speed of a display object in the display screen.
It will be appreciated that fig. 6 only shows a simplified design of the visual spatial attention detection arrangement. In practical applications, the visual space attention detection apparatus may further include other necessary elements, including but not limited to any number of input/output devices, processors, controllers, memories, etc., and all visual space attention detection apparatuses that may implement the embodiments of the present application are within the scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media that can store program codes, such as a read-only memory (ROM) or a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A visual space attention detection method, comprising:
displaying a first display object in a first area of a display screen according to a control mode, wherein the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1;
obtaining the attention result of the tested individual to the first display object;
and obtaining the attention distribution result of the tested individual in the first area of the display screen according to the attention result.
2. The method of claim 1, wherein during the displaying of the first display object in the first region of the display screen in the control mode, the method further comprises:
and displaying a second display object in a second area of the display screen according to the control mode, wherein the second area is any one of the M areas and is not the first area.
3. The method according to claim 1 or 2, wherein the control mode comprises K sub-control modes, wherein K is a natural number greater than 0; the obtaining of the attention result of the tested individual to the first display object comprises:
obtaining K attention scores of the tested individual to the first display object displayed according to the K sub-control modes as the attention result.
4. The method according to claim 3, wherein the K sub-control modes are all the same sub-control mode, or the K sub-control modes comprise mutually different sub-control modes.
5. The method according to claim 3 or 4, wherein any one of the K sub-control modes comprises L slave sub-control modes, and L is a natural number greater than 1.
6. The method according to claim 5, wherein the L slave sub-control modes are all the same slave sub-control mode, or wherein the L slave sub-control modes comprise mutually different slave sub-control modes.
7. The method of claim 6, wherein the slave sub-control mode comprises: controlling any one or more of the color, the rotation direction and the rotation speed of a display object in the display screen.
8. A visual space attention detection apparatus, comprising:
the display device comprises a display unit, a control unit and a display unit, wherein the display unit is used for displaying a first display object in a first area of a display screen according to a control mode, the display screen comprises M areas, the first area is any one of the M areas, and M is an integer greater than 1;
an obtaining unit, configured to obtain a result of attention of the subject to the first display object;
the obtaining unit is further configured to obtain an attention distribution result of the subject in the first area of the display screen according to the attention result.
9. An electronic device, comprising: a processor and a memory, wherein the memory stores program instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN201911285087.0A 2019-12-13 2019-12-13 Visual space attention detection method and related product Pending CN111142819A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911285087.0A CN111142819A (en) 2019-12-13 2019-12-13 Visual space attention detection method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911285087.0A CN111142819A (en) 2019-12-13 2019-12-13 Visual space attention detection method and related product

Publications (1)

Publication Number Publication Date
CN111142819A true CN111142819A (en) 2020-05-12

Family

ID=70518355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285087.0A Pending CN111142819A (en) 2019-12-13 2019-12-13 Visual space attention detection method and related product

Country Status (1)

Country Link
CN (1) CN111142819A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621636A (en) * 2008-06-30 2010-01-06 北京大学 Method and system for inserting and transforming advertisement sign based on visual attention module
US20120154390A1 (en) * 2010-12-21 2012-06-21 Tomoya Narita Information processing apparatus, information processing method, and program
CN105094292A (en) * 2014-05-05 2015-11-25 索尼公司 Method and device evaluating user attention
CN109480757A (en) * 2018-12-29 2019-03-19 深圳先进技术研究院 Visual function detection method and system and device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814565A (en) * 2020-06-11 2020-10-23 北京微播易科技股份有限公司 Target detection method and device

Similar Documents

Publication Publication Date Title
EP3396511A1 (en) Information processing device and operation reception method
CN110354489B (en) Virtual object control method, device, terminal and storage medium
CN109246463B (en) Method and device for displaying bullet screen
CN106125938B (en) Information processing method and electronic equipment
JP2020502603A (en) Field of view (FOV) aperture of virtual reality (VR) content on head-mounted display
US20170163958A1 (en) Method and device for image rendering processing
US8678903B1 (en) Mobile device having a virtual spin wheel and virtual spin wheel control method of the same
Nguyen-Vo et al. Simulated reference frame: A cost-effective solution to improve spatial orientation in vr
CN105635848A (en) Bullet-screen display method and terminal
CN105205860B (en) The methods of exhibiting and device of threedimensional model scene
KR20180013892A (en) Reactive animation for virtual reality
US20230356075A1 (en) Method, computer device, and storage medium for virtual object switching
KR102396390B1 (en) Method and terminal unit for providing 3d assembling puzzle based on augmented reality
CN110478900B (en) Map area generation method, device, equipment and storage medium in virtual environment
CN111142819A (en) Visual space attention detection method and related product
CN111921203A (en) Interactive processing method and device in virtual scene, electronic equipment and storage medium
Haji-Khamneh et al. How different types of scenes affect the Subjective Visual Vertical (SVV) and the Perceptual Upright (PU)
Brown et al. Efficient dataflow modeling of peripheral encoding in the human visual system
CN106933454A (en) Display methods and system
US8795052B2 (en) Strategy game systems and methods
WO2021114288A1 (en) Visuospatial attention detection method and related product
CN107016079B (en) Knowledge point display method and device
US11100723B2 (en) System, method, and terminal device for controlling virtual image by selecting user interface element
Mayer et al. Humans are detected more efficiently than machines in the context of natural scenes
Peters et al. Computational mechanisms for gaze direction in interactive visual environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination