CN111752381A - Man-machine interaction method and device - Google Patents


Info

Publication number
CN111752381A
Authority
CN
China
Prior art keywords
sensing
point
sight line
sensing point
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910434020.2A
Other languages
Chinese (zh)
Other versions
CN111752381B (en)
Inventor
许云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910434020.2A priority Critical patent/CN111752381B/en
Publication of CN111752381A publication Critical patent/CN111752381A/en
Application granted granted Critical
Publication of CN111752381B publication Critical patent/CN111752381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a man-machine interaction method and device, and relates to the field of computer technology. The method comprises the following steps: collecting sight line information of a user; determining a sight line moving path of the user in an operation sensing area based on the sight line information; and determining an operation instruction of the user on an operation object based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement. Because the operation object corresponding to the operation sensing area is operated through sight line movement, the user does not need to gaze for a long time at an operation area on the display interface of a terminal application program; visual fatigue caused by prolonged gazing is avoided, and the user experience is improved.

Description

Man-machine interaction method and device
Technical Field
The invention relates to the technical field of computers, in particular to a man-machine interaction method and device.
Background
Eye-movement interaction, as a novel interaction mode, has gradually appeared in application scenarios such as virtual clothes try-on in shopping malls, gaze heat-map analysis of advertisements, and AR glasses. Compared with traditional devices operated by hand, such as handles, mice and keyboards, eye-movement interaction enables more natural and direct human-computer interaction, and has advantages such as freeing the hands and being usable anytime and anywhere.
In the related art, a typical eye-movement interaction mode is based on the traditional WIMP (windows, icons, menus, pointer) paradigm and uses 'gaze' as the basic interaction feature of the eye-movement interface to simulate the click operation of a mouse or touch screen. Long gazing operations easily cause visual fatigue of the user and degrade the user experience.
Disclosure of Invention
In order to solve the problem that a gaze-dominated interaction mode in the related art easily causes visual fatigue of the user and degrades the user experience, the embodiments of the invention provide a human-computer interaction method and a human-computer interaction device.
According to an aspect of the present invention, there is provided a human-computer interaction method, including:
collecting sight line information of a user;
determining a sight line moving path of the user in an operation sensing area based on the sight line information; and
determining an operation instruction of the user to an operation object based on the operation model of the operation sensing area and the sight line moving path,
wherein the user operates the operation object corresponding to the operation sensing area through line-of-sight movement.
Preferably, the human-computer interaction method further includes:
establishing the operational model of the operational sensing zone.
Preferably, the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a plurality of sensing points.
Preferably, the states of the plurality of sensing points include: active and inactive states, selected and unselected states,
wherein the state of a sensing point can be set to the selected state only when the sensing point is in the activated state; after a sensing point is activated, it is in the non-selected state, and when the sight line coincides with the sensing point, the sensing point is set to the selected state.
Preferably, the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a first sensing point, a second sensing point and a third sensing point, and
the establishing of the operation model of the operation sensing area comprises:
the first sensing point is used as a starting point of the sight line moving path, and when the sight line coincides with the first sensing point, the state of the first sensing point is set to a selected state;
the second sensing point is used as a selection point of the sight line moving path, and when the sight line coincides with the second sensing point, the state of the second sensing point is set to a selected state;
and the third sensing point is used as a termination point of the sight line moving path, and when the sight line coincides with the third sensing point, the state of the third sensing point is set to a selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line disappears in the moving process of the operation sensing area, setting the state of the first sensing point which is closest to the position where the sight line disappears and is in the activated state as the selected state, and taking the first sensing point as the starting point of a new sight line moving path.
Preferably, the establishing the operation model of the operation sensing region further comprises:
in the same sensing point combination, if the sight line passes through the second sensing point and then reaches the first sensing point, the second sensing point is reset to be in a non-selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the process of moving in the operation sensing area, the second sensing points are set to be in a selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line is coincident with the third sensing point, all sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
Preferably, the operation sensing area and the operation object are displayed on a display interface of a terminal application program,
and when the operation sensing area and the operation object are loaded on the display interface of the terminal application program, the sensing points in the plurality of sensing point combinations of the operation sensing area are set to the activated state.
Preferably, the determining an operation instruction of the user on the operation object based on the operation model of the operation sensing area and the sight line moving path includes:
receiving the reported sight line moving path;
determining, according to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state on the sight line moving path belongs;
and triggering an interactive control event corresponding to the operation object according to a preset program of the operation object.
According to another aspect of the present invention, there is provided a human-computer interaction device, comprising:
an acquisition unit configured to acquire sight line information of a user;
a path determination unit configured to determine a line of sight movement path of the user in an operation sensing area based on the line of sight information; and
an interaction unit configured to determine an operation instruction of the user to an operation object based on an operation model of the operation sensing area and the sight-line moving path,
wherein the user operates the operation object corresponding to the operation sensing area through line-of-sight movement.
Preferably, the human-computer interaction device further comprises:
an establishing unit configured to establish the operation model of the operation sensing region.
Preferably, the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a plurality of sensing points.
Preferably, the states of the plurality of sensing points include: active and inactive states, selected and unselected states,
wherein the state of a sensing point can be set to the selected state only when the sensing point is in the activated state; after a sensing point is activated, it is in the non-selected state, and when the sight line coincides with the sensing point, the sensing point is set to the selected state.
Preferably, the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a first sensing point, a second sensing point and a third sensing point, and
the establishing of the operation model of the operation sensing area comprises:
the first sensing point is used as a starting point of the sight line moving path, and when the sight line coincides with the first sensing point, the state of the first sensing point is set to a selected state;
the second sensing point is used as a selection point of the sight line moving path, and when the sight line coincides with the second sensing point, the state of the second sensing point is set to a selected state;
and the third sensing point is used as a termination point of the sight line moving path, and when the sight line coincides with the third sensing point, the state of the third sensing point is set to a selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line disappears in the moving process of the operation sensing area, setting the state of the first sensing point which is closest to the position where the sight line disappears and is in the activated state as the selected state, and taking the first sensing point as the starting point of a new sight line moving path.
Preferably, the establishing the operation model of the operation sensing region further comprises:
in the same sensing point combination, if the sight line passes through the second sensing point and then reaches the first sensing point, the second sensing point is reset to be in a non-selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the process of moving in the operation sensing area, the second sensing points are set to be in a selected state.
Preferably, the establishing the operation model of the operation sensing region further comprises:
if the sight line is coincident with the third sensing point, all sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
Preferably, the operation sensing area and the operation object are displayed on a display interface of a terminal application program,
and when the operation sensing area and the operation object are loaded on the display interface of the terminal application program, the sensing points in the plurality of sensing point combinations of the operation sensing area are set to the activated state.
Preferably, the determining an operation instruction of the user on the operation object based on the operation model of the operation sensing area and the sight line moving path includes:
receiving the reported sight line moving path;
determining, according to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state on the sight line moving path belongs;
and triggering an interactive control event corresponding to the operation object according to a preset program of the operation object.
According to still another aspect of the present invention, there is provided a human-computer interaction control apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the human-computer interaction method.
According to still another aspect of the present invention, there is provided a computer-readable storage medium, wherein computer instructions are stored in the computer-readable storage medium, and when executed, the computer instructions implement the human-computer interaction method as described above.
According to yet another aspect of the present invention, there is provided a computer program product comprising a computer program, the computer program comprising program instructions which, when executed by a mobile terminal, cause the mobile terminal to perform the steps of the above-mentioned human-computer interaction method.
One embodiment of the present invention has the following advantages or benefits:
and collecting sight line information of the user. And determining the sight line moving path of the user in the operation sensing area based on the sight line information. And determining an operation instruction of the user to the operation object based on the operation model and the sight line moving path of the operation sensing area, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement. The operation object corresponding to the operation induction area is operated through sight line movement, a user does not need to stare at an operation area on a display interface of the terminal application program for a long time, visual fatigue of the user due to long-time vision is avoided, and user experience is improved.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 shows a flow diagram of a human-computer interaction method according to an embodiment of the invention.
Fig. 2 is a flow chart of a man-machine interaction method according to an embodiment of the present invention.
FIG. 3 shows a schematic view of a three-point operating sensing zone of one embodiment of the present invention.
Fig. 4 shows a schematic view of a three-point operation sensing zone and an operation object according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the invention.
Fig. 6 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the invention.
Fig. 7 is a schematic structural diagram of a human-computer interaction control device according to an embodiment of the present invention.
Detailed Description
The present invention is described below based on embodiments, but it is not limited to these embodiments. In the following detailed description, certain specific details are set forth; it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. Well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention. The figures are not necessarily drawn to scale.
Fig. 1 is a flowchart illustrating a human-computer interaction method according to an embodiment of the present invention. The method specifically comprises the following steps:
in step S101, gaze information of the user is acquired.
In this step, the sight line information of the user is collected. The sight line information includes a sight line direction, a gaze position, and the like. The face and eye regions of the user can be captured by an eye tracker or by the built-in or external camera of a mobile intelligent terminal to obtain images of the user's face and eyes. The acquired images are then analyzed: features are extracted by locating the eye and pupil positions or by detecting Purkinje images, and basic feature values of the user's eyes, including the gaze point position and the movement trajectory, are identified, so that the sight line information of the user is obtained.
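For illustration only, the following Python sketch shows one possible way to represent a single gaze sample and obtain it from a camera frame; the GazeSample fields, the capture_gaze_sample function and the tracker interface are assumptions of this sketch and are not prescribed by the embodiment.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class GazeSample:
    """One gaze observation derived from a face/eye image."""
    timestamp_ms: int
    gaze_direction: Tuple[float, float, float]   # unit vector in camera space
    gaze_point: Optional[Tuple[float, float]]    # (x, y) on the display, None if lost
    pupil_position: Tuple[float, float]          # pupil centre in the eye image


def capture_gaze_sample(frame, tracker) -> Optional[GazeSample]:
    """Extract one gaze sample from a camera frame. `tracker` stands in for an
    eye tracker or image-processing pipeline that locates the eye region, the
    pupil centre and the Purkinje reflections (hypothetical interface)."""
    features = tracker.extract_eye_features(frame)   # hypothetical call
    if features is None:
        return None    # eyes not found: the sight line has disappeared
    return GazeSample(
        timestamp_ms=features.timestamp_ms,
        gaze_direction=features.gaze_direction,
        gaze_point=features.screen_point,
        pupil_position=features.pupil_centre,
    )
```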
In step S102, a gaze movement path of the user in the operation sensing area is determined based on the gaze information.
In this step, the sight line moving path of the user in the operation sensing area is determined based on the sight line information. The eye-movement operation interface on the display interface of a typical terminal application program can be divided into a content display area and an operation area. The operation area in this embodiment is an operation sensing area comprising a plurality of sensing point combinations, each of which comprises a plurality of sensing points. The content display area is the display area of the operation object corresponding to the operation sensing area. Specifically, the obtained basic feature values such as the gaze point position and the movement trajectory are used to calculate and map the interface region on which the user's sight is focused, so as to obtain the sight line moving path of the user in the operation sensing area.
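The mapping from raw on-screen gaze points to a sight line moving path over the operation sensing area can be sketched as follows; the hit radius, the (name, x, y) target representation and the function name are assumptions introduced for illustration only.

```python
from typing import Iterable, List, Optional, Tuple

Target = Tuple[str, float, float]   # (sensing point name, x, y) on the display


def build_gaze_path(gaze_points: Iterable[Optional[Tuple[float, float]]],
                    targets: List[Target],
                    hit_radius: float = 20.0) -> List[Optional[str]]:
    """Return the sequence of sensing points the sight line coincides with;
    None entries mark positions where the sight line disappeared."""
    path: List[Optional[str]] = []
    for gaze in gaze_points:
        if gaze is None:                          # sight line lost
            if not path or path[-1] is not None:
                path.append(None)
            continue
        gx, gy = gaze
        for name, tx, ty in targets:
            if (gx - tx) ** 2 + (gy - ty) ** 2 <= hit_radius ** 2:
                if not path or path[-1] != name:
                    path.append(name)             # record each coincidence once
                break
    return path
```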
In step S103, an operation instruction of the user on an operation object is determined based on the operation model of the operation sensing area and the gaze movement path, wherein the user operates the operation object corresponding to the operation sensing area through gaze movement.
In this step, the operation instruction of the user on the operation object is determined based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement. Optionally, the operation object is a menu option. It can be understood that, based on the operation model of the operation sensing area and the sight line moving path, the operation instructions for selecting and deselecting menu options are determined, so that the user can operate the menu options corresponding to the operation sensing area through sight line movement, for example selecting or deselecting a menu option.
According to the embodiment of the invention, the sight line information of the user is collected, the sight line moving path of the user in the operation sensing area is determined based on the sight line information, and the operation instruction of the user on the operation object is determined based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement. Because the operation object corresponding to the operation sensing area is operated through sight line movement, the user does not need to gaze for a long time at an operation area on the display interface of the terminal application program; visual fatigue caused by prolonged gazing is avoided, and the user experience is improved.
Fig. 2 is a flow chart of a human-computer interaction method according to an embodiment of the invention. This embodiment refines the man-machine interaction method of the previous embodiment. The method specifically comprises the following steps:
in step S201, gaze information of the user is acquired.
This step is identical to step S101 in Fig. 1 and is not described again here.
In step S202, a gaze movement path of the user in the operation sensing area is determined based on the gaze information.
This step is identical to step S102 in Fig. 1 and is not described again here.
In step S203, the operation model of the operation sensing region is established.
In this step, the operation model of the operation sensing area is established. The eye-movement operation interface on the display interface of a typical terminal application program can be divided into a content display area and an operation area. The operation area in this embodiment may be a three-point, four-point, or five-point operation sensing area. The content display area is the display area of the operation object corresponding to the operation sensing area, and the user operates this operation object through sight line movement. Establishing the operation model of the operation sensing area specifically means establishing a model of how the user operates, through sight line movement, the operation object corresponding to the operation sensing area.
Specifically, the operation sensing area comprises a plurality of sensing point combinations, and each sensing point combination comprises a plurality of sensing points. It is understood that the operation sensing region may include one sensing point combination, and may also include two or more sensing point combinations. Each sensing point combination can comprise three sensing points or four sensing points. The states of the plurality of sensing points include: active and inactive states, selected and unselected states. The state of the sense point can only be set to the selected state when the sense point is in the active state. After the sensing point is activated, the sensing point is in a non-selected state, and when the sight line is overlapped with the sensing point, the sensing point is set to be in a selected state.
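A minimal sketch of this sensing point state model is given below; the class and method names are illustrative assumptions, and the Role enum anticipates the first, second and third sensing points described next.

```python
from enum import Enum


class Role(Enum):
    """Role of a sensing point inside a combination."""
    START = 1    # first sensing point: starting point of the path
    SELECT = 2   # second sensing point: selection point
    END = 3      # third sensing point: termination point


class SensingPoint:
    """A sensing point with the activated/inactivated and selected/non-selected
    states described above."""

    def __init__(self, name: str, role: Role, x: float, y: float):
        self.name, self.role, self.x, self.y = name, role, x, y
        self.activated = False
        self.selected = False

    def activate(self) -> None:
        """Activated when the sensing area is loaded; starts non-selected."""
        self.activated = True
        self.selected = False

    def set_selected(self) -> None:
        """The sight line coincides with the point; only an activated point
        may be set to the selected state."""
        if self.activated:
            self.selected = True

    def reset(self) -> None:
        """Return the point to the non-selected state."""
        self.selected = False
```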
Optionally, the operation sensing area comprises one sensing point combination, and the sensing point combination comprises a first sensing point, a second sensing point and a third sensing point. Fig. 3 shows a schematic view of a three-point operation sensing area according to an embodiment of the present invention. The three-point operation sensing area shown in Fig. 3 includes one sensing point combination, in which the first sensing point 301, the second sensing point 302 and the third sensing point 303 are located at the three vertices of a triangle; the positional relationship of these three points may be arbitrary and should not be taken as a limitation of the technical solution of the present application. It is understood that the three-point operation sensing area may also include multiple sensing point combinations, in which case the content display area contains multiple operation objects corresponding one-to-one to the sensing point combinations. When the operation sensing area and the operation objects on the display interface of the terminal application program are loaded, the sensing points in the sensing point combinations of the operation sensing area are set to the activated state.
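Continuing the sketch, one sensing point combination bound to a single operation object (for example one menu option) could be grouped as follows; the PointCombination name and its methods are assumptions made for illustration.

```python
class PointCombination:
    """One sensing point combination bound to one operation object, e.g. a
    menu option shown in the content display area."""

    def __init__(self, operation_object: str, start_point: SensingPoint,
                 select_point: SensingPoint, end_point: SensingPoint):
        self.operation_object = operation_object
        self.start_point = start_point      # first sensing point
        self.select_point = select_point    # second sensing point
        self.end_point = end_point          # third sensing point

    def points(self):
        return (self.start_point, self.select_point, self.end_point)

    def activate(self) -> None:
        """Called when the sensing area and operation object are loaded."""
        for point in self.points():
            point.activate()
```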
For the three-point operation sensing area shown in Fig. 3, the operation model of the operation sensing area is established as follows. The first sensing point serves as the starting point of the sight line moving path; when the sight line coincides with the first sensing point, the state of the first sensing point is set to the selected state. The second sensing point serves as the selection point of the sight line moving path; when the sight line coincides with the second sensing point, the state of the second sensing point is set to the selected state. When the state of the second sensing point is set to the selected state, the operation object corresponding to the sensing point combination to which the second sensing point belongs is selected. The third sensing point serves as the termination point of the sight line moving path; when the sight line coincides with the third sensing point, the state of the third sensing point is set to the selected state.
If the sight line disappears while moving in the three-point operation sensing area, the state of the first sensing point that is closest on the display interface of the terminal application program to the position where the sight line disappeared and is in the activated state is set to the selected state, and this first sensing point is taken as the starting point of a new sight line moving path.
In the same sensing point combination, if the sight line reaches the first sensing point after passing through the second sensing point, the second sensing point is reset to the non-selected state.
And if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the moving process of the sight line in the three-point operation sensing area, the second sensing points are set to be in the selected state. It will be appreciated that there may be one second sensing point set to the selected state, multiple second sensing points set to the selected state, or zero second sensing points set to the selected state.
If the sight line is coincident with the third sensing point, all the sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
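The movement rules above (starting point, selection point, termination point, disappearance of the sight line, and the reset of a second sensing point when the sight line then reaches the first sensing point of the same combination) can be drawn together in a small controller. This continues the SensingPoint/PointCombination sketch and is an assumption-laden illustration, not the claimed implementation.

```python
from typing import Callable, List


class GazePathController:
    """Applies the three-point operation model to a stream of coincidence and
    gaze-lost events (sketch only)."""

    def __init__(self, combinations: List[PointCombination],
                 report: Callable[[List[SensingPoint]], None]):
        self.combinations = combinations
        self.report = report                 # called when a path terminates
        self.path: List[SensingPoint] = []   # sensing points visited so far

    def on_gaze_coincides(self, point: SensingPoint) -> None:
        if not point.activated:
            return
        if point.role is Role.START:
            # Reaching a first point after its second point undoes the selection.
            combo = self._combo_of(point)
            if combo.select_point.selected:
                combo.select_point.reset()
            point.set_selected()
            self.path.append(point)
        elif point.role is Role.SELECT:
            point.set_selected()
            self.path.append(point)
        else:  # Role.END: terminate, report, then reset every visited point
            point.set_selected()
            self.path.append(point)
            self.report(list(self.path))     # report before resetting
            for visited in self.path:
                visited.reset()
            self.path = []

    def on_gaze_lost(self, x: float, y: float) -> None:
        """Sight line disappeared inside the sensing area: restart from the
        nearest activated first sensing point."""
        starts = [c.start_point for c in self.combinations if c.start_point.activated]
        nearest = min(starts, key=lambda p: (p.x - x) ** 2 + (p.y - y) ** 2)
        nearest.set_selected()
        self.path = [nearest]

    def _combo_of(self, point: SensingPoint) -> PointCombination:
        return next(c for c in self.combinations if point in c.points())
```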
Optionally, the operation sensing area comprises one sensing point combination, and the sensing point combination comprises four sensing points. For example, the four sensing points include: a starting point, an end point and two selection points. Similarly to the operation model of the three-point operation sensing area, an operation model of the four-point operation sensing area can be established for the four sensing points.
In step S204, an operation instruction of the user on an operation object is determined based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement.
In this step, the reported sight line moving path is received. According to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state on the sight line moving path belongs is determined, and an interactive control event corresponding to the operation object is triggered according to the preset program of the operation object. Optionally, the operation object is a menu option. It can be understood that, based on the operation model of the operation sensing area and the sight line moving path, the operation instructions for selecting and deselecting menu options are determined, so that the user can operate the menu options corresponding to the operation sensing area through sight line movement, for example selecting or deselecting a menu option.
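On the consumer side, the reported path could be handled as in the following sketch; the handler mapping keyed by operation object is an assumption, and the function is meant to run inside the report callback of the controller sketch above, i.e. before the visited points are reset.

```python
from typing import Callable, Dict, List


def handle_reported_path(path: List[SensingPoint],
                         combinations: List[PointCombination],
                         handlers: Dict[str, Callable[[], None]]) -> None:
    """Trigger the preset handler of every operation object whose second
    sensing point is still in the selected state on the reported path."""
    for point in path:
        if point.role is Role.SELECT and point.selected:
            combo = next(c for c in combinations if c.select_point is point)
            handler = handlers.get(combo.operation_object)
            if handler is not None:
                handler()    # e.g. select or deselect the menu option
```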
Fig. 4 is a schematic view of a three-point operation sensing area and operation objects according to an embodiment of the present invention. As shown in Fig. 4, the operation objects are menu option a, menu option b, menu option c, menu option d and menu option e. The sensing point combination corresponding to menu option a comprises a first sensing point 401a, a second sensing point 402a and a third sensing point 403a; the combination corresponding to menu option b comprises a first sensing point 401b, a second sensing point 402b and a third sensing point 403b; the combination corresponding to menu option c comprises a first sensing point 401c, a second sensing point 402c and a third sensing point 403c; the combination corresponding to menu option d comprises a first sensing point 401d, a second sensing point 402d and a third sensing point 403d; and the combination corresponding to menu option e comprises a first sensing point 401e, a second sensing point 402e and a third sensing point 403e. The line from point 1 to point 8 is the sight line moving path. When the page is loaded, all sensing points in the sensing point combinations corresponding to menu option a, menu option b, menu option c, menu option d and menu option e are set to the activated state.
The line of sight (point 1 to point 2) coincides with the first sensing point 401a, and the first sensing point 401a is set to the selected state. The first sensing point 401a serves as a starting point of the current sight-line moving path.
The sight line (point 2 to point 3) disappears in the process of moving the three-point operation sensing area, the state of a first sensing point 401b which is closest to the position (point 3) where the sight line disappears on the display interface of the terminal application program and is in an activated state is set to be in a selected state, and the first sensing point 401b is used as the starting point of a new sight line moving path. The line of sight starts from the first sensing point 401b and passes through the second sensing point 402b to reach point 4. The state of the second sensing point 402b is set to the selected state.
The line of sight (point 4 to point 5) coincides with the second sensing point 402c, and the state of the second sensing point 402c is set to the selected state.
The sight line (point 5 to point 6) coincides with the first sensing point 401c. According to the operation model of the three-point operation sensing area, in the same sensing point combination, if the sight line reaches the first sensing point after passing through the second sensing point, the second sensing point is reset to the non-selected state; therefore, the state of the second sensing point 402c is reset to the non-selected state.
The line of sight (point 6 to point 7) coincides with the second sensing point 402e, and the state of the second sensing point 402e is set to the selected state.
The sight line (point 7 to point 8) coincides with the third sensing point 403d, all the sensing points through which the sight line moving path passes are reset to the non-selected state, and an operation flow for reporting the sight line moving path is triggered.
According to the operation model of the three-point operation sensing area, the second sensing points in the selected state on the sight line moving path (point 1 to point 8) are the second sensing point 402b and the second sensing point 402e. Accordingly, menu option b and menu option e, corresponding to the second sensing points 402b and 402e, are in the selected state. It can be understood that when there is one second sensing point in the selected state on the sight line moving path, the user has operated the sensing points in the three-point operation sensing area through sight line movement to select one menu option; when there are multiple second sensing points in the selected state, the user has selected multiple menu options. The interactive control events corresponding to menu option b and menu option e are triggered according to the preset programs of menu option b and menu option e.
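Replaying the Figure 4 walk-through with the sketches above gives the same outcome (menu options b and e selected); the coordinates and wiring below are made up purely for this demonstration.

```python
def demo() -> None:
    # Five combinations a..e laid out left to right (coordinates are invented).
    combos = []
    for i, name in enumerate("abcde"):
        combos.append(PointCombination(
            f"menu option {name}",
            SensingPoint(f"401{name}", Role.START, 100 * i, 0),
            SensingPoint(f"402{name}", Role.SELECT, 100 * i + 30, 40),
            SensingPoint(f"403{name}", Role.END, 100 * i + 60, 0)))
    for combo in combos:                      # page load: activate every point
        combo.activate()

    triggered = []
    handlers = {c.operation_object: (lambda n=c.operation_object: triggered.append(n))
                for c in combos}
    controller = GazePathController(
        combos, report=lambda path: handle_reported_path(path, combos, handlers))

    controller.on_gaze_coincides(combos[0].start_point)   # points 1-2: 401a selected
    controller.on_gaze_lost(130, 20)                      # point 3: restart at 401b
    controller.on_gaze_coincides(combos[1].select_point)  # 402b selected
    controller.on_gaze_coincides(combos[2].select_point)  # 402c selected
    controller.on_gaze_coincides(combos[2].start_point)   # 401c reached: 402c reset
    controller.on_gaze_coincides(combos[4].select_point)  # 402e selected
    controller.on_gaze_coincides(combos[3].end_point)     # 403d: report and reset

    print(triggered)   # expected: ['menu option b', 'menu option e']


demo()
```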
According to the embodiment of the invention, the operation model of the three-point operation sensing area is established, specifically a model of how the user operates, through sight line movement, the operation object corresponding to the three-point operation sensing area. The reported sight line moving path is received; according to the operation model of the three-point operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state belongs is determined; and the interactive control event corresponding to the operation object is triggered according to the preset program of the operation object. When a terminal application program adopts the three-point operation sensing area for human-computer interaction based on the sight line moving path, only the selection part of the controls originally operated by keyboard and mouse needs to be replaced by the three-point operation sensing area; the interaction logic of the whole display page does not need to be adjusted, the change is small, and the cost of improving human-computer interaction is greatly reduced.
Fig. 5 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the invention. As shown in fig. 5, the human-computer interaction device includes: an acquisition unit 501, a path determination unit 502 and an interaction unit 503.
An acquisition unit 501 configured to acquire gaze information of a user.
This unit is configured to collect the sight line information of the user. The sight line information includes a sight line direction, a gaze position, and the like. The face and eye regions of the user can be captured by an eye tracker or by the built-in or external camera of a mobile intelligent terminal to obtain images of the user's face and eyes. The acquired images are then analyzed: features are extracted by locating the eye and pupil positions or by detecting Purkinje images, and basic feature values of the user's eyes, including the gaze point position and the movement trajectory, are identified, so that the sight line information of the user is obtained.
A path determining unit 502 configured to determine a line of sight moving path of the user in the operation sensing area based on the line of sight information.
This unit is configured to determine the sight line moving path of the user in the operation sensing area based on the sight line information. The eye-movement operation interface on the display interface of a typical terminal application program can be divided into a content display area and an operation area. The operation area in this embodiment is an operation sensing area comprising a plurality of sensing point combinations, each of which comprises a plurality of sensing points. The content display area is the display area of the operation object corresponding to the operation sensing area. Specifically, the obtained basic feature values such as the gaze point position and the movement trajectory are used to calculate and map the interface region on which the user's sight is focused, so as to obtain the sight line moving path of the user in the operation sensing area.
An interaction unit 503 configured to determine an operation instruction of the user on an operation object based on the operation model of the operation sensing area and the gaze movement path, wherein the user operates the operation object corresponding to the operation sensing area through gaze movement.
This unit is configured to determine the operation instruction of the user on the operation object based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement. Optionally, the operation object is a menu option. It can be understood that, based on the operation model of the operation sensing area and the sight line moving path, the operation instructions for selecting and deselecting menu options are determined, so that the user can operate the menu options corresponding to the operation sensing area through sight line movement, for example selecting or deselecting a menu option.
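As a rough illustration of how the units of Fig. 5 could be wired together, the sketch below reuses the earlier illustrative classes; the device class, its method names and the gaze-lost handling are assumptions, not the device's actual interfaces.

```python
class HumanComputerInteractionDevice:
    """Sketch of the device of Fig. 5: gaze acquisition feeds on-screen gaze
    points, path determination hit-tests them against the sensing points, and
    the interaction step applies the operation model."""

    def __init__(self, combinations, handlers, hit_radius: float = 20.0):
        self.combinations = combinations
        self.hit_radius = hit_radius
        self._last = None                      # last known on-screen gaze point
        self.controller = GazePathController(
            combinations,
            report=lambda path: handle_reported_path(path, combinations, handlers))

    def on_gaze_sample(self, gaze) -> None:
        """Feed one on-screen gaze point, or None when the sight line is lost."""
        if gaze is None:
            if self._last is not None:         # gaze disappeared inside the area
                self.controller.on_gaze_lost(*self._last)
                self._last = None
            return
        self._last = gaze
        gx, gy = gaze
        for combo in self.combinations:        # path determination: hit test
            for point in combo.points():
                if (gx - point.x) ** 2 + (gy - point.y) ** 2 <= self.hit_radius ** 2:
                    self.controller.on_gaze_coincides(point)   # interaction
                    return
```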
Fig. 6 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the invention. As shown in fig. 6, the human-computer interaction device includes: an acquisition unit 601, a path determination unit 602, a setup unit 603 and an interaction unit 604.
An acquisition unit 601 configured to acquire gaze information of a user.
This unit is identical to the acquisition unit 501 in fig. 5 and will not be described here.
A path determining unit 602 configured to determine a line of sight moving path of the user in the operation sensing area based on the line of sight information.
This unit is identical to the path determination unit 502 in fig. 5 and will not be described here.
A building unit 603 configured to build the operation model of the operation sensing zone.
This unit is configured to establish the operation model of the operation sensing area. The eye-movement operation interface on the display interface of a typical terminal application program can be divided into a content display area and an operation area. The operation area in this embodiment may be a three-point, four-point, or five-point operation sensing area. The content display area is the display area of the operation object corresponding to the operation sensing area, and the user operates this operation object through sight line movement. Establishing the operation model of the operation sensing area specifically means establishing a model of how the user operates, through sight line movement, the operation object corresponding to the operation sensing area.
Specifically, the operation sensing area comprises a plurality of sensing point combinations, and each sensing point combination comprises a plurality of sensing points. It is understood that the operation sensing region may include one sensing point combination, and may also include two or more sensing point combinations. Each sensing point combination can comprise three sensing points or four sensing points. The states of the plurality of sensing points include: active and inactive states, selected and unselected states. The state of the sense point can only be set to the selected state when the sense point is in the active state. After the sensing point is activated, the sensing point is in a non-selected state, and when the sight line is overlapped with the sensing point, the sensing point is set to be in a selected state.
Optionally, the operation sensing area comprises one sensing point combination, and the sensing point combination comprises a first sensing point, a second sensing point and a third sensing point. Fig. 3 shows a schematic view of a three-point operation sensing area according to an embodiment of the present invention. The three-point operation sensing area shown in Fig. 3 includes one sensing point combination, in which the first sensing point 301, the second sensing point 302 and the third sensing point 303 are located at the three vertices of a triangle; the positional relationship of these three points may be arbitrary and should not be taken as a limitation of the technical solution of the present application. It is understood that the three-point operation sensing area may also include multiple sensing point combinations, in which case the content display area contains multiple operation objects corresponding one-to-one to the sensing point combinations. When the operation sensing area and the operation objects on the display interface of the terminal application program are loaded, the sensing points in the sensing point combinations of the operation sensing area are set to the activated state.
For the three-point operation sensing area shown in Fig. 3, the operation model of the operation sensing area is established as follows. The first sensing point serves as the starting point of the sight line moving path; when the sight line coincides with the first sensing point, the state of the first sensing point is set to the selected state. The second sensing point serves as the selection point of the sight line moving path; when the sight line coincides with the second sensing point, the state of the second sensing point is set to the selected state. When the state of the second sensing point is set to the selected state, the operation object corresponding to the sensing point combination to which the second sensing point belongs is selected. The third sensing point serves as the termination point of the sight line moving path; when the sight line coincides with the third sensing point, the state of the third sensing point is set to the selected state.
If the sight line disappears while moving in the three-point operation sensing area, the state of the first sensing point that is closest on the display interface of the terminal application program to the position where the sight line disappeared and is in the activated state is set to the selected state, and this first sensing point is taken as the starting point of a new sight line moving path.
In the same sensing point combination, if the sight line reaches the first sensing point after passing through the second sensing point, the second sensing point is reset to the non-selected state.
And if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the moving process of the sight line in the three-point operation sensing area, the second sensing points are set to be in the selected state. It will be appreciated that there may be one second sensing point set to the selected state, multiple second sensing points set to the selected state, or zero second sensing points set to the selected state.
If the sight line is coincident with the third sensing point, all the sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
Optionally, the operation sensing area comprises one sensing point combination, and the sensing point combination comprises four sensing points. For example, the four sensing points include: a starting point, an end point and two selection points. Similarly to the operation model of the three-point operation sensing area, an operation model of the four-point operation sensing area can be established for the four sensing points.
An interaction unit 604, configured to determine an operation instruction of the user on an operation object based on the operation model of the operation sensing area and the gaze movement path, wherein the user operates the operation object corresponding to the operation sensing area through gaze movement.
This unit is configured to receive the reported sight line moving path. According to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state on the sight line moving path belongs is determined, and an interactive control event corresponding to the operation object is triggered according to the preset program of the operation object. Optionally, the operation object is a menu option. It can be understood that, based on the operation model of the operation sensing area and the sight line moving path, the operation instructions for selecting and deselecting menu options are determined, so that the user can operate the menu options corresponding to the operation sensing area through sight line movement, for example selecting or deselecting a menu option.
Fig. 4 is a schematic view of a three-point operation sensing area and operation objects according to an embodiment of the present invention. As shown in Fig. 4, the operation objects are menu option a, menu option b, menu option c, menu option d and menu option e. The sensing point combination corresponding to menu option a comprises a first sensing point 401a, a second sensing point 402a and a third sensing point 403a; the combination corresponding to menu option b comprises a first sensing point 401b, a second sensing point 402b and a third sensing point 403b; the combination corresponding to menu option c comprises a first sensing point 401c, a second sensing point 402c and a third sensing point 403c; the combination corresponding to menu option d comprises a first sensing point 401d, a second sensing point 402d and a third sensing point 403d; and the combination corresponding to menu option e comprises a first sensing point 401e, a second sensing point 402e and a third sensing point 403e. The line from point 1 to point 8 is the sight line moving path. When the page is loaded, all sensing points in the sensing point combinations corresponding to menu option a, menu option b, menu option c, menu option d and menu option e are set to the activated state.
The line of sight (point 1 to point 2) coincides with the first sensing point 401a, and the first sensing point 401a is set to the selected state. The first sensing point 401a serves as a starting point of the current sight-line moving path.
The sight line (point 2 to point 3) disappears in the process of moving the three-point operation sensing area, the state of a first sensing point 401b which is closest to the position (point 3) where the sight line disappears on the display interface of the terminal application program and is in an activated state is set to be in a selected state, and the first sensing point 401b is used as the starting point of a new sight line moving path. The line of sight starts from the first sensing point 401b and passes through the second sensing point 402b to reach point 4. The state of the second sensing point 402b is set to the selected state.
The line of sight (point 4 to point 5) coincides with the second sensing point 402c, and the state of the second sensing point 402c is set to the selected state.
The sight line (point 5 to point 6) coincides with the first sensing point 401c. According to the operation model of the three-point operation sensing area, in the same sensing point combination, if the sight line reaches the first sensing point after passing through the second sensing point, the second sensing point is reset to the non-selected state; therefore, the state of the second sensing point 402c is reset to the non-selected state.
The line of sight (point 6 to point 7) coincides with the second sensing point 402e, and the state of the second sensing point 402e is set to the selected state.
The sight line (point 7 to point 8) coincides with the third sensing point 403d, all the sensing points through which the sight line moving path passes are reset to the non-selected state, and an operation flow for reporting the sight line moving path is triggered.
According to the operation model of the three-point operation sensing area, the second sensing points in the selected state on the sight line moving path (point 1 to point 8) are the second sensing point 402b and the second sensing point 402e. Accordingly, menu option b and menu option e, corresponding to the second sensing points 402b and 402e, are in the selected state. It can be understood that when there is one second sensing point in the selected state on the sight line moving path, the user has operated the sensing points in the three-point operation sensing area through sight line movement to select one menu option; when there are multiple second sensing points in the selected state, the user has selected multiple menu options. The interactive control events corresponding to menu option b and menu option e are triggered according to the preset programs of menu option b and menu option e.
Fig. 7 is a block diagram of a human-machine interaction control apparatus according to an embodiment of the present invention. The apparatus shown in fig. 7 is only an example and should not limit the functionality and scope of use of embodiments of the present invention in any way.
Referring to Fig. 7, the apparatus includes a processor 701, a memory 702, and an input-output device 703 connected by a bus. The memory 702 includes a read-only memory (ROM) and a random access memory (RAM); various computer instructions and data required to perform system functions are stored in the memory 702, and the processor 701 reads the computer instructions from the memory 702 to perform various appropriate actions and processes. The input-output device 703 includes an input section such as a keyboard and a mouse; an output section such as a cathode ray tube (CRT) or liquid crystal display (LCD) and a speaker; a storage section such as a hard disk; and a communication section such as a LAN card or a modem network interface card. The memory 702 also stores the following computer instructions to perform the operations specified in the human-computer interaction method of the embodiment of the invention: collecting sight line information of a user; determining a sight line moving path of the user in an operation sensing area based on the sight line information; and determining an operation instruction of the user on an operation object based on the operation model of the operation sensing area and the sight line moving path, wherein the user operates the operation object corresponding to the operation sensing area through sight line movement.
Accordingly, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions that, when executed, implement the operations specified by the above-mentioned human-computer interaction method.
Correspondingly, the embodiment of the invention further provides a computer program product comprising a computer program, the computer program comprising program instructions which, when executed by a mobile terminal, cause the mobile terminal to perform the steps of the above human-computer interaction method.
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of the systems, methods, and apparatuses according to the embodiments of the present invention. Each block may represent a module, a program segment, or a code segment containing executable instructions for implementing the specified logical function. It should also be noted that the executable instructions implementing the specified logical functions may be recombined to create new modules and program segments. The blocks of the drawings, and the order of the blocks, are thus provided to better illustrate the processes and steps of the embodiments and should not be taken as limiting the invention itself.
The above description is only a few embodiments of the present invention, and is not intended to limit the present invention, and various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (24)

1. A human-computer interaction method, comprising:
collecting sight line information of a user;
determining a sight line moving path of the user in an operation sensing area based on the sight line information; and
determining an operation instruction of the user to an operation object based on the operation model of the operation sensing area and the sight line moving path,
wherein the user operates the operation object corresponding to the operation sensing area through line-of-sight movement.
2. The human-computer interaction method according to claim 1, further comprising:
establishing the operational model of the operational sensing zone.
3. The human-computer interaction method of claim 1, wherein the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a plurality of sensing points.
4. The human-computer interaction method of claim 3, wherein the states of the plurality of sensing points comprise: active and inactive states, selected and unselected states,
wherein the state of a sensing point can be set to the selected state only when the sensing point is in the activated state; after a sensing point is activated, it is in the non-selected state, and when the sight line coincides with the sensing point, the sensing point is set to the selected state.
5. The human-computer interaction method of claim 2, wherein the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a first sensing point, a second sensing point and a third sensing point, and
the establishing of the operation model of the operation sensing area comprises:
the first sensing point is used as a starting point of the sight line moving path, and when the sight line coincides with the first sensing point, the state of the first sensing point is set to the selected state;
the second sensing point is used as a selection point of the sight line moving path, and when the sight line coincides with the second sensing point, the state of the second sensing point is set to the selected state;
and the third sensing point is used as a termination point of the sight line moving path, and when the sight line coincides with the third sensing point, the state of the third sensing point is set to the selected state.
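Continuing the sketch above, the start, selection, and termination roles of the three sensing points in one combination might be tracked as follows. This is a hedged illustration built on the earlier SensingPoint sketch: the SensingPointCombination class, the report_path callback, and the reset-on-termination behavior (compare claims 9 and 11) are assumptions, not the claimed implementation.

```python
from typing import Callable, List

class SensingPointCombination:
    def __init__(self, first: SensingPoint, second: SensingPoint, third: SensingPoint,
                 report_path: Callable[[List[SensingPoint]], None]):
        self.first, self.second, self.third = first, second, third
        self.report_path = report_path
        self.path: List[SensingPoint] = []   # sensing points passed by the sight line

    def on_gaze(self, gx: float, gy: float) -> None:
        for point in (self.first, self.second, self.third):
            point.on_gaze(gx, gy)
        if self.first.selection is Selection.SELECTED and self.first not in self.path:
            # The first sensing point starts a sight line moving path.
            self.path = [self.first]
        elif self.second.selection is Selection.SELECTED and self.path and self.second not in self.path:
            # The second sensing point is recorded as the selection point.
            self.path.append(self.second)
        elif self.third.selection is Selection.SELECTED and self.path:
            # The third sensing point terminates the path: report it and
            # reset every point on the path to the unselected state.
            self.path.append(self.third)
            self.report_path(self.path)
            for point in self.path:
                point.selection = Selection.UNSELECTED
            self.path = []
```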
6. The human-computer interaction method of claim 5, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line disappears in the moving process of the operation sensing area, setting the state of the first sensing point which is closest to the position where the sight line disappears and is in the activated state as the selected state, and taking the first sensing point as the starting point of a new sight line moving path.
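As an illustration of this recovery rule, a new starting point could be picked as the nearest first sensing point that is still activated, continuing the sketch above; the restart_point helper and its signature are hypothetical.

```python
import math
from typing import List, Optional

def restart_point(first_points: List[SensingPoint],
                  lost_x: float, lost_y: float) -> Optional[SensingPoint]:
    # Only first sensing points in the activated state are eligible.
    candidates = [p for p in first_points if p.activation is Activation.ACTIVATED]
    if not candidates:
        return None
    # Pick the point closest to where the sight line disappeared and mark it
    # selected so it becomes the start of a new sight line moving path.
    nearest = min(candidates, key=lambda p: math.hypot(p.x - lost_x, p.y - lost_y))
    nearest.selection = Selection.SELECTED
    return nearest
```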
7. The human-computer interaction method of claim 5, wherein establishing the operation model of the operation sensing area further comprises:
in the same sensing point combination, if the sight line passes through the second sensing point and then reaches the first sensing point, the second sensing point is reset to be in a non-selected state.
8. The human-computer interaction method of claim 5, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the process of moving in the operation sensing area, the second sensing points are set to be in a selected state.
9. The human-computer interaction method of claim 5, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line is coincident with the third sensing point, all sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
10. The human-computer interaction method according to any one of claims 4 to 9,
the operation sensing area and the operation object are displayed on a display interface of a terminal application program,
and when the operation sensing area and the operation object are loaded on the display interface of the terminal application program, the sensing points in the plurality of sensing point combinations of the operation sensing area are set to the activated state.
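Under the same assumptions as the earlier sketches, this activation-on-load rule amounts to activating every sensing point of every combination when the interface is drawn; the function name below is illustrative only.

```python
from typing import Iterable

def on_interface_loaded(combinations: Iterable[SensingPointCombination]) -> None:
    # When the operation sensing area and operation objects are loaded on the
    # display interface, every sensing point is placed in the activated state.
    for combination in combinations:
        for point in (combination.first, combination.second, combination.third):
            point.activate()
```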
11. The human-computer interaction method according to claim 10, wherein the determining of the operation instruction of the user on the operation object based on the operation model of the operation sensing area and the sight line moving path comprises:
receiving the reported sight line moving path;
determining, according to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state in the sight line moving path belongs;
and triggering an interactive control event corresponding to the operation object according to a preset program of the operation object.
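A possible dispatch for a reported path, again only a sketch built on the assumptions above: combination_of, object_of, and trigger_event stand in for lookups and for the preset program of the operation object, none of which are specified by the claims.

```python
from typing import Callable, List, Optional

def handle_reported_path(path: List[SensingPoint],
                         combination_of: Callable[[SensingPoint], SensingPointCombination],
                         object_of: Callable[[SensingPointCombination], object],
                         trigger_event: Callable[[object], None]) -> None:
    # Find the selection point: the sensing point on the path that is the
    # second point of its own combination.
    second: Optional[SensingPoint] = next(
        (p for p in path if combination_of(p).second is p), None)
    if second is None:
        return  # no selection point on this path, nothing to trigger
    # Look up the operation object bound to that combination and trigger the
    # interactive control event preset for it.
    trigger_event(object_of(combination_of(second)))
```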
12. A human-computer interaction device, comprising:
an acquisition unit configured to acquire sight line information of a user;
a path determination unit configured to determine a line of sight movement path of the user in an operation sensing area based on the line of sight information; and
an interaction unit configured to determine an operation instruction of the user to an operation object based on an operation model of the operation sensing area and the sight-line moving path,
wherein the user operates the operation object corresponding to the operation sensing area through line-of-sight movement.
13. The human-computer interaction device according to claim 12, further comprising:
an establishing unit configured to establish the operation model of the operation sensing area.
14. The human-computer interaction device according to claim 12, wherein the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises a plurality of sensing points.
15. The human-computer interaction device according to claim 14, wherein the states of the plurality of sensing points comprise: an activated state and an inactivated state, and a selected state and an unselected state,
wherein the state of a sensing point can only be set to the selected state when the sensing point is in the activated state; when the sensing point is activated, the sensing point is in the unselected state, and when the sight line coincides with the sensing point, the sensing point is set to the selected state.
16. The human-computer interaction device according to claim 13, wherein the operation sensing area comprises a plurality of sensing point combinations,
each sensing point combination comprises: a first sensing point, a second sensing point, and a third sensing point; and
establishing the operation model of the operation sensing area comprises:
the first sensing point is used as a starting point of the sight line moving path, and when the sight line coincides with the first sensing point, the state of the first sensing point is set to the selected state;
the second sensing point is used as a selection point of the sight line moving path, and when the sight line coincides with the second sensing point, the state of the second sensing point is set to the selected state;
and the third sensing point is used as a termination point of the sight line moving path, and when the sight line coincides with the third sensing point, the state of the third sensing point is set to the selected state.
17. The human-computer interaction device of claim 16, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line disappears in the moving process of the operation sensing area, setting the state of the first sensing point which is closest to the position where the sight line disappears and is in the activated state as the selected state, and taking the first sensing point as the starting point of a new sight line moving path.
18. The human-computer interaction device of claim 16, wherein establishing the operation model of the operation sensing area further comprises:
in the same sensing point combination, if the sight line passes through the second sensing point and then reaches the first sensing point, the second sensing point is reset to be in a non-selected state.
19. The human-computer interaction device of claim 16, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line starts from the first sensing point and reaches the third sensing point after passing through the second sensing points in the process of moving in the operation sensing area, the second sensing points are set to be in a selected state.
20. The human-computer interaction device of claim 16, wherein establishing the operation model of the operation sensing area further comprises:
if the sight line is coincident with the third sensing point, all sensing points passed by the sight line moving path are reset to be in a non-selected state, and meanwhile, an operation flow for reporting the sight line moving path is triggered.
21. The human-computer interaction device according to any one of claims 15 to 20, wherein
the operation sensing area and the operation object are displayed on a display interface of a terminal application program,
and when the operation sensing area and the operation object are loaded on the display interface of the terminal application program, the sensing points in the plurality of sensing point combinations of the operation sensing area are set to the activated state.
22. The human-computer interaction device according to claim 21, wherein the determining the operation instruction of the user on the operation object based on the operation model of the operation sensing area and the sight line moving path comprises:
receiving the reported sight line moving path;
determining, according to the operation model of the operation sensing area, the operation object corresponding to the sensing point combination to which the second sensing point in the selected state in the sight line moving path belongs;
and triggering an interactive control event corresponding to the operation object according to a preset program of the operation object.
23. A human-computer interaction control device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the human-computer interaction method of any one of the preceding claims 1 to 11.
24. A computer-readable storage medium storing computer instructions which, when executed, implement a human-computer interaction method as claimed in any one of claims 1 to 11.
CN201910434020.2A 2019-05-23 2019-05-23 Man-machine interaction method and device Active CN111752381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910434020.2A CN111752381B (en) 2019-05-23 2019-05-23 Man-machine interaction method and device

Publications (2)

Publication Number Publication Date
CN111752381A (en) 2020-10-09
CN111752381B (en) 2024-06-18

Family

ID=72672896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910434020.2A Active CN111752381B (en) 2019-05-23 2019-05-23 Man-machine interaction method and device

Country Status (1)

Country Link
CN (1) CN111752381B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140152764A1 (en) * 2012-12-04 2014-06-05 Nintendo Co., Ltd. Information processing system, information processing apparatus, storage medium having stored therein information processing program, and information transmission/reception method
CN103336581A (en) * 2013-07-30 2013-10-02 黄通兵 Human eye movement characteristic design-based human-computer interaction method and system
WO2015082817A1 (en) * 2013-12-05 2015-06-11 Op3Ft Method for controlling the interaction with a touch screen and device implementing said method
US20170123491A1 (en) * 2014-03-17 2017-05-04 Itu Business Development A/S Computer-implemented gaze interaction method and apparatus
US20160092099A1 (en) * 2014-09-25 2016-03-31 Wavelight Gmbh Apparatus Equipped with a Touchscreen and Method for Controlling Such an Apparatus
KR20160118568A (en) * 2015-04-02 2016-10-12 한국과학기술원 Method and apparatus for providing information terminal with hmd
CN108008811A (en) * 2016-10-27 2018-05-08 中兴通讯股份有限公司 A kind of method and terminal using non-touch screen mode operating terminal
CN106873774A (en) * 2017-01-12 2017-06-20 北京奇虎科技有限公司 interaction control method, device and intelligent terminal based on eye tracking
US20180329603A1 (en) * 2017-02-27 2018-11-15 Colopl, Inc. Method executed on computer for moving in virtual space, program and information processing apparatus for executing the method on computer
CN107656613A (en) * 2017-09-08 2018-02-02 国网山东省电力公司电力科学研究院 A kind of man-machine interactive system and its method of work based on the dynamic tracking of eye
CN107977560A (en) * 2017-11-23 2018-05-01 北京航空航天大学 Identity identifying method and device based on Eye-controlling focus
CN108279778A (en) * 2018-02-12 2018-07-13 上海京颐科技股份有限公司 User interaction approach, device and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
XIAOMING WANG ET AL.: "A New Type of Eye Movement Model Based on Recurrent Neural Networks for Simulating the Gaze Behavior of Human Reading", COMPLEX DEEP LEARNING AND EVOLUTIONARY COMPUTING MODELS IN COMPUTER VISION *
WANG HAIYAN ET AL.: "Simulation Research on Cognitive Models of Digital Interface Interaction Behavior Based on CogTool", SPACE MEDICINE & MEDICAL ENGINEERING, no. 01 *
CHENG SHIWEI ET AL.: "Eye Movement Methods for Usability Evaluation of Mobile Computing User Interfaces", ACTA ELECTRONICA SINICA, no. 1 *
XIAO ZHIYONG ET AL.: "Human-Computer Interaction Based on Gaze Tracking and Gesture Recognition", COMPUTER ENGINEERING, no. 15, 5 August 2009 (2009-08-05) *
HU WENTING ET AL.: "Research on the Implementation Mechanism of Intelligent Interfaces Based on Gaze Tracking", COMPUTER APPLICATIONS AND SOFTWARE, no. 01, 15 January 2016 (2016-01-15) *

Also Published As

Publication number Publication date
CN111752381B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN108287657B (en) Skill applying method and device, storage medium and electronic equipment
EP3293620A1 (en) Multi-screen control method and system for display screen based on eyeball tracing technology
JP5618904B2 (en) System, method and computer program for interactive filter (system and method for interactive filter)
CN106843498A (en) Dynamic interface exchange method and device based on virtual reality
CN107122119B (en) Information processing method, information processing device, electronic equipment and computer readable storage medium
US20140049462A1 (en) User interface element focus based on user's gaze
CN105892642A (en) Method and device for controlling terminal according to eye movement
EP2266014B1 (en) Apparatus to create, save and format text documents using gaze control and associated method
US10488918B2 (en) Analysis of user interface interactions within a virtual reality environment
CN112286434A (en) Suspension button display method and terminal equipment
CN108958577A (en) Window operation method, apparatus, wearable device and medium based on wearable device
CN111722708B (en) Eye movement-based multi-dimensional geographic information self-adaptive intelligent interaction method and device
CN110032296A (en) Determination method, apparatus, terminal and the storage medium of virtual objects in terminal
CN107085489A (en) A kind of control method and electronic equipment
CN109246292B (en) Method and device for moving terminal desktop icons
WO2018000606A1 (en) Virtual-reality interaction interface switching method and electronic device
CN114063845A (en) Display method, display device and electronic equipment
US9612683B2 (en) Operation method of touch screen with zooming-in function and touch screen device
CN111752381B (en) Man-machine interaction method and device
US20170108924A1 (en) Zoom effect in gaze tracking interface
KR20180058097A (en) Electronic device for displaying image and method for controlling thereof
CN113495616A (en) Terminal display control method, terminal, and computer-readable storage medium
CN115202524B (en) Display method and device
CN108499102B (en) Information interface display method and device, storage medium and electronic equipment
CN115002551A (en) Video playing method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant