CN107544660B - Information processing method and electronic equipment - Google Patents


Info

Publication number
CN107544660B
Authority
CN
China
Prior art keywords
target object
determining
gaze
parameters
eyeball
Prior art date
Legal status
Active
Application number
CN201610472170.9A
Other languages
Chinese (zh)
Other versions
CN107544660A (en)
Inventor
杨大业
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201610472170.9A
Publication of CN107544660A
Application granted
Publication of CN107544660B

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an information processing method and an electronic device. The method comprises the following steps: acquiring gaze parameters within a preset time period, wherein the gaze parameters comprise position data of an eyeball; performing feature extraction on the gaze parameters to obtain feature parameters; determining a type identifier matching the feature parameters according to a preset strategy; and determining, according to the type identifier, the type of the target object gazed at by the eyeball.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to information processing technologies, and in particular, to an information processing method and an electronic device.
Background
With the popularity of Augmented Reality (AR) and Virtual Reality (VR) technologies, AR/VR applications have rapidly advanced from professional industrial uses to consumer entertainment. The usage scenarios of AR/VR have also spread from relatively fixed locations, such as design studios and laboratories, to places of daily life. Mobile AR/VR application scenarios are becoming richer, for example games and education. Because the usage scenarios and technical basis of AR/VR devices differ greatly from those of traditional terminals such as notebook PCs and mobile phones, conventional input devices such as the mouse and keyboard cannot be applied to AR/VR devices. Head-mounted eye tracking is a technique well suited to mobile AR/VR applications.
One application scenario of head-mounted eye tracking technology is user interaction with the physical world to implement augmented reality (AR), for example: acquiring additional academic information or explanations at any time while reading a textbook, or obtaining in-depth video information while reading a newspaper. To achieve this, knowing in advance the type of paper media the user is reading improves the user experience of AR applications. Knowing the type of paper media the user reads also makes it possible to further understand the user's knowledge structure and knowledge level. Based on this, how to determine the type of paper media read by the user is a problem to be solved.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide an information processing method and an electronic device.
The information processing method provided by the embodiment of the invention comprises the following steps:
acquiring gaze parameters within a preset time period, wherein the gaze parameters comprise position data of an eyeball;
performing feature extraction on the gaze parameters to obtain feature parameters;
determining a type identifier matching the feature parameters according to a preset strategy;
and determining, according to the type identifier, the type of the target object gazed at by the eyeball.
In the embodiment of the present invention, the performing feature extraction on the gaze parameters to obtain feature parameters comprises:
performing feature extraction on the gaze parameters to obtain the following feature parameters: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
In the embodiment of the present invention, the method further includes:
after the gaze parameters within the preset time period are obtained, determining a gaze point of the eyeball on the target object according to the gaze parameters and the position of the target object;
wherein the performing feature extraction on the gaze parameters to obtain feature parameters comprises:
performing feature extraction on the gaze point of the eyeball on the target object to obtain the feature parameters.
In the embodiment of the present invention, the determining, according to a preset strategy, a type identifier matching the feature parameters comprises:
determining, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout;
determining the size of the target object according to the saccade distance;
and determining the saccade direction of the eyeball according to the slope.
In an embodiment of the present invention, the determining, according to the type identifier, the type of the target object gazed at by the eyeball comprises:
determining the paper media type of the target object according to the type identifier, wherein the paper media type is used to represent a physical attribute of the target object.
The electronic device provided by the embodiment of the invention comprises:
an acquisition unit, configured to acquire gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of an eyeball;
an extraction unit, configured to perform feature extraction on the gaze parameters to obtain feature parameters;
a first determining unit, configured to determine, according to a preset strategy, a type identifier matching the feature parameters;
and a second determining unit, configured to determine, according to the type identifier, the type of the target object gazed at by the eyeball.
In an embodiment of the present invention, the extraction unit is further configured to perform feature extraction on the gaze parameters to obtain the following feature parameters: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
In an embodiment of the present invention, the electronic device further includes:
a third determining unit, configured to determine, after the gaze parameters within the preset time period are obtained, a gaze point of the eyeball on the target object according to the gaze parameters and the position of the target object;
wherein the extraction unit is further configured to perform feature extraction on the gaze point of the eyeball on the target object to obtain the feature parameters.
In an embodiment of the present invention, the first determining unit is further configured to determine, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout; determine the size of the target object according to the saccade distance; and determine the saccade direction of the eyeball according to the slope.
In an embodiment of the present invention, the second determining unit is further configured to determine the paper media type of the target object according to the type identifier, where the paper media type is used to represent a physical attribute of the target object.
According to the technical solutions of the embodiments of the present invention, gaze parameters within a preset time period are acquired, wherein the gaze parameters comprise one or more items of position data of the eyeball; feature extraction is performed on the gaze parameters to obtain feature parameters; a type identifier matching the feature parameters is determined according to a preset strategy; and the type of the target object gazed at by the eyeball is determined according to the type identifier. In this way, the type of content read by the user is automatically identified through eye tracking; here, the type of the target object is the type of paper media the user is reading. Knowing in advance the type of paper media the user reads can improve the user experience of AR applications, and at the same time reveals the user's knowledge structure and level.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention;
Fig. 2 is a schematic flowchart of an information processing method according to a second embodiment of the present invention;
Fig. 3 is a schematic flowchart of an information processing method according to a third embodiment of the present invention;
Fig. 4 is a schematic view of smart glasses according to an embodiment of the present invention;
Fig. 5 is a schematic illustration of gaze parameters according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of feature parameters according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
So that the features and aspects of the embodiments of the present invention can be understood in detail, a more particular description of the embodiments, briefly summarized above, is given below with reference to the appended drawings, which illustrate some of the embodiments.
Fig. 1 is a schematic flowchart of an information processing method according to a first embodiment of the present invention. The information processing method in this example is applied to an electronic device and, as shown in Fig. 1, includes the following steps:
Step 101: acquire gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of the eyeball.
In the embodiment of the present invention, the electronic device has an eye tracking device. The eye tracking device works as follows: it emits invisible infrared light toward the user, and two built-in cameras search for and capture the glints of the user's eyeballs and the reflection of the retina; from the images obtained by the cameras, the direction in which the eyeballs gaze, that is, the position data of the eyeballs, can be determined.
Since the eye tracking device collects position data of the eyeball, it is usually disposed in a head-mounted device, such as smart glasses or a smart helmet.
Referring to Fig. 4, Fig. 4 is a schematic view of a pair of smart glasses. A user wears the smart glasses on the head, and the smart glasses are provided with an eye tracking device. Different users have different eye and head structures, so after the eye tracking device is turned on, a personalized calibration of the user is first required; only after the user is calibrated can the eye tracking device accurately acquire the position data of the user's eyeballs.
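The pupil-glint principle and the per-user calibration described above are commonly realized as a polynomial mapping from the pupil-to-glint vector to gaze coordinates. The sketch below is a minimal illustration of that standard approach, not the patent's concrete implementation; the second-order feature vector and the `coeffs` calibration matrix are assumptions:

```python
import numpy as np

def estimate_gaze(pupil_xy, glint_xy, coeffs):
    """Map the pupil-glint vector to gaze coordinates via a second-order
    polynomial whose coefficients come from the per-user calibration.

    coeffs: assumed calibration matrix of shape (2, 6), fitted while the
    user fixates known targets during the calibration step.
    """
    vx, vy = np.subtract(pupil_xy, glint_xy)     # pupil center minus IR glint
    feats = np.array([1.0, vx, vy, vx * vy, vx ** 2, vy ** 2])
    return coeffs @ feats                        # (x, y) position data
```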
In the embodiment of the invention, the gaze parameters within the preset time period are acquired, and the gaze parameters comprise one or more items of position data of the eyeball. Specifically, changes in the gaze parameters over a period of time are acquired by the eye tracking device; these changes are a series of position data of the eyeball. The eye tracking device collects the position data of the eyeball in real time, yielding discrete (gaze point, time) data; as shown in Fig. 5, the X and Y coordinate axes respectively represent the abscissa and ordinate of the eyeball position data.
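For illustration, the discrete (gaze point, time) data can be modeled as a time-stamped series. A minimal sketch, assuming a hypothetical `tracker.sample()` API and a 60 Hz sampling rate:

```python
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # seconds since the start of the recording
    x: float  # abscissa of the eyeball position data
    y: float  # ordinate of the eyeball position data

def collect_gaze(tracker, duration_s=5.0, rate_hz=60.0):
    """Collect discrete (gaze point, time) data for the preset time period."""
    samples, t0 = [], time.monotonic()
    while (now := time.monotonic()) - t0 < duration_s:
        x, y = tracker.sample()                  # hypothetical tracker API
        samples.append(GazeSample(now - t0, x, y))
        time.sleep(1.0 / rate_hz)
    return samples
```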
Step 102: perform feature extraction on the gaze parameters to obtain feature parameters.
In the embodiment of the present invention, the obtained feature parameters include: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
Referring to Fig. 6: the first panel shows the number of saccade directions; the second panel illustrates the saccade distance, specifically the 5%-95% quantile distance; the third panel illustrates the mean and variance of the saccade direction; and the fourth panel illustrates the slope.
In the embodiment of the invention, feature extraction is performed on the gaze parameters by a sliding window algorithm, wherein:
1) Number of saccade directions: the four directions correspond to the four directions of the Euclidean coordinate axes, allowing for some error, for example 20 degrees. The number of saccade directions indicates information such as the character direction, the picture proportion, and the layout.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope, obtained by linear regression: indicates the overall reading direction.
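The four features can be computed per window as follows. This is an illustrative sketch rather than the patent's implementation: the 20-degree tolerance and the 5%-95% quantile distance follow the text, while the function name and window handling are assumptions:

```python
import numpy as np

def extract_features(xs, ys, tol_deg=20.0):
    """Compute the four feature parameters over one window of gaze points."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    dx, dy = np.diff(xs), np.diff(ys)
    angles = np.degrees(np.arctan2(dy, dx))      # saccade direction of each step
    dists = np.hypot(dx, dy)                     # saccade amplitude of each step

    # 1) number of saccade directions: how many of the four axis-aligned
    #    directions (0, 90, 180, -90 degrees) are hit within the tolerance
    axes = (0.0, 90.0, 180.0, -90.0)
    n_directions = sum(
        any(min(abs(a - ang), 360.0 - abs(a - ang)) <= tol_deg for ang in angles)
        for a in axes)

    # 2) saccade distance: 5%-95% quantile spread of the amplitudes
    saccade_dist = np.quantile(dists, 0.95) - np.quantile(dists, 0.05)

    # 3) mean and variance of the saccade direction
    dir_mean, dir_var = angles.mean(), angles.var()

    # 4) slope by linear regression over the gaze points (overall reading direction)
    slope = np.polyfit(xs, ys, 1)[0]

    return n_directions, saccade_dist, dir_mean, dir_var, slope
```

Over a full recording, the function would be applied to successive windows, for example `extract_features(xs[i:i+w], ys[i:i+w])` for a window length `w` and some stride; this is the sliding-window element the text describes.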
Step 103: determine a type identifier matching the feature parameters according to a preset strategy, and determine, according to the type identifier, the type of the target object gazed at by the eyeball.
In the embodiment of the present invention, the feature parameters are processed by a classification tree algorithm to obtain the type of paper media read by the user, where the paper media type includes, but is not limited to: textbooks, newspapers, magazines, comics, and the like.
Here, the classification tree algorithm is also called a decision tree algorithm. A decision tree is a prediction model that represents a mapping between object attributes and object values. Each node in the tree represents an object, each divergent path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf node. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of generating decision trees from data is called decision tree learning. In short, a classification tree is a prediction tree built by training on classified examples, which classifies new observations according to what it has learned. The embodiment of the invention uses a classification tree algorithm to process the feature parameters to obtain the type of paper media read by the user.
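As a hedged sketch of this step, a generic decision tree from scikit-learn can stand in for the classification tree algorithm. The feature layout follows the four features above, but the training rows, labels, and values here are invented purely for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

MEDIA_TYPES = ["textbook", "newspaper", "magazine", "comic"]

# Feature rows: [n_saccade_directions, saccade_distance, dir_mean, dir_var, slope]
X_train = [
    [2, 120.0,  -2.0,  150.0, 0.05],   # textbook-like: long horizontal sweeps
    [4,  60.0,  40.0,  900.0, 0.30],   # newspaper-like: multi-column jumps
    [3,  80.0,  10.0,  400.0, 0.10],   # magazine-like: mixed text and pictures
    [4,  50.0,  60.0, 1200.0, 0.45],   # comic-like: panel-to-panel scanning
]
y_train = [0, 1, 2, 3]

clf = DecisionTreeClassifier().fit(X_train, y_train)

def classify(features):
    """Map one feature vector to a paper media type label."""
    return MEDIA_TYPES[clf.predict([features])[0]]
```

In practice the tree would be trained on many labeled recordings per media type rather than the single rows shown here.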
According to the technical solution of this embodiment of the present invention, the position information of the user's eyeballs is detected in real time by the eye tracking device, so that the type of paper media read by the user is determined, from which the user's knowledge level and reading preferences can be judged.
Fig. 2 is a schematic flowchart of an information processing method according to a second embodiment of the present invention. The information processing method in this example is applied to an electronic device and, as shown in Fig. 2, includes the following steps:
Step 201: acquire gaze parameters within a preset time period, wherein the gaze parameters comprise position data of the eyeball; and determine, according to the gaze parameters and the position of the target object, a gaze point of the eyeball on the target object.
In the embodiment of the present invention, the electronic device has an eye tracking device. The eye tracking device works as follows: it emits invisible infrared light toward the user, and two built-in cameras search for and capture the glints of the user's eyeballs and the reflection of the retina; from the images obtained by the cameras, the direction in which the eyeballs gaze, that is, the position data of the eyeballs, can be determined.
Since the eye tracking device collects position data of the eyeball, it is usually disposed in a head-mounted device, such as smart glasses or a smart helmet.
Referring to Fig. 4, Fig. 4 is a schematic view of a pair of smart glasses. A user wears the smart glasses on the head, and the smart glasses are provided with an eye tracking device. Different users have different eye and head structures, so after the eye tracking device is turned on, a personalized calibration of the user is first required; only after the user is calibrated can the eye tracking device accurately acquire the position data of the user's eyeballs.
In the embodiment of the invention, the gaze parameters within the preset time period are acquired, and the gaze parameters comprise one or more items of position data of the eyeball. Specifically, changes in the gaze parameters over a period of time are acquired by the eye tracking device; these changes are a series of position data of the eyeball. The eye tracking device collects the position data of the eyeball in real time, yielding discrete (gaze point, time) data; as shown in Fig. 5, the X and Y coordinate axes respectively represent the abscissa and ordinate of the eyeball position data.
In the embodiment of the present invention, the gaze parameters comprise one or more items of position data of the eyeball; combining these with the position of the target object relative to the eyeball, the gaze point of the eyeball on the target object, that is, which part of the target object the eyeball is viewing, can be determined.
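One way to make this mapping concrete is to model the target object as a plane and intersect the gaze ray with it. This is a minimal sketch under assumed geometry (known eye position, gaze direction, and object plane), not the patent's stated method:

```python
import numpy as np

def gaze_point_on_object(eye_pos, gaze_dir, plane_origin, plane_normal):
    """Intersect the gaze ray with the target object's plane; returns the
    gaze point on the object, or None if the eyeball looks away from it."""
    eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
    plane_origin = np.asarray(plane_origin, float)
    plane_normal = np.asarray(plane_normal, float)
    denom = gaze_dir.dot(plane_normal)
    if abs(denom) < 1e-9:
        return None                      # gaze is parallel to the object plane
    t = (plane_origin - eye_pos).dot(plane_normal) / denom
    return eye_pos + t * gaze_dir if t > 0 else None
```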
Step 202: perform feature extraction on the gaze point of the eyeball on the target object to obtain feature parameters.
In the embodiment of the present invention, the obtained feature parameters include: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
Referring to Fig. 6: the first panel shows the number of saccade directions; the second panel illustrates the saccade distance, specifically the 5%-95% quantile distance; the third panel illustrates the mean and variance of the saccade direction; and the fourth panel illustrates the slope.
In the embodiment of the invention, feature extraction is performed on the gaze parameters by a sliding window algorithm, as in the sketch shown after the list in the first embodiment, wherein:
1) Number of saccade directions: the four directions correspond to the four directions of the Euclidean coordinate axes, allowing for some error, for example 20 degrees. The number of saccade directions indicates information such as the character direction, the picture proportion, and the layout.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope, obtained by linear regression: indicates the overall reading direction.
Step 203: determine a type identifier matching the feature parameters according to a preset strategy, and determine, according to the type identifier, the type of the target object gazed at by the eyeball.
In the embodiment of the present invention, the feature parameters are processed by a classification tree algorithm to obtain the type of paper media read by the user, where the paper media type includes, but is not limited to: textbooks, newspapers, magazines, comics, and the like.
Here, the classification tree algorithm is also called a decision tree algorithm. A decision tree is a prediction model that represents a mapping between object attributes and object values. Each node in the tree represents an object, each divergent path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf node. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of generating decision trees from data is called decision tree learning. In short, a classification tree is a prediction tree built by training on classified examples, which classifies new observations according to what it has learned. The embodiment of the invention uses a classification tree algorithm to process the feature parameters to obtain the type of paper media read by the user.
According to the technical solution of this embodiment of the present invention, the position information of the user's eyeballs is detected in real time by the eye tracking device, so that the type of paper media read by the user is determined, from which the user's knowledge level and reading preferences can be judged.
Fig. 3 is a schematic flowchart of an information processing method according to a third embodiment of the present invention. The information processing method in this example is applied to an electronic device and, as shown in Fig. 3, includes the following steps:
Step 301: acquire gaze parameters within a preset time period, wherein the gaze parameters comprise position data of the eyeball; and determine, according to the gaze parameters and the position of the target object, a gaze point of the eyeball on the target object.
In the embodiment of the present invention, the electronic device has an eye tracking device. The eye tracking device works as follows: it emits invisible infrared light toward the user, and two built-in cameras search for and capture the glints of the user's eyeballs and the reflection of the retina; from the images obtained by the cameras, the direction in which the eyeballs gaze, that is, the position data of the eyeballs, can be determined.
Since the eye tracking device collects position data of the eyeball, it is usually disposed in a head-mounted device, such as smart glasses or a smart helmet.
Referring to Fig. 4, Fig. 4 is a schematic view of a pair of smart glasses. A user wears the smart glasses on the head, and the smart glasses are provided with an eye tracking device. Different users have different eye and head structures, so after the eye tracking device is turned on, a personalized calibration of the user is first required; only after the user is calibrated can the eye tracking device accurately acquire the position data of the user's eyeballs.
In the embodiment of the invention, the gaze parameters within the preset time period are acquired, and the gaze parameters comprise one or more items of position data of the eyeball. Specifically, changes in the gaze parameters over a period of time are acquired by the eye tracking device; these changes are a series of position data of the eyeball. The eye tracking device collects the position data of the eyeball in real time, yielding discrete (gaze point, time) data; as shown in Fig. 5, the X and Y coordinate axes respectively represent the abscissa and ordinate of the eyeball position data.
In the embodiment of the present invention, the gaze parameters comprise one or more items of position data of the eyeball; combining these with the position of the target object relative to the eyeball, the gaze point of the eyeball on the target object, that is, which part of the target object the eyeball is viewing, can be determined.
Step 302: perform feature extraction on the gaze point of the eyeball on the target object to obtain feature parameters.
In the embodiment of the present invention, the obtained feature parameters include: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
Referring to Fig. 6: the first panel shows the number of saccade directions; the second panel illustrates the saccade distance, specifically the 5%-95% quantile distance; the third panel illustrates the mean and variance of the saccade direction; and the fourth panel illustrates the slope.
In the embodiment of the invention, feature extraction is performed on the gaze parameters by a sliding window algorithm, wherein:
1) Number of saccade directions: the four directions correspond to the four directions of the Euclidean coordinate axes, allowing for some error, for example 20 degrees. The number of saccade directions indicates information such as the character direction, the picture proportion, and the layout.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope, obtained by linear regression: indicates the overall reading direction.
Step 303: determine, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout; determine the size of the target object according to the saccade distance; determine the saccade direction of the eyeball according to the slope; and determine, according to this information, the paper media type of the target object, wherein the paper media type is used to represent the physical attributes of the target object.
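To illustrate how each feature feeds one intermediate judgment in step 303, here is an assumed rule sketch; every numeric threshold is invented for illustration and is not taken from the patent:

```python
def describe_target(n_directions, saccade_dist, slope):
    """Intermediate judgments of step 303 from three of the features."""
    layout = ("mixed text and pictures, multi-directional layout"
              if n_directions >= 3 else "plain single-direction text")  # assumed cutoff
    size = "large-format media" if saccade_dist > 100 else "small-format media"  # assumed cutoff
    reading = "mostly vertical reading" if abs(slope) > 1 else "mostly horizontal reading"
    return layout, size, reading
```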
In the embodiment of the present invention, the feature parameters are processed by a classification tree algorithm to obtain the type of paper media read by the user, where the paper media type includes, but is not limited to: textbooks, newspapers, magazines, comics, and the like.
Here, the classification tree algorithm is also called a decision tree algorithm. A decision tree is a prediction model that represents a mapping between object attributes and object values. Each node in the tree represents an object, each divergent path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf node. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of generating decision trees from data is called decision tree learning. In short, a classification tree is a prediction tree built by training on classified examples, which classifies new observations according to what it has learned. The embodiment of the invention uses a classification tree algorithm to process the feature parameters to obtain the type of paper media read by the user.
According to the technical solution of this embodiment of the present invention, the position information of the user's eyeballs is detected in real time by the eye tracking device, so that the type of paper media read by the user is determined, from which the user's knowledge level and reading preferences can be judged.
Fig. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. As shown in Fig. 7, the electronic device includes:
an acquisition unit 71, configured to acquire gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of an eyeball;
an extraction unit 72, configured to perform feature extraction on the gaze parameters to obtain feature parameters;
a first determining unit 73, configured to determine, according to a preset strategy, a type identifier matching the feature parameters;
a second determining unit 74, configured to determine, according to the type identifier, the type of the target object gazed at by the eyeball.
Those skilled in the art will understand that the functions of each unit in the electronic device shown in Fig. 7 can be understood by referring to the related description of the information processing method above.
Fig. 8 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. As shown in Fig. 8, the electronic device includes:
an acquisition unit 81, configured to acquire gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of an eyeball;
an extraction unit 82, configured to perform feature extraction on the gaze parameters to obtain feature parameters;
a first determining unit 83, configured to determine, according to a preset strategy, a type identifier matching the feature parameters;
a second determining unit 84, configured to determine, according to the type identifier, the type of the target object gazed at by the eyeball.
The extraction unit 82 is further configured to perform feature extraction on the gaze parameters to obtain the following feature parameters: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
The electronic device further includes:
a third determining unit 85, configured to determine, after the gaze parameters within the preset time period are acquired, a gaze point of the eyeball on the target object according to the gaze parameters and the position of the target object;
wherein the extraction unit 82 is further configured to perform feature extraction on the gaze point of the eyeball on the target object to obtain the feature parameters.
The first determining unit 83 is further configured to determine, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout; determine the size of the target object according to the saccade distance; and determine the saccade direction of the eyeball according to the slope.
The second determining unit 84 is further configured to determine a paper media type of the target object according to the type identifier, where the paper media type is used to represent a physical attribute of the target object.
Those skilled in the art will understand that the functions of each unit in the electronic device shown in Fig. 8 can be understood by referring to the related description of the information processing method above.
The technical solutions described in the embodiments of the present invention can be combined arbitrarily, provided there is no conflict.
In the embodiments provided by the present invention, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation, such as: multiple units or components may be combined, or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can easily conceive of within the technical scope disclosed by the present invention shall fall within the scope of the present invention.

Claims (8)

1. An information processing method comprising:
acquiring gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of an eyeball;
performing feature extraction on the gaze parameters to obtain feature parameters;
determining a type identifier matching the feature parameters according to a preset strategy;
and determining, according to the type identifier, the type of a target object gazed at by the eyeball;
wherein the determining a type identifier matching the feature parameters according to a preset strategy comprises:
determining, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout;
determining the size of the target object according to the saccade distance;
and determining the saccade direction of the eyeball according to the slope;
and the determining, according to the type identifier, the type of the target object gazed at by the eyeball comprises:
determining the paper media type of the target object according to the type identifier.
2. The information processing method according to claim 1, wherein the performing feature extraction on the gaze parameters to obtain feature parameters comprises:
performing feature extraction on the gaze parameters to obtain the following feature parameters: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
3. The information processing method according to claim 1, further comprising:
after the gaze parameters within the preset time period are obtained, determining a gaze point of the eyeball on the target object according to the gaze parameters and the position of the target object;
wherein the performing feature extraction on the gaze parameters to obtain feature parameters comprises:
performing feature extraction on the gaze point of the eyeball on the target object to obtain the feature parameters.
4. The information processing method according to claim 1, wherein the paper media type is used to characterize a physical attribute of the target object.
5. An electronic device, comprising:
an acquisition unit, configured to acquire gaze parameters within a preset time period, wherein the gaze parameters comprise one or more items of position data of an eyeball;
an extraction unit, configured to perform feature extraction on the gaze parameters to obtain feature parameters;
a first determining unit, configured to determine, according to a preset strategy, a type identifier matching the feature parameters;
and a second determining unit, configured to determine, according to the type identifier, the type of a target object gazed at by the eyeball;
wherein the first determining unit is further configured to determine, according to the number of saccade directions, at least the following information about the target object: character direction, picture proportion, character layout, and picture layout; determine the size of the target object according to the saccade distance; and determine the saccade direction of the eyeball according to the slope;
and the second determining unit is further configured to determine the paper media type of the target object according to the type identifier.
6. The electronic device of claim 5, wherein the extraction unit is further configured to perform feature extraction on the gaze parameters to obtain the following feature parameters: the number of saccade directions, the saccade distance, the mean and variance of the saccade direction, and the slope.
7. The electronic device of claim 5, further comprising:
a third determining unit, configured to determine, after the gaze parameters within the preset time period are obtained, a gaze point of the eyeball on the target object according to the gaze parameters and the position of the target object;
wherein the extraction unit is further configured to perform feature extraction on the gaze point of the eyeball on the target object to obtain the feature parameters.
8. The electronic device of claim 5, wherein the paper media type is used to characterize a physical attribute of the target object.
CN201610472170.9A 2016-06-24 2016-06-24 Information processing method and electronic equipment Active CN107544660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610472170.9A CN107544660B (en) 2016-06-24 2016-06-24 Information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610472170.9A CN107544660B (en) 2016-06-24 2016-06-24 Information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN107544660A CN107544660A (en) 2018-01-05
CN107544660B 2020-12-18

Family

ID=60960112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610472170.9A Active CN107544660B (en) 2016-06-24 2016-06-24 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN107544660B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI690826B (en) * 2018-05-23 2020-04-11 國立臺灣師範大學 Method of generating amplified content
CN111159678B (en) * 2019-12-26 2023-08-18 联想(北京)有限公司 Identity recognition method, device and storage medium
CN115064275B (en) * 2022-08-19 2022-12-02 山东心法科技有限公司 Method, equipment and medium for quantifying and training children computing capacity


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317113B1 (en) * 2012-05-31 2016-04-19 Amazon Technologies, Inc. Gaze assisted object recognition
EP2936300A1 (en) * 2012-12-19 2015-10-28 Qualcomm Incorporated Enabling augmented reality using eye gaze tracking
CN105518666A (en) * 2013-08-29 2016-04-20 索尼公司 Information processing device and information processing method
CN104199544A (en) * 2014-08-28 2014-12-10 华南理工大学 Targeted advertisement delivery method based on eye gaze tracking
CN104504390A (en) * 2015-01-14 2015-04-08 北京工业大学 On-line user state recognition method and device based on eye movement data

Also Published As

Publication number Publication date
CN107544660A (en) 2018-01-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant