CN107544660A - An information processing method and an electronic device - Google Patents
An information processing method and an electronic device
- Publication number
- CN107544660A (application CN201610472170.9A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- eyeball
- target object
- gaze
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention discloses an information processing method and an electronic device. The method includes: acquiring gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball; performing feature extraction on the gaze parameters to obtain characteristic parameters; determining, according to a preset strategy, a type identifier that matches the characteristic parameters; and determining, according to the type identifier, the type of the target object at which the eyeball gazes.
Description
Technical field
The present invention relates to information processing technologies, and in particular to an information processing method and an electronic device.
Background technology
With the popularization of augmented reality (AR) and virtual reality (VR) technologies, AR/VR applications are rapidly moving from specialized commercial use toward consumer entertainment. The usage scenarios of AR/VR are also spreading from relatively fixed places, such as design offices and laboratories, into everyday life, and mobile AR/VR application scenarios, such as games and education, are increasingly rich. Because the usage scenarios and technical foundations of AR/VR devices differ greatly from those of traditional terminals such as notebook computers (PCs) and mobile phones, traditional input devices such as the mouse and keyboard cannot be applied to AR/VR devices. Head-mounted eye tracking is a technology well suited to mobile AR/VR applications.
One application scenario of head-mounted eye tracking is for a user to interact with the physical world to realize augmented reality (AR), for example, obtaining extra information or explanations at any time while reading a textbook, or obtaining in-depth video information while reading a newspaper. To achieve this, knowing in advance the type of paper media the user is reading can improve the user experience of AR applications. Understanding the type of paper media the user reads also gives further insight into the user's knowledge structure and level. On this basis, how to identify the type of paper media the user is reading is a problem to be solved.
Summary of the invention
To solve the above technical problem, embodiments of the present invention provide an information processing method and an electronic device.
The information processing method provided in an embodiment of the present invention includes:
acquiring gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball;
performing feature extraction on the gaze parameters to obtain characteristic parameters;
determining, according to a preset strategy, a type identifier that matches the characteristic parameters; and
determining, according to the type identifier, the type of the target object at which the eyeball gazes.
In an embodiment of the present invention, performing feature extraction on the gaze parameters to obtain the characteristic parameters includes:
performing feature extraction on the gaze parameters to obtain the following characteristic parameters: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
In an embodiment of the present invention, the method further includes:
after acquiring the gaze parameters within the preset duration, determining, according to the gaze parameters and the position of the target object, the fixation points of the eyeball on the target object;
and performing feature extraction on the gaze parameters to obtain the characteristic parameters includes:
performing feature extraction on the fixation points of the eyeball on the target object to obtain the characteristic parameters.
In an embodiment of the present invention, determining, according to the preset strategy, the type identifier that matches the characteristic parameters includes:
determining, according to the saccade direction count, at least the following information about the target object: text direction, picture ratio, text layout, and picture layout;
determining, according to the saccade distance, the size of the target object; and
determining, according to the slope, the saccade direction of the eyeball.
In an embodiment of the present invention, determining, according to the type identifier, the type of the target object at which the eyeball gazes includes:
determining, according to the type identifier, the paper media type of the target object, where the paper media type characterizes the physical attributes of the target object.
The electronic device provided in an embodiment of the present invention includes:
an acquiring unit, configured to acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball;
an extraction unit, configured to perform feature extraction on the gaze parameters to obtain characteristic parameters;
a first determining unit, configured to determine, according to a preset strategy, a type identifier that matches the characteristic parameters; and
a second determining unit, configured to determine, according to the type identifier, the type of the target object at which the eyeball gazes.
In an embodiment of the present invention, the extraction unit is further configured to perform feature extraction on the gaze parameters to obtain the following characteristic parameters: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
In an embodiment of the present invention, the electronic device further includes:
a third determining unit, configured to determine, after the gaze parameters within the preset duration are acquired, the fixation points of the eyeball on the target object according to the gaze parameters and the position of the target object;
and the extraction unit is further configured to perform feature extraction on the fixation points of the eyeball on the target object to obtain the characteristic parameters.
In an embodiment of the present invention, the first determining unit is further configured to determine, according to the saccade direction count, at least the following information about the target object: text direction, picture ratio, text layout, and picture layout; determine, according to the saccade distance, the size of the target object; and determine, according to the slope, the saccade direction of the eyeball.
In an embodiment of the present invention, the second determining unit is further configured to determine, according to the type identifier, the paper media type of the target object, where the paper media type characterizes the physical attributes of the target object.
In the technical solutions of the embodiments of the present invention, gaze parameters within a preset duration are acquired, the gaze parameters including a series of position data of an eyeball; feature extraction is performed on the gaze parameters to obtain characteristic parameters; a type identifier that matches the characteristic parameters is determined according to a preset strategy; and the type of the target object at which the eyeball gazes is determined according to the type identifier. In this way, the type of content the user is reading is identified automatically through eye tracking; here, the type of the target object is the paper media type the user is reading. Knowing in advance the type of paper media the user reads can improve the user experience of AR applications, and also helps reveal the user's knowledge structure and level.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the information processing method of embodiment one of the present invention;
Fig. 2 is a schematic flowchart of the information processing method of embodiment two of the present invention;
Fig. 3 is a schematic flowchart of the information processing method of embodiment three of the present invention;
Fig. 4 is a schematic diagram of smart glasses according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of gaze parameters according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of characteristic parameters according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the electronic device of embodiment four of the present invention;
Fig. 8 is a schematic structural diagram of the electronic device of embodiment five of the present invention.
Detailed description of the embodiments
To allow a more thorough understanding of the features and technical content of the embodiments of the present invention, the implementation of the embodiments is described in detail below with reference to the accompanying drawings. The drawings are for reference and illustration only and are not intended to limit the embodiments of the present invention.
Fig. 1 is a schematic flowchart of the information processing method of embodiment one of the present invention. The information processing method in this example is applied to an electronic device. As shown in Fig. 1, the information processing method includes the following steps:
Step 101: acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball.
In this embodiment of the present invention, the electronic device has an eye tracking device, whose working principle is as follows: invisible infrared light is emitted toward the user, and two built-in cameras then capture the glint and the retinal reflection of the user's eyeball; from the images obtained by the cameras, the direction of the eyeball's gaze, that is, the position data of the eyeball, can be determined.
Because what the eye tracking device collects is the position data of the eyeball, eye tracking devices are generally arranged in wearable devices such as smart glasses and smart helmets.
Referring to Fig. 4, which is a schematic diagram of smart glasses, the smart glasses are worn on the user's head, and an eye tracking device is arranged in the smart glasses. Because different users have different eye structures and head structures, after the eye tracking device is switched on, the user must first be individually calibrated; once calibrated, the eye tracking device can accurately collect the position data of that user's eyeball.
In this embodiment of the present invention, gaze parameters within a preset duration are acquired, the gaze parameters including a series of position data of the eyeball. Specifically, the variation of the gaze parameters over a period of time is obtained through the eye tracking device; this variation amounts to a series of eyeball position data. The eye tracking device collects the position data of the eyeball in real time and obtains discrete fixation-point and time data; referring to Fig. 5, the X and Y coordinate axes represent the horizontal and vertical coordinates of the eyeball position data, respectively.
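The acquisition step described above, sampling eyeball position data for a preset duration, can be sketched as follows. This is a minimal illustration, not the patented implementation: the `read_sample` callback stands in for the eye tracking device, and all names are assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # time since acquisition started, in seconds
    x: float   # horizontal gaze coordinate
    y: float   # vertical gaze coordinate

def collect_gaze(read_sample, duration_s=5.0):
    """Collect gaze samples for a preset duration.

    `read_sample` is a hypothetical callback returning the tracker's
    current (x, y) gaze position; a real device would also impose a
    fixed sample rate.
    """
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        x, y = read_sample()
        samples.append(GazeSample(time.monotonic() - start, x, y))
    return samples
```

The resulting list of timestamped positions is the "series of position data" that the later feature-extraction step consumes.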
Step 102: perform feature extraction on the gaze parameters to obtain characteristic parameters.
In this embodiment of the present invention, the obtained characteristic parameters include: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
Referring to Fig. 6: the first sub-figure illustrates the saccade direction count; the second illustrates the saccade distance, specifically the 5%-95% quantile distance; the third illustrates the mean and variance of the saccade direction; and the fourth illustrates the slope.
In this embodiment of the present invention, feature extraction is performed on the gaze parameters through a sliding-window algorithm, where:
1) Saccade direction count: the four directions correspond to the four directions of the Euclidean coordinate axes, with a certain tolerance allowed, for example 20 degrees. The saccade direction count indicates information such as the text direction and the proportion and layout of pictures.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope obtained by linear regression: indicates the overall reading direction.
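Under these definitions, the four features can be computed from one window of fixation points roughly as below. This is a sketch under stated assumptions: the sliding-window bookkeeping is omitted, the 5%-95% quantile span and the 20-degree axis tolerance are interpretations of the text, and all names are invented.

```python
import math

def extract_features(points, axis_tolerance_deg=20.0):
    """Compute the four gaze features from a window of (x, y) fixation
    points (at least two points assumed).

    Returns: (direction_count, saccade_distance, (dir_mean, dir_var), slope)
    """
    # Saccade vectors between consecutive fixation points
    dx = [b[0] - a[0] for a, b in zip(points, points[1:])]
    dy = [b[1] - a[1] for a, b in zip(points, points[1:])]
    angles = [math.degrees(math.atan2(vy, vx)) for vx, vy in zip(dx, dy)]

    # 1) Direction count: how many of the four axis directions
    #    (0, 90, 180, -90 degrees) occur, within the tolerance.
    hit = set()
    for ang in angles:
        for ax in (0.0, 90.0, 180.0, -90.0):
            if abs((ang - ax + 180) % 360 - 180) <= axis_tolerance_deg:
                hit.add(ax)
    direction_count = len(hit)

    # 2) Saccade distance: span between the 5% and 95% quantiles
    #    of the saccade lengths (one reading of "quantile distance").
    lengths = sorted(math.hypot(vx, vy) for vx, vy in zip(dx, dy))
    lo = lengths[int(0.05 * (len(lengths) - 1))]
    hi = lengths[int(0.95 * (len(lengths) - 1))]
    saccade_distance = hi - lo

    # 3) Mean and variance of the saccade direction
    dir_mean = sum(angles) / len(angles)
    dir_var = sum((a - dir_mean) ** 2 for a in angles) / len(angles)

    # 4) Slope of a least-squares line through the fixation points
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    slope = sxy / sxx if sxx else 0.0

    return direction_count, saccade_distance, (dir_mean, dir_var), slope
```

For a purely horizontal left-to-right scan, this yields one axis direction, zero direction variance, and zero slope, matching the "overall reading direction" interpretation above.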
Step 103: determine, according to a preset strategy, the type identifier that matches the characteristic parameters; and determine, according to the type identifier, the type of the target object at which the eyeball gazes.
In this embodiment of the present invention, the characteristic parameters are processed by a classification tree algorithm to obtain the type of paper media the user is reading. Here, the paper media types include, but are not limited to, textbooks, newspapers, magazines, and comics.
Here, the classification tree algorithm is also known as the decision tree algorithm. A decision tree is a predictive model that represents a mapping between object attributes and object values. Each node in the tree represents some object, each branching path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of producing decision trees from data is called decision tree learning. The classification tree algorithm is thus a predictive tree that relies on classification and training, classifying future inputs according to what is already known. The embodiments of the present invention process the characteristic parameters with a classification tree algorithm to obtain the type of paper media the user is reading.
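A classification tree of this kind can be illustrated with a small hand-written tree over the extracted features. The split features, thresholds, and branch meanings below are invented for illustration; in the described system the tree would be learned from labelled gaze recordings rather than written by hand.

```python
def classify(features, node):
    """Walk a classification tree: each internal node holds a
    (feature_name, threshold) split, each leaf a paper media type."""
    while "label" not in node:
        name, threshold = node["split"]
        node = node["left"] if features[name] <= threshold else node["right"]
    return node["label"]

# Hypothetical tree; thresholds and labels are assumptions.
TREE = {
    "split": ("saccade_distance", 0.30),          # small page vs. large page
    "left": {
        "split": ("direction_variance", 400.0),   # regular line-by-line scans?
        "left":  {"label": "textbook"},
        "right": {"label": "comic"},              # irregular panel-to-panel jumps
    },
    "right": {
        "split": ("direction_count", 2.5),
        "left":  {"label": "magazine"},
        "right": {"label": "newspaper"},          # saccades along all four axes
    },
}

print(classify({"saccade_distance": 0.1, "direction_variance": 90.0,
                "direction_count": 1}, TREE))     # prints "textbook"
```

Each root-to-leaf path plays the role of a type identifier matched against the characteristic parameters.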
In the technical solution of this embodiment of the present invention, the position information of the user's eyeball is detected in real time by the eye tracking device, and the type of paper media the user is reading is thereby determined, so the user's knowledge level and reading preferences can be judged.
Fig. 2 is a schematic flowchart of the information processing method of embodiment two of the present invention. The information processing method in this example is applied to an electronic device. As shown in Fig. 2, the information processing method includes the following steps:
Step 201: acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball; and determine, according to the gaze parameters and the position of the target object, the fixation points of the eyeball on the target object.
In this embodiment of the present invention, the electronic device has an eye tracking device, whose working principle is as follows: invisible infrared light is emitted toward the user, and two built-in cameras then capture the glint and the retinal reflection of the user's eyeball; from the images obtained by the cameras, the direction of the eyeball's gaze, that is, the position data of the eyeball, can be determined.
Because what the eye tracking device collects is the position data of the eyeball, eye tracking devices are generally arranged in wearable devices such as smart glasses and smart helmets.
Referring to Fig. 4, which is a schematic diagram of smart glasses, the smart glasses are worn on the user's head, and an eye tracking device is arranged in the smart glasses. Because different users have different eye structures and head structures, after the eye tracking device is switched on, the user must first be individually calibrated; once calibrated, the eye tracking device can accurately collect the position data of that user's eyeball.
In this embodiment of the present invention, gaze parameters within a preset duration are acquired, the gaze parameters including a series of position data of the eyeball. Specifically, the variation of the gaze parameters over a period of time is obtained through the eye tracking device; this variation amounts to a series of eyeball position data. The eye tracking device collects the position data of the eyeball in real time and obtains discrete fixation-point and time data; referring to Fig. 5, the X and Y coordinate axes represent the horizontal and vertical coordinates of the eyeball position data, respectively.
In this embodiment of the present invention, the gaze parameters include a series of position data of the eyeball; combined with the position of the target object relative to the eyeball, the fixation points of the eyeball on the target object can be determined, that is, which positions on the target object the eyeball is looking at.
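Given the target object's position in the same coordinate frame as the gaze data, mapping raw gaze positions to fixation points on the object reduces to a bounds check plus a change of origin. A minimal sketch; the rectangular object model and all names are assumptions for illustration.

```python
def fixations_on_object(gaze_points, obj_origin, obj_size):
    """Map raw gaze coordinates to fixation points on the target object.

    `obj_origin` is the object's top-left corner and `obj_size` its
    (width, height), both assumed inputs from the AR scene. Points
    outside the object are discarded; the rest are returned in
    object-local coordinates.
    """
    ox, oy = obj_origin
    w, h = obj_size
    local = []
    for x, y in gaze_points:
        if ox <= x <= ox + w and oy <= y <= oy + h:
            local.append((x - ox, y - oy))
    return local
```

The object-local fixation points are what the next step's feature extraction operates on.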
Step 202: perform feature extraction on the fixation points of the eyeball on the target object to obtain characteristic parameters.
In this embodiment of the present invention, the obtained characteristic parameters include: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
Referring to Fig. 6: the first sub-figure illustrates the saccade direction count; the second illustrates the saccade distance, specifically the 5%-95% quantile distance; the third illustrates the mean and variance of the saccade direction; and the fourth illustrates the slope.
In this embodiment of the present invention, feature extraction is performed on the gaze parameters through a sliding-window algorithm, where:
1) Saccade direction count: the four directions correspond to the four directions of the Euclidean coordinate axes, with a certain tolerance allowed, for example 20 degrees. The saccade direction count indicates information such as the text direction and the proportion and layout of pictures.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope obtained by linear regression: indicates the overall reading direction.
Step 203: determine, according to a preset strategy, the type identifier that matches the characteristic parameters; and determine, according to the type identifier, the type of the target object at which the eyeball gazes.
In this embodiment of the present invention, the characteristic parameters are processed by a classification tree algorithm to obtain the type of paper media the user is reading. Here, the paper media types include, but are not limited to, textbooks, newspapers, magazines, and comics.
Here, the classification tree algorithm is also known as the decision tree algorithm. A decision tree is a predictive model that represents a mapping between object attributes and object values. Each node in the tree represents some object, each branching path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of producing decision trees from data is called decision tree learning. The classification tree algorithm is thus a predictive tree that relies on classification and training, classifying future inputs according to what is already known. The embodiments of the present invention process the characteristic parameters with a classification tree algorithm to obtain the type of paper media the user is reading.
In the technical solution of this embodiment of the present invention, the position information of the user's eyeball is detected in real time by the eye tracking device, and the type of paper media the user is reading is thereby determined, so the user's knowledge level and reading preferences can be judged.
Fig. 3 is a schematic flowchart of the information processing method of embodiment three of the present invention. The information processing method in this example is applied to an electronic device. As shown in Fig. 3, the information processing method includes the following steps:
Step 301: acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball; and determine, according to the gaze parameters and the position of the target object, the fixation points of the eyeball on the target object.
In this embodiment of the present invention, the electronic device has an eye tracking device, whose working principle is as follows: invisible infrared light is emitted toward the user, and two built-in cameras then capture the glint and the retinal reflection of the user's eyeball; from the images obtained by the cameras, the direction of the eyeball's gaze, that is, the position data of the eyeball, can be determined.
Because what the eye tracking device collects is the position data of the eyeball, eye tracking devices are generally arranged in wearable devices such as smart glasses and smart helmets.
Referring to Fig. 4, which is a schematic diagram of smart glasses, the smart glasses are worn on the user's head, and an eye tracking device is arranged in the smart glasses. Because different users have different eye structures and head structures, after the eye tracking device is switched on, the user must first be individually calibrated; once calibrated, the eye tracking device can accurately collect the position data of that user's eyeball.
In this embodiment of the present invention, gaze parameters within a preset duration are acquired, the gaze parameters including a series of position data of the eyeball. Specifically, the variation of the gaze parameters over a period of time is obtained through the eye tracking device; this variation amounts to a series of eyeball position data. The eye tracking device collects the position data of the eyeball in real time and obtains discrete fixation-point and time data; referring to Fig. 5, the X and Y coordinate axes represent the horizontal and vertical coordinates of the eyeball position data, respectively.
In this embodiment of the present invention, the gaze parameters include a series of position data of the eyeball; combined with the position of the target object relative to the eyeball, the fixation points of the eyeball on the target object can be determined, that is, which positions on the target object the eyeball is looking at.
Step 302: perform feature extraction on the fixation points of the eyeball on the target object to obtain characteristic parameters.
In this embodiment of the present invention, the obtained characteristic parameters include: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
Referring to Fig. 6: the first sub-figure illustrates the saccade direction count; the second illustrates the saccade distance, specifically the 5%-95% quantile distance; the third illustrates the mean and variance of the saccade direction; and the fourth illustrates the slope.
In this embodiment of the present invention, feature extraction is performed on the gaze parameters through a sliding-window algorithm, where:
1) Saccade direction count: the four directions correspond to the four directions of the Euclidean coordinate axes, with a certain tolerance allowed, for example 30 degrees. The saccade direction count indicates information such as the text direction and the proportion and layout of pictures.
2) Saccade distance: distinguishes the size of the paper media.
3) Variance and mean of the saccade direction.
4) Slope obtained by linear regression: indicates the overall reading direction.
Step 303: determine, according to the saccade direction count, at least the following information about the target object: text direction, picture ratio, text layout, and picture layout; determine, according to the saccade distance, the size of the target object; determine, according to the slope, the saccade direction of the eyeball; and determine, according to this information, the paper media type of the target object, where the paper media type characterizes the physical attributes of the target object.
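Step 303's mapping from features to layout information can be sketched as a set of threshold rules. The thresholds and category strings below are invented for illustration; in the described system the mapping would be realized by the trained classification tree rather than fixed rules.

```python
def describe_layout(direction_count, saccade_distance, slope):
    """Map the extracted features onto the layout information named in
    step 303. All thresholds here are assumptions for illustration."""
    info = {}
    # More distinct saccade directions suggest mixed text/picture layouts.
    if direction_count <= 1:
        info["text_direction"] = "single-direction text"
        info["picture_ratio"] = "low"
    else:
        info["text_direction"] = "mixed text and pictures"
        info["picture_ratio"] = "high"
    # Longer saccades suggest a physically larger page.
    info["target_size"] = "large" if saccade_distance > 0.30 else "small"
    # The regression slope indicates the overall reading direction.
    info["reading_direction"] = "horizontal" if abs(slope) < 1.0 else "vertical"
    return info
```

A dense, small, line-by-line page (one saccade direction, short saccades, near-zero slope) would thus be described as low picture ratio, small size, horizontal reading.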
In this embodiment of the present invention, the characteristic parameters are processed by a classification tree algorithm to obtain the type of paper media the user is reading. Here, the paper media types include, but are not limited to, textbooks, newspapers, magazines, and comics.
Here, the classification tree algorithm is also known as the decision tree algorithm. A decision tree is a predictive model that represents a mapping between object attributes and object values. Each node in the tree represents some object, each branching path represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf. A decision tree has only a single output; if multiple outputs are desired, independent decision trees can be established to handle the different outputs. The machine learning technique of producing decision trees from data is called decision tree learning. The classification tree algorithm is thus a predictive tree that relies on classification and training, classifying future inputs according to what is already known. The embodiments of the present invention process the characteristic parameters with a classification tree algorithm to obtain the type of paper media the user is reading.
In the technical solution of this embodiment of the present invention, the position information of the user's eyeball is detected in real time by the eye tracking device, and the type of paper media the user is reading is thereby determined, so the user's knowledge level and reading preferences can be judged.
Fig. 7 is a schematic structural diagram of the electronic device of embodiment four of the present invention. As shown in Fig. 7, the electronic device includes:
an acquiring unit 71, configured to acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball;
an extraction unit 72, configured to perform feature extraction on the gaze parameters to obtain characteristic parameters;
a first determining unit 73, configured to determine, according to a preset strategy, a type identifier that matches the characteristic parameters; and
a second determining unit 74, configured to determine, according to the type identifier, the type of the target object at which the eyeball gazes.
Those skilled in the art will appreciate that the functions realized by the units of the electronic device shown in Fig. 7 can be understood with reference to the foregoing description of the information processing method.
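The four units of Fig. 7 chain into a simple pipeline: acquire, extract, match, resolve. The sketch below wires hypothetical callables in that order; the callables merely stand in for the units and are not the patented implementation.

```python
def process(acquire_gaze, extract_features, match_identifier, resolve_type):
    """Chain the four units of Fig. 7 in order."""
    gaze_parameters = acquire_gaze()                     # acquiring unit 71
    characteristic = extract_features(gaze_parameters)   # extraction unit 72
    type_identifier = match_identifier(characteristic)   # first determining unit 73
    return resolve_type(type_identifier)                 # second determining unit 74
```

Each argument can be any callable with the matching shape, mirroring how the units are described only by their inputs and outputs.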
Fig. 8 is a schematic structural diagram of the electronic device of embodiment five of the present invention. As shown in Fig. 8, the electronic device includes:
an acquiring unit 81, configured to acquire gaze parameters within a preset duration, the gaze parameters including a series of position data of an eyeball;
an extraction unit 82, configured to perform feature extraction on the gaze parameters to obtain characteristic parameters;
a first determining unit 83, configured to determine, according to a preset strategy, a type identifier that matches the characteristic parameters; and
a second determining unit 84, configured to determine, according to the type identifier, the type of the target object at which the eyeball gazes.
The extraction unit 82 is further configured to perform feature extraction on the gaze parameters to obtain the following characteristic parameters: saccade direction count, saccade distance, mean and variance of the saccade direction, and slope.
The electronic device further includes:
a third determining unit 85, configured to determine, after the gaze parameters within the preset duration are acquired, the fixation points of the eyeball on the target object according to the gaze parameters and the position of the target object;
and the extraction unit 82 is further configured to perform feature extraction on the fixation points of the eyeball on the target object to obtain the characteristic parameters.
The first determining unit 83 is further configured to determine, according to the saccade direction count, at least the following information about the target object: text direction, picture ratio, text layout, and picture layout; determine, according to the saccade distance, the size of the target object; and determine, according to the slope, the saccade direction of the eyeball.
The second determining unit 84 is further configured to determine, according to the type identifier, the paper media type of the target object, where the paper media type characterizes the physical attributes of the target object.
Those skilled in the art will appreciate that the functions realized by the units of the electronic device shown in Fig. 8 can be understood with reference to the foregoing description of the information processing method.
The technical solutions described in the embodiments of the present invention may be combined in any manner, provided there is no conflict between them.
In the several embodiments provided by the present invention, it should be understood that the disclosed method and smart device may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may serve individually as one unit, or two or more units may be integrated into one unit; the integrated unit may be realized in the form of hardware, or in the form of hardware plus software functional units.
The foregoing is only a specific embodiment of the invention, but protection scope of the present invention is not limited thereto, any
Those familiar with the art the invention discloses technical scope in, change or replacement can be readily occurred in, should all be contained
Cover within protection scope of the present invention.
Claims (10)
1. An information processing method, comprising:
obtaining gaze parameters within a preset duration, the gaze parameters comprising a series of position data of an eyeball;
performing feature extraction on the gaze parameters to obtain characteristic parameters;
determining, according to a preset strategy, a type identifier that matches the characteristic parameters; and
determining, according to the type identifier, a type of a target object gazed at by the eyeball.
2. The information processing method according to claim 1, wherein performing feature extraction on the gaze parameters to obtain the characteristic parameters comprises:
performing feature extraction on the gaze parameters to obtain the following characteristic parameters: a number of saccade directions, a saccade distance, a mean and a variance of the saccade direction, and a slope.
3. The information processing method according to claim 1, further comprising:
after obtaining the gaze parameters within the preset duration, determining, according to the gaze parameters and a position of the target object, fixation points of the eyeball located on the target object;
wherein performing feature extraction on the gaze parameters to obtain the characteristic parameters comprises:
performing feature extraction on the fixation points of the eyeball located on the target object to obtain the characteristic parameters.
4. The information processing method according to claim 2, wherein determining, according to the preset strategy, the type identifier that matches the characteristic parameters comprises:
determining, according to the number of saccade directions, at least the following information of the target object: text direction, picture proportion, text layout, and picture layout;
determining, according to the saccade distance, a size of the target object; and
determining, according to the slope, a saccade direction of the eyeball.
5. The information processing method according to claim 4, wherein determining, according to the type identifier, the type of the target object gazed at by the eyeball comprises:
determining, according to the type identifier, a print media type of the target object, wherein the print media type is used to characterize a physical attribute of the target object.
6. An electronic device, comprising:
an acquiring unit configured to obtain gaze parameters within a preset duration, the gaze parameters comprising a series of position data of an eyeball;
an extraction unit configured to perform feature extraction on the gaze parameters to obtain characteristic parameters;
a first determining unit configured to determine, according to a preset strategy, a type identifier that matches the characteristic parameters; and
a second determining unit configured to determine, according to the type identifier, a type of a target object gazed at by the eyeball.
7. The electronic device according to claim 6, wherein the extraction unit is further configured to perform feature extraction on the gaze parameters to obtain the following characteristic parameters: a number of saccade directions, a saccade distance, a mean and a variance of the saccade direction, and a slope.
8. The electronic device according to claim 6, further comprising:
a third determining unit configured to, after the gaze parameters within the preset duration are obtained, determine, according to the gaze parameters and a position of the target object, fixation points of the eyeball located on the target object;
wherein the extraction unit is further configured to perform feature extraction on the fixation points of the eyeball located on the target object to obtain the characteristic parameters.
9. The electronic device according to claim 8, wherein the first determining unit is further configured to: determine, according to the number of saccade directions, at least the following information of the target object: text direction, picture proportion, text layout, and picture layout; determine, according to the saccade distance, a size of the target object; and determine, according to the slope, a saccade direction of the eyeball.
10. The electronic device according to claim 9, wherein the second determining unit is further configured to determine, according to the type identifier, a print media type of the target object, wherein the print media type is used to characterize a physical attribute of the target object.
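Claims 4 and 5 describe a preset strategy that maps the characteristic parameters to layout cues and object size, and finally to a print media type. A hypothetical sketch of such a strategy follows; every threshold and label here is an invented example, since the patent discloses no concrete values:

```python
def match_type_identifier(features):
    """Map illustrative gaze features to a hypothetical print media type."""
    # Few saccade directions with low variance suggests line-by-line text.
    text_like = (features["saccade_direction_count"] <= 2
                 and features["direction_variance"] < 0.1)
    # The total saccade distance hints at the physical size of the object.
    large = features["total_saccade_distance"] > 100
    if text_like:
        return "newspaper" if large else "book"
    return "poster" if large else "photo"

# A small, text-like scan pattern maps to the "book" identifier.
label = match_type_identifier({
    "saccade_direction_count": 1,
    "direction_variance": 0.0,
    "total_saccade_distance": 30.0,
})
```

In a real system the thresholds would presumably be learned or calibrated per user rather than hard-coded as above.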
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610472170.9A CN107544660B (en) | 2016-06-24 | 2016-06-24 | Information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107544660A true CN107544660A (en) | 2018-01-05 |
CN107544660B CN107544660B (en) | 2020-12-18 |
Family
ID=60960112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610472170.9A Active CN107544660B (en) | 2016-06-24 | 2016-06-24 | Information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107544660B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI690826B (en) * | 2018-05-23 | 2020-04-11 | National Taiwan Normal University | Method of generating amplified content |
CN111159678A (en) * | 2019-12-26 | 2020-05-15 | Lenovo (Beijing) Co., Ltd. | Identity recognition method, device and storage medium |
CN111159678B (en) * | 2019-12-26 | 2023-08-18 | Lenovo (Beijing) Co., Ltd. | Identity recognition method, device and storage medium |
CN115064275A (en) * | 2022-08-19 | 2022-09-16 | Shandong Xinfa Technology Co., Ltd. | Method, equipment and medium for quantifying and training children computing capacity |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104199544A (en) * | 2014-08-28 | 2014-12-10 | South China University of Technology | Targeted advertisement delivery method based on eye gaze tracking |
CN104504390A (en) * | 2015-01-14 | 2015-04-08 | Beijing University of Technology | On-line user state recognition method and device based on eye movement data |
EP2936300A1 (en) * | 2012-12-19 | 2015-10-28 | Qualcomm Incorporated | Enabling augmented reality using eye gaze tracking |
US9317113B1 (en) * | 2012-05-31 | 2016-04-19 | Amazon Technologies, Inc. | Gaze assisted object recognition |
CN105518666A (en) * | 2013-08-29 | 2016-04-20 | Sony Corporation | Information processing device and information processing method |
2016-06-24: application CN201610472170.9A filed in China; granted as patent CN107544660B, status Active.
Also Published As
Publication number | Publication date |
---|---|
CN107544660B (en) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9690982B2 (en) | Identifying gestures or movements using a feature matrix that was compressed/collapsed using principal joint variable analysis and thresholds | |
US20210174146A1 (en) | Training set sufficiency for image analysis | |
JP2019535055A (en) | Perform gesture-based operations | |
US10430707B2 (en) | Information processing device | |
US10223838B2 (en) | Method and system of mobile-device control with a plurality of fixed-gradient focused digital cameras | |
US8687021B2 (en) | Augmented reality and filtering | |
Yang et al. | Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets | |
WO2020125499A9 (en) | Operation prompting method and glasses | |
EP3855287A1 (en) | Adding system and adding method for adding odor information to digital photos | |
US20170156589A1 (en) | Method of identification based on smart glasses | |
CN106997236A (en) | Based on the multi-modal method and apparatus for inputting and interacting | |
EP3540574B1 (en) | Eye tracking method, electronic device, and non-transitory computer readable storage medium | |
CN103945104B (en) | Information processing method and electronic equipment | |
CN110674664A (en) | Visual attention recognition method and system, storage medium and processor | |
US10884494B1 (en) | Eye tracking device calibration | |
CN106897659A (en) | The recognition methods of blink motion and device | |
CN111008971B (en) | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system | |
CN107544660A (en) | A kind of information processing method and electronic equipment | |
US20230052265A1 (en) | Augmented reality object manipulation | |
CN109839827B (en) | Gesture recognition intelligent household control system based on full-space position information | |
CN106527711A (en) | Virtual reality equipment control method and virtual reality equipment | |
Perra et al. | Adaptive eye-camera calibration for head-worn devices | |
CN109446356A (en) | A kind of multimedia document retrieval method and device | |
US20230326152A1 (en) | Data processing method, computer device and readable storage medium | |
CN108875499A (en) | Face shape point and status attribute detection and augmented reality method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||