CN106095112B - Information processing method and device - Google Patents

Information processing method and device

Info

Publication number
CN106095112B
Authority
CN
China
Prior art keywords
content
action
display
preset
display area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610475104.7A
Other languages
Chinese (zh)
Other versions
CN106095112A (en)
Inventor
杨春龙 (Yang Chunlong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610475104.7A priority Critical patent/CN106095112B/en
Publication of CN106095112A publication Critical patent/CN106095112A/en
Application granted granted Critical
Publication of CN106095112B publication Critical patent/CN106095112B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The invention provides an information processing method and device. The method includes: displaying first content in a display area of an electronic device; acquiring a first action parameter of a user's eyes; and, when the first action parameter indicates that the eyes have performed a preset action, displaying the first content in an enlarged manner. With this solution, the first content displayed in the display area of the electronic device is automatically enlarged as soon as the acquired first action parameter of the user's eyes indicates that the eyes have performed the preset action. Content is therefore enlarged automatically on the basis of a preset eye action, which reduces the number of manual operations required of the user and lowers the operation complexity.

Description

Information processing method and device
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to an information processing method and apparatus.
Background
With the development of intelligent control technology, in-air (touchless) operation has become widely used. In-air operation refers to operating an electronic device without touching its touch screen. A user who wants to use in-air operation can locate the gesture and motion-sensing settings of the electronic device and enable the option corresponding to in-air operation.
Once that option is enabled, the user can operate the electronic device without touching its touch screen, for example by clicking, in the air, a target application icon displayed on the desktop of the electronic device, or by enlarging, in the air, a target object displayed on the electronic device. Although in-air operation brings great convenience to users, it still requires manual operation by the user in most cases, which increases operation complexity.
Disclosure of Invention
In view of the above, the present invention provides an information processing method and apparatus for reducing operation complexity. The technical solution is as follows:
The invention provides an information processing method, which includes the following steps:
displaying first content in a display area of an electronic device;
acquiring a first action parameter of a user's eyes;
and, when the first action parameter indicates that the eyes have performed a preset action, displaying the first content in an enlarged manner.
Preferably, when the first action parameter indicates that the eyes have performed a preset action, displaying the first content in an enlarged manner includes:
when the first action parameter indicates that the eyes have performed a preset action, extracting an operable object in the first content;
and displaying the operable object in an enlarged manner;
or
when the first action parameter indicates that the eyes have performed a preset action, displaying the first content in an enlarged manner includes:
acquiring a second action parameter of the eyes;
determining a first position of a viewpoint of the eyes in the display area based on the second action parameter;
and displaying the first content in an enlarged manner with the first position as a reference point;
or
when the first action parameter indicates that the eyes have performed a preset action, displaying the first content in an enlarged manner includes: moving the enlarged first content to a central area of the display area.
Preferably, displaying the first content in an enlarged manner includes: enlarging and displaying the first content at a preset enlargement ratio corresponding to the first action parameter.
Preferably, the method further includes:
acquiring a second position of a first operating body in the display area;
determining a first operation object from a plurality of operable objects in the first content based on the second position;
executing a first instruction corresponding to the first operation object;
and restoring the display scale of the first content to its original scale, and displaying second content corresponding to the first instruction in the display area at the original scale.
Preferably, the method further includes:
acquiring a third action parameter of the eyes;
determining an enlargement ratio when the third action parameter indicates that the eyes have performed the preset action;
and displaying the second content in an enlarged manner based on the enlargement ratio.
Preferably, the method further includes:
acquiring a fourth action parameter of the eyes;
determining a third position of the viewpoint of the eyes in the display area based on the fourth action parameter;
and, when the third position lies outside the display area where the enlarged first content is located, restoring the display scale of the first content to the original scale.
The present invention also provides an information processing apparatus, the apparatus including:
a display unit, configured to display first content in a display area of an electronic device;
a first acquisition unit, configured to acquire a first action parameter of a user's eyes;
and a control unit, configured to enlarge the first content when the first action parameter indicates that the eyes have performed a preset action, and to trigger the display unit to display the enlarged first content.
Preferably, the control unit is configured to, when the first action parameter indicates that the eyes have performed a preset action, extract an operable object in the first content, enlarge the operable object, and trigger the display unit to display the enlarged operable object;
or
the control unit is configured to acquire a second action parameter of the eyes, determine a first position of a viewpoint of the eyes in the display area based on the second action parameter, and trigger the display unit to display the enlarged first content with the first position as a reference point;
or
the control unit is configured to move the enlarged first content to a central area of the display area and trigger the display unit to display the enlarged first content in the central area.
Preferably, the control unit is configured to enlarge the first content at a preset enlargement ratio corresponding to the first action parameter.
Preferably, the apparatus further includes:
a second acquisition unit, configured to acquire a second position of a first operating body in the display area;
a first determination unit, configured to determine a first operation object from a plurality of operable objects in the first content based on the second position;
and an execution unit, configured to execute a first instruction corresponding to the first operation object;
the control unit being further configured to restore the display scale of the first content to its original scale and to trigger the display unit to display, in the display area and at the original scale, second content corresponding to the first instruction.
Preferably, the apparatus further includes:
a third acquisition unit, configured to acquire a third action parameter of the eyes;
and a second determination unit, configured to determine an enlargement ratio when the third action parameter indicates that the eyes have performed the preset action;
the control unit being configured to enlarge the second content based on the enlargement ratio.
Preferably, the apparatus further includes:
a fourth acquisition unit, configured to acquire a fourth action parameter of the eyes;
and a third determination unit, configured to determine a third position of the viewpoint of the eyes in the display area based on the fourth action parameter;
the control unit being further configured to restore the display scale of the first content to the original scale when the third position lies outside the display area where the enlarged first content is located.
Compared with the prior art, the technical solution provided by the invention has the following advantage:
when the acquired first action parameter of the user's eyes indicates that the eyes have performed the preset action, the first content displayed in the display area of the electronic device is automatically enlarged. Content is thus enlarged automatically on the basis of a preset eye action, which reduces the number of manual operations required of the user and lowers the operation complexity.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of an information processing method provided by an embodiment of the invention;
fig. 2 is a schematic diagram of an area where first content is formed according to an embodiment of the present invention;
fig. 3 is another schematic diagram of an area where first content is formed according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of first content being gazed at, provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the first content shown in FIG. 4 after it has been enlarged;
FIG. 6 is a schematic diagram illustrating an enlarged display of an operable object in first content according to an embodiment of the present invention;
FIG. 7 is another flow chart of an information processing method provided by an embodiment of the invention;
FIG. 8 is a diagram illustrating operations performed on the first content after being enlarged according to an embodiment of the present invention;
FIG. 9 is a flowchart of another information processing method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of viewpoint movement provided by an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, a flowchart of an information processing method according to an embodiment of the present invention is shown. The method may include the following steps:
101: Display first content in a display area of the electronic device. When the user gazes at the display area, the viewpoint of the eyes falls on the display area and forms a position point, and the content displayed in the area where that position point is located is regarded as the first content. Ways of determining the area where the position point is located include, but are not limited to, the following:
In one way, the position point is taken as a center point, and a region with a specific shape is obtained by extending a preset distance outward from the center point. In another way, the position point is taken as an edge point, and a region with a specific shape is obtained by extending a preset distance in the directions other than that of the edge point. In either case, the region with the specific shape is the area where the position point is located.
Since the display area is generally rectangular, and in order to match its shape and the user's understanding of it, the extension from the center point or from the edge point is also carried out in a rectangular manner, so that a rectangular region is obtained as the area where the position point is located. The preset distance can be set in advance, and its specific value can be determined according to the actual application.
As shown in FIG. 2, the position point formed by the viewpoint of the eyes falling on the display area is the black dot in FIG. 2. Taking the black dot as the center point and extending a preset distance outward from it yields the rectangular region shown by the dashed box; this rectangular region is the area where the position point is located, and the content displayed in it is the first content.
Alternatively, as shown in FIG. 3, the black dot in FIG. 3 is the position point formed by the viewpoint of the eyes falling on the display area. Taking the black dot as a lower-left edge point and extending preset distances away from that corner (upward and to the right) yields the rectangular region shown by the dashed box; this rectangular region is likewise regarded as the area where the position point is located, and the content displayed in it is the first content.
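As a rough illustration of these two ways of deriving the region, the following is a minimal Python sketch; the rectangle type, the screen-coordinate convention (y increasing downward), and the clamping to the display bounds are assumptions, since the patent only states that a preset distance is extended from the position point.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    top: float
    right: float
    bottom: float

def region_from_center(point, half_w, half_h, display):
    """Way one: extend a preset distance outward from the position point used as the center."""
    x, y = point
    return Rect(max(display.left, x - half_w), max(display.top, y - half_h),
                min(display.right, x + half_w), min(display.bottom, y + half_h))

def region_from_corner(point, width, height, display):
    """Way two: extend a preset distance upward and to the right from the position point
    used as the lower-left edge point (screen coordinates, y grows downward)."""
    x, y = point
    return Rect(max(display.left, x), max(display.top, y - height),
                min(display.right, x + width), min(display.bottom, y))
```

For example, region_from_center((500, 300), 200, 150, Rect(0, 0, 1080, 1920)) yields Rect(300, 150, 700, 450), a dashed-box region analogous to the one in FIG. 2.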
102: a first motion parameter of an eye of a user is acquired. Wherein the first action parameter is used for indicating what action the user's eyes are performing, said first action parameter may be obtained by a parameter obtaining means integrated in the electronic device.
For example, the parameter acquiring device may be an eye tracker, and the eye tracker is configured to record eye movement track characteristics of a person when processing visual information, such as eye movement track characteristics of a person when the person looks at, jumps and follows an object, so that what kind of action the eye of the user is performing can be determined based on the first action parameter obtained by the eye tracker. Similarly, when the user's eyes are gazing at the display area, the position point formed when the viewpoint of the eyes falls on the display area can be determined based on the first motion parameter obtained by the eye tracker.
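The patent does not specify an eye-tracker interface; the sketch below simply assumes a tracker object exposing a read() call that returns a gaze point and an eye-openness value, and collects such samples over a short window as the "action parameter".

```python
import time
from dataclasses import dataclass

@dataclass
class EyeSample:
    gaze_x: float        # viewpoint on the display, in pixels
    gaze_y: float
    eye_openness: float  # 0.0 = fully closed, 1.0 = fully open
    timestamp: float

def poll_samples(tracker, duration_s=0.5, rate_hz=60):
    """Collect eye samples over a short window; tracker.read() stands in for
    whatever call the integrated eye tracker actually exposes."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        x, y, openness = tracker.read()   # hypothetical call
        samples.append(EyeSample(x, y, openness, time.time()))
        time.sleep(1.0 / rate_hz)
    return samples
```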
103: and when the first action parameter indicates that the eyes execute the preset action, the first content is displayed in an enlarged mode. In the embodiment of the present invention, the preset action is preset in the electronic device, and is used for designating an action for triggering the electronic device to enlarge and display the first content, so that when the first action parameter indicates that the eyes are performing the preset action, the electronic device will automatically enlarge the first content, and display the enlarged first content on the display area.
If the preset action is set to be a squinting action, after the first action parameter is obtained, whether the action corresponding to the first action parameter is the squinting action or not is judged, if yes, the electronic equipment can automatically amplify the first content and display the first content, and if not, the electronic equipment can ignore the action corresponding to the first action parameter.
As shown in fig. 4, the dots in fig. 4 are position points formed by the viewpoint of the eyes falling on the display area, and when the obtained first motion parameter indicates that the eyes perform a squinting motion, the first content associated with the position points is automatically displayed in an enlarged manner as shown in fig. 5. The reason why the preset action is set as the squinting action is that: the user can be involuntary squint when can't see first content clearly, consequently will predetermine the action and set up to the squint action and more accord with user's watching mode, and the automatic enlarged first content that shows that detects behind the squint action for the user more can see first content clearly.
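A minimal sketch of how the preset (squinting) action might be recognized from the acquired action parameter follows; the openness threshold and the dwell requirement are assumptions for illustration, not values given in the patent.

```python
SQUINT_THRESHOLD = 0.6   # assumption: openness below 60% of fully open counts as a squint
DWELL_SAMPLES = 15       # assumption: the squint must persist for about 0.25 s at 60 Hz

def is_preset_action(openness_values, threshold=SQUINT_THRESHOLD, dwell=DWELL_SAMPLES):
    """Return True when the most recent eye-openness values show a sustained squint,
    i.e. the first action parameter indicates the preset (squinting) action."""
    recent = openness_values[-dwell:]
    return len(recent) == dwell and all(v < threshold for v in recent)
```

On a positive result the device would enlarge the first content; on a negative result the action corresponding to the first action parameter is simply ignored, as described above.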
In the embodiment of the present invention, the ways of displaying the first content in an enlarged manner include, but are not limited to, the following:
In one way, when the first action parameter indicates that the eyes have performed the preset action, the operable objects in the first content are extracted and displayed in an enlarged manner. As shown in FIG. 5, the first content includes three application icons that are regarded as operable objects; when the first action parameter indicates that the eyes have performed a squinting action, the three application icons are extracted and displayed enlarged.
The operable objects in the first content are extracted for enlarged display because the purpose of enlarging the first content is to let the user view the valid content in it, such as the operable objects. If all of the first content were enlarged, the invalid content would occupy area in which the valid content could otherwise be shown, so the valid content could not be enlarged effectively.
As shown in the schematic diagram on the left side of the arrow in FIG. 6, the first content includes operable objects mixed with other content. When the first action parameter indicates that the eyes have performed the preset action, the operable objects are extracted from the first content and displayed in an enlarged manner, with the resulting effect shown on the right side of the arrow in FIG. 6. When the operable objects are displayed enlarged, their positional relationship can be rearranged so that the enlarged operable objects occupy the whole of the display region in which they are shown. A sketch of this extraction and rearrangement follows.
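Assuming content items carry an "operable" flag and a fixed base icon size, the extraction and re-gridding could look like the sketch below; the data shapes and the 64-pixel base size are assumptions used only for illustration.

```python
def extract_operable_objects(items):
    """Keep only the items a user can act on (icons, buttons, links); drop the rest."""
    return [item for item in items if item.get("operable")]

def layout_enlarged(objects, region_width, scale=2.0, base_size=64):
    """Re-grid the extracted objects so that, once enlarged, they fill the target region."""
    cols = max(1, int(region_width // (base_size * scale)))
    positions = []
    for index, obj in enumerate(objects):
        row, col = divmod(index, cols)
        positions.append((obj["id"], col * base_size * scale, row * base_size * scale, scale))
    return positions
```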
In another way, a second action parameter of the eyes is acquired, a first position of the viewpoint of the eyes in the display area is determined based on the second action parameter, and the first content is displayed in an enlarged manner with the first position as a reference point.
The acquisition time of the second action parameter is adjacent to that of the first action parameter and earlier than it; that is, the second action parameter is obtained while the user's eyes gaze at the display area, before the action corresponding to the first action parameter is performed. Of course, the acquisition time of the second action parameter may also be later than that of the first action parameter; in that case the eyes move after performing the preset action, and the second action parameter is obtained while the eyes are moving.
Enlarging the first content with the first position as the reference point means that the first position serves as an edge point or the center point of a first area, and the enlarged first content is displayed in that first area, as shown in FIG. 5.
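A minimal sketch of enlarging about a reference point: the content rectangle is scaled so that the first position stays fixed on screen; the tuple-based rectangle representation is an assumption.

```python
def zoom_about_point(rect, anchor, scale):
    """Scale a content rectangle about an anchor point so that the anchor
    (the first position) stays fixed on screen."""
    left, top, right, bottom = rect
    ax, ay = anchor
    return (ax + (left - ax) * scale,
            ay + (top - ay) * scale,
            ax + (right - ax) * scale,
            ay + (bottom - ay) * scale)
```

For example, zoom_about_point((100, 100, 300, 250), (200, 175), 2.0) returns (0.0, 25.0, 400.0, 325.0): the rectangle doubles in size while the anchor point (200, 175) remains where it was.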
In yet another way, the enlarged first content is moved to the central area of the display area. This allows the enlarged first content to be displayed completely on the electronic device, and makes it possible to capture the eyes' action parameters effectively when the enlarged first content is to be enlarged again, so that the operation corresponding to those parameters can be executed.
In addition, when the first content is displayed in an enlarged manner, a preset enlargement ratio corresponding to the first action parameter may first be determined, and the first content is then enlarged by that preset ratio and displayed. The preset enlargement ratio corresponding to the first action parameter is determined from the action amplitude of the eyes in the first action parameter, and different action amplitudes correspond to different preset enlargement ratios; this is not limited in the embodiment of the present invention.
For example, when the preset action is a squinting action, the action amplitude of the eyes in the first action parameter is the difference between the eye width after squinting and the eye width when the eyes are fully open. If the eye width after squinting is width A and the width when the eyes are fully open is width B, the action amplitude is |A − B|. Based on the currently obtained action amplitude, the preset enlargement ratio for that amplitude is looked up in the correspondence between action amplitude and preset enlargement ratio, and the first content is then enlarged by that ratio and displayed.
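The patent leaves the actual correspondence between action amplitude and enlargement ratio open; the table and the normalization by the fully-open width in the sketch below are therefore assumptions used only to illustrate the lookup.

```python
# Assumed correspondence between action amplitude and preset enlargement ratio;
# the patent leaves the actual values open.
AMPLITUDE_TO_SCALE = [
    (0.2, 1.5),   # slight squint    -> 1.5x
    (0.4, 2.0),   # moderate squint  -> 2x
    (0.6, 3.0),   # strong squint    -> 3x
]

def preset_enlargement_ratio(width_after_squint, width_fully_open):
    """Look up the preset ratio from the action amplitude |A - B|, normalized here
    by the fully-open width so the thresholds are device-independent."""
    amplitude = abs(width_after_squint - width_fully_open) / width_fully_open
    ratio = 1.0
    for threshold, scale in AMPLITUDE_TO_SCALE:
        if amplitude >= threshold:
            ratio = scale
    return ratio
```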
With the above technical solution, when the acquired first action parameter of the user's eyes indicates that the eyes have performed the preset action, the first content displayed in the display area of the electronic device is displayed in an enlarged manner. Content is thus enlarged automatically on the basis of a preset eye action, which reduces the number of manual operations required of the user and lowers the operation complexity.
Referring to FIG. 7, another flowchart of the information processing method according to an embodiment of the present invention is shown. Building on FIG. 1, the method may further include the following steps:
104: Acquire a second position of a first operating body in the display area. The second position is obtained when the first operating body performs an operation in the region of the display area where the enlarged first content is shown. Because the region occupied by the first content after enlargement is larger than before, it is better suited to touch operation on the enlarged first content by the first operating body.
After the first operating body performs a touch operation on the enlarged first content, the electronic device can obtain the second position of the first operating body in the display area through touch sensing.
105: based on the second position, a first operation object is determined from a plurality of operable objects in the first content.
106: and executing a first instruction corresponding to the first operation object.
As shown in FIG. 8, after the first content is enlarged, the first operating body performs a click operation on the region where the enlarged first content is located. The second position formed by the click on the display area corresponds to one operable object in the first content; therefore, once the second position is known, the first operation object displayed at the second position can be determined from the plurality of operable objects in the first content, and the first instruction corresponding to the first operation object is executed automatically, for example starting the application corresponding to the first operation object.
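A minimal sketch of resolving the second position to an operable object and running its instruction; the dictionary keys ("bounds", "instruction") and the launch callback are assumptions for illustration.

```python
def hit_test(operable_objects, touch_x, touch_y):
    """Return the operable object whose enlarged bounds contain the second position."""
    for obj in operable_objects:
        left, top, right, bottom = obj["bounds"]   # bounds after enlargement (assumed key)
        if left <= touch_x <= right and top <= touch_y <= bottom:
            return obj
    return None

def on_touch(operable_objects, touch_x, touch_y, launch):
    """Execute the first instruction bound to the object at the second position."""
    target = hit_test(operable_objects, touch_x, touch_y)
    if target is not None:
        launch(target["instruction"])   # e.g. start the application behind the icon
```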
107: and restoring the display scale of the first content to the original scale, and displaying the second content corresponding to the first instruction in the display area in the original scale.
Still taking the above fig. 8 as an example, after the first instruction corresponding to the first operation is executed, the enlarged first content is automatically reduced, so that the display scale of the first content is restored to the original scale, for example, the enlarged first content is restored to the original scale shown in fig. 4, so that when the electronic device finishes restoring the first instruction to the original state, the first content on the display area is also automatically restored to the original state, thereby further omitting the manual operation link and reducing the operation complexity.
And displaying the second content corresponding to the first instruction in the display area in the original proportion while the display proportion of the first content is restored to the original proportion, for example, displaying an application interface of an application corresponding to the first operation object in the original proportion. Of course, the second content may also be displayed after the display scale of the first content is restored to the original scale, and the execution sequence between the two is not limited in this embodiment of the present invention.
The original proportion is the display proportion adopted by the electronic equipment when the electronic equipment is started, and the display proportion of the content corresponding to the operation is restored to the original proportion after any operation is finished, so that a manual operation link can be omitted, and the operation complexity is reduced. When the second content corresponding to the first instruction is displayed in the display area as described above, indicating that the zooming display operation on the first content has ended, the display scale of the first content may be automatically restored to the original scale.
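The restore-then-display sequence can be summarized in a few lines; view.set_scale and view.show are hypothetical methods standing in for whatever display interface the device provides, and the ordering shown is only one of the two orderings the text allows.

```python
def execute_and_restore(view, instruction, run_instruction):
    """Run the first instruction, drop the enlarged state, and show the resulting
    second content at the original (start-up) display scale."""
    second_content = run_instruction(instruction)  # e.g. the launched application's interface
    view.set_scale(1.0)                            # hypothetical: back to the original scale
    view.show(second_content)                      # hypothetical: display the second content
```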
In addition, after the second content is displayed in the display area at the original scale, if the user cannot see the second content clearly, the second content can also be enlarged automatically based on the eyes' action, as follows:
First, a third action parameter of the eyes is obtained. The third action parameter indicates what action the user's eyes are performing, and it can likewise be obtained by the eye tracker mentioned above.
Second, when the third action parameter indicates that the eyes have performed the preset action, an enlargement ratio is determined. It should be noted that the preset action for automatically enlarging the second content is the same as the preset action for automatically enlarging the first content, namely the squinting action a user adopts when the content cannot be seen clearly.
The enlargement ratio is determined in the same way as the preset enlargement ratio described above; for example, it may be determined from the action amplitude of the eyes in the third action parameter, and the specific process is not repeated here.
Finally, the second content is displayed in an enlarged manner based on the enlargement ratio.
With the above technical solution, after the first content is displayed in an enlarged manner, the operation of the first operating body on the first content can further be acquired, the first operation object at the second position corresponding to the first operating body is determined from the first content, the second content of the first operation object is automatically displayed at the original scale, and the display scale of the first content is restored to the original scale.
As for the second content, it can be displayed in an enlarged manner at the corresponding enlargement ratio when the third action parameter indicates that the eyes have performed the preset action. This also realizes automatic enlarged display of content based on a preset eye action, reducing the number of manual operations by the user and the operation complexity.
Referring to FIG. 9, a further flowchart of the information processing method according to an embodiment of the present invention is shown. Building on FIG. 1, the method may further include the following steps:
108: Obtain a fourth action parameter of the eyes.
109: Determine, based on the fourth action parameter, a third position of the viewpoint of the eyes in the display area. The fourth action parameter indicates what action the user's eyes are performing, and it can also be obtained by the eye tracker mentioned above. Specifically, the eye tracker records the eye-movement characteristics of a person processing visual information, such as the movement traces produced when the person fixates on, saccades to, or follows an object, so when the user gazes at the display area, the position point formed by the viewpoint of the eyes falling on the display area can be determined from the fourth action parameter obtained by the eye tracker; this position point is the third position.
110: When the third position lies outside the display area where the enlarged first content is located, restore the display scale of the first content to the original scale.
The third position lying outside the display area where the enlarged first content is located indicates that the eyes have moved from the first content to some other position, such as the position point indicated by the broken line in FIG. 10. When the focus of the eyes moves elsewhere, the user is no longer attending to the first content, so the display scale of the first content can be restored to the original scale, as shown in FIG. 4, achieving automatic restoration of the content.
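A minimal sketch of step 110: checking whether the third position falls outside the rectangle occupied by the enlarged first content and, if so, restoring the original scale; the rectangle representation and view.set_scale are the same illustrative assumptions as above.

```python
def restore_if_gaze_left(view, enlarged_region, gaze_x, gaze_y):
    """Restore the original scale once the viewpoint (third position) leaves the
    area where the enlarged first content is shown."""
    left, top, right, bottom = enlarged_region
    inside = left <= gaze_x <= right and top <= gaze_y <= bottom
    if not inside:
        view.set_scale(1.0)   # hypothetical: back to the original display scale
```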
Corresponding to the above method embodiments, an embodiment of the present invention further provides an information processing apparatus, whose structure is shown schematically in FIG. 11. The information processing apparatus may include: a display unit 11, a first acquisition unit 12 and a control unit 13.
The display unit 11 is configured to display first content in a display area of the electronic device. When the user gazes at the display area, the viewpoint of the eyes falls on the display area and forms a position point, and the content displayed in the area where that position point is located is regarded as the first content. For the ways of determining the area where the position point is located, refer to the description in the method embodiments; they are not repeated here.
The first acquisition unit 12 is configured to acquire a first action parameter of the user's eyes. The first action parameter indicates what action the user's eyes are performing, and it can be obtained by a parameter acquisition device integrated in the electronic device; for example, the parameter acquisition device may be an eye tracker.
The control unit 13 is configured to enlarge the first content when the first action parameter indicates that the eyes have performed a preset action, and to trigger the display unit 11 to display the enlarged first content. In the embodiment of the present invention, the preset action is configured in the electronic device in advance and designates the action that triggers the electronic device to enlarge and display the first content; therefore, when the first action parameter indicates that the eyes are performing the preset action, the control unit 13 automatically enlarges the first content and triggers the display unit 11 to display the enlarged first content in the display area.
If the preset action is set to a squinting action, then after the first action parameter is obtained, it is judged whether the action corresponding to the first action parameter is a squinting action. If it is, the control unit 13 automatically enlarges the first content for display; if it is not, the action corresponding to the first action parameter is ignored.
In the embodiment of the present invention, the ways in which the control unit 13 enlarges the first content and triggers the display unit 11 to display it include, but are not limited to, the following; they are only outlined here, and the specific processes are described in the method embodiments.
In one way, when the first action parameter indicates that the eyes have performed the preset action, the operable objects in the first content are extracted and displayed in an enlarged manner.
In another way, a second action parameter of the eyes is acquired, a first position of the viewpoint of the eyes in the display area is determined based on the second action parameter, and the display unit 11 is triggered to display the enlarged first content with the first position as a reference point.
In yet another way, the enlarged first content is moved to the central area of the display area, so that the display unit 11 can display the first content completely on the electronic device and the eyes' action parameters can be captured effectively when the first content is to be enlarged again, allowing the operation corresponding to those parameters to be executed.
In addition, when the first content is enlarged, a preset enlargement ratio corresponding to the first action parameter may first be determined, and the first content is then enlarged by that preset ratio. The preset enlargement ratio corresponding to the first action parameter is determined from the action amplitude of the eyes in the first action parameter, and different action amplitudes correspond to different preset enlargement ratios; this is not limited in the embodiment of the present invention.
With the above technical solution, when the acquired first action parameter of the user's eyes indicates that the eyes have performed the preset action, the first content displayed in the display area of the electronic device is displayed in an enlarged manner. Content is thus enlarged automatically on the basis of a preset eye action, which reduces the number of manual operations required of the user and lowers the operation complexity.
Referring to FIG. 12, another schematic structural diagram of an information processing apparatus according to an embodiment of the present invention is shown. Building on FIG. 11, the information processing apparatus may further include: a second acquisition unit 14, a first determination unit 15 and an execution unit 16.
The second acquisition unit 14 is configured to acquire a second position of a first operating body in the display area. The second position is obtained when the first operating body performs an operation in the region of the display area where the enlarged first content is shown. Because the region occupied by the first content after enlargement is larger than before, it is better suited to touch operation on the enlarged first content by the first operating body.
After the first operating body performs a touch operation on the enlarged first content, the electronic device can obtain the second position of the first operating body in the display area through touch sensing.
The first determination unit 15 is configured to determine a first operation object from the plurality of operable objects in the first content based on the second position.
The execution unit 16 is configured to execute a first instruction corresponding to the first operation object.
As shown in FIG. 8, after the first content is enlarged, the first operating body performs a click operation on the region where the enlarged first content is located. The second position formed by the click on the display area corresponds to one operable object in the first content; therefore, once the second position is known, the first operation object displayed at the second position can be determined from the plurality of operable objects, and the first instruction corresponding to the first operation object is executed automatically, for example starting the application corresponding to the first operation object.
The control unit 13 is further configured to restore the display scale of the first content to the original scale and to trigger the display unit to display, in the display area and at the original scale, the second content corresponding to the first instruction.
While the display scale of the first content is being restored to the original scale, the second content corresponding to the first instruction is displayed in the display area at the original scale, for example the application interface of the application corresponding to the first operation object. Of course, the second content may instead be displayed after the display scale of the first content has been restored; the embodiment of the present invention does not limit the order of the two.
The original scale is the display scale the electronic device uses when it is powered on. Restoring the display scale of the content involved in an operation to the original scale once that operation has finished removes a manual step and reduces the operation complexity. When the second content corresponding to the first instruction is displayed in the display area as described above, the enlarged display of the first content has ended, so the display scale of the first content can be restored to the original scale automatically.
In addition, after the second content is displayed in the display area at the original scale, if the user cannot see the second content clearly, the second content can also be enlarged automatically based on the eyes' action. To this end, the information processing apparatus according to the embodiment of the present invention may further include a third acquisition unit and a second determination unit.
The third acquisition unit is configured to acquire a third action parameter of the eyes. The third action parameter indicates what action the user's eyes are performing, and it can likewise be obtained by the eye tracker mentioned above.
The second determination unit is configured to determine an enlargement ratio when the third action parameter indicates that the eyes have performed the preset action; the control unit then enlarges the second content based on that enlargement ratio.
It should be noted that the preset action for automatically enlarging the second content is the same as the preset action for automatically enlarging the first content, namely the squinting action a user adopts when the content cannot be seen clearly.
The enlargement ratio is determined in the same way as the preset enlargement ratio described above; for example, it may be determined from the action amplitude of the eyes in the third action parameter, and the specific process is not repeated here.
With the above technical solution, after the first content is displayed in an enlarged manner, the operation of the first operating body on the first content can further be acquired, the first operation object at the second position corresponding to the first operating body is determined from the first content, the second content of the first operation object is automatically displayed at the original scale, and the display scale of the first content is restored to the original scale.
As for the second content, it can be displayed in an enlarged manner at the corresponding enlargement ratio when the third action parameter indicates that the eyes have performed the preset action. This also realizes automatic enlarged display of content based on a preset eye action, reducing the number of manual operations by the user and the operation complexity.
Referring to FIG. 13, a schematic structural diagram of an information processing apparatus according to another embodiment of the present invention is shown. Building on FIG. 11, the information processing apparatus may further include: a fourth acquisition unit 17 and a third determination unit 18.
The fourth acquisition unit 17 is configured to acquire a fourth action parameter of the eyes.
The third determination unit 18 is configured to determine, based on the fourth action parameter, a third position of the viewpoint of the eyes in the display area.
The fourth action parameter indicates what action the user's eyes are performing, and it can also be obtained by the eye tracker mentioned above. Specifically, the eye tracker records the eye-movement characteristics of a person processing visual information, such as the movement traces produced when the person fixates on, saccades to, or follows an object, so when the user gazes at the display area, the position point formed by the viewpoint of the eyes falling on the display area can be determined from the fourth action parameter obtained by the eye tracker; this position point is the third position.
When the third position lies outside the display area where the enlarged first content is located, the eyes have moved from the first content to some other position, and the control unit 13 can restore the display scale of the first content to the original scale, achieving automatic restoration of the content.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The information processing method and apparatus provided by the present invention are described in detail above, and a specific example is applied in the text to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. An information processing method, characterized in that the method comprises:
displaying first content in a display area of an electronic device, wherein the first content is the content displayed in an area where a position point is located, the position point being formed by a viewpoint of a user's eyes falling on the display area when the eyes gaze at the display area;
acquiring a first action parameter of the user's eyes;
when the first action parameter indicates that the eyes have performed a preset action, determining a preset enlargement ratio corresponding to the first action parameter, and enlarging and displaying the first content at the preset enlargement ratio, wherein the preset action is set to a squinting action;
wherein the manner of determining the area where the position point is located includes at least one of the following:
taking the position point as a center point and extending a preset distance outward from the center point to obtain a region with a specific shape, wherein the region with the specific shape is the area where the position point is located;
and taking the position point as an edge point and extending a preset distance in directions other than that of the edge point to obtain a region with a specific shape, wherein the region with the specific shape is the area where the position point is located.
2. The method of claim 1, wherein displaying the first content in an enlarged manner when the first action parameter indicates that the eyes have performed a preset action comprises:
when the first action parameter indicates that the eyes have performed a preset action, extracting an operable object in the first content;
and displaying the operable object in an enlarged manner;
or
displaying the first content in an enlarged manner when the first action parameter indicates that the eyes have performed a preset action comprises:
acquiring a second action parameter of the eyes;
determining a first position of a viewpoint of the eyes in the display area based on the second action parameter;
and displaying the first content in an enlarged manner with the first position as a reference point;
or
displaying the first content in an enlarged manner when the first action parameter indicates that the eyes have performed a preset action comprises: moving the enlarged first content to a central area of the display area.
3. The method of claim 1 or 2, wherein displaying the first content in an enlarged manner comprises: enlarging and displaying the first content at a preset enlargement ratio corresponding to the first action parameter.
4. The method of claim 1, further comprising:
acquiring a second position of a first operating body in the display area;
determining a first operation object from a plurality of operable objects in the first content based on the second position;
executing a first instruction corresponding to the first operation object;
and restoring the display scale of the first content to its original scale, and displaying second content corresponding to the first instruction in the display area at the original scale.
5. The method of claim 4, further comprising:
acquiring a third action parameter of the eyes;
determining an enlargement ratio when the third action parameter indicates that the eyes have performed the preset action;
and displaying the second content in an enlarged manner based on the enlargement ratio.
6. The method of claim 1, further comprising:
acquiring a fourth action parameter of the eyes;
determining a third position of a viewpoint of the eyes in the display area based on the fourth action parameter;
and, when the third position lies outside the display area where the enlarged first content is located, restoring the display scale of the first content to the original scale.
7. An information processing apparatus, characterized in that the apparatus comprises:
a display unit, configured to display first content in a display area of an electronic device, wherein the first content is the content displayed in an area where a position point is located, the position point being formed by a viewpoint of a user's eyes falling on the display area when the eyes gaze at the display area;
a first acquisition unit, configured to acquire a first action parameter of the user's eyes;
and a control unit, configured to determine, when the first action parameter indicates that the eyes have performed a preset action, a preset enlargement ratio corresponding to the first action parameter, enlarge the first content at the preset enlargement ratio, and trigger the display unit to display the enlarged first content, wherein the preset action is set to a squinting action;
wherein the manner of determining the area where the position point is located includes at least one of the following:
taking the position point as a center point and extending a preset distance outward from the center point to obtain a region with a specific shape, wherein the region with the specific shape is the area where the position point is located;
and taking the position point as an edge point and extending a preset distance in directions other than that of the edge point to obtain a region with a specific shape, wherein the region with the specific shape is the area where the position point is located.
8. The apparatus according to claim 7, wherein the control unit is configured to, when the first action parameter indicates that the eyes have performed a preset action, extract an operable object in the first content, enlarge the operable object, and trigger the display unit to display the enlarged operable object;
or
the control unit is configured to acquire a second action parameter of the eyes, determine a first position of a viewpoint of the eyes in the display area based on the second action parameter, and trigger the display unit to display the enlarged first content with the first position as a reference point;
or
the control unit is configured to move the enlarged first content to a central area of the display area and trigger the display unit to display the enlarged first content in the central area.
9. The apparatus according to claim 7 or 8, wherein the control unit is configured to enlarge the first content at a preset enlargement ratio corresponding to the first action parameter.
10. The apparatus of claim 7, further comprising:
a second acquisition unit, configured to acquire a second position of a first operating body in the display area;
a first determination unit, configured to determine a first operation object from a plurality of operable objects in the first content based on the second position;
and an execution unit, configured to execute a first instruction corresponding to the first operation object;
the control unit being further configured to restore the display scale of the first content to its original scale and to trigger the display unit to display, in the display area and at the original scale, second content corresponding to the first instruction.
11. The apparatus of claim 10, further comprising:
a third acquisition unit, configured to acquire a third action parameter of the eyes;
and a second determination unit, configured to determine an enlargement ratio when the third action parameter indicates that the eyes have performed the preset action;
the control unit being configured to enlarge the second content based on the enlargement ratio.
12. The apparatus of claim 7, further comprising:
a fourth acquisition unit, configured to acquire a fourth action parameter of the eyes;
and a third determination unit, configured to determine a third position of a viewpoint of the eyes in the display area based on the fourth action parameter;
the control unit being further configured to restore the display scale of the first content to the original scale when the third position lies outside the display area where the enlarged first content is located.
CN201610475104.7A 2016-06-24 2016-06-24 Information processing method and device Active CN106095112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610475104.7A CN106095112B (en) 2016-06-24 2016-06-24 Information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610475104.7A CN106095112B (en) 2016-06-24 2016-06-24 Information processing method and device

Publications (2)

Publication Number Publication Date
CN106095112A CN106095112A (en) 2016-11-09
CN106095112B true CN106095112B (en) 2020-06-23

Family

ID=57252830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610475104.7A Active CN106095112B (en) 2016-06-24 2016-06-24 Information processing method and device

Country Status (1)

Country Link
CN (1) CN106095112B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708257A (en) * 2016-11-23 2017-05-24 网易(杭州)网络有限公司 Game interaction method and device
CN106547360B (en) * 2016-11-29 2019-10-22 珠海格力电器股份有限公司 Information processing method and electronic equipment
CN106791135B (en) * 2016-12-29 2020-12-29 努比亚技术有限公司 Automatic local zooming display method and mobile terminal
CN110266881B (en) * 2019-06-18 2021-03-12 Oppo广东移动通信有限公司 Application control method and related product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945077A (en) * 2012-10-24 2013-02-27 广东欧珀移动通信有限公司 Image viewing method and device and intelligent terminal
CN103562841A (en) * 2011-05-31 2014-02-05 苹果公司 Devices, methods, and graphical user interfaces for document manipulation
CN103902174A (en) * 2012-12-26 2014-07-02 联想(北京)有限公司 Display method and equipment
CN104699249A (en) * 2015-03-27 2015-06-10 联想(北京)有限公司 Information processing method and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050033949A (en) * 2003-10-07 2005-04-14 삼성전자주식회사 Method for controlling auto zooming in the portable terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103562841A (en) * 2011-05-31 2014-02-05 苹果公司 Devices, methods, and graphical user interfaces for document manipulation
CN102945077A (en) * 2012-10-24 2013-02-27 广东欧珀移动通信有限公司 Image viewing method and device and intelligent terminal
CN103902174A (en) * 2012-12-26 2014-07-02 联想(北京)有限公司 Display method and equipment
CN104699249A (en) * 2015-03-27 2015-06-10 联想(北京)有限公司 Information processing method and electronic equipment

Also Published As

Publication number Publication date
CN106095112A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
US11604560B2 (en) Application association processing method and apparatus
CN106095112B (en) Information processing method and device
KR102024422B1 (en) Method for opening file in file folder and terminal
US20210056253A1 (en) Method and apparatus for generating image file
JP6036807B2 (en) Information processing apparatus, information processing method, and program
CN108255387B (en) Quick contrast interaction method for images of mobile terminal of touch screen
US20130016246A1 (en) Image processing device and electronic apparatus
WO2016169343A1 (en) Touch operation response method and apparatus based on wearable device
JP2018517984A (en) Apparatus and method for video zoom by selecting and tracking image regions
EP2813930A1 (en) Terminal reselection operation method and terminal
DE202008005344U1 (en) Electronic device with switchable user interface and electronic device with accessible touch operation
US20180107900A1 (en) Object detection device, object detection method, and recording medium
US20210240316A1 (en) Non-transitory computer-readable medium and device for book display
JP2015152939A (en) information processing apparatus, information processing method, and program
TW201235884A (en) Electronic apparatus with touch screen and associated displaying control method
JP2017515241A (en) Element deletion method and apparatus based on touch panel
JP2015088180A (en) Electronic apparatus, control method thereof, and control program
KR101610882B1 (en) Method and apparatus of controlling display, and computer program for executing the method
CN106648281B (en) Screenshot method and device
KR20150099221A (en) Method of providing user interface and flexible device for performing the same.
JP2012048358A (en) Browsing device, information processing method and program
KR20110092754A (en) Terminal for supporting operation for individual finger and method for operating the terminal
CN112286430B (en) Image processing method, apparatus, device and medium
US10474409B2 (en) Response control method and electronic device
JP6289655B2 (en) Screen operation apparatus and screen operation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant