CN106095375B - Display control method and device - Google Patents

Display control method and device

Info

Publication number
CN106095375B
CN106095375B (application CN201610483587.5A)
Authority
CN
China
Prior art keywords
user
determining
content
mode
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610483587.5A
Other languages
Chinese (zh)
Other versions
CN106095375A (en)
Inventor
宋建华
刘鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201610483587.5A
Publication of CN106095375A
Application granted
Publication of CN106095375B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1423: Controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F 3/1446: Display composed of modules, e.g. video walls
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/19: Sensors therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

Embodiments of the invention provide a display control method and a display control device. The method includes: determining a first focused region and a first unfocused region of the user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, the content of the first mode being more detailed than the content of the second mode; and outputting first content of the first mode to the first focused region and second content of the second mode to the first unfocused region. By determining the focused and unfocused regions of the user's eyes on the display interface and outputting content of different modes to each, the method and device make the content displayed in different regions match human visual perception.

Description

Display control method and device
Technical Field
The present invention relates to the field of electronics, and in particular, to a display control method and apparatus.
Background
Nowadays, more and more work requires processing large amounts of dynamic information in real time. A computer can process this information quickly and display it in real time on a high-definition large-screen display wall (for example, a display wall composed of multiple display screens), and a user can view the content shown on the screens to obtain the corresponding information.
However, human vision has a limited range. For example, it comprises foveal vision and peripheral vision: a person can see content in the foveal vision region clearly, but cannot make out the specific content displayed in the peripheral vision region.
The large-screen display walls in existing schemes do not take these human visual limitations into account when displaying content, so the displayed content does not match human visual perception.
Disclosure of Invention
In view of this, the present invention provides a display control method and apparatus, which can make the displayed content more suitable for human visual perception.
In one aspect, a display control method is provided, the method including:
determining a first focused region and a first unfocused region of the user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, the content of the first mode being more detailed than the content of the second mode;
outputting first content of the first mode to the first focused region, and outputting second content of the second mode to the first unfocused region.
Optionally, the content of the second mode indicates trends and/or changes in part of the information in the content of the first mode.
Optionally, the content displayed in each unfocused region of the display interface corresponds to one of a plurality of modes, the plurality of modes including the second mode, and after determining the first focused region and the first unfocused region of the user's eyes on the display interface, the method further comprises:
determining an angle between the direction from the user's eyes to the first unfocused region and the user's gaze direction;
and determining, from the plurality of modes according to the angle, the second mode corresponding to the content to be displayed in the first unfocused region.
Optionally, the content displayed in each unfocused region of the display interface corresponds to one of a plurality of modes, the plurality of modes including the second mode, and after determining a first focused region and a first unfocused region of the user's eyes on the display interface, the method further includes:
determining a distance between the first unfocused region and the first focused region;
and determining a second mode corresponding to the content to be displayed in the first unfocused area from the plurality of modes according to the distance.
Optionally, the method further comprises:
when the position of the user and/or the gaze direction of the user's eyes changes, determining a second focused region and a second unfocused region of the user's eyes on the display interface;
outputting third content of the first mode to the second focused region, and outputting fourth content of the second mode to the second unfocused region.
Optionally, determining a first in-focus region and a first out-of-focus region of the user's eye on the display interface comprises:
determining a user's eye position;
according to the eye position of the user, a first focusing area and a first non-focusing area of the user's eyes on the display interface are determined.
Optionally, determining the eye position of the user comprises:
acquiring a currently captured image of the user;
from the image, the eye position of the user is determined.
Optionally, determining the eye position of the user from the image comprises:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
and inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
Optionally, the eye position probability model is determined according to the following method:
obtaining a plurality of samples, each sample of the plurality of samples including a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples using a random sample consensus (RANSAC) algorithm and principal component analysis.
Optionally, the content of the second mode includes an image for representing a trend and/or change of a part of the information in the content of the first mode.
In another aspect, there is provided a display control apparatus including:
a determining unit, configured to determine a first focused region and a first unfocused region of the user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, the content of the first mode being more detailed than the content of the second mode; and
an output unit, configured to output first content of the first mode to the first focused region and output second content of the second mode to the first unfocused region.
Optionally, the content of the second mode is used to indicate a trend and/or change of part of the information in the content of the first mode.
Optionally, the content displayed in each unfocused region of the display interface corresponds to one of a plurality of modes, the plurality of modes including the second mode, and the determining unit is further configured to:
after determining the first focused region and the first unfocused region of the user's eyes on the display interface, determine the angle between the direction from the user's eyes to the first unfocused region and the user's gaze direction; and
determine, from the plurality of modes according to the angle, the second mode corresponding to the content to be displayed in the first unfocused region.
Optionally, the content displayed in each unfocused region of the display interface corresponds to one of a plurality of modes, the plurality of modes including the second mode, and the determining unit is further configured to:
after determining a first focused area and a first unfocused area of the user's eye on the display interface, determining a distance between the first unfocused area and the first focused area;
and determining a second mode corresponding to the content to be displayed in the first unfocused area from the plurality of modes according to the distance.
Optionally, the determining unit is further configured to determine a second focused region and a second unfocused region of the user's eyes on the display interface when the user's position and/or gaze direction changes;
the output unit is further configured to output third content of the first mode to the second focused region and output fourth content of the second mode to the second unfocused region.
Optionally, the determining unit is specifically configured to:
determining a user's eye position;
according to the eye position of the user, a first focusing area and a first non-focusing area of the user's eyes on the display interface are determined.
Optionally, the determining unit is specifically configured to:
acquiring a currently captured image of the user;
from the image, the eye position of the user is determined.
Optionally, the determining unit is specifically configured to:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
and inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
Optionally, the determining unit is further configured to:
obtaining a plurality of samples, each sample of the plurality of samples including a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples using a random sample consensus (RANSAC) algorithm and principal component analysis.
Optionally, the content of the second mode includes an image for representing a trend and/or change of a part of the information in the content of the first mode.
Based on the technical scheme, the display control method and the display control device in the embodiment of the invention can enable the contents displayed in different areas to conform to the visual perception of human beings by determining the focus area and the non-focus area of the eyes of the user on the display interface and outputting the contents in different modes to the focus area and the non-focus area.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a display control method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the visual characteristics of the human eye.
Fig. 3 is a schematic diagram of a display control method according to an embodiment of the present invention.
FIG. 4A is a diagram illustrating content displayed by a focus area according to an embodiment of the present invention.
Fig. 4B and 4C are schematic diagrams of contents displayed by the unfocused region according to an embodiment of the present invention.
Fig. 5 is a schematic flowchart of a display control method according to another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a display control apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and "third," etc. in the description and claims of this application and the accompanying drawings are used for distinguishing between different objects and not for describing a particular order.
In the embodiments of the present invention, "and/or" describes an association between objects and indicates that three relationships may exist: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" generally indicates an "or" relationship between the objects it separates.
In the embodiment of the present invention, the display interface may be a display area in one display screen (such as a large screen display), or may be a display area in a display wall formed by a plurality of display screens.
FIG. 1 is a schematic flow chart diagram of a display control method 100 according to an embodiment of the present invention. As shown in fig. 1, the method 100 includes the following.
110. Determine a first focused region and a first unfocused region of the user's eyes on the display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, the content of the first mode being more detailed than the content of the second mode.
It should be understood that the first focus area and the first unfocused area may be different display areas on the same display screen, or may be display areas on different display screens.
120. Output the first content of the first mode to the first focused region, and output the second content of the second mode to the first unfocused region.
Accordingly, the first focused region may display the first content of the first mode, and the first unfocused region may display the second content of the second mode.
The first mode may be referred to as a normal mode, and the second mode may be referred to as a reduced mode. Alternatively, the first mode may be referred to as a detailed mode, and the second mode may be referred to as a normal mode or a simplified mode.
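Steps 110 and 120 can be sketched as a simple dispatch over display regions. The following is a minimal, hypothetical Python sketch; the `Region` type, the `render` helper, and the lambda content provider are illustrative names, not part of the patent:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    FIRST = "detailed"   # more detailed content (e.g. full video, full text)
    SECOND = "reduced"   # less detailed content (e.g. trend color blocks)

@dataclass(frozen=True)
class Region:
    name: str

def render(display_regions, focused, content_for):
    """Steps 110/120: first-mode content goes to the focused region,
    second-mode content to every unfocused region."""
    return {
        region.name: content_for(region,
                                 Mode.FIRST if region == focused else Mode.SECOND)
        for region in display_regions
    }

regions = [Region("A"), Region("B"), Region("C")]
result = render(regions, regions[1], lambda r, m: f"{r.name}:{m.value}")
print(result)  # {'A': 'A:reduced', 'B': 'B:detailed', 'C': 'C:reduced'}
```

In a real system, `content_for` would produce the actual first-mode or second-mode content for each region rather than a label.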
Therefore, the display control method of the embodiment of the invention can make the contents displayed in different areas conform to the visual perception of human beings by determining the focus area and the non-focus area of the eyes of the user on the display interface and outputting the contents in different modes to the focus area and the non-focus area.
The focused region corresponds to a foveal vision region of the human eye and the unfocused region corresponds to a peripheral vision region of the human eye. The human eye can see clearly the content of the foveal vision area and cannot see clearly the content of the peripheral vision area. Therefore, in the embodiment of the present invention, by outputting the content of the second mode to the unfocused region, it is also possible to avoid displaying unnecessary content in the unfocused region, so that information overload can be avoided.
In some embodiments, the content of the second mode is used to indicate trends and/or changes in a portion of the information in the content of the first mode.
Although the human eye cannot see the content of the peripheral vision region clearly, it can perceive changes and movement within that region. For example, as shown in fig. 2, the human eye can see content clearly within a viewing angle of about 3 degrees, can read text within a viewing angle of about 6 degrees, and can perceive movement and brightness in the peripheral vision region.
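The viewing-angle characteristics above can be expressed as a small classifier. This is only a sketch; the thresholds are the approximate values quoted for fig. 2, and the zone names are illustrative:

```python
def vision_zone(angle_deg):
    """Classify how far a point lies from the gaze direction, using the
    approximate thresholds above: ~3 degrees for sharp vision and
    ~6 degrees for readable text (exact values vary per person)."""
    if angle_deg <= 3.0:
        return "foveal"       # content can be seen clearly
    if angle_deg <= 6.0:
        return "reading"      # text can still be read
    return "peripheral"       # only movement and brightness are perceived

print(vision_zone(2.0), vision_zone(5.0), vision_zone(30.0))
# foveal reading peripheral
```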
Therefore, by outputting the trend and/or change of the partial information in the content for representing the first mode to the unfocused region, the embodiment of the invention can timely deliver important information to the user.
In some embodiments, the content of the second mode includes an image representing a trend and/or change in a portion of the information in the content of the first mode.
For example, the image may be composed of color patches of various colors or brightnesses, with the trend and/or change represented by changes in the patches' brightness and/or color. The image may also be an enlarged display of part of the information in the first-mode content, with the trend and/or change represented by flickering or color changes. As shown in fig. 3, detailed content is displayed in the focused region, while an image composed of color patches indicating trends and/or changes is displayed in the unfocused region.
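Such a color-patch image could be generated as follows. This is a hypothetical sketch (the patent does not specify an encoding); here green/red channels encode direction of change and brightness encodes magnitude:

```python
import numpy as np

def trend_blocks(changes, block=32):
    """Render one color block per monitored item: green if the value is
    rising, red if falling, with brightness scaled by the magnitude of
    change. `changes` holds e.g. fractional changes, capped at +/-1."""
    img = np.zeros((block, block * len(changes), 3), dtype=np.uint8)
    for i, c in enumerate(changes):
        brightness = int(min(abs(c), 1.0) * 255)
        channel = 1 if c >= 0 else 0   # RGB: green = rising, red = falling
        img[:, i * block:(i + 1) * block, channel] = brightness
    return img

frame = trend_blocks([0.4, -0.9, 0.1])
print(frame.shape)  # (32, 96, 3)
```

An unfocused region would simply display the returned frame, re-rendered whenever the underlying values change.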
Outputting an image that represents the trend and/or change to the unfocused region makes the content displayed there more striking, which helps attract the user's attention, so the user can be promptly alerted to changes or trends in the unfocused region.
For example, suppose a display region shows the traffic conditions at an intersection. In the first mode, the traffic conditions may be a video surveillance feed of the intersection, from which passing vehicles can be clearly observed. In the second mode, the traffic conditions may be the intersection's congestion state represented by an image composed of color blocks of different colors: a red block may indicate that the intersection is congested, and a green block that traffic is flowing freely. If this display region is the user's current focused region, first-mode content is output to it and the user can clearly see the surveillance feed. If it is a current unfocused region, second-mode content is output to it, and the user senses changes in the displayed conditions through peripheral vision. When the block changes from green to red, the user knows the intersection has become congested and needs attention; the user then turns to look at the region, which becomes the current focused region, first-mode content is output to it, and the user can view the detailed conditions of the intersection for further handling.
It should be noted that the above examples are for the purpose of helping those skilled in the art better understand the embodiments of the present invention, and are not intended to limit the scope of the embodiments of the present invention. When the display control method in the embodiment of the present invention is applied to different scenes, the presentation forms of the content in the first mode and the content in the second mode may be changed correspondingly, which is not limited in the embodiment of the present invention.
It should be understood that the focus area in the embodiment of the present invention is not limited to the range of 3 degrees or 6 degrees of the human eye viewing angle, and the range of the human eye viewing angle corresponding to the focus area in the embodiment of the present invention may be determined according to actual requirements.
It should be noted that the mode corresponding to the content to be displayed in the focus area may be preset, so that after the focus area is determined, the content of the preset mode may be sent to the focus area.
In some embodiments, all of the displayed content in the unfocused region on the display interface other than the focused region may correspond to the same mode, e.g., the second mode. In this case, a mode corresponding to the content to be displayed in the unfocused region may be set in advance.
In other embodiments, the unfocused region outside the focused region on the display interface may be further divided, and the content displayed in different unfocused display regions may correspond to different modes, where the details of the content in different modes are different. In this case, after the focused region and the plurality of unfocused regions are determined, it is necessary to further determine a mode corresponding to the content to be displayed in each unfocused region.
It should also be appreciated that the closer the content is to the foveal vision region, the more clearly the eye sees it, and the farther it is, the more blurred it appears. Dividing the unfocused area into several distinct unfocused regions, each displaying content in a possibly different mode, therefore matches human visual perception even better.
Optionally, each unfocused region in the display interface displays content corresponding to one of a plurality of modes, including the second mode. The plurality of modes may also include a third mode, a fourth mode, and the like. The details of the content of different ones of the plurality of modes may vary in degree.
In some embodiments, after determining the first focused region and the first unfocused region of the user's eye on the display interface, the method 100 may further comprise:
determining an angle between a direction from the user's eyes to the first unfocused region and a direction of the user's gaze;
and determining a second mode corresponding to the content to be displayed in the first unfocused area from multiple modes according to the included angle.
That is, the mode corresponding to the content to be displayed in the first unfocused region may be determined from the angle: the larger the angle, the less detailed the displayed content.
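The angle-based selection can be sketched as follows. The angle computation is standard vector geometry; the mode thresholds are purely illustrative, since the patent does not specify values:

```python
import math

def angle_between(gaze, to_region):
    """Angle in degrees between the user's gaze direction and the
    direction from the user's eyes to an unfocused region."""
    dot = sum(g * t for g, t in zip(gaze, to_region))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in to_region))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (ng * nt)))))

def mode_for_angle(angle_deg, thresholds=(6.0, 20.0)):
    """Larger angle -> less detailed mode; mode 2 is the most detailed
    of the unfocused modes. Threshold values are illustrative."""
    for i, t in enumerate(thresholds):
        if angle_deg <= t:
            return i + 2          # second mode, third mode, ...
    return len(thresholds) + 2    # least detailed mode

print(round(angle_between((0, 0, -1), (0, 1, -1))))  # 45
```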
In some embodiments, the mode corresponding to the content displayed by the unfocused region may also be determined according to the distance between the unfocused region and the focused region. Accordingly, after determining the first in-focus region and the first out-of-focus region of the user's eye on the display interface, the method 100 may further comprise:
determining a distance between the first unfocused region and the first focused region;
and determining a second mode corresponding to the content to be displayed in the first unfocused region from a plurality of modes according to the distance.
That is, the mode corresponding to the content to be displayed in the first unfocused region may be determined from the distance: the greater the distance, the less detailed the displayed content.
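The distance-based selection follows the same pattern as the angle-based one: map the distance between region centers onto a mode index. The thresholds below are illustrative placeholders, in the same units as the region coordinates:

```python
import math

def mode_for_distance(distance, thresholds=(1.0, 2.0)):
    """Greater distance from the focused region -> less detailed mode.
    Mode 2 is the most detailed of the unfocused modes."""
    for i, t in enumerate(thresholds):
        if distance <= t:
            return i + 2          # second mode, third mode, ...
    return len(thresholds) + 2    # least detailed mode

# Distance between two region centers, then mode selection:
d = math.dist((0.0, 0.0), (3.0, 4.0))
print(d, mode_for_distance(d))  # 5.0 4
```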
Fig. 4A, 4B, and 4C are schematic diagrams of content displayed by a focused region and an unfocused region according to an embodiment of the present invention, taking an arbitrary display region in a stock quotation interface as an example. Fig. 4A shows the content displayed when the region is a focused region: detailed information such as the prices and trend curves of several stocks. Fig. 4B and 4C show the content of two modes displayed when the region is an unfocused region at distances d1 and d2 (d1 < d2) from the focused region, respectively. Fig. 4B shows the content displayed at distance d1: a trend chart of each stock's price drawn with color blocks of different brightness or color (colors not shown in the figure); for example, a falling price may be represented by a red inverted triangle and a rising price by a green upright triangle. Fig. 4C shows the content displayed at distance d2: an image composed of color blocks of different brightness or color (colors not shown) indicating the overall movement of the different sectors among the stocks; for a rapidly moving sector, the blocks' brightness or color can change to alert the user. It should be understood that the unfocused content is described here as corresponding to two modes only as an example; it may also correspond to more than two modes.
Optionally, as shown in fig. 5, the method 100 may further include:
130. When the position of the user and/or the gaze direction of the user's eyes changes, determine a second focused region and a second unfocused region of the user's eyes on the display interface.
140. Output third content of the first mode to the second focused region, and output fourth content of the second mode to the second unfocused region.
The embodiment of the invention can thus track the user dynamically: when the user's position and/or gaze direction changes, the focused and unfocused regions on the display interface are adjusted in time, and content of the corresponding modes is output to each.
Optionally, before outputting the fourth content of the second mode to the second unfocused region, the method 100 may further include: and determining a second mode corresponding to the content to be displayed in the second unfocused region from the plurality of modes according to an included angle between the direction from the eyes of the user to the second unfocused region and the sight line direction of the user or the distance between the second unfocused region and the second focused region.
It should be understood that the mode corresponding to the content to be displayed in the second unfocused region determined according to the included angle and the distance may also be other modes in the plurality of modes.
Optionally, determining a first in-focus region and a first out-of-focus region of the user's eye on the display interface comprises:
determining a user's eye position;
according to the eye position of a user, a first focusing area and a first non-focusing area of the user's eyes on a display interface are determined.
It should be understood that the user's eye position refers to the position of the user's eyes relative to the display interface.
Specifically, the pupil position can be determined from the user's eye position, the user's gaze direction can be determined from the pupil position, and the focused and unfocused regions of the user's eyes on the display interface can then be determined from the gaze direction and the human visual range.
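The geometry of that last step can be sketched by intersecting the gaze ray with the display plane and sizing the focused region from the visual angle. This is a simplified model under assumptions not stated in the patent (a flat display in the z = 0 plane, the user at z > 0, a ~3 degree half-angle of sharp vision):

```python
import math

def gaze_point_on_display(eye_pos, gaze_dir):
    """Intersect the gaze ray with the display plane.
    Assumption: the display lies in the z = 0 plane and the user's
    eyes are at z > 0, looking toward the display (dz < 0)."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz >= 0:
        raise ValueError("gaze does not point toward the display")
    t = -ez / dz                      # ray parameter where z reaches 0
    return (ex + t * dx, ey + t * dy)

def focus_radius(eye_pos, half_angle_deg=3.0):
    """Radius of the focused region on the display, from the viewing
    distance and the ~3 degree half-angle of sharp vision."""
    return eye_pos[2] * math.tan(math.radians(half_angle_deg))

center = gaze_point_on_display((0.0, 0.0, 2.0), (0.0, 0.0, -1.0))
print(center)  # (0.0, 0.0)
```

Display regions whose centers fall within `focus_radius` of `center` would be treated as focused; the rest as unfocused.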
Optionally, determining the eye position of the user comprises:
acquiring a currently captured image of the user;
from the image, the eye position of the user is determined.
Optionally, determining the eye position of the user from the image comprises:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
and inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
In the embodiment of the invention, the eye position probability model can be stored in advance, and the head position and the face direction of the user are input into the eye position probability model, so that the eye position of the user can be determined, and the processing process is simplified.
In some implementations, face recognition techniques may also be employed to determine the user's eye position from the image.
Alternatively, the eye position probability model may be determined according to the following method:
obtaining a plurality of samples, each sample of the plurality of samples including a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples by using a random sample consensus (RANSAC) algorithm and principal component analysis.
Specifically, the eye position probability model may be built by principal component analysis combined with a RANSAC algorithm and physical constraints, where the physical constraints may include the visual range of the human eye.
It should be understood that other algorithms in the prior art may be used to establish the eye position probability model, and the embodiment of the present invention is not limited thereto.
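As a rough, non-authoritative sketch of the model-building step above, the code below fits a linear mapping from (head position, face direction) to eye position on synthetic samples, using a small RANSAC loop to reject outlier samples and an SVD-based principal component projection of the features. The synthetic data, the linear model form, and all thresholds are assumptions for illustration only; the embodiment does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples: each sample is (head position xyz, face direction xyz,
# eye position xyz). The generating relation below is an assumption.
n = 200
head = rng.uniform(-1, 1, (n, 3))
face = rng.normal(size=(n, 3))
face /= np.linalg.norm(face, axis=1, keepdims=True)
eye = head + 0.1 * face + rng.normal(scale=0.005, size=(n, 3))
eye[:20] += rng.uniform(1, 2, (20, 3))   # corrupted samples for RANSAC to reject

X = np.hstack([head, face])              # features: head position + face direction

def pca_project(X, k=6):
    """Principal component basis of the features via SVD (k=6 keeps all
    components, i.e. a lossless rotation; lower k reduces dimensionality)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def fit_linear(Xr, Y):
    """Least-squares linear map (with bias) from projected features to eye position."""
    A = np.hstack([Xr, np.ones((len(Xr), 1))])
    W, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return W

def predict(X, mean, comps, W):
    Xr = (X - mean) @ comps.T
    return np.hstack([Xr, np.ones((len(Xr), 1))]) @ W

def ransac_fit(X, Y, iters=100, sample=20, thresh=0.05):
    """Repeatedly fit on a random subset and keep the model with most inliers."""
    best_model, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(X), size=sample, replace=False)
        mean, comps = pca_project(X[idx])
        W = fit_linear((X[idx] - mean) @ comps.T, Y[idx])
        err = np.linalg.norm(predict(X, mean, comps, W) - Y, axis=1)
        count = int((err < thresh).sum())
        if count > best_count:
            best_model, best_count = (mean, comps, W), count
    return best_model

model = ransac_fit(X, eye)
# Mean prediction error on the clean (non-corrupted) samples.
inlier_err = np.linalg.norm(predict(X[20:], *model) - eye[20:], axis=1).mean()
```

The RANSAC loop makes the fit robust to the corrupted samples, which a plain least-squares fit over all 200 samples would absorb into the model; the physical constraints mentioned in the embodiment (such as the human visual range) could be added as an extra validity check on each candidate model.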
Therefore, the display control method of the embodiment of the present invention determines the focused region and the unfocused region of the user's eyes on the display interface and outputs content in different modes to the two regions, so that the content displayed in different regions conforms to human visual perception.
Fig. 6 is a schematic structural diagram of a display control apparatus 600 according to an embodiment of the present invention. As shown in fig. 6, the apparatus 600 may include a determination unit 610 and an output unit 620.
The determining unit 610 may be configured to determine a first focused region and a first unfocused region of the user's eyes on the display interface, where each region in the display interface displays content corresponding to a first mode or a second mode, and the content of the first mode has a higher level of detail than the content of the second mode.
The output unit 620 may be configured to output first content of a first mode to the first focused region and output second content of a second mode to the first unfocused region.
Therefore, the display control apparatus of the embodiment of the present invention determines the focused region and the unfocused region of the user's eyes on the display interface and outputs content in different modes to the two regions, so that the content displayed in different regions conforms to human visual perception.
Optionally, the content of the second mode is used to indicate a trend and/or change of part of the information in the content of the first mode.
Optionally, each unfocused region in the display interface displays content corresponding to one of a plurality of modes, including the second mode.
In some embodiments, determining unit 610 may be further configured to:
after determining the first focused region and the first unfocused region of the user's eyes on the display interface, determining an included angle between the direction from the user's eyes to the first unfocused region and the user's gaze direction;
and determining, from the plurality of modes according to the included angle, the second mode corresponding to the content to be displayed in the first unfocused region.
In some embodiments, determining unit 610 may be further configured to:
after determining the first focused region and the first unfocused region of the user's eyes on the display interface, determining a distance between the first unfocused region and the first focused region;
and determining, from the plurality of modes according to the distance, the second mode corresponding to the content to be displayed in the first unfocused region.
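A minimal sketch of such mode selection follows; the specific thresholds and mode names are illustrative assumptions, since the embodiment only requires that regions farther from the focused region (by included angle or by distance) use progressively less detailed modes.

```python
# Modes ordered from most to least detailed; the names are hypothetical.
MODES = ["full_detail", "summary_chart", "trend_arrow", "color_hint"]

def mode_by_angle(angle_deg):
    """Choose a mode for an unfocused region from the included angle (degrees)
    between the gaze direction and the eye-to-region direction."""
    if angle_deg < 10.0:
        return MODES[1]
    if angle_deg < 30.0:
        return MODES[2]
    return MODES[3]

def mode_by_distance(dist_px):
    """Choose a mode from the on-screen distance (pixels) between the
    unfocused region and the focused region."""
    if dist_px < 300:
        return MODES[1]
    if dist_px < 800:
        return MODES[2]
    return MODES[3]

# A region only 5 degrees off-gaze but far away on screen: the two criteria
# can disagree, and an implementation might combine them, e.g. by taking
# the coarser (less detailed) of the two modes.
combined = max(mode_by_angle(5.0), mode_by_distance(900), key=MODES.index)
```

Real thresholds would depend on the viewing distance and screen size, since a fixed pixel distance corresponds to different visual angles on different displays.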
Optionally, the determining unit 610 may be further configured to determine a second focused region and a second unfocused region of the user's eyes on the display interface when the user's position and/or gaze direction changes; the output unit 620 may be further configured to output third content of the first mode to the second focused region and output fourth content of the second mode to the second unfocused region.
Optionally, the determining unit 610 is specifically configured to:
determining the eye position of the user;
and determining, according to the eye position of the user, the first focused region and the first unfocused region of the user's eyes on the display interface.
Optionally, the determining unit 610 is specifically configured to:
acquiring a currently captured image of the user;
and determining the eye position of the user from the image.
Optionally, the determining unit 610 is specifically configured to:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
and inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
Optionally, the determining unit 610 is further configured to:
obtaining a plurality of samples, each sample of the plurality of samples including a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples by using a random sample consensus (RANSAC) algorithm and principal component analysis.
Optionally, the content of the second mode includes an image for representing a trend and/or change of a part of the information in the content of the first mode.
It should be understood that the display control apparatus 600 according to the embodiment of the present invention may correspond to the entity that executes the method in the method embodiments of the present invention, and that the above and other operations and/or functions of the units in the display control apparatus 600 implement the corresponding flows of the method 100 shown in fig. 1; details are not repeated here for brevity.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

1. A display control method comprising:
determining a first focusing area and a first non-focusing area of the eyes of a user on the same display screen, wherein the displayed content of each area corresponds to a first mode or a second mode, and the content of the first mode has a higher level of detail than the content of the second mode;
outputting first content of the first mode to the first focus area, and outputting second content of the second mode to the first non-focus area, wherein the content of the second mode includes an image representing a trend and/or a change of partial information in the content of the first mode.
2. The method of claim 1, wherein each unfocused region of the display screen displays content corresponding to one of a plurality of modes including the second mode,
after the determining the first in-focus region and the first out-of-focus region of the user's eye on the display screen, the method further comprises:
determining an angle between a direction from the user's eyes to the first unfocused region and a direction of the user's gaze;
and determining the second mode corresponding to the content to be displayed in the first unfocused region from the plurality of modes according to the included angle.
3. The method of claim 1, wherein each unfocused region of the display screen displays content corresponding to one of a plurality of modes, including the second mode,
after the determining the first in-focus region and the first out-of-focus region of the user's eye on the display screen, the method further comprises:
determining a distance between the first unfocused region and the first focused region;
and determining the second mode corresponding to the content to be displayed in the first unfocused area from the plurality of modes according to the distance.
4. The method of any of claims 1 to 3, further comprising:
determining a second focusing area and a second non-focusing area of the eyes of the user on the display screen under the condition that the position and/or the sight line direction of the eyes of the user are/is changed;
outputting third content of the first mode to the second focusing area, and outputting fourth content of the second mode to the second non-focusing area.
5. The method of any of claims 1-3, wherein the determining a first focused region and a first unfocused region of a user's eye on the display screen comprises:
determining an eye position of the user;
determining the first focused region and the first unfocused region of the user's eyes on the display screen according to the user's eye position.
6. The method of claim 5, wherein the determining the user's eye position comprises:
acquiring a currently shot image of the user;
determining the eye position of the user according to the image.
7. The method of claim 6, wherein said determining the user's eye position from the image comprises:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
8. The method of claim 7, wherein the eye position probability model is determined according to the following method:
obtaining a plurality of samples, each sample of the plurality of samples comprising a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples by using a random sample consensus (RANSAC) algorithm and principal component analysis.
9. A display control apparatus comprising:
a determining unit, configured to determine a first focus area and a first non-focus area of the user's eyes on the same display screen, wherein each area displays content corresponding to a first mode or a second mode, and the content of the first mode has a higher detail degree than the content of the second mode;
and an output unit, configured to output first content in the first mode to the first focused region, and output second content in the second mode to the first unfocused region, where the content in the second mode includes an image indicating a trend and/or a change of partial information in the content in the first mode.
10. The apparatus of claim 9, wherein each unfocused region of the display screen displays content corresponding to one of a plurality of modes including the second mode,
the determination unit is further configured to:
after determining the first focused region and the first unfocused region of the user's eyes on the display screen, determining an angle between a direction from the user's eyes to the first unfocused region and a direction of the user's line of sight;
and determining the second mode corresponding to the content to be displayed in the first unfocused region from the plurality of modes according to the included angle.
11. The apparatus of claim 9, wherein each unfocused region of the display screen displays content corresponding to one of a plurality of modes including the second mode,
the determination unit is further configured to:
after determining the first focused region and the first unfocused region of the user's eye on the display screen, determining a distance between the first unfocused region and the first focused region;
and determining the second mode corresponding to the content to be displayed in the first unfocused area from the plurality of modes according to the distance.
12. The apparatus of any one of claims 9 to 11,
the determination unit is further used for determining a second focusing area and a second non-focusing area of the current eyes of the user on the display screen under the condition that the position of the user and/or the sight line direction of the eyes are/is changed;
the output unit is further configured to output third content of the first mode to the second focusing area, and output fourth content of the second mode to the second non-focusing area.
13. The apparatus according to any one of claims 9 to 11, wherein the determining unit is specifically configured to:
determining an eye position of the user;
determining the first focused region and the first unfocused region of the user's eyes on the display screen according to the user's eye position.
14. The apparatus according to claim 13, wherein the determining unit is specifically configured to:
acquiring a currently shot image of the user;
determining the eye position of the user according to the image.
15. The apparatus according to claim 14, wherein the determining unit is specifically configured to:
determining the head position of the user and the face direction of the user according to the image;
acquiring an eye position probability model;
inputting the head position of the user and the face direction of the user into the eye position probability model, and determining the eye position of the user.
16. The apparatus of claim 15, wherein the determining unit is further configured to:
obtaining a plurality of samples, each sample of the plurality of samples comprising a head position, a face direction, and an eye position of a user;
and establishing the eye position probability model from the plurality of samples by using a random sample consensus (RANSAC) algorithm and principal component analysis.
CN201610483587.5A 2016-06-27 2016-06-27 Display control method and device Active CN106095375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610483587.5A CN106095375B (en) 2016-06-27 2016-06-27 Display control method and device


Publications (2)

Publication Number Publication Date
CN106095375A CN106095375A (en) 2016-11-09
CN106095375B true CN106095375B (en) 2021-07-16

Family

ID=57213700


Country Status (1)

Country Link
CN (1) CN106095375B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899766A (en) * 2017-03-13 2017-06-27 宇龙计算机通信科技(深圳)有限公司 A kind of safety instruction method and its device and mobile terminal
CN106959759B (en) * 2017-03-31 2020-09-25 联想(北京)有限公司 Data processing method and device
CN109241958A (en) * 2018-11-28 2019-01-18 同欣医疗咨询(天津)有限公司 Myopia prevention device, device and method

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2002079962A2 (en) * 2001-03-28 2002-10-10 Koninklijke Philips Electronics N.V. Method and apparatus for eye gazing smart display
CN103430136A (en) * 2007-06-25 2013-12-04 微软公司 Graphical tile-based expansion cell guide
CN105408838A (en) * 2013-08-09 2016-03-16 辉达公司 Dynamic GPU feature adjustment based on user-observed screen area

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN103376104B (en) * 2012-04-24 2016-08-10 昆达电脑科技(昆山)有限公司 The method producing divided frame according to touch control gesture
CN104484043A (en) * 2014-12-25 2015-04-01 广东欧珀移动通信有限公司 Screen brightness regulation method and device
CN104951808B (en) * 2015-07-10 2018-04-27 电子科技大学 A kind of 3D direction of visual lines methods of estimation for robot interactive object detection


Also Published As

Publication number Publication date
CN106095375A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
US10129520B2 (en) Apparatus and method for a dynamic “region of interest” in a display system
Zhao et al. Foresee: A customizable head-mounted vision enhancement system for people with low vision
US11024083B2 (en) Server, user terminal device, and control method therefor
EP3035681B1 (en) Image processing method and apparatus
US20150109507A1 (en) Image Presentation Method and Apparatus, and Terminal
CN108259883B (en) Image processing method, head-mounted display, and readable storage medium
EP4026318A1 (en) Intelligent stylus beam and assisted probabilistic input to element mapping in 2d and 3d graphical user interfaces
US11659158B1 (en) Frustum change in projection stereo rendering
CN107744451B (en) Training device for binocular vision function
CN106095375B (en) Display control method and device
WO2019104548A1 (en) Image display method, smart glasses and storage medium
CN106095106A (en) Virtual reality terminal and display photocentre away from method of adjustment and device
CN110433062B (en) Visual function training system based on dynamic video images
WO2019131160A1 (en) Information processing device, information processing method, and recording medium
CN113903210A (en) Virtual reality simulation driving method, device, equipment and storage medium
JP2023090721A (en) Image display device, program for image display, and image display method
CN111857336B (en) Head-mounted device, rendering method thereof, and storage medium
US20130050448A1 (en) Method, circuitry and system for better integrating multiview-based 3d display technology with the human visual system
Orlosky et al. The role of focus in advanced visual interfaces
CN113960788A (en) Image display method, image display device, AR glasses, and storage medium
KR101735997B1 (en) Image extraction method for depth-fusion
Orlosky Depth based interaction and field of view manipulation for augmented reality
WO2017051915A1 (en) Visual simulation device
WO2021260368A1 (en) Visual assistance
CN115877573A (en) Display method, head-mounted display device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant