CN106095375A - Display control method and device - Google Patents

Display control method and device

Info

Publication number
CN106095375A
CN106095375A · Application CN201610483587.5A
Authority
CN
China
Prior art keywords
user
content
pattern
focusing region
focal zone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610483587.5A
Other languages
Chinese (zh)
Other versions
CN106095375B (en)
Inventor
宋建华
刘鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610483587.5A priority Critical patent/CN106095375B/en
Publication of CN106095375A publication Critical patent/CN106095375A/en
Application granted granted Critical
Publication of CN106095375B publication Critical patent/CN106095375B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

Embodiments provide a display control method and device. The display control method includes: determining a first focus region and a first non-focus region of a user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of content in the first mode is higher than the level of detail of content in the second mode; and outputting first content in the first mode to the first focus region, and outputting second content in the second mode to the first non-focus region. By determining the focus region and non-focus region of the user's eyes on the display interface and outputting content in different modes to them, the display control method and device of the embodiments make the content displayed in different regions match human visual perception.

Description

Display control method and device
Technical field
The present invention relates to the field of electronic technology, and in particular to a display control method and device.
Background
More and more jobs require processing large amounts of dynamic information in real time. A computer can quickly process large amounts of information and display it in real time on a large high-definition display wall (for example, a display wall composed of multiple display screens), so that a user can watch the content shown on the screen to obtain the corresponding information.
However, human vision has certain limitations. For example, human vision includes a foveal vision part and a peripheral vision part: a person can clearly see the content in the foveal vision region, but cannot clearly see the specific content displayed in the peripheral vision region.
However, existing schemes do not take the limitations of human vision into account when displaying content on a large display wall, so the displayed content does not match human visual perception.
Summary of the invention
In view of this, the present invention provides a display control method and device that make displayed content better match human visual perception.
In one aspect, a display control method is provided, the method including:
determining a first focus region and a first non-focus region of a user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of content in the first mode is higher than the level of detail of content in the second mode;
outputting first content in the first mode to the first focus region, and outputting second content in the second mode to the first non-focus region.
Optionally, the content of the second mode is used to represent a trend and/or change of part of the information in the content of the first mode.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes include the second mode. After the first focus region and the first non-focus region of the user's eyes on the display interface are determined, the method further includes:
determining the angle between the direction from the user's eyes to the first non-focus region and the user's line-of-sight direction;
determining, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes include the second mode. After the first focus region and the first non-focus region of the user's eyes on the display interface are determined, the method further includes:
determining the distance between the first non-focus region and the first focus region;
determining, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
Optionally, the method further includes:
when the user's position and/or the line-of-sight direction of the user's eyes change, determining a second focus region and a second non-focus region of the user's eyes on the display interface;
outputting third content in the first mode to the second focus region, and outputting fourth content in the second mode to the second non-focus region.
Optionally, determining the first focus region and the first non-focus region of the user's eyes on the display interface includes:
determining the eye position of the user;
determining, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
Optionally, determining the eye position of the user includes:
obtaining a currently captured image of the user;
determining the eye position of the user according to the image.
Optionally, determining the eye position of the user according to the image includes:
determining the head position of the user and the face orientation of the user according to the image;
obtaining an eye position probability model;
inputting the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
Optionally, the eye position probability model is determined as follows:
obtaining multiple samples, where each of the multiple samples includes a head position, a face orientation, and an eye position of a user;
building the eye position probability model from the multiple samples using the RANSAC (random sample consensus) algorithm and principal component analysis.
Optionally, the content of the second mode includes an image used to represent a trend and/or change of part of the information in the content of the first mode.
In another aspect, a display control device is provided, the device including:
a determining unit, configured to determine a first focus region and a first non-focus region of a user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of content in the first mode is higher than the level of detail of content in the second mode;
an output unit, configured to output first content in the first mode to the first focus region, and to output second content in the second mode to the first non-focus region.
Optionally, the content of the second mode is used to represent a trend and/or change of part of the information in the content of the first mode.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, the multiple modes include the second mode, and the determining unit is further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine the angle between the direction from the user's eyes to the first non-focus region and the user's line-of-sight direction;
determine, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, the multiple modes include the second mode, and the determining unit is further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine the distance between the first non-focus region and the first focus region;
determine, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
Optionally, the determining unit is further configured to, when the user's position and/or the line-of-sight direction of the user's eyes change, determine a second focus region and a second non-focus region of the user's eyes on the display interface;
the output unit is further configured to output third content in the first mode to the second focus region, and to output fourth content in the second mode to the second non-focus region.
Optionally, the determining unit is specifically configured to:
determine the eye position of the user;
determine, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
Optionally, the determining unit is specifically configured to:
obtain a currently captured image of the user;
determine the eye position of the user according to the image.
Optionally, the determining unit is specifically configured to:
determine the head position of the user and the face orientation of the user according to the image;
obtain an eye position probability model;
input the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
Optionally, the determining unit is further configured to:
obtain multiple samples, where each of the multiple samples includes a head position, a face orientation, and an eye position of a user;
build the eye position probability model from the multiple samples using the RANSAC algorithm and principal component analysis.
Optionally, the content of the second mode includes an image used to represent a trend and/or change of part of the information in the content of the first mode.
Based on the above technical solutions, the display control method and device of the embodiments of the present invention determine the focus region and non-focus region of the user's eyes on the display interface and output content in different modes to the focus region and non-focus region, so that the content displayed in different regions matches human visual perception.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the embodiments are briefly described below. Clearly, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a display control method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of human visual characteristics.
Fig. 3 is a schematic diagram of a display control method according to an embodiment of the present invention.
Fig. 4A is a schematic diagram of content displayed in a focus region according to an embodiment of the present invention.
Fig. 4B and 4C are schematic diagrams of content displayed in non-focus regions according to an embodiment of the present invention.
Fig. 5 is a schematic flowchart of a display control method according to another embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a display control device according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Clearly, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", and "third" in the specification, claims, and accompanying drawings of this application are used to distinguish different objects rather than to describe a particular order.
In the embodiments of the present invention, "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" generally indicates an "or" relationship between the associated objects.
In the embodiments of the present invention, the display interface may be the display area of a single display screen (such as a large-screen display), or the display area of a display wall composed of multiple display screens.
Fig. 1 is a schematic flowchart of a display control method 100 according to an embodiment of the present invention. As shown in Fig. 1, the method 100 includes the following steps.
110: Determine a first focus region and a first non-focus region of a user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of content in the first mode is higher than the level of detail of content in the second mode.
It should be understood that the first focus region and the first non-focus region may be different display areas on the same display screen, or display areas on different display screens.
120: Output first content in the first mode to the first focus region, and output second content in the second mode to the first non-focus region.
Correspondingly, the first focus region may display the first content in the first mode, and the first non-focus region may display the second content in the second mode.
The first mode may be called a normal mode and the second mode a simplified mode; alternatively, the first mode may be called a detailed mode and the second mode a normal mode or a simplified mode.
Therefore, the display control method of this embodiment of the present invention determines the focus region and non-focus region of the user's eyes on the display interface and outputs content in different modes to the focus region and non-focus region, so that the content displayed in different regions matches human visual perception.
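A minimal, non-authoritative sketch of this per-frame control flow in Python is shown below: a region is rendered in the detailed first mode when it contains the user's gaze point and in the simplified second mode otherwise. All names (Region, update_display, and the render callbacks) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Region:
    x: float                                   # left edge on the display interface
    y: float                                   # top edge
    width: float
    height: float
    render_detailed: Callable[[], None]        # first-mode (detailed) content
    render_simplified: Callable[[], None]      # second-mode (simplified) content

    def contains(self, point: Tuple[float, float]) -> bool:
        px, py = point
        return self.x <= px < self.x + self.width and self.y <= py < self.y + self.height

def update_display(regions: List[Region], gaze_point: Tuple[float, float]) -> None:
    """Output first-mode content to the focus region and second-mode content to the other regions."""
    for region in regions:
        if region.contains(gaze_point):
            region.render_detailed()
        else:
            region.render_simplified()
```

Calling update_display again whenever the user's position or line-of-sight direction changes also covers the dynamic re-determination described later (steps 130 and 140).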
The focus region corresponds to the foveal vision region of the human eye, and the non-focus region corresponds to the peripheral vision region of the human eye. The human eye can clearly see the content in the foveal vision region but cannot clearly see the content in the peripheral vision region. Therefore, in this embodiment of the present invention, outputting content in the second mode to the non-focus region also avoids displaying unnecessary content there, thereby avoiding information overload.
In some embodiments, the content of the second mode represents a trend and/or change of part of the information in the content of the first mode.
Although the human eye cannot clearly see the content in the peripheral vision region, it can perceive changes and movement there. For example, according to the human visual characteristics shown in Fig. 2, the human eye can clearly see content within a field of view of about 3 degrees, can read text within a field of view of about 6 degrees, and can perceive motion and brightness changes in the peripheral vision region.
Therefore, in the embodiments of the present invention, outputting to the non-focus region content that represents a trend and/or change of part of the information in the content of the first mode makes it possible to convey important information to the user in time.
In some embodiments, the content of the second mode includes an image used to represent a trend and/or change of part of the information in the content of the first mode.
For example, the image may be composed of color blocks of different colors or brightness, and the trend and/or change may be represented by changes in the brightness and/or color of the color blocks. The image may also be an enlarged display of part of the information in the content of the first mode, whose trend and/or change may be represented by blinking or color changes. As shown in Fig. 3, detailed content is displayed in the focus region, and an image composed of color blocks representing a trend and/or change is displayed in the non-focus region.
By outputting an image representing a trend and/or change to the non-focus region, the content displayed in the non-focus region becomes more eye-catching, which helps attract the user's attention and thus promptly alerts the user to changes or trends occurring in the non-focus region. A simple way to derive such a color block from first-mode data is sketched below.
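The following Python sketch is one assumed (not patent-specified) way to compute such a block: the hue encodes the direction of change of a tracked value, and the brightness encodes how quickly it is changing.

```python
def trend_color_block(values: list[float]) -> tuple[int, int, int]:
    """Summarize the trend of a first-mode value series as a single (R, G, B) color block."""
    if len(values) < 2:
        return (128, 128, 128)                # grey: not enough data to show a trend
    change = values[-1] - values[0]
    # Map the relative magnitude of the change to a brightness level in [64, 255].
    magnitude = min(abs(change) / (abs(values[0]) + 1e-9), 1.0)
    brightness = int(64 + 191 * magnitude)
    if change > 0:
        return (0, brightness, 0)             # green block: the value is rising
    if change < 0:
        return (brightness, 0, 0)             # red block: the value is falling
    return (128, 128, 128)
```

A non-focus region could then be filled with this single color instead of the detailed first-mode rendering, so that only the direction and speed of change remain visible to peripheral vision.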
For example, if a display area shows the traffic conditions at an intersection, the traffic content in the first mode may be the video surveillance feed of that intersection, from which vehicles passing through the intersection can be clearly observed; the traffic content in the second mode may be an image composed of color blocks of different colors that represents the congestion state of the intersection, where a red color block indicates that the intersection is congested and a green color block indicates that it is clear. If this display area is the user's current focus region, the traffic content in the first mode is output to it, and the user can clearly see the video surveillance feed of the intersection. If this display area is one of the user's current non-focus regions, the traffic content in the second mode is output to it, and the user can perceive changes in the traffic conditions shown there with peripheral vision. When the color block shown in this display area changes from green to red, the user learns that the intersection has become congested and needs attention; the user then turns to look at this display area, which becomes the user's current focus region, the traffic content in the first mode is output to it, and the user can view the detailed conditions at the intersection for further handling.
It should be noted that the above example is intended to help those skilled in the art better understand the embodiments of the present invention, not to limit their scope. When the display control method of the embodiments of the present invention is applied in different scenarios, the forms of the content in the first mode and the content in the second mode may change accordingly, which is not limited by the embodiments of the present invention.
It should be understood that the focus region in the embodiments of the present invention is not limited to a 3-degree or 6-degree visual angle of the human eye; the visual angle range corresponding to the focus region may be determined according to actual needs.
It should be noted that the mode corresponding to the content to be displayed in the focus region may be preset, so that once the focus region is determined, content in that preset mode can be sent to it.
In some embodiments, the content displayed in all non-focus regions outside the focus region on the display interface may correspond to the same mode, for example the second mode. In this case, the mode corresponding to the content to be displayed in the non-focus regions may be preset.
In other embodiments, the non-focus area outside the focus region on the display interface may also be divided, and the content displayed in different non-focus display regions may correspond to different modes with different levels of detail. In this case, after the focus region and multiple non-focus regions are determined, the mode corresponding to the content to be displayed in each non-focus region also needs to be determined.
It should also be understood that the closer content is to the foveal vision region, the more clearly the human eye sees it, and the farther it is from the foveal vision region, the more blurred it appears. Therefore, in the embodiments of the present invention, the non-focus area is divided into multiple different non-focus regions, and the modes corresponding to the content displayed in different non-focus display regions may also differ, which better matches human visual perception.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes include the second mode. The multiple modes may also include a third mode, a fourth mode, and so on, and the levels of detail of the content of the different modes may differ.
In some embodiments, after the first focus region and the first non-focus region of the user's eyes on the display interface are determined, the method 100 may further include:
determining the angle between the direction from the user's eyes to the first non-focus region and the user's line-of-sight direction;
determining, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
That is, the mode corresponding to the content to be displayed in the first non-focus region may be determined according to this angle: the larger the angle, the lower the level of detail of the content displayed in the first non-focus region.
In some embodiments, the mode corresponding to the content displayed in a non-focus region may also be determined according to the distance between the non-focus region and the focus region. Correspondingly, after the first focus region and the first non-focus region of the user's eyes on the display interface are determined, the method 100 may further include:
determining the distance between the first non-focus region and the first focus region;
determining, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
That is, the mode corresponding to the content to be displayed in the first non-focus region may be determined according to this distance: the larger the distance, the lower the level of detail of the content displayed in the first non-focus region. A sketch of both selection rules follows.
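The following Python sketch illustrates these two selection rules under assumed thresholds; the angle and distance cut-offs and the mode names are examples only, not values given in the patent.

```python
MODES_BY_DETAIL = ["second_mode", "third_mode", "fourth_mode"]  # decreasing level of detail

def mode_for_angle(angle_deg: float) -> str:
    """The larger the angle from the line of sight to a non-focus region, the less detailed the mode."""
    if angle_deg <= 10.0:
        return MODES_BY_DETAIL[0]
    if angle_deg <= 30.0:
        return MODES_BY_DETAIL[1]
    return MODES_BY_DETAIL[2]

def mode_for_distance(distance: float, display_width: float) -> str:
    """The farther a non-focus region is from the focus region, the less detailed the mode."""
    ratio = distance / display_width
    if ratio <= 0.25:
        return MODES_BY_DETAIL[0]
    if ratio <= 0.5:
        return MODES_BY_DETAIL[1]
    return MODES_BY_DETAIL[2]
```

Either rule (or a combination of both) can be evaluated once per non-focus region after the focus region has been determined.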
Fig. 4A, 4B and 4C are schematic diagrams of the content displayed in a focus region and in non-focus regions according to embodiments of the present invention. Taking any display area on a stock-market display interface as an example, Fig. 4A shows the content displayed when this display area is a focus region: the prices, trend curves, and other details of multiple stocks may be displayed. Fig. 4B and 4C show the content displayed in different modes when this display area is a non-focus region and its distance from the focus region is d1 and d2 respectively (d1 < d2). Fig. 4B shows the content displayed when this display area is at distance d1 from the focus region: the price trend of each of the multiple stocks may be shown with color blocks of different brightness or colors (colors not shown in the figure); for example, a red inverted-triangle block may indicate that a stock price is falling, and a green upright-triangle block may indicate that a stock price is rising. Fig. 4C shows the content displayed when this non-focus region is at distance d2 from the focus region: the overall rise-and-fall trend of the different sectors among the multiple stocks may be shown with an image composed of color blocks of different brightness or colors (colors not shown in the figure), and sectors that are rising or falling quickly may be indicated with changes in the brightness or color of the blocks to alert the user. It should be understood that the above merely takes two modes as an example; the content displayed in non-focus regions may also correspond to more than two modes.
Optionally, as shown in Fig. 5, the method 100 may further include:
130: When the user's position and/or the line-of-sight direction of the user's eyes change, determine a second focus region and a second non-focus region of the user's eyes on the display interface;
140: Output third content in the first mode to the second focus region, and output fourth content in the second mode to the second non-focus region.
The embodiments of the present invention can dynamically track the user and, according to changes in the user's position and/or line-of-sight direction, promptly adjust the focus region and non-focus region of the user's eyes on the display interface and output content in the corresponding mode to the focus region and the non-focus region, respectively.
Optionally, before the fourth content in the second mode is output to the second non-focus region, the method 100 may further include: determining, from the multiple modes, the second mode corresponding to the content to be displayed in the second non-focus region according to the angle between the direction from the user's eyes to the second non-focus region and the user's line-of-sight direction, or according to the distance between the second non-focus region and the second focus region.
It should be understood that the mode determined in this way for the content to be displayed in the second non-focus region may also be another mode among the multiple modes.
Optionally, determining the first focus region and the first non-focus region of the user's eyes on the display interface includes:
determining the eye position of the user;
determining, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
It should be understood that the eye position of the user refers to the position of the user's eyes relative to the display interface.
Specifically, the pupil positions can be determined from the eye position of the user, and the user's line-of-sight direction can then be determined; the focus region and non-focus region of the user's eyes on the display interface can then be determined from the user's line-of-sight direction and the visual range of the human eye.
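A minimal geometric sketch of this step is given below, assuming the display lies in the plane z = 0 and the eye position and gaze direction are expressed in the same coordinate system; the function name and the 3-degree default visual angle are illustrative, not values fixed by the patent.

```python
import math

def focus_region(eye_pos, gaze_dir, visual_angle_deg=3.0):
    """Return ((cx, cy), radius): the gaze point on the display plane and the focus-region radius."""
    ex, ey, ez = eye_pos          # eye position, with the display in the plane z = 0 and ez > 0
    dx, dy, dz = gaze_dir         # gaze direction; dz must point toward the display (dz < 0)
    if dz >= 0:
        raise ValueError("gaze direction must point toward the display plane")
    t = -ez / dz                  # ray parameter where the gaze ray intersects z = 0
    cx, cy = ex + t * dx, ey + t * dy
    # Approximate the focus-region radius from the viewing distance and the chosen visual angle.
    radius = ez * math.tan(math.radians(visual_angle_deg / 2.0))
    return (cx, cy), radius
```

Display regions whose area overlaps the returned circle can then be treated as the focus region, and the remaining regions as non-focus regions.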
Optionally, determining the eye position of the user includes:
obtaining a currently captured image of the user;
determining the eye position of the user according to the image.
Optionally, determining the eye position of the user according to the image includes:
determining the head position of the user and the face orientation of the user according to the image;
obtaining an eye position probability model;
inputting the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
In this embodiment of the present invention, the eye position probability model may be stored in advance; the eye position of the user can then be determined simply by inputting the user's head position and face orientation into the model, which helps simplify processing.
In some implementations, face recognition technology may also be used to determine the eye position of the user from the image.
Optionally, the eye position probability model may be determined as follows:
obtaining multiple samples, where each of the multiple samples includes a head position, a face orientation, and an eye position of a user;
building the eye position probability model from the multiple samples using the RANSAC algorithm and principal component analysis.
Specifically, the eye position probability model can be built using the RANSAC (random sample consensus) algorithm together with physical constraints and principal component analysis, where the physical constraints may include the visual range of the human eye.
It should be understood that other algorithms in the prior art may also be used to build the eye position probability model, which is not limited by the embodiments of the present invention.
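As a rough illustration (the patent names RANSAC and principal component analysis but does not specify an implementation), the following sketch fits such a model offline with scikit-learn, combining PCA with a RANSAC-based robust regression; the sample file names, feature layout, and number of components are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RANSACRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline

# Each sample pairs a head pose (position x, y, z and orientation yaw, pitch, roll)
# with the measured eye position (x, y, z); the file names below are hypothetical.
X = np.load("head_pose_samples.npy")      # shape (n_samples, 6)
y = np.load("eye_position_samples.npy")   # shape (n_samples, 3)

# PCA compresses the pose features; one RANSAC regressor per output coordinate
# keeps the fit robust to noisy or mislabeled samples.
model = make_pipeline(
    PCA(n_components=4),
    MultiOutputRegressor(RANSACRegressor()),
)
model.fit(X, y)

def predict_eye_position(head_position, face_orientation):
    """Predict an eye position from a head position and face orientation at run time."""
    features = np.concatenate([head_position, face_orientation]).reshape(1, -1)
    return model.predict(features)[0]
```

Physical constraints such as the visual range of the human eye could then be applied as a plausibility check on the predicted position.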
Therefore, the display control method of this embodiment of the present invention determines the focus region and non-focus region of the user's eyes on the display interface and outputs content in different modes to the focus region and non-focus region, so that the content displayed in different regions matches human visual perception.
Fig. 6 is a schematic structural diagram of a display control device 600 according to an embodiment of the present invention. As shown in Fig. 6, the device 600 may include a determining unit 610 and an output unit 620.
The determining unit 610 may be configured to determine a first focus region and a first non-focus region of a user's eyes on a display interface, where the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of content in the first mode is higher than the level of detail of content in the second mode.
The output unit 620 may be configured to output first content in the first mode to the first focus region, and to output second content in the second mode to the first non-focus region.
Therefore, the display control device of this embodiment of the present invention determines the focus region and non-focus region of the user's eyes on the display interface and outputs content in different modes to the focus region and non-focus region, so that the content displayed in different regions matches human visual perception.
Optionally, the content of the second mode is used to represent a trend and/or change of part of the information in the content of the first mode.
Optionally, the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes include the second mode.
In some embodiments, the determining unit 610 may be further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine the angle between the direction from the user's eyes to the first non-focus region and the user's line-of-sight direction;
determine, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
In some embodiments, the determining unit 610 may be further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine the distance between the first non-focus region and the first focus region;
determine, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
Optionally, the determining unit 610 may be further configured to, when the user's position and/or the line-of-sight direction of the user's eyes change, determine a second focus region and a second non-focus region of the user's eyes on the display interface; the output unit 620 may be further configured to output third content in the first mode to the second focus region, and to output fourth content in the second mode to the second non-focus region.
Optionally, the determining unit 610 is specifically configured to:
determine the eye position of the user;
determine, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
Optionally, the determining unit 610 is specifically configured to:
obtain a currently captured image of the user;
determine the eye position of the user according to the image.
Optionally, the determining unit 610 is specifically configured to:
determine the head position of the user and the face orientation of the user according to the image;
obtain an eye position probability model;
input the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
Optionally, the determining unit 610 is further configured to:
obtain multiple samples, where each of the multiple samples includes a head position, a face orientation, and an eye position of a user;
build the eye position probability model from the multiple samples using the RANSAC algorithm and principal component analysis.
Optionally, the content of the second mode includes an image used to represent a trend and/or change of part of the information in the content of the first mode.
It should be understood that the display control device 600 according to this embodiment of the present invention may correspond to the entity that performs the methods of the method embodiments of the present invention, and that the above and other operations and/or functions of the units in the display control device 600 are intended to implement the corresponding procedures of the method 100 shown in Fig. 1; for brevity, details are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into units is merely a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the shown or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (20)

1. A display control method, comprising:
determining a first focus region and a first non-focus region of a user's eyes on a display interface, wherein the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of the content of the first mode is higher than the level of detail of the content of the second mode;
outputting first content of the first mode to the first focus region, and outputting second content of the second mode to the first non-focus region.
2. The method according to claim 1, wherein the content of the second mode is used to represent a trend and/or change of part of the information in the content of the first mode.
3. The method according to claim 1 or 2, wherein the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes comprise the second mode,
and after the determining of the first focus region and the first non-focus region of the user's eyes on the display interface, the method further comprises:
determining an angle between the direction from the user's eyes to the first non-focus region and the line-of-sight direction of the user;
determining, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
4. The method according to claim 1 or 2, wherein the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes comprise the second mode,
and after the determining of the first focus region and the first non-focus region of the user's eyes on the display interface, the method further comprises:
determining a distance between the first non-focus region and the first focus region;
determining, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
5. The method according to any one of claims 1 to 4, further comprising:
when the position of the user and/or the line-of-sight direction of the user's eyes change, determining a second focus region and a second non-focus region of the current user's eyes on the display interface;
outputting third content of the first mode to the second focus region, and outputting fourth content of the second mode to the second non-focus region.
6. The method according to any one of claims 1 to 5, wherein the determining of the first focus region and the first non-focus region of the user's eyes on the display interface comprises:
determining an eye position of the user;
determining, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
7. The method according to claim 6, wherein the determining of the eye position of the user comprises:
obtaining a currently captured image of the user;
determining the eye position of the user according to the image.
8. The method according to claim 7, wherein the determining of the eye position of the user according to the image comprises:
determining a head position of the user and a face orientation of the user according to the image;
obtaining an eye position probability model;
inputting the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
9. The method according to claim 8, wherein the eye position probability model is determined as follows:
obtaining multiple samples, each of the multiple samples comprising a head position, a face orientation, and an eye position of a user;
building the eye position probability model from the multiple samples using a RANSAC algorithm and principal component analysis.
10. The method according to any one of claims 1 to 9, wherein the content of the second mode comprises an image used to represent a trend and/or change of part of the information in the content of the first mode.
11. A display control device, comprising:
a determining unit, configured to determine a first focus region and a first non-focus region of a user's eyes on a display interface, wherein the content displayed in each region of the display interface corresponds to a first mode or a second mode, and the level of detail of the content of the first mode is higher than the level of detail of the content of the second mode;
an output unit, configured to output first content of the first mode to the first focus region, and to output second content of the second mode to the first non-focus region.
12. The device according to claim 11, wherein the content of the second mode is used to represent a trend and/or change of part of the information in the content of the first mode.
13. The device according to claim 11 or 12, wherein the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes comprise the second mode,
and the determining unit is further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine an angle between the direction from the user's eyes to the first non-focus region and the line-of-sight direction of the user;
determine, according to the angle, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
14. The device according to claim 11 or 12, wherein the content displayed in each non-focus region of the display interface corresponds to one of multiple modes, and the multiple modes comprise the second mode,
and the determining unit is further configured to:
after determining the first focus region and the first non-focus region of the user's eyes on the display interface, determine a distance between the first non-focus region and the first focus region;
determine, according to the distance, the second mode corresponding to the content to be displayed in the first non-focus region from the multiple modes.
15. The device according to any one of claims 11 to 14, wherein
the determining unit is further configured to, when the position of the user and/or the line-of-sight direction of the user's eyes change, determine a second focus region and a second non-focus region of the current user's eyes on the display interface;
the output unit is further configured to output third content of the first mode to the second focus region, and to output fourth content of the second mode to the second non-focus region.
16. The device according to any one of claims 11 to 15, wherein the determining unit is specifically configured to:
determine an eye position of the user;
determine, according to the eye position of the user, the first focus region and the first non-focus region of the user's eyes on the display interface.
17. The device according to claim 16, wherein the determining unit is specifically configured to:
obtain a currently captured image of the user;
determine the eye position of the user according to the image.
18. The device according to claim 17, wherein the determining unit is specifically configured to:
determine a head position of the user and a face orientation of the user according to the image;
obtain an eye position probability model;
input the head position of the user and the face orientation of the user into the eye position probability model to determine the eye position of the user.
19. The device according to claim 18, wherein the determining unit is further configured to:
obtain multiple samples, each of the multiple samples comprising a head position, a face orientation, and an eye position of a user;
build the eye position probability model from the multiple samples using a RANSAC algorithm and principal component analysis.
20. The device according to any one of claims 11 to 19, wherein the content of the second mode comprises an image used to represent a trend and/or change of part of the information in the content of the first mode.
CN201610483587.5A 2016-06-27 2016-06-27 Display control method and device Active CN106095375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610483587.5A CN106095375B (en) 2016-06-27 2016-06-27 Display control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610483587.5A CN106095375B (en) 2016-06-27 2016-06-27 Display control method and device

Publications (2)

Publication Number Publication Date
CN106095375A true CN106095375A (en) 2016-11-09
CN106095375B CN106095375B (en) 2021-07-16

Family

ID=57213700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610483587.5A Active CN106095375B (en) 2016-06-27 2016-06-27 Display control method and device

Country Status (1)

Country Link
CN (1) CN106095375B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899766A (en) * 2017-03-13 2017-06-27 宇龙计算机通信科技(深圳)有限公司 A kind of safety instruction method and its device and mobile terminal
CN106959759A (en) * 2017-03-31 2017-07-18 联想(北京)有限公司 A kind of data processing method and device
CN109241958A (en) * 2018-11-28 2019-01-18 同欣医疗咨询(天津)有限公司 Myopia prevention device, device and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002079962A2 (en) * 2001-03-28 2002-10-10 Koninklijke Philips Electronics N.V. Method and apparatus for eye gazing smart display
CN103376104A (en) * 2012-04-24 2013-10-30 昆达电脑科技(昆山)有限公司 Method for generating split picture according to touch gesture
CN103430136A (en) * 2007-06-25 2013-12-04 微软公司 Graphical tile-based expansion cell guide
CN104484043A (en) * 2014-12-25 2015-04-01 广东欧珀移动通信有限公司 Screen brightness regulation method and device
CN104951808A (en) * 2015-07-10 2015-09-30 电子科技大学 3D (three-dimensional) sight direction estimation method for robot interaction object detection
CN105408838A (en) * 2013-08-09 2016-03-16 辉达公司 Dynamic GPU feature adjustment based on user-observed screen area

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002079962A2 (en) * 2001-03-28 2002-10-10 Koninklijke Philips Electronics N.V. Method and apparatus for eye gazing smart display
CN103430136A (en) * 2007-06-25 2013-12-04 微软公司 Graphical tile-based expansion cell guide
CN103376104A (en) * 2012-04-24 2013-10-30 昆达电脑科技(昆山)有限公司 Method for generating split picture according to touch gesture
CN105408838A (en) * 2013-08-09 2016-03-16 辉达公司 Dynamic GPU feature adjustment based on user-observed screen area
CN104484043A (en) * 2014-12-25 2015-04-01 广东欧珀移动通信有限公司 Screen brightness regulation method and device
CN104951808A (en) * 2015-07-10 2015-09-30 电子科技大学 3D (three-dimensional) sight direction estimation method for robot interaction object detection

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899766A (en) * 2017-03-13 2017-06-27 宇龙计算机通信科技(深圳)有限公司 A kind of safety instruction method and its device and mobile terminal
CN106959759A (en) * 2017-03-31 2017-07-18 联想(北京)有限公司 A kind of data processing method and device
CN106959759B (en) * 2017-03-31 2020-09-25 联想(北京)有限公司 Data processing method and device
CN109241958A (en) * 2018-11-28 2019-01-18 同欣医疗咨询(天津)有限公司 Myopia prevention device, device and method

Also Published As

Publication number Publication date
CN106095375B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US20200160818A1 (en) Systems and methods for head-mounted display adapted to human visual mechanism
US20180075797A1 (en) Display device, driving device, and method for driving the display device
CN109196574A (en) For reducing the method and apparatus of the near-sighted source property effect of electronic console
US20210074055A1 (en) Intelligent Stylus Beam and Assisted Probabilistic Input to Element Mapping in 2D and 3D Graphical User Interfaces
US20110218953A1 (en) Design of systems for improved human interaction
CN106095375A (en) Display control method and device
CN104112275A (en) Image segmentation method and device
CN107038738A (en) Object is shown using modified rendering parameter
US20170289529A1 (en) Anaglyph head mounted display
CN110442486A (en) A kind of remote device diagnostics system and method based on mixed reality technology
SG184465A1 (en) Liquid-crystal display device and three-dimensional display system
CN204964878U (en) Can eliminate head -mounted display apparatus of colour difference
CN110110778A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN103824540B (en) A kind of display methods and electronic equipment
CN104246863B (en) For the method that external display resolving power is selected
CN102186094B (en) Method and device for playing media files
CN106773042B (en) Composite display, display control method and wearable device
CN107688240A (en) Wear the control method, equipment and system of display device
CN102074025B (en) Image stylized drawing method and device
CN108345488A (en) The display methods and device at interface
US10558046B2 (en) Display system for virtual reality and method of driving the same
CN206470737U (en) Display system
CN107688241A (en) Wear the control method, equipment and system of display device
CN103871391A (en) Color display method and equipment
Orlosky et al. The role of focus in advanced visual interfaces

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant