CN111913561A - Display method and device based on eye state, display equipment and storage medium - Google Patents


Info

Publication number
CN111913561A
CN111913561A
Authority
CN
China
Prior art keywords
display
eye
state
user
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910377131.4A
Other languages
Chinese (zh)
Inventor
王稳
孙小霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN201910377131.4A
Publication of CN111913561A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention provides a display method and apparatus based on eye state, a display device, and a storage medium. The method comprises the following steps: acquiring current first eye state information of a user through image acquisition; acquiring second eye state information of the user's eyes in a natural state under the current illumination environment; comparing the first eye state information with the second eye state information to determine the current opening-and-closing state of the user's eyes; and zooming the display content according to the opening-and-closing state. In this way, the current opening-and-closing state of the user's eyes can be determined and the display content adaptively scaled on that basis, so that the display device accurately acquires the user's eye-use state and adaptively adjusts the display content, improving the user's reading comfort. Moreover, because the first eye state information is acquired from an image and compared with the second eye state information in the natural state, accuracy can be improved over distance-based processing.

Description

Display method and device based on eye state, display equipment and storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to a display method and apparatus based on an eye state, a display device, and a storage medium.
Background
With the popularization of electronic devices, more and more users use electronic devices to read, for example, to read characters and view images by using electronic books.
In the prior art, to protect the reader's eye comfort, the text size is adaptively adjusted according to the distance between the user and the electronic device. On the one hand, this approach is generally suitable only for reading text; on the other hand, scaling the text size based on distance alone cannot accurately satisfy the user's need for comfortable viewing.
Disclosure of Invention
In view of the above, embodiments of the present invention are intended to provide a display method and apparatus based on eye state, a display device, and a storage medium.
The technical scheme of the invention is realized as follows:
a display method based on eye state, comprising:
acquiring current first eye state information of a user through image acquisition;
acquiring second eye state information of the user when the eyes are in a natural state in the current illumination environment;
comparing the first eye state information with the second eye state information, and determining the current opening and closing state of the eyes of the user;
and zooming the display content according to the opening and closing state.
Based on the above scheme, the obtaining the current first eye state information of the user through image acquisition includes:
collecting face image information;
identifying the face image information and positioning the positions of human eyes;
detecting the position of the human eyes and determining the position of the outer boundary of the iris;
and determining the area ratio of the current first eye opening and closing state of the user according to the iris outer boundary position and the exposed iris area in the face image information.
Based on the above scheme, the comparing the first eye state information and the second eye state information to determine the current open/close state of the eyes of the user includes:
and comparing the area ratio of the first eye opening and closing state with the area ratio of the second eye opening and closing state in a natural state under the current illumination condition to obtain a comparison result.
Based on the above scheme, the zooming display content according to the opening and closing state includes at least one of the following:
if the comparison result shows that the user is currently in a squinting state, amplifying the display content;
if the comparison result shows that the user is in a glaring state at present, reducing the display content;
if the comparison result shows that the eyes of the user are currently in a natural state and the illumination intensity is lower than a preset light intensity, enlarging the display content;
and if the comparison result shows that the eyes of the user are in a natural state at present and the illumination intensity is equal to or higher than the preset light intensity, maintaining the display size of the display content.
Based on the above scheme, the zooming the display content according to the opening and closing state includes:
and when the duration of the current opening and closing state of the eyes of the user reaches a preset duration, zooming the display content according to the opening and closing state.
Based on the above scheme, the method further comprises:
if the display content has been adjusted to the maximum display size and the comparison result shows that the user's eyes are still maintained in the squinting state, determining that the user's eyes are in a fatigue state.
Based on the above scheme, the zooming the display content according to the opening and closing state includes:
acquiring a display boundary of the display content in a display interface;
determining a boundary distance of the display boundary reaching a page boundary of the display interface;
scaling the display content based on the boundary distance.
Based on the above scheme, the scaling the display content based on the boundary distance includes at least one of:
if a unique shortest boundary distance exists, scaling the display content in equal proportion with the midpoint of the display boundary corresponding to the shortest boundary distance as the scaling center;
if the boundary distances between the two display boundaries and the page boundary are equal, carrying out equal-ratio scaling on the display content according to the position relation between the display boundaries corresponding to the equal two boundary distances;
if the boundary distances between the three display boundaries and the page boundary are equal, performing equal-ratio scaling on the display content by taking the intersection point of the central lines of the three display boundaries as the scaling center;
and if the boundary distances between the four display boundaries and the page boundary are equal, performing equal-ratio zooming on the display content by taking the center of the display content as the zooming center.
Based on the above solution, if the boundary distances between two display boundaries and the page boundary are equal, the scaling of the display content in equal proportion according to the positional relationship between the display boundaries corresponding to the two equal boundary distances includes:
if the positional relationship between the display boundaries corresponding to the two equal boundary distances is an adjacent relationship, zooming the display content with the intersection point of those two display boundaries as the zooming center;
and if the positional relationship is an opposite-side relationship, zooming the display content with the intersection point formed by connecting the end points of those two display boundaries as the center point.
Based on the above scheme, the scaling the display content based on the boundary distance further includes:
after the geometric scaling is executed, determining whether the display content meets a scaling stop condition;
and if the scaling stop condition is not met, re-determining the boundary distance between the display boundary where the display content is located and the page boundary.
Based on the above scheme, the method further comprises:
determining whether the eyes of the user execute a preset eye action according to the first eye state information;
and if the eyes of the user execute the preset eye action, zooming the display content according to a zooming instruction corresponding to the preset eye action.
A display device based on eye state, comprising:
the acquisition module is used for acquiring the current first eye state information of the user through image acquisition;
the acquisition module is used for acquiring second eye state information when the eyes of the user are in a natural state in the current illumination environment;
the comparison module is used for comparing the first eye state information with the second eye state information and determining the current opening and closing state of the eyes of the user;
and the zooming module is used for zooming the display content according to the opening and closing state.
Based on the above scheme, the acquisition module includes:
the acquisition submodule is used for acquiring facial image information;
the recognition submodule is used for recognizing the face image information and positioning the position of the human eyes;
the detection submodule is used for detecting the position of the human eyes and determining the position of the outer boundary of the iris;
and the determining submodule is used for determining the area ratio of the current first eye opening and closing state of the user according to the position of the outer boundary of the iris and the area of the iris exposed in the face image information.
Based on the above scheme, the comparison module is specifically configured to compare the first eye opening and closing area ratio with a second eye opening and closing area ratio in a natural state under the current illumination condition, so as to obtain a comparison result.
Based on the foregoing solution, the scaling module is specifically configured to execute one of:
if the comparison result shows that the user is currently in a squinting state, amplifying the display content;
if the comparison result shows that the user is in a glaring state at present, reducing the display content;
if the comparison result shows that the eyes of the user are currently in a natural state and the illumination intensity is lower than a preset light intensity, enlarging the display content;
and if the comparison result shows that the eyes of the user are in a natural state at present and the illumination intensity is equal to or higher than the preset light intensity, maintaining the display size of the display content.
Based on the above scheme, the zooming module is specifically configured to zoom the display content according to the opening and closing state when the duration of the opening and closing state of the current eyes of the user reaches a preset duration.
Based on the above scheme, the apparatus further comprises:
the first determining module is used for determining that the eyes of the user are in a fatigue state if the display content is adjusted to the maximum display size and the comparison result shows that the eyes of the user are still maintained in a squinting state.
Based on the above scheme, the zooming module is specifically configured to obtain a display boundary of the display content in a display interface; determining a boundary distance of the display boundary reaching a page boundary of the display interface; scaling the display content based on the boundary distance.
Based on the foregoing solution, the scaling module is specifically configured to execute at least one of:
if a unique shortest boundary distance exists, scaling the display content in equal proportion with the midpoint of the display boundary corresponding to the shortest boundary distance as the scaling center;
if the boundary distances between the two display boundaries and the page boundary are equal, carrying out equal-ratio scaling on the display content according to the position relation between the display boundaries corresponding to the equal two boundary distances;
if the boundary distances between the three display boundaries and the page boundary are equal, performing equal-ratio scaling on the display content by taking the intersection point of the central lines of the three display boundaries as the scaling center;
and if the boundary distances between the four display boundaries and the page boundary are equal, performing equal-ratio zooming on the display content by taking the center of the display content as the zooming center.
Based on the above scheme, the scaling module is specifically configured to, if the position relationship between the display boundaries corresponding to the equal two boundary distances is an adjacent relationship, scale the display content by using an intersection point of the display boundaries corresponding to the equal two boundary distances as a scaling center; and if the position relation between the display boundaries corresponding to the two equal boundary distances is opposite side relation, zooming the display content by taking an intersection point formed by connecting the end points of the display boundaries corresponding to the two equal boundary distances as a central point.
Based on the above scheme, the scaling module is further configured to determine whether the display content meets a scaling stop condition after the geometric scaling is performed; and if the scaling stop condition is not met, re-determining the boundary distance between the display boundary where the display content is located and the page boundary.
Based on the above scheme, the apparatus further comprises:
a second determining module, configured to determine whether the user's eyes perform a predetermined eye action according to the first eye state information;
the zooming module is further configured to zoom the display content according to a zooming instruction corresponding to the predetermined eye movement if the user's eyes perform the predetermined eye movement.
A display device, comprising:
a display;
a memory;
and the processor is respectively connected with the display and the memory and is used for realizing any one of the display methods based on the eye state by executing the computer executable instructions stored in the memory so as to control the display of the display.
A computer storage medium having computer-executable instructions stored thereon; when executed, the computer-executable instructions implement any one of the above display methods based on eye state.
According to the technical solution provided by the embodiment of the invention, first eye state information of the user is acquired from an image and compared with second eye state information of the user's eyes in a natural state under the current illumination environment; the current opening-and-closing state of the user's eyes is determined, and the display content is adaptively scaled based on the opening-and-closing state, so that the display device can accurately acquire the user's eye-use state and adaptively adjust the display content to improve the user's reading comfort. Moreover, because the first eye state information is acquired from an image and compared with the second eye state information in the natural state, accuracy can be improved over distance-based processing.
Drawings
Fig. 1 is a schematic flowchart of a display method based on eye state according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating a process of determining an opening/closing state of a human eye according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a determination of an open/close state of a human eye based on an area ratio according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating scaling of graphics and text with a specific duration as the scaling trigger condition according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a method for determining eye fatigue according to an embodiment of the invention;
fig. 6 is a schematic diagram illustrating a comparison of the opening/closing state of the eyes, the light rays and the image-text zooming according to an embodiment of the present invention;
fig. 7 is a schematic diagram of scaling graphics based on a boundary distance according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a display device based on eye states according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating another display method based on eye state according to an embodiment of the present invention;
fig. 10 is a schematic flowchart of scaling graphics and text according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides a display method based on eye state, including:
step S110: acquiring current first eye state information of a user through image acquisition;
step S120: acquiring second eye state information of the user when the eyes are in a natural state in the current illumination environment;
step S130: comparing the first eye state information with the second eye state information, and determining the current opening and closing state of the eyes of the user;
step S140: and zooming the display content according to the opening and closing state.
The display method based on eye state provided by this embodiment can be applied to display devices, including but not limited to various display terminals, for example, fixed display terminals or mobile display terminals. Mobile display terminals include but are not limited to tablet computers, e-readers, and the like.
In this embodiment, the current first eye state information of the user is collected by an image collection module, such as a front camera built into the display device itself. During image collection, the image collection module may collect only eye images of the user, or may collect the user's whole face image.
When the first eye state information is acquired, second eye state information when the eyes are in a natural state in the current illumination environment is acquired.
The natural state is the eye state in which the user neither glares (opens the eyes wide) nor squints; it is the state in which the user's eyes feel comfortable.
And then comparing the first eye state information with the second eye state information to obtain the current opening and closing state of the eyes of the user.
Owing to the physiology of the eye, a user who finds an object strenuous to see will naturally squint, and a user who feels a visual impact from an object will naturally open the eyes wide. Therefore, in this embodiment, the opening-and-closing state can be accurately determined from the two pieces of eye state information for the current state and the natural state.
With the accuracy of the opening-and-closing state thus improved, the display content is zoomed according to the degree of eye opening, so that the user views it more comfortably and the inaccurate zooming caused by relying on distance alone is reduced.
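As an illustration of the comparison logic described above, the sketch below classifies the opening-and-closing state from the two area ratios. The function name and the ±10% tolerance band are assumptions made for illustration, not values taken from the disclosure.

```python
def classify_eye_state(first_ratio, natural_ratio, tolerance=0.1):
    """Classify the current opening-and-closing state by comparing the
    current (first) area ratio against the natural-state (second) area
    ratio. The +/-10% tolerance band is an assumed example value."""
    if first_ratio < natural_ratio * (1.0 - tolerance):
        return "squinting"   # eyes more closed than in the natural state
    if first_ratio > natural_ratio * (1.0 + tolerance):
        return "glaring"     # eyes opened wider than in the natural state
    return "natural"
```

For example, with a natural-state ratio of 0.80, a current ratio of 0.45 would be classified as squinting, while 0.95 would be classified as glaring.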
As shown in fig. 2, the step S110 may include:
step S111: collecting face image information;
step S112: identifying the face image information and positioning the positions of human eyes;
step S113: detecting the position of the human eyes and determining the position of the outer boundary of the iris;
step S114: and determining the area ratio of the current first eye opening and closing state of the user according to the iris outer boundary position and the exposed iris area in the face image information.
Because whole-face image information is collected in this embodiment, rather than eye images alone, the phenomenon of incomplete eye-image capture can be reduced.
In this embodiment, the face image information is recognized to locate the position of the human eyes; once that position is located, the eyes have been preliminarily positioned.
The position of the human eye is then detected to obtain the position of the outer boundary of the iris. In this embodiment, the iris outer boundary corresponds to the outer boundary of the eyeball.
Once the iris outer boundary position is determined, combining it with the iris area exposed in the face image yields the current degree of eye opening (i.e., the opening-and-closing degree).
Specifically, for example, the exposed iris area in the image is compared with the area of the complete iris determined from the iris outer boundary position, yielding the area ratio of the first eye opening-and-closing state. This area ratio ranges from 0 to 1.
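The area-ratio computation just described can be sketched as follows, assuming the exposed iris area is available as a pixel count and the outer iris boundary as a radius in pixels (both parameter names are hypothetical):

```python
import math

def eye_open_area_ratio(exposed_iris_pixels, iris_radius_pixels):
    """Area ratio of the first eye opening-and-closing state: the iris
    area exposed in the image divided by the area of the complete iris
    disc implied by the outer iris boundary radius."""
    full_iris_area = math.pi * iris_radius_pixels ** 2
    ratio = exposed_iris_pixels / full_iris_area
    return min(max(ratio, 0.0), 1.0)  # clamp to the documented 0-1 range
```

A fully exposed iris yields a ratio of 1; a half-covered iris yields 0.5.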
In step S112, the position of the human eye can be determined based on color comparison: since the color of the human eye differs from the color of the surrounding skin, this embodiment identifies the eye position from color information.
Specifically, for example, the position of the eye is preliminarily located by an edge detection algorithm. The color information of the image is then converted from the RGB space to the YUV space, and the skin and eye regions are accurately segmented according to the U color component in the YUV space, thereby locating the position of the eyes.
The preliminary location of the eye position by the edge detection algorithm may include:
acquiring the gray value of each pixel in the face image information;
determining the position with the gradient change larger than a preset value as a boundary based on the gradient change of the gray value;
and positioning the position of the eye according to the boundary.
Of course, the above is only an example of locating the position of the human eye, and the specific implementation is not limited to any one of the above.
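As a sketch of the color-space step above, the following computes the U chrominance component from RGB using the standard BT.601 analog-form coefficients; the segmentation threshold that would be applied to U is empirical and is not specified in the disclosure.

```python
def u_component(r, g, b):
    """U chrominance component (BT.601 analog form), used here to help
    separate skin from eye regions: U = 0.492 * (B - Y), where
    Y = 0.299R + 0.587G + 0.114B."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return 0.492 * (b - y)
```

For a gray pixel (equal R, G, B) the U component is zero, while strongly blue pixels yield large positive U values, which is what makes the component useful for skin/eye segmentation.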
The iris outer boundary may be located based on a gray projection method in step S113.
For example, the step S113 may include:
and projecting to any point in the pupil by using a gray projection method, wherein the point is used as an initial circle center O of the pupil. And finding several points around the initial circle center as the center, and continuously evolving the points according to the operation mechanism of Snake (Snake) until the boundary of the pupil, namely the rough inner boundary of the iris. The centroid of Snake is used as the center of the pupil, the average value of the distances from each Snake point to the centroid is used as the radius of the pupil, and then the position of the inner boundary of the iris can be accurately positioned through further correction. The center of the pupil is approximately regarded as the center of the iris outer boundary, and a parameter r is used for searching, so that the iris outer boundary is determined.
In summary, in the present embodiment, the eye opening/closing area ratio of the current eyes of the user can be accurately calculated through the above steps S111 to S114.
Fig. 3 is a schematic diagram showing a natural state, a glaring state, and a squint state obtained by the area ratio determination method.
In some embodiments, the step S130 may include:
and comparing the area ratio of the first eye opening and closing state with the area ratio of the second eye opening and closing state in a natural state under the current illumination condition to obtain a comparison result.
Illumination information representing the current illumination condition, such as the current illumination intensity, is acquired by a brightness sensor; the natural state of the human eye differs under different illumination conditions.
In this embodiment, the current first eye opening and closing area ratio is compared with the second eye opening and closing area ratio in the predetermined natural state, and the comparison result is obtained, so as to know whether the eyes of the current user are in the natural state.
In other embodiments, the step S140 may include at least one of:
if the comparison result shows that the user is currently in a squinting state, amplifying the display content;
if the comparison result shows that the user is in a glaring state at present, reducing the display content;
if the comparison result shows that the eyes of the user are in a natural state currently and if the illumination intensity is lower than a preset light intensity, amplifying the display content;
and if the comparison result shows that the user's eyes are currently in a natural state and the illumination intensity is equal to or higher than the preset light intensity, maintaining the display size of the display content.
If the user is in a squinting state, the display content can be enlarged by a predetermined enlargement step. Specifically, if the currently displayed content is text, the font size can be increased according to a font-size increment step. In some embodiments, the increase may be applied directly to the current size.
If the currently displayed content is an image, the image can be magnified at a constant magnification factor, i.e., magnified with an equal ratio in a first direction and a second direction, where the first direction is perpendicular to the second direction. For example, the magnification factor may be 1.1.
If the current user is in a glaring state, text and images are distinguished for reduction. Text can be reduced by one font size or across font sizes, where reducing across font sizes includes skipping one or more sizes; for example, reducing directly from size No. 3 to size No. 5 is a cross-size reduction.
Images are likewise reduced with equal-ratio scaling.
If the current user's eyes are in a natural state, whether to maintain the display size of the display content is determined according to the illumination intensity. If the illumination intensity is too low, maintaining the current size is not conducive to long-term eye protection, so the display content can be appropriately enlarged, which benefits the user's eyes.
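The four zoom branches above can be summarized in one decision function; the state labels and the light-threshold value are illustrative assumptions, not values from the disclosure.

```python
def zoom_decision(state, light_intensity, light_threshold=200.0):
    """Map the opening-and-closing state and the ambient light intensity
    to a zoom action per the four branches above. The threshold of 200
    (lux-style units) is an assumed example value."""
    if state == "squinting":
        return "enlarge"
    if state == "glaring":
        return "shrink"
    # natural state: enlarge under dim light, otherwise keep the size
    if light_intensity < light_threshold:
        return "enlarge"
    return "maintain"
```

For instance, a natural eye state in dim light still yields "enlarge", matching the third branch above.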
In still other embodiments, the step S140 may include: and when the duration of the current opening and closing state of the eyes of the user reaches a preset duration, zooming the display content according to the opening and closing state.
In this embodiment, because of involuntary physiological actions, the user may briefly squint or glare unintentionally. In order to accurately determine whether the display content needs to be zoomed, this embodiment times the duration for which the user maintains the squinting or glaring state; if the duration reaches a preset duration, the display content is zoomed according to the opening-and-closing state, otherwise the current display size of the display content is maintained.
As shown in fig. 4, the trigger condition for adjusting the display content (e.g., the graphics and text in fig. 4) is that the corresponding eye state is maintained for a specific duration. If the natural state is maintained for the specific duration (corresponding to the preset duration), the current display size of the display content is kept unchanged. If the squinting state lasts for the specific duration, the graphics and text are enlarged; and if the glaring state lasts for the specific duration, the graphics and text are reduced.
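A minimal sketch of this duration trigger, assuming monotonically increasing timestamps; the class shape and names are illustrative assumptions.

```python
class StateTimer:
    """Trigger a zoom only once an opening-and-closing state has
    persisted for a preset duration, filtering out involuntary
    squints/glares (blinks and the like)."""

    def __init__(self, preset_duration):
        self.preset = preset_duration
        self.state = None
        self.since = None

    def update(self, state, now):
        """Report the current state at time `now`; returns True only
        when the same state has been held for the preset duration."""
        if state != self.state:
            self.state, self.since = state, now   # state changed: restart
            return False
        return (now - self.since) >= self.preset  # held long enough?
```

With a 3-second preset, reporting "squinting" at t = 0, 2, 3 triggers only at t = 3; switching to "natural" restarts the timer.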
In some embodiments, the method further comprises:
if the display content is adjusted to the maximum display size, the comparison result shows that the eyes of the user are still maintained in the squinting state, and the user's eyes are determined to be in the fatigue state.
If the display content has already been adjusted to the maximum display size and the user is still in a squinting state, it is determined that the user is in a fatigue state; at this point the user may remain squinting no matter how the content is adjusted.
As shown in fig. 5, each time the squinting state persists for the preset duration, the text is enlarged; if the text has already been enlarged to its maximum and the user still squints, it is determined that the user's eyes are currently in a fatigue state.
The display content having been adjusted to a maximum display size includes, but is not limited to:
the characters have been adjusted to the maximum font size;
the image has been adjusted to the maximum display area. The maximum display area here may be a preset display area, or may be the display area corresponding to the maximum resolution at which the image can be displayed without distortion, determined from the resolution of the image.
The method further comprises the following steps:
if the eyes of the user are in a fatigue state, the current display size of the display content is maintained, for example, the current font size of the characters is maintained, and the current display area of the image is maintained.
As shown in fig. 6, a correspondence between the opening and closing state of the eyes, the light intensity, and the display area of the image-text is pre-established. After the user state is determined in step S130, the scaling size of the image-text that enables the user to switch from the unnatural state back to the natural state is queried in combination with the light intensity of the current illumination environment.
In the embodiment of the invention, the image and text comprise: images and/or text.
In other embodiments, the step S140 may further include:
acquiring a display boundary of the display content in a display interface;
determining a boundary distance of the display boundary reaching a page boundary of the display interface;
scaling the display content based on the boundary distance.
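The boundary-distance step above can be sketched as follows, assuming the display content and the display interface are axis-aligned rectangles given as (left, top, right, bottom) tuples in screen coordinates (names and the coordinate convention are illustrative):

```python
def boundary_distances(content, page):
    """Distance from each display boundary of the content to the
    corresponding page boundary of the display interface."""
    cl, ct, cr, cb = content
    pl, pt, pr, pb = page
    return {
        "left":   cl - pl,   # L-style distances as in fig. 7
        "top":    ct - pt,
        "right":  pr - cr,
        "bottom": pb - cb,
    }
```

These four values correspond to L1..L4 in fig. 7 and feed the zoom-center selection that follows.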
In this embodiment, if the display interface is a full-screen display interface, the display interface fills the entire display screen and the page boundary of the display interface is the boundary of the screen. If the display interface is a non-full-screen display interface, the display interface does not fill the entire display screen and at least one page boundary of the display interface lies within the display screen.
The display boundary of the display content may include: the boundaries of the image, and/or the boundaries of the text block.
In the present embodiment, the boundary distances between the display boundary and the page boundary are calculated, respectively.
As shown in fig. 7, L1, L2, L3 and L4 are the above boundary distances.
After the boundary distance is determined, zooming the display content based on the boundary distance may specifically include:
determining a zoom center based on the boundary distance;
adjusting a display area of the image and/or a text block composed of a plurality of texts based on the zoom center.
If the zoom centers differ, the display positions of the zoomed image or text block on the display interface also differ.
In this embodiment, in order to keep the zoomed display content within the user's line of sight and avoid the user having to search again for the position previously viewed after the adjustment, the zoom center used for zooming is determined from the boundary distance.
In other embodiments, the step S140 may include at least one of:
if a unique shortest boundary distance exists, scaling the display content proportionally with the midpoint of the display boundary corresponding to the shortest boundary distance as the scaling center;
if the boundary distances between two display boundaries and the page boundary are equal, scaling the display content proportionally according to the positional relationship between the display boundaries corresponding to the two equal boundary distances;
if the boundary distances between three display boundaries and the page boundary are equal, scaling the display content proportionally with the intersection point of the center lines of the three display boundaries as the scaling center;
and if the boundary distances between four display boundaries and the page boundary are equal, scaling the display content proportionally with the center of the display content as the scaling center.
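The four cases above can be sketched as one selection function (an illustrative sketch; for a rectangular display boundary, the intersection of cross-connected end points of two opposite sides and the intersection of three center lines both coincide with the content center, which the sketch exploits):

```python
def zoom_center(content, distances):
    """Pick the zoom center from the four boundary distances.
    `content` is (left, top, right, bottom); `distances` maps each
    side name ('left'/'top'/'right'/'bottom') to its distance to
    the page boundary.  Ties are detected by exact equality."""
    cl, ct, cr, cb = content
    mid = {  # midpoint of each display boundary
        "left":   (cl, (ct + cb) / 2),
        "right":  (cr, (ct + cb) / 2),
        "top":    ((cl + cr) / 2, ct),
        "bottom": ((cl + cr) / 2, cb),
    }
    shortest = min(distances.values())
    tied = [s for s, d in distances.items() if d == shortest]
    if len(tied) == 1:                       # unique shortest: its midpoint
        return mid[tied[0]]
    if len(tied) == 2:
        a, b = tied
        opposite = {"left": "right", "right": "left",
                    "top": "bottom", "bottom": "top"}
        if opposite[a] == b:                 # opposite sides: content center
            return ((cl + cr) / 2, (ct + cb) / 2)
        # adjacent sides: their shared corner (intersection of boundaries)
        x = cl if "left" in tied else cr
        y = ct if "top" in tied else cb
        return (x, y)
    # three or four tied: center-line intersection, i.e. content center
    return ((cl + cr) / 2, (ct + cb) / 2)
```

The corner case matches point P in fig. 7 and the opposite-side case matches P'.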
There are four boundary distances between the display boundaries and the page boundaries; the shortest boundary distance is selected first, and zooming is then performed with the midpoint of the display boundary corresponding to the shortest boundary distance as the zoom center.
In the embodiment of the invention, the zoom center is a point whose position does not change during zooming. If enlargement is performed with the midpoint of the display boundary corresponding to the shortest boundary distance as the zoom center, that display boundary stays in place while the display content expands toward the center of the display interface. If reduction is performed with the same midpoint as the zoom center, that display boundary likewise keeps its original position while the display content shrinks toward it.
Referring to fig. 7, the boundary distance L1 is initially the shortest, so zooming is centered on the midpoint Q of the display boundary corresponding to L1.
The proportional scaling zooms in or out synchronously in a first direction and a second direction by the same scale factor, the first direction being perpendicular to the second direction.
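Proportional scaling about a fixed zoom center maps every point p to center + factor·(p − center), so the center itself never moves; a minimal sketch:

```python
def scale_rect(rect, center, factor):
    """Scale a (left, top, right, bottom) rect about a fixed center.
    Any point coinciding with the center stays put, which is exactly
    the 'zoom center keeps its position' property described above."""
    cx, cy = center
    l, t, r, b = rect
    return (cx + (l - cx) * factor, cy + (t - cy) * factor,
            cx + (r - cx) * factor, cy + (b - cy) * factor)
```

Scaling about the midpoint of the left boundary, for example, leaves that boundary's x-coordinate unchanged while the content grows toward the interface center.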
Because the midpoint of the display boundary with the shortest boundary distance is preferentially used as the zoom center when the boundary distances are first determined, two boundary distances may become equal during zooming. In this case, the positional relationship between the display boundaries corresponding to the two equal shortest boundary distances is further analyzed to determine the zoom center.
If three equal shortest boundary distances appear, the intersection point of the center lines of the display boundaries corresponding to the three shortest boundary distances is used as the zoom center; in this case the zoom center generally falls inside the image. If the image is rectangular, the zoom center is generally the midpoint of the image.
If the four boundary distances are equal, the image or the text block is scaled directly with the center of the image or the center of the text block as the scaling center.
Further, if the boundary distances between two display boundaries and the page boundary are equal, scaling the display content proportionally according to the positional relationship between the display boundaries corresponding to the two equal boundary distances includes:
if the positional relationship between the display boundaries corresponding to the two equal boundary distances is an adjacent relationship, zooming the display content with the intersection point of these two display boundaries as the zoom center;
and if the positional relationship between the display boundaries corresponding to the two equal boundary distances is an opposite-side relationship, zooming the display content with the intersection point formed by cross-connecting the end points of these two display boundaries as the zoom center.
If two boundary distances are equal, the zoom center is determined according to the positional relationship between the display boundaries corresponding to these two boundary distances.
For example, in fig. 7, L2 is equal to L1, and the display boundaries corresponding to L1 and L2 are adjacent, and the intersection point P of L1 and L2 is taken as the zoom center.
As shown in fig. 7, if L2 is equal to L4, and the display boundaries corresponding to L2 and L4 are opposite, the end points of the two display boundaries are cross-connected, and the intersection point P' of the two cross-connected lines is used as the zoom center.
If L2, L1, and L3 are all equal, the intersection N of the center lines of the display boundaries corresponding to L2, L1, and L3 is taken as the zoom center.
In this embodiment, the scaling the display content based on the boundary distance includes:
after the proportional scaling is executed, determining whether the display content meets a zoom stop condition;
and if the zoom stop condition is not met, re-determining the boundary distance between the display boundary of the display content and the page boundary and continuing to zoom.
The meeting of the zoom stop condition comprises at least one of:
in the amplification process, the actual display area of the display content reaches the maximum display area;
in the reduction process, the actual display area of the display content reaches the minimum display area;
during the zooming in or out process, the display content has reached the currently desired display area.
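The three stop conditions above can be sketched as a single predicate (names and the split into an enlarging and a reducing branch are illustrative):

```python
def zoom_should_stop(area, min_area, max_area, target_area, zooming_in):
    """True when any stop condition holds: the maximum area is reached
    while enlarging, the minimum area is reached while reducing, or
    the currently desired display area has been reached."""
    if zooming_in:
        return area >= max_area or area >= target_area
    return area <= min_area or area <= target_area
```

While this returns False, the boundary distances are re-determined and another scaling step is performed, as the steps above describe.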
If the zoom stop condition is not satisfied, this indicates that the display content needs to be further zoomed.
After one or more zooms are completed, the distance between the display boundary and the page boundary needs to be re-determined.
In the foregoing embodiments, the device adaptively scales the display content according to the acquired image, so that the device can automatically adjust the display size of the display content without a dedicated zoom instruction input by the user, keeping the user in a more comfortable eye-use state. In some cases, however, the adaptation of the device may not meet the user's requirements, or the display device may fail to trigger the adaptation; the method therefore further comprises:
determining whether the eyes of the user execute a preset eye action according to the first eye state information;
and if the eyes of the user execute the preset eye action, zooming the display content according to a zooming instruction corresponding to the preset eye action.
The predetermined ocular action includes, but is not limited to, at least one of:
a predetermined number of consecutive blinking actions;
a predetermined number of successive glaring actions, etc.
In short, the predetermined eye actions are eye actions that the user deliberately inputs; if a predetermined eye action is captured through the image, the display content is enlarged or reduced according to the correspondence between the predetermined eye action and enlargement or reduction.
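Detecting a deliberate multi-blink input can be sketched as a sliding-window count over blink timestamps (the 3-blink count and the 2-second window are assumed values, not from this application):

```python
def detect_blink_command(blink_times, required=3, window=2.0):
    """True when at least `required` blinks fall inside some window
    of `window` seconds, i.e. a rapid consecutive-blink action."""
    blink_times = sorted(blink_times)
    for i in range(len(blink_times) - required + 1):
        # compare the first and last blink of each candidate group
        if blink_times[i + required - 1] - blink_times[i] <= window:
            return True
    return False
```

Slow, natural blinks spread out in time do not trigger the command, so ordinary blinking is not mistaken for a zoom instruction.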
In this way, the display device can either adaptively zoom the display content or perform the zoom intended by the user on the basis of the user's eye actions.
As shown in fig. 8, a display device based on eye state includes:
the acquisition module 110 is configured to acquire current first eye state information of a user through image acquisition;
the obtaining module 120 is configured to obtain second eye state information when the eyes of the user are in a natural state in the current illumination environment;
a comparison module 130, configured to compare the first eye state information and the second eye state information, and determine a current opening/closing state of the eyes of the user;
and a zooming module 140 for zooming the display content according to the opening and closing state.
In some embodiments, the acquisition module 110, the obtaining module 120, the comparison module 130, and the scaling module 140 may be program modules; the program modules can realize the functions of the modules after being executed by the processor.
In other embodiments, the acquisition module 110, the obtaining module 120, the comparison module 130, and the scaling module 140 may be combined software-hardware modules; such a combined module may include various integrated circuits, such as a complex programmable logic device or a field-programmable gate array.
In still other embodiments, the acquisition module 110, the obtaining module 120, the comparison module 130, and the scaling module 140 may be purely hardware modules, including but not limited to application-specific integrated circuits.
Optionally, the acquisition module 110 includes:
the acquisition submodule is used for acquiring facial image information;
the recognition submodule is used for recognizing the face image information and positioning the position of the human eyes;
the detection submodule is used for detecting the position of the human eyes and determining the position of the outer boundary of the iris;
and the determining submodule is used for determining the current first eye opening-and-closing area ratio of the user according to the position of the outer boundary of the iris and the area of the iris exposed in the facial image information.
Optionally, the comparing module 130 is specifically configured to compare the first eye opening and closing area ratio with a second eye opening and closing area ratio in a natural state under the current illumination condition, so as to obtain a comparison result.
Optionally, the scaling module 140 is specifically configured to perform one of the following:
if the comparison result shows that the user is currently in a squinting state, amplifying the display content;
if the comparison result shows that the user is in a glaring state at present, reducing the display content;
if the comparison result shows that the eyes of the user are in a natural state currently and if the illumination intensity is lower than a preset light intensity, amplifying the display content;
and if the comparison result shows that the eyes of the user are in a natural state at present and the illumination intensity is equal to or higher than the preset light intensity, maintaining the display size of the display content.
Optionally, the zooming module 140 is specifically configured to zoom the display content according to the opening/closing state when the duration of the current opening/closing state of the eyes of the user reaches a preset duration.
Optionally, the apparatus further comprises:
the first determining module is used for determining that the eyes of the user are in a fatigue state if the display content is adjusted to the maximum display size and the comparison result shows that the eyes of the user are still maintained in a squinting state.
Optionally, the zooming module 140 is specifically configured to obtain a display boundary of the display content in a display interface; determining a boundary distance of the display boundary reaching a page boundary of the display interface; scaling the display content based on the boundary distance.
Optionally, the scaling module 140 is specifically configured to execute at least one of:
if a unique shortest boundary distance exists, scale the display content proportionally with the midpoint of the display boundary corresponding to the shortest boundary distance as the scaling center;
if the boundary distances between two display boundaries and the page boundary are equal, scale the display content proportionally according to the positional relationship between the display boundaries corresponding to the two equal boundary distances;
if the boundary distances between three display boundaries and the page boundary are equal, scale the display content proportionally with the intersection point of the center lines of the three display boundaries as the scaling center;
and if the boundary distances between four display boundaries and the page boundary are equal, scale the display content proportionally with the center of the display content as the scaling center.
Optionally, the scaling module 140 is specifically configured to: if the positional relationship between the display boundaries corresponding to the two equal boundary distances is an adjacent relationship, zoom the display content with the intersection point of these two display boundaries as the zoom center; and if the positional relationship is an opposite-side relationship, zoom the display content with the intersection point formed by cross-connecting the end points of these two display boundaries as the zoom center.
Optionally, the scaling module 140 is further configured to determine whether the display content meets a scaling stop condition after the scaling module performs the geometric scaling; and if the scaling stop condition is not met, re-determining the boundary distance between the display boundary where the display content is located and the page boundary.
Optionally, the apparatus further comprises:
a second determining module, configured to determine whether the user's eyes perform a predetermined eye action according to the first eye state information;
the zooming module 140 is further configured to zoom the display content according to a zooming instruction corresponding to the predetermined eye movement if the eye of the user performs the predetermined eye movement.
Several specific examples are provided below in connection with any of the embodiments described above:
as shown in fig. 9, the method for displaying zoom of mobile terminal graphics based on eye state acquisition according to this embodiment includes:
s1: acquiring facial feature information of a user by using a front-facing video input device;
in this embodiment, the video input device located at the front end of the smart device is used to obtain the facial image information of the user.
S2: analyzing and determining the approximate position of the human eyes on the face;
in this embodiment, the face is first located by skin color, using color-based object identification combined with an edge detection method: the color image is converted from RGB to YUV space, the U color component of the image is extracted, and the rough positions of the eyes are established through skin segmentation, using the holes that appear in the segmented image and the symmetry of the facial organs. The RGB-to-YUV conversion formula is:
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B
Skin-color analysis is fast but susceptible to the light source, so this example additionally localizes with an edge detection method. The basic idea of edge detection is to determine whether each pixel lies on the boundary of an object by examining the state of the pixel and its neighborhood: edges are the points of the digital image with an obvious change in gray value, and an algorithm extracts the boundary line between the object and the background in the image, so as to determine the position of the human eyes on the face.
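The RGB-to-YUV conversion used for skin segmentation can be written per pixel as follows (a sketch of the standard BT.601 full-range conversion; the application only needs the U component):

```python
def rgb_to_yuv(r, g, b):
    """BT.601 full-range RGB -> YUV conversion; the U component is
    what the skin-segmentation step extracts."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b   # = 0.492 * (b - y)
    v = 0.615 * r - 0.515 * g - 0.100 * b    # = 0.877 * (r - y)
    return y, u, v
```

Gray pixels (r = g = b) have U and V near zero, which is why the chroma components separate skin tones from achromatic background.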
S3: analyzing the precise position of the human eye;
The accurate position of the human eyes is determined by a contour-line method combined with gray-level projection. (1) Project into the pupil by the gray projection method, taking any point inside the pupil as the initial pupil center O. (2) Take several points around the initial center and evolve them continuously according to the Snake operating mechanism until they reach the boundary of the pupil, i.e. the rough inner boundary of the iris. (3) Take the centroid of the Snake as the center of the pupil and the average distance from each Snake point to the centroid as the radius of the pupil; further correction then accurately locates the position of the inner boundary of the iris. (4) Treat the center of the pupil approximately as the center of the outer boundary of the iris and search with a parameter r, thereby determining the outer boundary of the iris.
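Step (3) above — pupil center from the Snake centroid and radius from the mean point-to-centroid distance — can be sketched as (the function name is illustrative):

```python
from math import hypot

def pupil_from_snake(points):
    """Centroid of the converged Snake points as the pupil center,
    mean point-to-centroid distance as the pupil radius."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(hypot(x - cx, y - cy) for x, y in points) / len(points)
    return (cx, cy), r
```

For points that have converged onto the (roughly circular) pupil boundary, this recovers its center and radius directly.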
S4: reading the natural and comfortable state of human eyes under different illumination environments:
the illumination intensity sensor is used to read and record the natural, comfortable state of the human eyes under different illumination conditions, and the area ratio of the eye opening-and-closing state under each illumination level in the natural state is calculated and stored as the reference data for this user.
S5: analyzing the opening and closing state of human eyes; can include the following steps:
Combining the above steps, an opening-and-closing image of the human eyes is obtained by gray-level projection, and the area proportion of the degree of opening within the whole eye is calculated. This proportion is compared with the stored area proportion of the eye opening-and-closing state in the natural state under similar illumination in the database, and the opening-and-closing state of the human eyes is judged as belonging to the natural state, the squinting state, or the glaring state. The area ratio of the eye opening-and-closing state is compared with the area ratio of the natural, comfortable state; if the squinting state or the glaring state lasts for a certain time within the preset duration, the image-text zooming operation is performed.
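The comparison against the natural-state reference can be sketched as a threshold classifier (the margin factors are illustrative assumptions; the application compares against a per-illumination reference ratio stored in the database):

```python
def classify_eye_state(current_ratio, natural_ratio,
                       squint_margin=0.8, glare_margin=1.2):
    """Classify the eye opening area ratio relative to the user's
    natural-state reference for similar illumination."""
    if current_ratio < natural_ratio * squint_margin:
        return "squint"    # eyes noticeably more closed than natural
    if current_ratio > natural_ratio * glare_margin:
        return "glare"     # eyes noticeably wider open than natural
    return "natural"
```

Ratios close to the stored natural-state value map to "natural", so small fluctuations do not trigger any zoom.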
S6: judging whether a trigger condition is reached;
in this embodiment, a correspondence table of the eye opening-and-closing state, the light intensity, the font size, and the image scaling ratio is first established; the font is scaled in steps of 1pt and the image is scaled proportionally in steps of 0.1 of the scale factor until a preset value is reached. If the font has reached its maximum/minimum value but the image has not, the image-text can continue to be zoomed, and the user can restore the initial size by blinking quickly.
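One zoom step of the correspondence table — font by 1pt, image scale factor by 0.1, each clamped at its preset limit — can be sketched as (the range limits are assumed values, not from this application):

```python
def step_scale(font_pt, image_factor, direction,
               font_range=(8, 36), factor_range=(0.5, 3.0)):
    """One zoom step: the font changes by 1pt and the image scale
    factor by 0.1, each clamped to its preset range.  When one of
    the two has hit its limit, the other can still keep stepping."""
    step = 1 if direction == "enlarge" else -1
    font_pt = min(max(font_pt + step, font_range[0]), font_range[1])
    image_factor = min(max(image_factor + 0.1 * step,
                           factor_range[0]), factor_range[1])
    return font_pt, round(image_factor, 1)
```

Repeated calls walk the display toward the preset maximum or minimum, matching the per-step behavior described for S6.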
As shown in fig. 4, when a user reads, whether the eyes are in the natural state is judged from the user's natural-state data stored in the database and the percentage of time that a given degree of eye opening is maintained within a specific period; this rule is used as the trigger condition for zooming the image-text, and blinking quickly more than three times is used as the trigger condition for restoring the initial size.
As shown in fig. 4, while the user reads, a continuously detected squinting or glaring state followed by the return of the eyes to the natural state forms one digital signal cycle over the three states; this cycle is used for counting, for judging whether to continue executing the operation until the preset value is reached, and for judging fatigue.
S7: zooming the image-text when the trigger condition is reached; if the eye state reaches a trigger condition, the zooming operation of the current page is executed.
As shown in fig. 4, when the user reads: if the degree of eye opening is judged to be the natural state, the font size is maintained under strong light and natural light and the font is enlarged under weak light; if it is judged to be the squinting state, the font is enlarged by 1pt each time and the image is enlarged proportionally in steps of 0.1 of the scale factor, and the eye opening-and-closing state is judged again until the eyes return to natural. Conversely, if the font is too large, the user widens the eyes into the glaring state and the image-text is reduced toward the normal size. The user restores the initial size by blinking more than three times.
Conditions and means for obtaining the preset values:
(1) A minimum and maximum font size and a maximum and minimum picture size are set in the correspondence table; if the user still squints continuously at the maximum font and picture size, it is judged that the user is tired of reading.
(2) The degree of eye opening is determined through gray-level projection comparison combined with the user's behavior data, and from the percentage of time the state occupies within a specific period it is judged whether the eyes are in the natural state; squinting enlarges the image-text and glaring reduces it.
(3) A correspondence table of the eye opening-and-closing state, the light intensity, the font size, and the picture scaling ratio is established; the font is scaled in steps of 1pt and the picture is scaled proportionally in steps of 0.1 of the scale factor until a preset value is reached. If the font has reached its maximum/minimum value but the picture has not, the image-text can continue to be zoomed.
(4) The user adjusts the image-text to restore the initial size by blinking more than three times.
S8: and if the trigger condition is not met, maintaining the current display of the image-text.
In this embodiment, the method for determining the eye state includes:
identifying the eye opening and closing state and reading an image;
reading the eye opening and closing state of the user under the natural and comfortable reading condition under different illumination intensities;
calculating the area proportion of the current eye opening and closing degree in the whole eyes;
and comparing the opening-and-closing area ratio of the human eyes with the stored area ratio of the eye opening-and-closing state in the natural state under similar illumination in the database, and judging the opening-and-closing state of the human eyes as the natural state, the squinting state, or the glaring state.
A page scaling determination method based on facial recognition and eye state comprises the following steps:
judging whether the opening and closing state of the eyes reaches a preset trigger condition or not;
if the trigger condition is met, executing page image-text magnification or reduction operation;
and if the squinting state or the glaring state lasts for a certain time within the preset duration, the image-text zooming operation is performed.
The font scaling method may include:
judging the illumination condition and intensity by using a sensor;
and determining whether a preset trigger condition is reached or not by combining the eye state.
The image-text scaling method, in particular the scaling of the image, may comprise:
determining a frame of the image and text;
determining the distance from the frame to the boundary of the screen or the boundary of other pictures;
comparing the lengths of L1, L2, L3 and L4;
determining the shortest distance L1;
taking the center Q of the side of the shortest distance L1, and carrying out geometric scaling by taking the Q as the scaling center of the image-text;
judging whether the maximum or minimum area occupation ratio allowed by the interface is reached;
if yes, stopping zooming;
if not, a second equidistant edge appears;
judging whether the equidistant edges are adjacent edges or opposite edges;
if the adjacent side exists, taking the intersection point p of the adjacent side as a zooming center to perform equal scaling;
if it is an opposite edge, performing proportional scaling with the point p' (the intersection of the lines cross-connecting the end points of the two opposite edges) as the zoom center;
judging whether the maximum or minimum area occupation ratio allowed by the interface is reached;
if yes, stopping zooming;
if not, a third equidistant side appears;
and taking the point N obtained by connecting the center lines of the three edges as the zoom center, performing proportional scaling until the allowed maximum or minimum area is reached.
As shown in fig. 10, the center of the first zoom is determined: the midpoint Q of the side with the shortest distance L1 is taken, and the image is scaled proportionally with Q as the center;
During zooming, the zoom center changes according to the distances to the page boundary or to other pictures. Referring to fig. 10, if the first scaling does not reach the maximum or minimum area ratio preset for the page, a second equidistant edge appears and it is judged whether it is an adjacent edge or an opposite edge; if adjacent, the intersection point P is taken as the new center and proportional scaling continues, and if opposite, the point P' is taken as the new center for proportional scaling. If the second scaling still does not reach the preset maximum or minimum area ratio, a third equidistant edge appears; the center points of the three edges are connected to obtain the point N, and scaling continues proportionally about N until the maximum or minimum area ratio allowed by the interface is reached. Zooming stops as soon as the maximum or minimum area ratio set for the page is reached.
The eye state judging method can comprise the following steps:
determining facial features by color information positioning and combining with an edge detection method;
determining the pupil position by contour method gray projection;
determining the position of the inner boundary of the iris of the human eye;
determining the position of the outer boundary of the iris of the human eye;
the open-close state of human eyes is determined and compared with the natural state of human eyes.
The acquisition path of the trigger condition comprises:
(1) scaling decision
the light environment of the human eyes is normal light, dark, or bright;
the opening-and-closing area ratio of the human eyes is in an unnatural state, squinting or glaring;
the proportion of time occupied by the unnatural state within a certain period is higher than a preset value.
(2) Fatigue determination
Referring to fig. 5, the human eyes are still in the squinting state after the image-text has been enlarged to the preset maximum value.
According to the application, the user's eye reading state is compared with the natural state by means of the eye state identification sensor, the front video input device, the distance sensor, and the illumination intensity sensor, and it is judged whether control is triggered, so that the image-text is zoomed.
Whether a preset trigger condition is reached is judged from the proportion of time the degree of eye opening spends in an unnatural state (squinting or glaring) under different illumination intensities; other schemes do not take light into consideration. By comparing the area ratio of the eye opening-and-closing state with that of the natural, comfortable state, the image-text zooming operation is performed if the squinting or glaring state lasts for a certain time within the preset duration; other schemes do not give a detailed judgment method. Compared with schemes that zoom the image-text according to the distance between the screen and the user, this method is more accurate, has strong anti-interference capability, and gives a better reading experience when the user wears glasses or the screen is not bright enough. Meanwhile, this example realizes an image scaling manner that is not included in the third patent.
Specific determination conditions include:
illuminating the environment, wherein the light is natural, strong or weak;
the area proportion that the current degree of eye opening occupies within the whole eye, compared with the stored area ratio of the eye opening-and-closing state in the natural state under similar illumination in the database;
the degree of eye opening is in an unnatural state, squinting or glaring;
the proportion of time occupied by the unnatural state within a certain period is higher than a preset value;
the squinting state enlarges the font and the glaring state reduces the image-text;
when the image-text has been enlarged to the preset maximum value and the human eyes are still in the squinting state, fatigue is judged.
In the method, the light environment is first judged with the illumination intensity sensor, the user's facial feature information is acquired with the front video input device, and the position of the eyes on the face is analyzed and determined. The area proportion of the current degree of eye opening within the whole eye is calculated and compared with the stored area ratio of the eye opening-and-closing state in the natural state under similar illumination in the database, and the opening-and-closing state of the human eyes is judged as the natural state, the squinting state, or the glaring state. It is then judged whether the degree of eye opening and the proportion of time occupied by the unnatural state within a certain period reach the preset values; if so, the zooming operation of the current page is executed, achieving smooth reading of the image-text on the smart device screen when it is inconvenient for the user to control page zooming by hand or by voice.
The present embodiment provides a display device including:
a display;
a memory;
and a processor, connected to the display and the memory respectively, configured to implement the eye-state-based display method provided by any of the foregoing technical solutions by executing computer-executable instructions stored in the memory, so as to control what the display presents, for example one or more of the eye-state-based display methods shown in fig. 1, fig. 2, fig. 9, and fig. 10.
The memory can be any of various types of memory, such as random access memory, read-only memory, or flash memory. The memory may be used for information storage, e.g., storing computer-executable instructions. The computer-executable instructions may be various program instructions, such as object program instructions and/or source program instructions.
The processor may be any of various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
The processor may be connected to the memory via a bus. The bus may be an integrated circuit bus or the like.
In some embodiments, the display device may further include a communication interface, which may include a network interface, e.g., a local area network interface, a transceiver antenna, etc. The communication interface is also connected to the processor and can be used for transmitting and receiving information.
In some embodiments, the display device further includes a human-computer interaction interface, which may include various input and output devices, such as a keyboard, a touch screen, and the like.
The present embodiments provide a computer storage medium having computer-executable instructions stored thereon. When executed, the computer-executable instructions can implement the eye-state-based display method provided by one or more of the foregoing technical solutions on the display device, for example one or more of the eye-state-based display methods shown in fig. 1, 2, 9, and 10.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of units is only a logical functional division, and other divisions are possible in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through certain interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by program instructions executed on relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. A display method based on eye state, comprising:
acquiring current first eye state information of a user through image acquisition;
acquiring second eye state information of the user when the eyes are in a natural state in the current illumination environment;
comparing the first eye state information with the second eye state information, and determining the current opening and closing state of the eyes of the user;
and zooming the display content according to the opening and closing state.
2. The method of claim 1, wherein obtaining the current first eye state information of the user through image acquisition comprises:
collecting face image information;
identifying the face image information and positioning the positions of human eyes;
detecting the position of the human eyes and determining the position of the outer boundary of the iris;
and determining the area ratio of the current first eye opening and closing state of the user according to the iris outer boundary position and the exposed iris area in the face image information.
3. The method of claim 2, wherein the comparing the first eye state information and the second eye state information to determine the current opening and closing state of the user's eyes comprises:
and comparing the area ratio of the first eye opening and closing state with the area ratio of the second eye opening and closing state in a natural state under the current illumination condition to obtain a comparison result.
4. The method of claim 3, wherein the zooming the display content according to the open-close state comprises:
and when the duration of the current opening and closing state of the eyes of the user reaches a preset duration, zooming the display content according to the opening and closing state.
5. The method of claim 4, further comprising:
if the display content has been adjusted to the maximum display size and the comparison result shows that the eyes of the user remain in the squinting state, determining that the user's eyes are in a fatigue state.
6. The method according to any one of claims 1 to 5, wherein the zooming the display content according to the open-close state comprises:
acquiring a display boundary of the display content in a display interface;
determining a boundary distance of the display boundary reaching a page boundary of the display interface;
scaling the display content based on the boundary distance.
7. The method of claim 6, wherein the scaling the display content based on the boundary distance comprises at least one of:
if a unique shortest boundary distance exists, proportionally scaling the display content with the midpoint of the display boundary corresponding to the shortest boundary distance as the scaling center;
if the boundary distances between two display boundaries and the page boundary are equal, proportionally scaling the display content according to the positional relationship between the two display boundaries corresponding to the equal boundary distances;
if the boundary distances between three display boundaries and the page boundary are equal, proportionally scaling the display content with the intersection point of the center lines of the three display boundaries as the scaling center;
and if the boundary distances between the four display boundaries and the page boundary are equal, proportionally scaling the display content with the center of the display content as the scaling center.
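For illustration, the four cases of this claim can be sketched as follows. This is one hedged reading, not the claimed implementation: rectangles are assumed to be axis-aligned (left, top, right, bottom) tuples with the page boundary enclosing the display boundary, and the two- and three-boundary cases are reduced to averaging the midpoints of the tied boundaries.

```python
# Hypothetical sketch of choosing the scaling center from the four
# boundary distances; rectangles are (left, top, right, bottom) tuples
# and the page boundary is assumed to enclose the display boundary.
def scaling_center(content, page):
    cl, ct, cr, cb = content
    pl, pt, pr, pb = page
    # Distance from each display boundary to the matching page boundary.
    dists = {"left": cl - pl, "top": ct - pt,
             "right": pr - cr, "bottom": pb - cb}
    cx, cy = (cl + cr) / 2.0, (ct + cb) / 2.0
    mid = {"left": (cl, cy), "right": (cr, cy),
           "top": (cx, ct), "bottom": (cx, cb)}
    shortest = min(dists.values())
    nearest = [side for side, d in dists.items() if d == shortest]
    if len(nearest) == 1:
        # Unique shortest distance: midpoint of that display boundary.
        return mid[nearest[0]]
    if len(nearest) == 4:
        # All four distances equal: center of the display content.
        return (cx, cy)
    # Two or three tied boundaries: average the midpoints of the tied
    # boundaries (one simple reading of the positional-relation and
    # center-line-intersection cases in the claim).
    xs = [mid[side][0] for side in nearest]
    ys = [mid[side][1] for side in nearest]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Anchoring the zoom at the boundary closest to the page edge keeps the content from being pushed off-screen as it is scaled up.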
8. The method according to any one of claims 1 to 3, further comprising:
determining whether the eyes of the user execute a preset eye action according to the first eye state information;
and if the eyes of the user execute the preset eye action, zooming the display content according to a zooming instruction corresponding to the preset eye action.
9. An eye state-based display device, comprising:
the acquisition module is used for acquiring the current first eye state information of the user through image acquisition;
the acquisition module is used for acquiring second eye state information when the eyes of the user are in a natural state in the current illumination environment;
the comparison module is used for comparing the first eye state information with the second eye state information and determining the current opening and closing state of the eyes of the user;
and the zooming module is used for zooming the display content according to the opening and closing state.
10. A display device, comprising:
a display;
a memory;
a processor, coupled to the display and the memory, respectively, for implementing the method provided in any one of claims 1 to 8 by executing computer-executable instructions stored on the memory, to control the display of the display.
11. A computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable the method provided by any one of claims 1 to 8 to be carried out.
CN201910377131.4A 2019-05-07 2019-05-07 Display method and device based on eye state, display equipment and storage medium Pending CN111913561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910377131.4A CN111913561A (en) 2019-05-07 2019-05-07 Display method and device based on eye state, display equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111913561A true CN111913561A (en) 2020-11-10

Family

ID=73242466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910377131.4A Pending CN111913561A (en) 2019-05-07 2019-05-07 Display method and device based on eye state, display equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111913561A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112799516A (en) * 2021-02-05 2021-05-14 深圳技术大学 Screen content adjusting method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120314047A1 (en) * 2011-06-13 2012-12-13 Sony Corporation Information processing apparatus and program
CN108720851A (en) * 2018-05-23 2018-11-02 释码融和(上海)信息科技有限公司 A kind of driving condition detection method, mobile terminal and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201110