CN108012026B - Eyesight protection method and mobile terminal

Info

Publication number
CN108012026B
CN108012026B
Authority
CN
China
Prior art keywords
mobile terminal
determining
viewing position
analysis period
distance
Prior art date
Legal status
Active
Application number
CN201711209210.1A
Other languages
Chinese (zh)
Other versions
CN108012026A (en)
Inventor
陈纲
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201711209210.1A
Publication of CN108012026A
Application granted
Publication of CN108012026B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72451 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to schedules, e.g. using calendar applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/52 Details of telephonic subscriber devices including functional features of a camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

The invention provides an eyesight protection method and a mobile terminal. The method comprises: collecting facial feature images at a first preset frequency; determining, from each facial feature image, a viewing position parameter between the user's eyes and the screen of the mobile terminal; determining the user's reading state from the multiple viewing position parameters obtained within a first analysis period; and issuing an eyesight protection reminder according to the reading state. Because the embodiment of the invention decides whether the user is in an unhealthy, eyesight-harming reading state from multiple viewing position parameters gathered over a whole analysis period rather than from a single measurement, the user can be prompted to protect their eyesight in a reasonable manner.

Description

Eyesight protection method and mobile terminal
Technical Field
The invention relates to the field of mobile terminals, in particular to a method for protecting eyesight and a mobile terminal.
Background
With the popularization of mobile terminals, people use them more often and for longer periods, and in a wider range of environments and occasions. On a moving bus or subway, for example, large numbers of people can be seen looking down at their mobile terminals; such prolonged, incorrect use of a mobile terminal can seriously damage eyesight.
In the prior art, a mobile terminal typically prompts the user to protect their eyesight in one of two ways. The first is to measure the distance between the eyes and the mobile terminal: if the distance falls below a threshold, the user is judged to be in an unhealthy reading state and a reminder to protect eyesight is issued. The second is to detect whether the mobile terminal is shaking: whenever shaking is detected, the user is judged to be in an unhealthy reading state and a reminder is likewise issued.
In studying these two schemes, the inventor found the following defect: both use a single measurement as the criterion for judging whether the user is in an unhealthy reading state and for triggering the eyesight protection prompt; specifically, a prompt is issued whenever the eyes happen to be too close to the mobile terminal or whenever the terminal happens to be shaking. On the one hand, a single measurement cannot accurately reflect whether the user is really in an unhealthy reading state; on the other hand, overly frequent prompts interfere with normal use of the mobile terminal, and users are likely to switch the eyesight protection function off to avoid the disturbance.
Disclosure of Invention
Embodiments of the present invention provide an eyesight protection method and a mobile terminal, aiming to solve the prior-art problem that the detection result used to trigger eyesight protection reminders is not accurate enough, so that the reminders cause excessive interference to the user.
In order to solve the above technical problem, the present invention provides a method for protecting eyesight, comprising:
collecting a face feature image according to a first preset frequency;
determining a viewing position parameter between human eyes and a mobile terminal screen according to each human face feature image;
determining the reading state of the user according to the obtained multiple viewing position parameters in the first analysis period;
and performing eyesight protection reminding according to the reading state of the user.
In a first aspect, an embodiment of the present invention further provides a mobile terminal, including:
the face feature image acquisition module is used for acquiring a face feature image according to a first preset frequency;
the viewing position parameter determining module is used for determining the viewing position parameter between human eyes and a screen of the mobile terminal according to each human face feature image;
the reading state determining module is used for determining the reading state of the user according to the obtained multiple viewing position parameters in the first analysis period;
and the eyesight protection reminding module is used for carrying out eyesight protection reminding according to the reading state of the user.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the foregoing processing method for protecting eyesight.
In a third aspect, an embodiment of the present invention additionally provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the eyesight protection method described above.
In the embodiment of the invention, facial feature images are collected at a first preset frequency, and a viewing position parameter between the user's eyes and the screen of the mobile terminal is determined from each facial feature image; the viewing position parameter reflects the state in which the eyes are viewing the mobile terminal. The user's reading state is then determined from the multiple viewing position parameters obtained within the first analysis period, and an eyesight protection reminder is issued according to that reading state, so that the user is prompted to protect their eyesight in a reasonable manner. Specifically, whether the user is in an unhealthy reading state is decided from a plurality of viewing position parameters within one analysis period. On the one hand, multiple viewing position parameters reflect the user's reading state more accurately than a single measurement, avoiding reminders triggered by misjudgment; on the other hand, because the decision is made only after an analysis period ends, reminders cannot become too frequent and do not unduly interfere with the user's use of the mobile terminal.
Drawings
FIG. 1 is a flow chart of steps of a method for protecting vision according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating the detailed steps of a method for protecting eyesight according to a second embodiment of the present invention;
fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
fig. 4 is a block diagram of a detailed structure of a mobile terminal according to a third embodiment of the present invention;
fig. 5 is a block diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
[Method Embodiment One]
Referring to fig. 1, a flow chart of the steps of a method of protecting vision in an embodiment of the invention is shown. The method is applied to the mobile terminal, and comprises the following specific steps:
step 101: and collecting a face characteristic image according to a first preset frequency.
In the embodiment of the invention, the mobile terminal has a camera shooting function and a face recognition function, and includes but is not limited to a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The first preset frequency can be set by a person skilled in the art according to practical situations, for example, … times per second, 10 times per second, and 100 times per second, and the embodiment of the present invention does not limit the specific value of the first preset frequency.
In a specific application, the mobile terminal can collect facial feature images of the area in front of its screen in the background, at the first preset frequency.
Step 102: and determining a viewing position parameter between human eyes and a screen of the mobile terminal according to each human face feature image.
In the embodiment of the invention, for each collected face characteristic image, the viewing position parameter between the human eyes in the face characteristic image and the screen of the mobile terminal can be determined through the technologies of face recognition, eyeball tracking, infrared distance measurement and the like.
The viewing position parameter between the human eyes and the mobile terminal screen can be determined according to the actual application scene, for example, the distance value between the human eyes and the mobile terminal screen, the angle value between the human eyes and the mobile terminal screen and the like.
In specific application, each face characteristic image and each viewing position parameter can be stored according to the obtained time sequence, so that each face characteristic image and each viewing position parameter can be obtained in sequence by reading the storage area.
As an alternative to the embodiment of the present invention, after step 102, the method may further include: and if the acquisition process of the face feature image reaches a first analysis period, acquiring a plurality of viewing position parameters in the first analysis period.
In a specific application, the mobile terminal can enter a first analysis period when it collects the first facial feature image. The first analysis period can be a period of time set according to the actual application scenario, or a period defined by collecting a certain number of facial feature images; in either case, a plurality of viewing position parameters can be read out for that first analysis period.
Step 103: and determining the reading state of the user according to the obtained multiple viewing position parameters in the first analysis period.
In a specific application, the reading state may include a healthy reading state and an unhealthy reading state, and various indicators of the healthy reading state may be set, for example: the distance is greater than 20 centimeters; the angle is less than 100 degrees; the number of measurements in which the distance falls below 20 centimeters is less than a preset count; the angle change between two successive measurements is less than a preset angle value. A reading state that falls outside the healthy-reading indicators is classified as an unhealthy reading state.
The user's reading state can be determined by comparing the plurality of viewing position parameters with the indicators of the healthy reading state. For example, if one or more viewing position parameters do not meet the corresponding healthy-reading indicators, the user is judged to be in an unhealthy reading state, and otherwise in a healthy reading state; alternatively, if one or more viewing position parameters meet the corresponding indicators of the unhealthy reading state, the user is judged to be in an unhealthy reading state, and otherwise in a healthy reading state. The embodiment of the present invention does not limit the specific method for determining the user's reading state.
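As one concrete reading of the comparison just described, the sketch below flags a period as unhealthy as soon as any viewing position parameter violates a healthy-reading index. The ViewingParams representation is an assumption, and the 20 cm / 100 degree figures are simply the example values from the text.

```python
from typing import NamedTuple

class ViewingParams(NamedTuple):
    distance_cm: float   # eye-to-screen distance
    angle_deg: float     # eye-to-screen angle

# Example healthy-reading indices taken from the figures mentioned above.
MIN_HEALTHY_DISTANCE_CM = 20.0
MAX_HEALTHY_ANGLE_DEG = 100.0

def reading_state(params_in_period):
    """'healthy' unless any viewing position parameter violates a healthy-reading index."""
    for p in params_in_period:
        if p.distance_cm <= MIN_HEALTHY_DISTANCE_CM or p.angle_deg >= MAX_HEALTHY_ANGLE_DEG:
            return "unhealthy"
    return "healthy"

# Example: one close-distance sample makes the whole period unhealthy.
print(reading_state([ViewingParams(35.0, 80.0), ViewingParams(15.0, 80.0)]))  # unhealthy
```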
Step 104: and performing eyesight protection reminding according to the reading state of the user.
In a specific application, if the user is judged to be in a healthy reading state, no eyesight reminder is needed. If the user is judged to be in an unhealthy reading state, the reminder operation may be one or more of vibration, a voice prompt, a text prompt, a picture prompt, or the like, so that the user knows they are in an unhealthy reading state and pays attention to protecting their eyesight.
In the embodiment of the invention, facial feature images are collected at a first preset frequency, and a viewing position parameter between the user's eyes and the screen of the mobile terminal is determined from each facial feature image; the viewing position parameter reflects the state in which the eyes are viewing the mobile terminal. The user's reading state is then determined from the multiple viewing position parameters obtained within the first analysis period, and an eyesight protection reminder is issued according to that reading state, so that the user is prompted to protect their eyesight in a reasonable manner. Specifically, whether the user is in an unhealthy reading state is decided from a plurality of viewing position parameters within one analysis period. On the one hand, multiple viewing position parameters reflect the user's reading state more accurately than a single measurement, avoiding reminders triggered by misjudgment; on the other hand, because the decision is made only after an analysis period ends, reminders cannot become too frequent and do not unduly interfere with the user's use of the mobile terminal.
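Putting steps 101 to 104 together, the following is a minimal sketch of the overall loop. The capture frequency, the period length and the camera/analyzer/reminder helper objects (collect_face_image, viewing_position, reading_state, remind) are illustrative assumptions and not part of the patent.

```python
import time

CAPTURE_HZ = 10          # first preset frequency (hypothetical value)
PERIOD_SECONDS = 60.0    # first analysis period (hypothetical value)

def eyesight_protection_loop(camera, analyzer, reminder):
    """Sketch of steps 101-104: sample images, analyze once per period, remind if needed."""
    viewing_params = []                 # viewing position parameters of the current period
    period_start = time.time()
    while True:
        image = camera.collect_face_image()                          # step 101
        viewing_params.append(analyzer.viewing_position(image))      # step 102
        if time.time() - period_start >= PERIOD_SECONDS:             # first analysis period ended
            state = analyzer.reading_state(viewing_params)           # step 103
            if state == "unhealthy":
                reminder.remind()                                    # step 104
            viewing_params, period_start = [], time.time()           # start the next period
        time.sleep(1.0 / CAPTURE_HZ)
```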
[Method Embodiment Two]
Referring to fig. 2, a flow chart illustrating specific steps of a method for protecting eyesight in an embodiment of the present invention is shown. The method comprises the following specific steps:
step 201: and detecting whether the mobile terminal is in a bright screen state.
In a specific application, if the mobile terminal is detected to be in a black (off) screen state, the user is not operating the mobile terminal and no eyesight protection method needs to be executed; if the mobile terminal is detected to be in a bright screen state, the user is likely operating the mobile terminal, and the eyesight protection steps are started.
Step 202: when the mobile terminal is in a bright screen state, detecting whether a face feature image exists in front of a screen of the mobile terminal through a front camera of the mobile terminal, and if the face feature image exists in front of the screen of the mobile terminal, acquiring the face feature image according to a first preset frequency.
In specific application, when a mobile terminal user uses the mobile terminal, the mobile terminal may be in a bright screen state due to misoperation, but the mobile terminal user does not really operate the mobile terminal at the moment, and if the eyesight protection step is started, the waste of electric quantity and resources of the mobile terminal is caused; therefore, whether the human face characteristic image exists in front of the screen of the mobile terminal can be further detected through the front camera of the mobile terminal.
When the mobile terminal is in a bright screen state and no human face characteristic image exists in front of a screen of the mobile terminal, the situation that a user of the mobile terminal does not operate the mobile terminal and does not need to execute any vision protection method is shown; if the mobile terminal is in a bright screen state and the face feature image exists in front of the screen of the mobile terminal, the step that the user of the mobile terminal may operate the mobile terminal and the eyesight can be protected is started.
In the embodiment of the present invention, a step of judging whether the user is actually operating the mobile terminal is placed before the step of collecting facial feature images at the first preset frequency: only when the mobile terminal is detected to be in a bright screen state and a facial feature image is detected in front of its screen is the user judged to be operating the terminal and the subsequent eyesight protection steps started. This avoids wasting the mobile terminal's resources and battery power by starting the eyesight protection method because of an accidental operation.
Step 203: and collecting a face characteristic image according to a first preset frequency.
As a preferred solution of the embodiment of the present invention, after step 203 the method may include the step of determining a viewing position parameter between the human eyes and the screen of the mobile terminal according to each facial feature image.
After determining the viewing position parameter between the human eyes and the screen of the mobile terminal according to each human face feature image, the method also comprises the following steps:
substep A1: and acquiring a difference value between the viewing position parameter and the previous viewing position parameter of the viewing position parameter every time one viewing position parameter is acquired.
In a specific application, each time a facial feature image is collected it can be analyzed to obtain the viewing position parameter between the human eyes in that image and the screen of the mobile terminal, and the viewing position parameters can be stored in the order in which the facial feature images were collected.
The currently obtained viewing position parameter is then subtracted from the viewing position parameter immediately before it, giving the difference between the two adjacent parameters.
Substep A2: when at least one difference value exceeds a preset difference threshold value, adjusting the first preset frequency to a second preset frequency; wherein the second preset frequency is greater than the first preset frequency.
In a specific application, the preset difference threshold may be set by a person skilled in the art according to the actual scenario. For example, if the viewing position parameter is a distance value, the preset difference threshold may be set to 5 cm; if the difference between two adjacent distance values is greater than 5 cm, the distance between the mobile terminal and the eyes is changing substantially, so the frequency of collecting facial feature images can be increased by adjusting the first preset frequency to a larger second preset frequency, allowing the change in distance to be analyzed more often and the user to be reminded to protect their eyesight more promptly and accurately.
Substep A3: adjusting the first analysis period to a second analysis period; wherein the first analysis period is greater than the second analysis period.
In the embodiment of the present invention, once the difference between two adjacent viewing position parameters is detected to exceed the preset difference threshold, the facial feature images are collected more frequently and the analysis period is shortened by adjusting the first analysis period to the smaller second analysis period, so that the user's state is analyzed, and the user is prompted to protect their eyesight, within a shorter period.
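A minimal sketch of substeps A1-A3 under the distance-value example above; the 5 cm threshold comes from the text, while the concrete frequency and period values are placeholders.

```python
DIFF_THRESHOLD_CM = 5.0                          # preset difference threshold (example value)
FIRST_FREQ_HZ, SECOND_FREQ_HZ = 1.0, 10.0        # second preset frequency > first preset frequency
FIRST_PERIOD_S, SECOND_PERIOD_S = 60.0, 20.0     # second analysis period < first analysis period

def adjust_sampling(distance_values_cm, freq_hz=FIRST_FREQ_HZ, period_s=FIRST_PERIOD_S):
    """Substeps A1-A3: if any adjacent difference exceeds the threshold, sample faster
    and shorten the analysis period; otherwise keep the current settings."""
    diffs = [abs(b - a) for a, b in zip(distance_values_cm, distance_values_cm[1:])]   # A1
    if any(d > DIFF_THRESHOLD_CM for d in diffs):                                      # A2
        return SECOND_FREQ_HZ, SECOND_PERIOD_S                                         # A2 + A3
    return freq_hz, period_s

print(adjust_sampling([30.0, 29.0, 22.0]))   # large jump -> (10.0, 20.0)
```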
As another preferred solution of the embodiment of the present invention, substeps A1-A2 may be replaced by the following steps:
substep B1: and judging whether the mobile terminal is in a moving state and/or a shaking state.
In the embodiment of the present invention, the method for determining whether the mobile terminal is in a moving state may be: whether the mobile terminal is in the moving state is detected through a gyroscope, a sensor and the like which are arranged on the mobile terminal, or whether the mobile terminal is in the moving state can be detected through other peripheral devices which can detect the moving state of the mobile terminal.
The method for judging whether the mobile terminal is in a shaking state may be: whether the mobile terminal is in the shaking state is detected through a gyroscope, a sensor and the like which are arranged on the mobile terminal, or whether the mobile terminal is in the shaking state can be detected through other peripheral devices which can detect the shaking state of the mobile terminal.
In a specific application, the mobile terminal can be judged to be in either the moving state or the shaking state, or the moving state and the shaking state can be judged simultaneously.
Substep B2: if the mobile terminal is in a moving state and/or a shaking state, adjusting the first preset frequency to a second preset frequency; wherein the second preset frequency is greater than the first preset frequency.
In a specific application, if the mobile terminal is in a moving state and/or a shaking state, the user may be in an unhealthy reading state, and remaining in it for a long time may damage the user's eyesight; therefore the first preset frequency can be increased and the operation of substep A3 performed, so that the user is reminded to protect their eyesight in time.
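The patent does not prescribe how movement or shaking is detected; the sketch below is one hedged interpretation of substeps B1-B2 that treats high variance of recent accelerometer magnitudes as movement/shaking. The variance threshold, the sensor interface and the alternative frequency/period values are assumptions.

```python
import statistics

SHAKE_VARIANCE_THRESHOLD = 0.5   # assumed threshold on acceleration-magnitude variance

def is_moving_or_shaking(accel_magnitudes):
    """Substep B1 (one possible reading): large variance of recent acceleration
    magnitudes is treated as the terminal moving and/or shaking."""
    return len(accel_magnitudes) > 1 and statistics.variance(accel_magnitudes) > SHAKE_VARIANCE_THRESHOLD

def adapt_to_motion(accel_magnitudes, freq_hz, period_s,
                    second_freq_hz=10.0, second_period_s=20.0):
    """Substep B2 plus substep A3: sample faster and shorten the period while moving/shaking."""
    if is_moving_or_shaking(accel_magnitudes):
        return second_freq_hz, second_period_s
    return freq_hz, period_s

print(adapt_to_motion([9.8, 11.2, 8.5, 12.0], freq_hz=1.0, period_s=60.0))  # (10.0, 20.0)
```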
In order to clearly explain the present invention, the embodiment of the present invention takes the viewing position parameters including the distance value and the angle value as an example in the following steps, and the implementation process of each step is described in detail.
Step 204: obtaining a parallax value of human eyes in each pair of face feature images according to each pair of face feature images acquired by the two front cameras; the mobile terminal comprises two front cameras.
In the embodiment of the present invention, the mobile terminal includes two front cameras. The positions of the two pupils are detected through the face recognition function of the mobile terminal, the two front cameras are focused on the eye positions and the targets are matched; once matching succeeds a triangle is formed, and the distance between the eyes and the screen of the mobile terminal can then be calculated by triangulation.
Specifically, each collection yields two facial feature images, one from each camera. Taking the user's eye as the point to be measured, its horizontal image coordinates in the two facial feature images are x_l and x_r, giving a disparity value of x_l - x_r.
Step 205: and calculating the distance value between the human eyes and the screen of the mobile terminal according to the parallax value, the distance parameter between the two front cameras and the focal length parameter.
Building on step 204, if the physical distance between the two cameras on the mobile terminal is T and the focal length is f, the distance Z between the human eyes and the screen of the mobile terminal is given by:

Z = f * T / (x_l - x_r)

It can be understood that there are many methods for measuring the distance between the human eyes and the mobile terminal, and in practical applications a person skilled in the art may select a preferable scheme according to practice.
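As an illustration of steps 204-205, the sketch below evaluates the formula Z = f * T / (x_l - x_r). The baseline and focal-length numbers are placeholders, and the focal length is assumed to be expressed in pixels so that the units work out.

```python
def eye_screen_distance_cm(x_left_px, x_right_px, baseline_cm=6.0, focal_px=1200.0):
    """Z = f * T / (x_l - x_r): distance from the eyes to the mobile terminal screen.

    x_left_px / x_right_px are the horizontal pixel coordinates of the same eye point
    in the two front-camera images; baseline_cm is the physical camera distance T,
    focal_px the focal length f expressed in pixels.
    """
    disparity = x_left_px - x_right_px
    if disparity == 0:
        raise ValueError("zero disparity: the measured point is effectively at infinity")
    return focal_px * baseline_cm / disparity

# Example: a 60-pixel disparity with these placeholder parameters gives 120 cm.
print(eye_screen_distance_cm(700.0, 640.0))
```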
Step 206: and obtaining the sight line direction of the human eyes according to each human face feature image.
In the embodiment of the invention, the mobile terminal has an eyeball tracking function, and the sight direction of the mobile terminal watched by human eyes can be judged according to an eyeball tracking technology.
In a specific application, when the gaze direction of the human eyes changes, the eyeballs and the areas around them change slightly; these changes produce features that can be extracted by the mobile phone through image capture or scanning, so that the gaze direction of the human eyes can be tracked in real time.
Step 207: and determining an angle value between the human eyes and the screen of the mobile terminal according to the human eye sight direction.
In specific application, after the direction of the sight of human eyes is determined, the angle value between the human eyes and the screen of the mobile terminal under the current state can be measured.
It can be understood that there are many methods for measuring the angle value between the human eye and the mobile terminal at present, and in practical applications, a person skilled in the art may select a more preferable scheme according to practice.
In the embodiment of the present invention, measuring the distance value and/or the angle value between the human eyes and the screen of the mobile terminal provides the basis for judging whether the user is in a healthy reading state. Since these values are the main factors that can cause visual impairment when people operate a mobile terminal, they provide an accurate basis for the eyesight protection operation.
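The patent leaves the exact angle definition open; the sketch below assumes the angle value of steps 206-207 is the angle between the estimated gaze direction vector and the screen normal, computed with elementary vector arithmetic.

```python
import math

def gaze_screen_angle_deg(gaze_dir, screen_normal=(0.0, 0.0, 1.0)):
    """Angle (in degrees) between the tracked gaze direction and the screen normal."""
    dot = sum(g * n for g, n in zip(gaze_dir, screen_normal))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(n * n for n in screen_normal))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Example: a gaze vector tilted slightly off the screen normal, roughly 16.7 degrees.
print(gaze_screen_angle_deg((0.3, 0.0, 1.0)))
```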
It can be understood that, the combination of step 204 and step 205, the combination of step 206 and step 207, and the sequence between the two may be set according to an actual application scenario, which is not limited in the embodiment of the present invention.
Step 208: and if the acquisition process of the face feature image reaches a first analysis period, acquiring a plurality of viewing position parameters in the first analysis period.
In a specific application, the first analysis period may run from the end time of the previous first analysis period until the number of collected facial feature images reaches a preset number, and/or from the end time of the previous first analysis period until the total duration of collecting facial feature images reaches a preset duration.
The mobile terminal can enter a first analysis period when the first face feature image is collected, and can also set the time for entering the first analysis period according to an actual application scene.
The first analysis period may count the collected facial feature images, starting either from the start time of the very first analysis period or from the end time of the previous first analysis period; when the number of collected facial feature images reaches the preset number, one first analysis period is considered complete.
The first analysis period may instead be timed, starting from the start time of the very first analysis period or from the end time of the previous first analysis period; when the total duration of collecting facial feature images reaches the preset duration, one first analysis period is considered complete.
The counting and timing operations may also be carried out together: one first analysis period is considered complete when the number of collected facial feature images reaches the preset number and the total collection duration reaches the preset duration.
The embodiment of the present invention does not specifically limit the first analysis period. When the first analysis period ends, the plurality of viewing position parameters obtained in that period are acquired as the data basis for judging whether to issue an eyesight protection prompt.
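A small sketch of the period check of step 208; the preset count and preset duration are placeholders, and the "and" combination shown is only one of the count/duration variants described above.

```python
import time

class FirstAnalysisPeriod:
    """Period tracker for step 208: complete when the collected-image count reaches a
    preset number and the elapsed time reaches a preset duration (the 'and' variant;
    an 'or' variant is equally compatible with the text)."""

    def __init__(self, preset_count=30, preset_seconds=60.0):
        self.preset_count = preset_count
        self.preset_seconds = preset_seconds
        self.reset()

    def reset(self):
        """Start the next period at the end time of the previous one."""
        self.start = time.time()
        self.viewing_params = []

    def add(self, viewing_param):
        self.viewing_params.append(viewing_param)

    def is_complete(self):
        return (len(self.viewing_params) >= self.preset_count
                and time.time() - self.start >= self.preset_seconds)
```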
Step 209: and acquiring a distance difference value between two adjacent distance values.
In a specific application, the distance values may be stored in a preset storage area in the order in which they were obtained. Acquiring the distance difference between two adjacent distance values may specifically be done as follows: starting from the second distance value, each time a distance value is obtained, it and the immediately preceding distance value are subtracted to obtain the difference between the two adjacent distance values.
Step 210: determining the distance change times in the first analysis period according to the distance difference, and/or determining the first ratio of the distance difference which is larger than a first threshold value in the first analysis period in all the distance differences according to the distance difference.
The number of distance changes in the first analysis period may be determined from the distance differences by counting the distance differences that are not 0 (or that exceed a preset value); this count is the number of distance changes.
The method for determining, according to the distance difference, a first proportion of distance differences larger than a first threshold among all distance differences may be: and comparing the distance difference values with a first threshold respectively, and dividing the number of the distance difference values larger than the first threshold by the number of all the distance difference values to obtain a first ratio of the distance difference values larger than the first threshold in all the distance difference values.
Step 211: and acquiring an angle difference value between two adjacent angle values.
In a specific application, the angle values may be stored in a preset storage area in the order in which they were obtained. Acquiring the angle difference between two adjacent angle values may specifically be done as follows: starting from the second angle value, each time an angle value is obtained, it and the immediately preceding angle value are subtracted to obtain the difference between the two adjacent angle values.
Step 212: according to the angle difference value, determining the angle change times in the first analysis period, and/or according to the angle difference value, determining a second proportion of the angle difference values which are larger than a second threshold value in the first analysis period in all the angle difference values.
The number of angle changes in the first analysis period may be determined from the angle differences by counting the angle differences that are not 0 (or that exceed a preset value); this count is the number of angle changes.
The second proportion of angle differences greater than the second threshold among all angle differences may be determined by comparing each angle difference with the second threshold and dividing the number of angle differences greater than the second threshold by the total number of angle differences.
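Steps 209-212 apply the same computation to the distance sequence and to the angle sequence, so the sketch below factors it into a single helper; the tolerance used to decide that a difference counts as a change, and the example values, are assumptions.

```python
def change_statistics(values, threshold, tolerance=1e-6):
    """Adjacent differences, number of changes, and share of differences above a threshold.

    Run once over the distance values (first threshold / first proportion, steps 209-210)
    and once over the angle values (second threshold / second proportion, steps 211-212).
    """
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    change_count = sum(1 for d in diffs if d > tolerance)            # "change times"
    ratio = sum(1 for d in diffs if d > threshold) / len(diffs) if diffs else 0.0
    return diffs, change_count, ratio

# Example with distance values in centimeters and a first threshold of 5 cm.
distances = [32.0, 31.5, 24.0, 23.8, 30.0]
_, n_changes, first_ratio = change_statistics(distances, threshold=5.0)
print(n_changes, first_ratio)   # 4 changes, 0.5 of the differences exceed 5 cm
```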
Step 213: and determining the reading state of the user according to the distance change times, the angle change times, the first ratio and/or the second ratio.
In a specific application, the healthy reading state can be defined as interval values on one or more of the number of distance changes, the number of angle changes, the first proportion and the second proportion, for example: the number of distance changes is smaller than a preset value, the number of angle changes is smaller than a preset value, the first proportion is smaller than a preset value, and/or the second proportion is smaller than a preset value. When the relevant statistics all lie within the healthy-reading intervals, the user's reading state is determined to be healthy; otherwise it is determined to be unhealthy.
In the embodiment of the present invention, a person skilled in the art may also set an interval value of the unhealthy reading state by a method similar to the above method according to an actual situation, so as to determine the reading state of the user.
In the embodiment of the present invention, determining the reading state of the user according to the number of distance changes, the number of angle changes, the first proportion and/or the second proportion allows it to be determined very accurately whether the user is in a healthy or an unhealthy reading state.
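One possible reading of step 213, treating the healthy reading state as an interval on each statistic; the concrete upper bounds are assumptions.

```python
# Assumed healthy-reading upper bounds (illustrative only; any subset of the four
# statistics can be used, as the text above allows).
MAX_DISTANCE_CHANGES = 10
MAX_ANGLE_CHANGES = 10
MAX_FIRST_RATIO = 0.3
MAX_SECOND_RATIO = 0.3

def reading_state_from_statistics(distance_changes, angle_changes, first_ratio, second_ratio):
    """Healthy only if every statistic lies inside its healthy-reading interval."""
    healthy = (distance_changes <= MAX_DISTANCE_CHANGES
               and angle_changes <= MAX_ANGLE_CHANGES
               and first_ratio <= MAX_FIRST_RATIO
               and second_ratio <= MAX_SECOND_RATIO)
    return "healthy" if healthy else "unhealthy"

print(reading_state_from_statistics(4, 3, 0.5, 0.1))   # unhealthy: first ratio too high
```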
Step 214: if the user is in an unhealthy reading state, performing an eyesight protection reminder.
As a preferable scheme of the embodiment of the present invention, the executing of the eyesight protection reminding operation includes:
displaying a reminding picture of impaired vision on the screen of the mobile terminal;
and/or, performing a shaking operation;
and/or performing voice reminding operation.
In a specific application, these reminder operations may be used individually or combined arbitrarily to remind the user to protect their eyesight, which is not limited by the embodiment of the present invention.
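A small sketch of step 214 combined with the reminder options above; the show_picture/vibrate/speak callables stand in for whatever platform APIs the terminal actually exposes.

```python
def remind_to_protect_eyesight(state, show_picture=None, vibrate=None, speak=None):
    """Step 214: trigger any combination of the reminder operations for an unhealthy state."""
    if state != "unhealthy":
        return
    for action in (show_picture, vibrate, speak):   # picture and/or shaking and/or voice
        if action is not None:
            action()

# Example: only a voice reminder is wired up.
remind_to_protect_eyesight("unhealthy", speak=lambda: print("Please protect your eyesight."))
```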
As another preferred scheme of the embodiment of the present invention, if no facial features are detected in front of the screen of the mobile terminal for a preset time, the stored viewing position parameters are cleared.
If no facial features are detected in front of the screen for the preset time, the user has not been operating the mobile terminal during that time; the previously stored viewing position parameters can then no longer serve as a basis for any subsequent eyesight protection decision, so clearing them reduces the occupation of storage resources.
In the embodiment of the invention, facial feature images are collected at a first preset frequency, and a viewing position parameter between the user's eyes and the screen of the mobile terminal is determined from each facial feature image; the viewing position parameter reflects the state in which the eyes are viewing the mobile terminal. The mobile terminal then judges whether the collection process has reached the first analysis period, and if so acquires the plurality of viewing position parameters obtained within that period and determines the user's reading state from them, the reading state being either healthy or unhealthy. If the user is in an unhealthy reading state, the eyesight protection reminder operation is executed, so that the user is prompted to protect their eyesight in a reasonable manner. Specifically, whether the user is in an unhealthy reading state is decided from a plurality of viewing position parameters within one analysis period. On the one hand, multiple viewing position parameters reflect the user's reading state more accurately than a single measurement, avoiding reminders triggered by misjudgment; on the other hand, because the decision is made only after an analysis period ends, reminders cannot become too frequent and do not unduly interfere with the user's use of the mobile terminal.
It should be noted that the foregoing method embodiments are described as a series of acts or combinations for simplicity in explanation, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
[Device Embodiment Three]
Referring to fig. 3, a block diagram of a mobile terminal 300 according to an embodiment of the present invention is shown. The mobile terminal includes:
the face feature image acquisition module 310 is configured to acquire a face feature image according to a first preset frequency.
And the viewing position parameter determining module 320 is configured to determine a viewing position parameter between human eyes and a screen of the mobile terminal according to each of the facial feature images.
A reading state determining module 330, configured to determine a reading state of the user according to the plurality of viewing position parameters; the reading states include a healthy reading state and an unhealthy reading state.
And the eyesight protection reminding module 340 is used for executing eyesight protection reminding operation if the user is in an unhealthy reading state.
Preferably, referring to fig. 4, on the basis of fig. 3, the mobile terminal 300 may further include:
and the bright screen detection module 301 is configured to detect whether the mobile terminal is in a bright screen state.
The face feature image detection module 302 is configured to detect, through a front camera of the mobile terminal, whether a facial feature image exists in front of the screen of the mobile terminal if the mobile terminal is in a bright screen state, and, if a facial feature image exists in front of the screen, to trigger the operation of the face feature image acquisition module 310.
The viewing position parameter determination module 320 includes:
and a disparity value determining submodule 3201, configured to obtain a disparity value of human eyes in each pair of face feature images according to each pair of face feature images acquired by the two front cameras.
And the distance value calculating operator module 3202 is used for calculating a distance value between human eyes and a screen of the mobile terminal according to the parallax value, and the distance parameter and the focal length parameter between the two front cameras.
A sight direction determining submodule 3203, configured to obtain a sight direction of human eyes according to each of the face feature images;
the angle value determining submodule 3204 is configured to determine an angle value between the human eyes and the mobile terminal screen according to the human eye sight direction.
The viewing position parameter acquiring module 350 includes:
a first analysis cycle determining sub-module 3501, configured to start from an end time of a previous first analysis cycle, when the number of the acquired face feature images reaches a preset number; and/or, starting from the ending moment of the last first analysis period, and enabling the total time length for acquiring the face feature images to reach the preset time length.
The reading status determination module 330 includes:
the distance difference obtaining sub-module 3301 is configured to obtain a distance difference between two adjacent distance values.
A distance change number determining sub-module 3302, configured to determine, according to the distance differences, the number of distance changes in the first analysis period; and/or,
a first percentage determining sub-module 3303, configured to determine a first percentage of distance differences that are greater than a first threshold in the first analysis period among all distance differences;
the angular difference determination sub-module 3304 is used to obtain the angular difference between two adjacent angular values.
The angle change number determining sub-module 3305 is configured to determine, according to the angle differences, the number of angle changes in the first analysis period; and/or,
a second percentage determining sub-module 3306, configured to determine, according to the angle difference value, a second percentage of the angle difference values that are greater than a second threshold value in the first analysis period among all the angle difference values;
the reading state determining sub-module 3307 is configured to determine a reading state of the user according to the distance change times, the angle change times, the first proportion, and/or the second proportion.
Preferably, the mobile terminal 300 includes:
and the difference value acquisition module is used for acquiring a difference value between the viewing position parameter and the previous viewing position parameter of the viewing position parameter every time one viewing position parameter is acquired.
The first frequency adjusting module is used for adjusting the first preset frequency to a second preset frequency when at least one difference value exceeds a preset difference threshold value; wherein the second preset frequency is greater than the first preset frequency.
The analysis period adjusting module is used for adjusting the first analysis period to a second analysis period; wherein the first analysis period is greater than the second analysis period.
And the mobile terminal state judging module is used for judging whether the mobile terminal is in a moving state and/or a shaking state.
The second frequency adjusting module is used for adjusting the first preset frequency to a second preset frequency if the mobile terminal is in a moving state and/or a shaking state; wherein the second preset frequency is greater than the first preset frequency.
And the viewing position parameter emptying module is used for emptying the viewing position parameter if the situation that the human face characteristics do not exist in front of the screen of the mobile terminal within the preset time is detected.
Preferably, the eyesight protection reminding module 340 includes:
the eyesight protection reminding submodule is used for displaying a reminding picture of impaired eyesight on the screen of the mobile terminal; and/or, performing a shaking operation; and/or performing voice reminding operation.
In the embodiment of the invention, facial feature images are collected at a first preset frequency, and a viewing position parameter between the user's eyes and the screen of the mobile terminal is determined from each facial feature image; the viewing position parameter reflects the state in which the eyes are viewing the mobile terminal. The user's reading state is then determined from the multiple viewing position parameters obtained within the first analysis period, and an eyesight protection reminder is issued according to that reading state, so that the user is prompted to protect their eyesight in a reasonable manner. Specifically, whether the user is in an unhealthy reading state is decided from a plurality of viewing position parameters within one analysis period. On the one hand, multiple viewing position parameters reflect the user's reading state more accurately than a single measurement, avoiding reminders triggered by misjudgment; on the other hand, because the decision is made only after an analysis period ends, reminders cannot become too frequent and do not unduly interfere with the user's use of the mobile terminal.
The mobile terminal can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
Fig. 5 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to collect facial feature images at a first preset frequency; determine, from each facial feature image, a viewing position parameter between the human eyes and the screen of the mobile terminal; determine the user's reading state from the multiple viewing position parameters obtained within the first analysis period; and issue an eyesight protection reminder according to the user's reading state.
In the embodiment of the invention, facial feature images are collected at a first preset frequency, and a viewing position parameter between the user's eyes and the screen of the mobile terminal is determined from each facial feature image; the viewing position parameter reflects the state in which the eyes are viewing the mobile terminal. The user's reading state is then determined from the multiple viewing position parameters obtained within the first analysis period, and an eyesight protection reminder is issued according to that reading state, so that the user is prompted to protect their eyesight in a reasonable manner. Specifically, whether the user is in an unhealthy reading state is decided from a plurality of viewing position parameters within one analysis period. On the one hand, multiple viewing position parameters reflect the user's reading state more accurately than a single measurement, avoiding reminders triggered by misjudgment; on the other hand, because the decision is made only after an analysis period ends, reminders cannot become too frequent and do not unduly interfere with the user's use of the mobile terminal.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send signals during message transmission/reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506, and the image frames processed by the graphics processor 5041 may be stored in the memory 509 (or another storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 5071 using a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position and orientation of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown and thus are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program that is stored in the memory 509 and can be run on the processor 510. When the computer program is executed by the processor 510, the processes of the embodiments of the eyesight protection method are implemented and the same technical effect can be achieved; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processes of the embodiments of the eyesight protection method are implemented and the same technical effects can be achieved; to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. A method for protecting eyesight is applied to a mobile terminal, and is characterized by comprising the following steps:
collecting a face feature image according to a first preset frequency;
determining a viewing position parameter between human eyes and a mobile terminal screen according to each human face feature image;
determining the reading state of the user according to the obtained multiple viewing position parameters in the first analysis period;
performing eyesight protection reminding according to the reading state of the user;
the mobile terminal comprises two front cameras, and the viewing position parameter comprises a distance value;
the step of determining the viewing position parameter between the human eyes and the screen of the mobile terminal according to each human face feature image comprises the following steps:
obtaining a parallax value of human eyes in each pair of face feature images according to each pair of face feature images acquired by the two front cameras;
calculating the distance value between the human eyes and the screen of the mobile terminal according to the parallax value, the distance parameter between the two front cameras and the focal length parameter;
the viewing position parameter further comprises an angle value;
the step of determining the viewing position parameter between the human eyes and the screen of the mobile terminal according to each human face feature image further comprises the following steps:
obtaining the sight direction of human eyes according to each human face feature image;
determining an angle value between human eyes and a mobile terminal screen according to the human eye sight direction;
the step of determining the reading state of the user according to the obtained plurality of viewing position parameters in the first analysis period includes:
acquiring a distance difference value between every two adjacent distance values;
determining the distance change times in the first analysis period according to the distance differences;
and/or,
determining, according to the distance differences, a first proportion of distance differences larger than a first threshold value among all the distance differences in the first analysis period;
acquiring an angle difference value between every two adjacent angle values;
determining the angle change times in the first analysis period according to the angle differences;
and/or,
determining, according to the angle differences, a second proportion of angle differences larger than a second threshold value among all the angle differences in the first analysis period;
and determining the reading state of the user according to the distance change times, the angle change times, the first proportion and/or the second proportion.
2. The method of claim 1, wherein the first analysis period comprises:
starting from the end time of the last first analysis period and ending at the time when the number of the collected face feature images reaches the preset number;
and/or, starting from the end time of the last first analysis period and ending at the time when the total collection duration of the face feature images reaches the preset duration.
3. The method according to claim 1, wherein after the step of determining the viewing position parameter between the human eyes and the screen of the mobile terminal according to each of the facial feature images, the method further comprises:
each time a viewing position parameter is obtained, obtaining a difference value between the viewing position parameter and the previous viewing position parameter;
when at least one difference value exceeds a preset difference threshold value, adjusting the first preset frequency to a second preset frequency; wherein the second preset frequency is greater than the first preset frequency.
4. The method of claim 3, wherein the step of adjusting the first preset frequency to a second preset frequency is followed by:
adjusting the first analysis period to a second analysis period; wherein the first analysis period is greater than the second analysis period.
5. The method according to claim 1, wherein after the step of determining the viewing position parameter between the human eyes and the screen of the mobile terminal according to each of the facial feature images, the method further comprises:
judging whether the mobile terminal is in a moving state and/or a shaking state;
if the mobile terminal is in a moving state and/or a shaking state, adjusting the first preset frequency to a second preset frequency; wherein the second preset frequency is greater than the first preset frequency.
6. The method of claim 1, wherein the step of acquiring the facial feature images at the first preset frequency is preceded by the steps of:
detecting, through a front camera of the mobile terminal, whether a face feature image exists in front of the screen of the mobile terminal when the mobile terminal is in a bright-screen state;
and if a face feature image exists in front of the screen of the mobile terminal, collecting the face feature image according to the first preset frequency.
7. A mobile terminal, comprising:
the face feature image acquisition module is used for acquiring a face feature image according to a first preset frequency;
the viewing position parameter determining module is used for determining the viewing position parameter between human eyes and a screen of the mobile terminal according to each human face feature image;
the reading state determining module is used for determining the reading state of the user according to the obtained multiple viewing position parameters in the first analysis period;
the eyesight protection reminding module is used for carrying out eyesight protection reminding according to the reading state of the user;
the mobile terminal comprises two front cameras, the viewing position parameter comprises a distance value, and the viewing position parameter determining module comprises:
the parallax value determining submodule is used for obtaining the parallax value of human eyes in each pair of face feature images according to each pair of face feature images acquired by the two front cameras;
the distance value calculation operator module is used for calculating the distance value between human eyes and a screen of the mobile terminal according to the parallax value, and the distance parameter and the focal length parameter between the two front cameras;
the viewing position parameter further includes an angle value, the viewing position parameter determination module further includes:
the sight direction determining submodule is used for obtaining the sight direction of human eyes according to each face feature image;
the angle value determining submodule is used for determining an angle value between human eyes and a screen of the mobile terminal according to the human eye sight direction;
the reading state determination module comprises:
the distance difference obtaining submodule is used for obtaining a distance difference between two adjacent distance values;
the distance change time determining submodule is used for determining the distance change time in the first analysis period according to the distance difference;
and/or,
a first proportion determination submodule for determining, according to the distance differences, a first proportion of distance differences greater than a first threshold value among all distance differences within the first analysis period;
the angle difference determining submodule is used for obtaining the angle difference between two adjacent angle values;
the angle change time determining submodule is used for determining the angle change time in the first analysis period according to the angle difference;
and/or,
the second proportion determining submodule is used for determining, according to the angle differences, a second proportion of angle differences larger than a second threshold value among all the angle differences in the first analysis period;
and the reading state determining submodule is used for determining the reading state of the user according to the distance change times, the angle change times, the first proportion and/or the second proportion.
8. The mobile terminal of claim 7, wherein the first analysis period comprises:
starting from the end time of the last first analysis period and ending at the time when the number of the collected face feature images reaches the preset number;
and/or, starting from the end time of the last first analysis period and ending at the time when the total collection duration of the face feature images reaches the preset duration.
9. The mobile terminal of claim 7, further comprising:
a difference value obtaining module, configured to obtain, for each obtained viewing position parameter, a difference value between the viewing position parameter and the previous viewing position parameter;
the first frequency adjusting module is used for adjusting the first preset frequency to a second preset frequency when at least one difference value exceeds a preset difference threshold value; wherein the second preset frequency is greater than the first preset frequency.
10. The mobile terminal of claim 9, further comprising:
the analysis period adjusting module is used for adjusting the first analysis period to a second analysis period; wherein the first analysis period is greater than the second analysis period.
11. The mobile terminal of claim 7, further comprising:
the mobile terminal state judging module is used for judging whether the mobile terminal is in a moving state and/or a shaking state;
the second frequency adjusting module is used for adjusting the first preset frequency to a second preset frequency if the mobile terminal is in a moving state and/or a shaking state; wherein the second preset frequency is greater than the first preset frequency.
12. The mobile terminal of claim 7, further comprising:
the character feature image detection module is used for detecting whether a human face feature image exists in front of a screen of the mobile terminal or not through a front camera of the mobile terminal when the mobile terminal is in a bright screen state; and if the human face characteristic image exists in front of the screen of the mobile terminal, executing the operation of the human face characteristic image acquisition module.
CN201711209210.1A 2017-11-27 2017-11-27 Eyesight protection method and mobile terminal Active CN108012026B (en)
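For a concrete picture of the distance calculation recited in claim 1, the sketch below applies the standard stereo-triangulation relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two front cameras and d is the disparity of the eye region in a pair of face feature images; the function name and the sample camera parameters are assumptions for illustration, not values from the patent.

```python
def eye_to_screen_distance(disparity_px: float,
                           baseline_mm: float,
                           focal_length_px: float) -> float:
    """Estimate the eye-to-screen distance (in mm) from the disparity of the eye
    region observed by two front cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; the eyes were not matched")
    return focal_length_px * baseline_mm / disparity_px

# Assumed example: a 20 mm baseline, a 1400 px focal length and a 90 px disparity
# give roughly a 311 mm (about 31 cm) viewing distance.
print(eye_to_screen_distance(disparity_px=90, baseline_mm=20, focal_length_px=1400))
```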
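Claims 3 to 5 further describe raising the capture frequency (and, optionally, shortening the analysis period) when adjacent viewing position parameters differ sharply or when the terminal is moving or shaking. A minimal sketch of that control logic, with hypothetical names and values, might look as follows.

```python
from typing import Tuple

def adjust_capture_settings(prev_distance_cm: float,
                            curr_distance_cm: float,
                            device_moving: bool,
                            freq_hz: float,
                            period_s: float,
                            diff_threshold_cm: float = 10.0) -> Tuple[float, float]:
    """Return the (capture frequency, analysis period) to use for the next samples.

    The frequency is raised and the analysis period shortened when the viewing
    position jumps sharply or the terminal is detected to be moving or shaking,
    so that an unhealthy reading state is recognized sooner.
    """
    if abs(curr_distance_cm - prev_distance_cm) > diff_threshold_cm or device_moving:
        return freq_hz * 2.0, period_s / 2.0  # the "second" preset frequency and period
    return freq_hz, period_s                  # keep the "first" preset values

# Assumed example: a 12 cm jump in viewing distance doubles a 1 Hz capture rate
# and halves a 60 s analysis period.
print(adjust_capture_settings(30.0, 42.0, False, freq_hz=1.0, period_s=60.0))
```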

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711209210.1A CN108012026B (en) 2017-11-27 2017-11-27 Eyesight protection method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108012026A CN108012026A (en) 2018-05-08
CN108012026B (en) 2020-06-05

Family

ID=62054094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711209210.1A Active CN108012026B (en) 2017-11-27 2017-11-27 Eyesight protection method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108012026B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046552A (en) * 2019-03-21 2019-07-23 南京华捷艾米软件科技有限公司 Protect the method for user's eyesight and the device of protection user's eyesight
CN110236551A (en) * 2019-05-21 2019-09-17 西藏纳旺网络技术有限公司 Acquisition methods, device, electronic equipment and the medium of user's cervical vertebra tilt angle
CN114694360A (en) * 2020-12-25 2022-07-01 深圳Tcl新技术有限公司 Eye using prompting method and device for intelligent terminal, intelligent terminal and storage medium
CN114264257A (en) * 2021-12-21 2022-04-01 山东省产品质量检验研究院 Surface area measuring method and system for rotary container
CN114371825B (en) * 2022-01-08 2022-12-13 北京布局未来教育科技有限公司 Remote education service system based on Internet

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103000004A (en) * 2011-09-13 2013-03-27 三星电子(中国)研发中心 Monitoring method for visibility range from eyes to screen
CN104168384A (en) * 2014-08-22 2014-11-26 广东欧珀移动通信有限公司 Mobile terminal and health warning control method thereof
CN105049598A (en) * 2015-06-01 2015-11-11 广东小天才科技有限公司 Method for detecting visual distance of user and mobile terminal
CN105825494A (en) * 2015-08-31 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN107340849A (en) * 2016-04-29 2017-11-10 和鑫光电股份有限公司 Mobile device and its eyeshield control method

Also Published As

Publication number Publication date
CN108012026A (en) 2018-05-08

Similar Documents

Publication Publication Date Title
CN108012026B (en) Eyesight protection method and mobile terminal
CN109381165B (en) Skin detection method and mobile terminal
CN108491123B (en) Method for adjusting application program icon and mobile terminal
CN110719402B (en) Image processing method and terminal equipment
CN108307106B (en) Image processing method and device and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN110969981A (en) Screen display parameter adjusting method and electronic equipment
CN109994111B (en) Interaction method, interaction device and mobile terminal
CN108881617B (en) Display switching method and mobile terminal
CN108229420B (en) Face recognition method and mobile terminal
CN107463897B (en) Fingerprint identification method and mobile terminal
CN108962187B (en) Screen brightness adjusting method and mobile terminal
CN109241832B (en) Face living body detection method and terminal equipment
CN109525837B (en) Image generation method and mobile terminal
CN109544445B (en) Image processing method and device and mobile terminal
CN111401463A (en) Method for outputting detection result, electronic device, and medium
CN111107219B (en) Control method and electronic equipment
CN110197159B (en) Fingerprint acquisition method and terminal
CN107809515B (en) Display control method and mobile terminal
CN108196663B (en) Face recognition method and mobile terminal
CN107895108B (en) Operation management method and mobile terminal
CN112733673A (en) Content display method and device, electronic equipment and readable storage medium
CN108960097B (en) Method and device for obtaining face depth information
CN109947345B (en) Fingerprint identification method and terminal equipment
CN109753776B (en) Information processing method and device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant