CN110610768B - Eye use behavior monitoring method and server - Google Patents

Eye use behavior monitoring method and server

Info

Publication number: CN110610768B
Application number: CN201910704752.9A
Authority: CN (China)
Prior art keywords: eye, user, preset, evaluation, image
Legal status: Active (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110610768A
Inventors: 毕宏生, 胡媛媛, 毛力
Current and original assignee: Jinan Tongxing Intelligent Technology Co., Ltd.
Application filed by Jinan Tongxing Intelligent Technology Co., Ltd.; priority to CN201910704752.9A
Publication of CN110610768A (application); application granted; publication of CN110610768B (grant)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application discloses a method and a server for monitoring a user's eye use behavior, intended to help the user use the eyes scientifically. The server receives the user's eye use data acquired by a monitor; when a first preset condition is met according to the eye use data, it sends the monitor an instruction to acquire an eye use image of the user; it determines the user's eye use state from the eye use image; and it analyzes the user's eye use behavior according to the eye use data, the eye use state, and preset evaluation dimensions, determining a grade for the user's eye use behavior. The method can monitor the user's eye use data in real time, determine the eye use behavior in combination with image recognition, and analyze it to help the user form good eye use habits.

Description

Eye use behavior monitoring method and server
Technical Field
The application relates to the technical field of computers, in particular to a method and a server for monitoring eye using behaviors.
Background
Myopia is one of the main threats to the eyesight of teenagers. In particular, with the popularization of electronic products in recent decades, the prevalence of myopia among teenagers has remained persistently high, seriously harming their visual and physical health.
In teenagers' eye use behavior, eye use habits and the eye use environment often play a crucial role. Helping teenagers master the knowledge and methods of scientific eye use, and ensuring that they use their eyes in a suitable environment, can effectively prevent the onset of myopia and keep existing myopia from deepening.
In practice, however, teenagers often lack correct knowledge of scientific eye use and therefore develop many poor eye use habits, which damage their eyesight and hinder the prevention of myopia.
Therefore, a method that can effectively monitor a user's eye use behavior, analyze the user's eye use state, remind the user in time when poor eye use behavior is found, and correct that behavior is an important problem to be solved urgently.
Disclosure of Invention
The embodiments of the application provide a method and a server for monitoring eye use behavior, which monitor a user's eye use behavior, help the user use the eyes scientifically, and help form good eye use habits.
The method for monitoring the eye using behavior provided by the embodiment of the application comprises the following steps:
a server receives eye use data of a user acquired by a monitor, where the eye use data at least include an eye use distance between the user's eyes and a fixation object, a duration corresponding to the eye use distance, and a head inclination angle of the user;
according to the eye use data, when a first preset condition is met, the server sends the monitor an instruction to acquire an eye use image of the user; the eye use image is an image of the scene directly in front of the user, and the first preset condition at least includes the eye use distance reaching a preset distance threshold;
the server determines the eye use state of the user according to the eye use image, where the eye use state is at least one of a reading state, an electronic screen state, and an outdoor state;
the server analyzes the eye use behavior of the user according to the eye use data, the eye use state, and preset evaluation dimensions, and determines a grade of the user's eye use behavior; the preset evaluation dimensions include any one or more of: average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration.
The monitoring method for the eye using behavior provided by the embodiment of the application comprises the following steps:
a monitor acquires eye use data of a user and sends them to a server, where the eye use data at least include an eye use distance between the user's eyes and a fixation object, a duration corresponding to the eye use distance, and a head inclination angle of the user;
the monitor receives an instruction sent by the server to acquire an eye use image of the user; the eye use image is an image of the scene directly in front of the user, and the instruction is sent when the server determines, according to the eye use data, that a first preset condition is met; the first preset condition at least includes the eye use distance reaching a preset distance threshold;
the monitor collects the eye use image of the user and sends it to the server, so that the server determines the user's eye use state according to the eye use image, analyzes the user's eye use behavior according to the eye use data, the eye use state, and preset evaluation dimensions, and determines a grade of the eye use behavior; the eye use state is at least one of a reading state, an electronic screen state, and an outdoor state, and the preset evaluation dimensions include any one or more of: average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration.
An embodiment of the present application provides a server, including:
an acquisition device, configured to obtain eye use data of a user, where the eye use data at least include an eye use distance between the user's eyes and a fixation object, a duration corresponding to the eye use distance, and a head inclination angle of the user;
a processor, configured to send the monitor an instruction to acquire an eye use image of the user when it determines, according to the eye use data, that a first preset condition is met, where the eye use image is an image of the scene directly in front of the user and the first preset condition at least includes the eye use distance reaching a preset distance threshold; to determine the user's eye use state according to the eye use image, the eye use state being at least one of a reading state, an electronic screen state, and an outdoor state; and to analyze the user's eye use behavior according to the eye use data, the eye use state, and preset evaluation dimensions and determine a grade of the eye use behavior, where the preset evaluation dimensions include any one or more of: average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration.
The embodiments of the application provide a method for monitoring eye use behavior: a server obtains a user's eye use data through a monitor and, when it determines that the eye use meets a first preset condition, sends the monitor an instruction to acquire an eye use image of the user. The eye use state of the user is then determined from the acquired eye use data and eye use image. Finally, the eye use behavior of the user is analyzed according to the eye use data and the determined eye use state, and a grade of the eye use behavior is determined. By invoking the monitor's camera only when the first preset condition is met, the energy consumption and workload of the monitor are effectively reduced, which favors a small-sized monitor design. In addition, by monitoring the user's eye use behavior in real time, the user's eye use data can be accurately captured, the quality of the eye use behavior can be analyzed and determined, whether the eye use is scientific can be judged, poor eye use behaviors can be identified, and the user can be helped to form good eye use habits.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a method for monitoring eye use behavior according to an embodiment of the present application;
fig. 2 is a schematic view of a monitor according to an embodiment of the present application;
fig. 3 is a schematic view of an eye image capture device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to help a user form a good eye use habit, the embodiment of the application provides a method for monitoring eye use behavior. The execution body of the method comprises a plurality of devices capable of information interaction, such as mobile terminals, servers and the like. Each device may be respectively responsible for executing a part of steps of the method, and the steps responsible for each device may be distributed according to needs, which is not limited in the present application.
In the embodiment of the present application, the method will be described by taking a monitor and a server capable of information interaction as an example. The monitor is used for acquiring eye use data and eye use images of a user and sending the acquired data to the server. The monitor can be a device fixed on a frame of glasses of a user, and can also be a device fixed at the positions of ears, the front of the chest or a collar and the like of the user, so that the eye using behavior of the user can be monitored in real time. The server is used for processing the acquired eye use data and the eye use image.
Fig. 1 is a flowchart of a method for monitoring eye usage behavior provided in an embodiment of the present application, which specifically includes:
s101: the monitor acquires eye use data of the user and sends the eye use data to the server.
In the embodiment of the application, the monitor can acquire the eye use data of the user and send the eye use data to the server. The eye use data refers to data related to the eye use behavior of the user, and includes at least an eye use distance, a duration corresponding to the eye use distance, and a head inclination angle of the user. The eye distance refers to a distance between the eyes of the user and the fixation object (such as a book, a mobile phone and the like) when the user looks at the object, and can be obtained through an infrared laser sensor and other devices. The duration time corresponding to the eye distance is the time for which the user maintains a certain eye distance, and can be obtained by a timer or calculated by the server. The head inclination angle of the user can be obtained by a gyroscope or the like.
When a user gazes at an object with too short an eye use distance, for too long a duration, or with an incorrect posture, eye health may be harmed, which is not conducive to scientific eye use. Therefore, after receiving the user's eye use data from the monitor, the server can send a reminder instruction to the monitor, and the monitor reminds the user, in any of the following cases: the eye use distance is smaller than a first preset distance threshold and the duration is greater than a first preset time threshold, indicating that the user has kept an excessively close distance for too long (e.g., less than 20 cm for more than 15 seconds); the eye use distance is smaller than or equal to a second preset distance threshold and the duration is greater than a second preset time threshold, indicating that the user has used the eyes too long at a normal distance (e.g., between 33 cm and 60 cm for more than 45 minutes); or the head inclination angle is greater than a preset angle threshold, indicating that the user's eye use posture is incorrect (e.g., a head inclination greater than 10 degrees).
It should be noted that only several possible reminder conditions are proposed here; other conditions not mentioned in this application may be set as needed, and this application does not limit them. The monitor may remind the user by a warning sound, vibration, or the like.
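As a non-authoritative illustration of the reminder conditions above, the following Python sketch encodes the example thresholds quoted in the text (20 cm / 15 s, 33 to 60 cm / 45 min, 10 degrees); the function and parameter names are ours, not from the patent.

```python
def needs_reminder(distance_cm: float, duration_s: float, head_tilt_deg: float) -> bool:
    """Return True if the monitor should remind the user."""
    # Too close for too long (first preset distance/time thresholds).
    if distance_cm < 20 and duration_s > 15:
        return True
    # Normal reading distance held for too long (second thresholds).
    if 33 < distance_cm <= 60 and duration_s > 45 * 60:
        return True
    # Incorrect eye-use posture.
    if head_tilt_deg > 10:
        return True
    return False
```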
S102: and when the server determines that the first preset condition is met, sending an instruction for acquiring the eye use image of the user to the monitor, and acquiring the eye use image of the user by the monitor.
In the embodiment of the application, in order to distinguish different eye behaviors of a user (such as reading, watching a mobile phone, watching a television, and the like), the server may determine the gaze object of the user through the monitor to determine the specific eye behavior of the user.
Specifically, according to the eye use data sent by the monitor, the server may send the monitor an instruction to collect an eye use image of the user when the eye use data satisfy a first preset condition. In one possible implementation, as shown in fig. 2, the monitor may be a device installed on a temple of the user's glasses, and the eye use image it collects is an image of the scene directly in front of the user; the first preset condition indicates that the server has determined, from the eye use data, that the user's eye use behavior may have changed. In this way, the monitor's camera is invoked only in such preset cases (i.e., when the first preset condition is met), which effectively reduces the workload and power consumption of the monitor, saves energy, and increases its standby time.
In general, a user often performs a certain eye-using action within a specific eye-using distance. For example, the eye distance of a user is usually 20 cm to 60 cm when the user reads and watches a mobile phone, the eye distance of the user is usually more than 120 cm when the user watches a television, and the like. Therefore, the first preset condition may be set according to the eye distance. Specifically, the first preset condition may at least include: and when the eye using distance reaches a preset distance threshold value, acquiring an eye using image of the user. The server can send an instruction for acquiring the eye use image of the user to the monitor according to the eye use data of the user when the eye use distance of the user is determined to reach any one preset distance threshold value in the plurality of preset distance threshold values, so that the monitor acquires the eye use image in the current sight of the user.
Further, some sudden actions may be taken by the user during the use of the eyes. For example, looking up out of the eye while reading, looking down at a cell phone while watching television, etc. In this case, the server may determine that the eye distance of the user reaches a preset distance threshold, and send an instruction to the monitor to capture an image of the eye of the user. However, in fact, the current eye usage image of the user collected by the monitor does not represent the change of the eye usage behavior of the user, and the server may misjudge the current eye usage behavior of the user due to the sudden action of the user. Therefore, in order to avoid that sudden actions of the user are mistakenly judged as changes of the eye using behaviors of the user, the accuracy of judging the eye using behaviors of the user is improved, and when the server sends an instruction of collecting eye using images of the user to the monitor, the monitor can be required to continuously collect a plurality of current eye using images of the user according to a certain time interval. The server can comprehensively judge the current eye using behavior of the user according to the plurality of eye using images.
S103: the server determines the eye using state of the user according to the eye using image of the user.
After receiving the eye image of the user sent by the monitor, the server can perform image recognition on the eye image to determine an object or a scene in the sight of the user in the eye image, so as to determine the eye using state (i.e. eye using behavior) of the user.
In the embodiment of the application, the process of determining the eye using state of the user by the server according to the eye using data and the eye using image of the user comprises the following two steps:
First, after acquiring the user's eye use image, the server can perform image recognition on it according to a preset neural network model and determine the label corresponding to the image from several preset labels. Specifically, the server may train several binary-classification image recognition models based on the preset neural network model and use them to classify the eye use image. The binary classification results serve as the preset labels, which at least include electronic screen, non-electronic screen, outdoor, and non-outdoor. The server may then determine, from the preset labels, at least one label corresponding to the acquired eye use image.
In one embodiment, the preset neural network model may include 5 parts, wherein:
the part-1 convolution uses a 3x3 convolution kernel with stride 1 and outputs 64 feature maps;
the part-2 convolution uses a 3x3 convolution kernel with stride 1 and outputs 128 feature maps;
part 3 comprises three convolution layers, which use a 3x3, a 1x1, and a 3x3 convolution kernel respectively, each with stride 1, and output 256 feature maps; these three layers increase the degree of non-linearity and improve the accuracy of image feature recognition;
the part-4 convolution uses a 3x3 convolution kernel with stride 1 and outputs 256 feature maps;
the part-5 convolution uses a 3x3 convolution kernel and has an output of 2, to achieve a binary output.
After each convolution layer, a 2x2 pooling kernel with stride 2 can be applied; the pooling algorithm is expressed as
[pooling formula, shown only as an image in the original]
where P_lk is the downsampled feature value obtained from a pooling kernel of size c x c, F_ij is the corresponding element of a c x c pooling kernel in the convolution feature map F, a is the sum of all elements in the pooling kernel, σ is the standard deviation, b_x is the bias term, l is a first preset value, and k is a second preset value. Through this pooling algorithm, appropriate pooling weights can be assigned to different elements of the pooling kernel, so that the extracted feature value better expresses global features. It effectively avoids the loss of key features caused by max pooling and the weakening of larger feature values caused by average pooling, allowing different pooling domains to extract more accurate features.
This model extracts deeper image features through the added convolution layers and contains no fully connected layer: the part-5 convolution replaces the conventional fully connected layer, which reduces the number of parameters and the amount of computation and helps speed up model training.
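A minimal PyTorch sketch of the five-part network described above follows. Since the patent's custom pooling formula is shown only as an image, standard 2x2 average pooling (stride 2) is used here as a stand-in; the input size, padding, and the final global pooling that collapses the part-5 maps into a 2-way output are our assumptions.

```python
import torch.nn as nn

class EyeSceneClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1), nn.ReLU(),    # part 1: 64 maps
            nn.AvgPool2d(2, 2),                                     # stand-in pooling
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU(),  # part 2: 128 maps
            nn.AvgPool2d(2, 2),
            nn.Conv2d(128, 256, 3, stride=1, padding=1), nn.ReLU(), # part 3: 3x3 / 1x1 / 3x3
            nn.Conv2d(256, 256, 1, stride=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(),
            nn.AvgPool2d(2, 2),
            nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(), # part 4: 256 maps
            nn.AvgPool2d(2, 2),
            nn.Conv2d(256, 2, 3, stride=1, padding=1),              # part 5: 2-way output
        )
        # Convolution replaces the conventional fully connected layer;
        # global average pooling collapses the maps to one 2-way score.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.features(x)            # (N, 2, H', W')
        return self.pool(x).flatten(1)  # (N, 2) binary logits
```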
The server can train image recognition separately for each requirement based on the preset neural network model, so as to recognize mobile phones, televisions, outdoor scenes, and so on. Taking mobile phone recognition as an example, a preset training set is used to iteratively train the neural network model, where each training sample is an image labeled as phone or non-phone. The model learns automatically from the labels of the training samples to build a neural network for recognizing phones. A loss function preset in the model estimates the degree of inconsistency between the model's predicted values and the true values. During iterative training on the training set, the loss function outputs a value at each iteration, and when that value stabilizes at a minimum, initial training can be considered complete. The initially trained model can then be tested with a test set to judge its accuracy: each test sample is an unlabeled image that does or does not contain a phone, and after a sample is input, the model decides whether it contains a phone. Comparing the test results with the ground truth gives the model's accuracy; if the accuracy is low, the model's parameters can be adjusted to improve it. Through test-set evaluation and parameter adjustment, the neural network for phone recognition is trained. Models for recognizing other objects (e.g., television, outdoor) are trained in the same way, which is not repeated here.
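The following sketch illustrates this training procedure for one binary classifier (phone / non-phone). The data loaders, learning rate, epoch count, and the choice of cross-entropy loss are our assumptions; the text only states that a preset loss function is iterated to a stable minimum and the model is then checked on a test set.

```python
import torch
import torch.nn as nn

def train_binary_classifier(model, train_loader, test_loader, epochs=20):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:   # labels: 0 = non-phone, 1 = phone
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    # Accuracy on the test set decides whether parameters need adjusting.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.numel()
    return correct / total
```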
Second, after determining the label corresponding to the eye use image, the server can determine the user's eye use state from the acquired eye use data and the labeled image according to a second preset condition. The eye use state can at least include a reading state, an electronic screen state, and an outdoor state: the reading state indicates that the user is reading, the electronic screen state indicates that the user is using an electronic product, and the outdoor state indicates that the user is active outdoors.
The second preset condition may include at least:
when the eye use distance is smaller than the second preset distance threshold, the duration is greater than a third preset time threshold, and the label corresponding to the eye use image is non-electronic screen and non-outdoor, the user's eye use state is the reading state;
when the eye use distance is smaller than the second preset distance threshold, the duration is greater than the third preset time threshold, and the label corresponding to the eye use image is electronic screen, the user's eye use state is the electronic screen state;
when the eye use distance is greater than a third preset distance threshold, the duration is greater than a fourth preset time threshold, and the label corresponding to the eye use image is outdoor, the user's eye use state is the outdoor state.
For example, when the eye use distance is less than 60 cm, the duration exceeds 10 seconds, and the eye use image shows no electronic screen and no outdoor scene, the user is considered to be reading and is in the reading state; when the eye use distance is less than 60 cm, the duration exceeds 10 seconds, and an electronic screen is recognized in the image, the user is considered to be using an electronic product and is in the electronic screen state; when the eye use distance exceeds 120 cm, the duration exceeds 20 seconds, and the image shows an outdoor scene, the user is considered to be active outdoors and is in the outdoor state.
It should be noted that these are only some possible implementations of the second preset condition; the application is not limited to them, and other implementations not mentioned here may also be set. For example, when the eye use distance is greater than the third preset distance threshold, the duration is greater than the fourth preset time threshold, and the label corresponding to the eye use image is electronic screen, the user's eye use state may also be the electronic screen state, and so on.
Further, it was proposed in S102 that the server may obtain several eye use images of the user captured consecutively by the monitor at a certain time interval, so as to judge the user's eye use state comprehensively from multiple images. The second preset condition can then be adjusted accordingly, so that the eye use state is determined from the content and number of the labels corresponding to the multiple images. This improves the accuracy of the judgment and reduces misjudgments of the eye use state caused by the user's sudden actions. For example, when the eye use distance is greater than the third preset distance threshold and the duration is greater than the fourth preset time threshold, 3 eye use images are captured consecutively at 5-second intervals; if more than 2 of the 3 images are labeled outdoor, the eye use state is judged to be the outdoor state; and so on.
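A sketch of this rule-plus-majority-vote logic, with the example thresholds above (60 cm / 10 s for near work, 120 cm / 20 s for outdoor); the label vocabulary is simplified to one tag per image, and all names are ours.

```python
from collections import Counter

def eye_state(distance_cm, duration_s, labels):
    """labels: tags of the consecutively captured eye-use images,
    e.g. ['outdoor', 'outdoor', 'non-outdoor']."""
    majority, votes = Counter(labels).most_common(1)[0]
    if votes <= len(labels) // 2:
        return None  # no stable label: likely a sudden action, judge later
    if distance_cm < 60 and duration_s > 10:
        if majority == 'electronic-screen':
            return 'electronic screen state'
        if majority == 'non-electronic-screen':
            return 'reading state'
    if distance_cm > 120 and duration_s > 20 and majority == 'outdoor':
        return 'outdoor state'
    return None
```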
In addition, while the user is using the eyes, the eye use image acquired by the monitor may be unclear because of head shaking, which affects subsequent recognition. Therefore, when the server obtains an eye use image collected by the monitor, it can first judge whether the image is blurred and, if so, perform image deblurring on it, improving the image's sharpness and the accuracy of subsequent image recognition.
Specifically, since deblurring an image that is not blurred may destroy its original quality, the server first determines the gradient maps of the eye use image according to
[gradient-map formula, shown only as an image in the original]
and judges whether the image is blurred according to
[sharpness-score formula, shown only as an image in the original]
where g_x(i, j) and g_y(i, j) are the gradient maps of the image f in the x and y directions respectively, m and n are the numbers of rows and columns of f, and G_num is the total number of non-zero gradient values in the two gradient maps. When S < 7, the server judges the eye use image to be blurred; the value 7 can be determined experimentally.
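A sketch of this blur test follows. The exact expression for the score S appears only as an image in the source; here S is assumed to be the proportion of significant gradient values (G_num relative to the m x n image size, scaled to a percentage), which uses the quantities defined above but is otherwise a guess. The cutoff S < 7 is the experimentally determined value quoted in the text.

```python
import numpy as np

def is_blurred(f: np.ndarray, eps: float = 2.0, threshold: float = 7.0) -> bool:
    f = f.astype(float)
    gx = np.diff(f, axis=1)                      # gradient map in x
    gy = np.diff(f, axis=0)                      # gradient map in y
    g_num = (np.count_nonzero(np.abs(gx) > eps)
             + np.count_nonzero(np.abs(gy) > eps))
    m, n = f.shape
    s = 100.0 * g_num / (m * n)                  # assumed normalization of S
    return s < threshold                         # S < 7 means blurred
```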
Second, the server can determine the foreground blurred image within the blurred image according to
[foreground formula q(x, y), shown only as images in the original]
and m(x, y) = (1/N_h) Σ_{(s,t)∈h(x,y)} I(s, t), where q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of pixel (x, y) in the blurred image, h(x, y) is the set of pixels in that neighborhood, I(s, t) is the gray value of pixel (s, t) in the blurred image, and m(x, y) is the mean of I over that neighborhood.
Finally, the server can process the determined foreground blurred image with Gaussian filtering to obtain a clear foreground image, and then use that image as the deblurred eye use image for image recognition.
This image processing method separates the foreground part of the blurred image (i.e., by default, the region the user is paying attention to) from the original image, so that only the foreground is processed, restoring image sharpness while reducing the device's workload.
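A sketch of this foreground separation and filtering step for a grayscale image follows. The source gives the foreground criterion q(x, y) only as images; here a pixel is treated as foreground when it deviates from its neighborhood mean m(x, y) by more than a constant, which is an assumed reading of that criterion.

```python
import cv2
import numpy as np

def filter_foreground(img: np.ndarray, win: int = 7, c: float = 10.0) -> np.ndarray:
    gray = img.astype(np.float32)
    m = cv2.blur(gray, (win, win))               # m(x, y): mean over h(x, y)
    foreground = np.abs(gray - m) > c            # assumed foreground criterion
    smoothed = cv2.GaussianBlur(img, (5, 5), 0)  # Gaussian filtering step
    out = img.copy()
    out[foreground] = smoothed[foreground]       # process only the foreground
    return out
```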
S104: and the server analyzes the eye using behaviors of the user according to the eye using data, the eye using state and the preset evaluation dimension, and determines the grade of the eye using behaviors of the user.
After the server determines the eye using state of the user, the eye using behavior of the user can be analyzed according to the eye using data, the determined eye using state and a plurality of preset evaluation dimensions, so that the grade of the eye using behavior of the user is determined.
Specifically, the server may determine the eye use data of the user and the corresponding eye use state monitored by the monitor within a preset time period, and determine the sub-evaluation level of the eye use behavior of the user for each evaluation dimension according to a preset evaluation dimension and a preset first evaluation criterion. Then, the server can determine the total evaluation level of the eye using behavior of the user according to the determined plurality of sub-evaluation levels and a preset second evaluation standard. The first evaluation criterion may be determined according to the eye data of the user and a preset threshold (e.g., a preset time threshold, a preset distance threshold, etc.) of each preset evaluation dimension, and the second evaluation criterion may be determined according to the content and number of each sub-evaluation level.
Specifically, according to the user's eye use data acquired by the monitor, each preset evaluation dimension and the grades under each dimension can be determined. The preset evaluation dimensions at least include the user's average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration within a preset time period. Each evaluation dimension may include the grades excellent, good, poor, and extremely poor. The average single reading duration is the mean duration of the user's reading states within the preset period; the outdoor duration is the total duration of the user's outdoor states within the period; the average reading distance is the mean eye use distance over the user's reading states within the period; the average head inclination angle is the mean of the user's head inclination angles within the period; and the average single screen-watching duration is the mean duration of the user's electronic screen states within the period.
The first evaluation criterion may include at least:
when the average single-reading time is less than or equal to 40 minutes, the corresponding sub-evaluation grade is excellent; when the average single-reading time is longer than 40 minutes and less than or equal to 60 minutes, the corresponding sub-evaluation grade is good; when the average single reading time length is more than 60 minutes and less than or equal to 100 minutes, the corresponding sub-evaluation grade is poor; when the average single reading time is longer than 100 minutes, the corresponding sub-evaluation grade is extremely poor;
when the outdoor time length is more than or equal to 2 hours, the corresponding sub-evaluation grade is excellent; when the outdoor time length is more than or equal to 1 hour and less than 2 hours, the corresponding sub-evaluation grade is good; when the outdoor time length is more than or equal to 0.5 hour and less than 1 hour, the corresponding sub-evaluation grade is poor; when the outdoor time length is less than 0.5 hour, the corresponding sub-evaluation grade is extremely poor;
when the average reading distance is more than or equal to 33 cm, the corresponding sub-evaluation grade is excellent; when the average reading distance is more than or equal to 25 cm and less than 33 cm, the corresponding sub-evaluation grade is good; when the average reading distance is more than or equal to 20 cm and less than 25 cm, the corresponding sub-evaluation grade is poor; when the average reading distance is less than 20 cm, the corresponding sub-evaluation grade is extremely poor;
When the average head inclination angle is less than or equal to 5 degrees, the corresponding sub-evaluation grade is excellent; when the average head inclination angle is larger than 5 degrees and less than or equal to 10 degrees, the corresponding sub-evaluation grade is good; when the average head inclination angle is larger than 10 degrees and less than or equal to 15 degrees, the corresponding sub-evaluation grade is poor, and when the average head inclination angle is larger than 15 degrees, the corresponding sub-evaluation grade is extremely poor;
when the average time for watching the electronic screen once is less than or equal to 10 minutes, the corresponding sub-evaluation grade is excellent; when the average time for watching the electronic screen once is more than 10 minutes and less than or equal to 15 minutes, the corresponding sub-evaluation grade is good; when the average time length of watching the electronic screen once is more than 15 minutes and less than or equal to 30 minutes, the corresponding sub-evaluation grade is poor; when the average time length of a single fixation on the electronic screen is more than 30 minutes, the corresponding sub-evaluation grade is extremely poor.
The second evaluation criterion may include at least:
if the sub-evaluation grades of the average single reading duration, the outdoor duration, and the average reading distance are all excellent, and the other sub-evaluation grades are excellent or good, the total evaluation grade is excellent;
if the sub-evaluation grades of the average single reading duration, the outdoor duration, and the average reading distance are excellent or good, and no other sub-evaluation grade is extremely poor, the total evaluation grade is good;
if the sub-evaluation grades of the average single reading duration, the outdoor duration, and the average reading distance are good or poor with at most one poor, and no other sub-evaluation grade is extremely poor, the total evaluation grade is poor.
Through this evaluation process, the server can determine how good the user's eye use behavior is in each evaluation dimension, and how good the overall eye use behavior is within the preset time period, so as to identify the user's poor eye use behaviors and judge whether the user uses the eyes scientifically.
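A sketch of both evaluation criteria follows, directly encoding the thresholds listed above; the dimension names, the helper function, and the fall-through to extremely poor (not stated explicitly in the text) are ours.

```python
def grade(value, cuts):
    """cuts: ascending boundaries for excellent / good / poor; above the
    last boundary the grade is extremely poor. For dimensions where larger
    is better, pass negated values and boundaries."""
    for level, cut in zip(['excellent', 'good', 'poor'], cuts):
        if value <= cut:
            return level
    return 'extremely poor'

def sub_grades(read_min, outdoor_h, dist_cm, head_deg, screen_min):
    return {
        'average single reading duration': grade(read_min, [40, 60, 100]),
        'outdoor duration': grade(-outdoor_h, [-2, -1, -0.5]),
        'average reading distance': grade(-dist_cm, [-33, -25, -20]),
        'average head inclination angle': grade(head_deg, [5, 10, 15]),
        'average single screen-watching duration': grade(screen_min, [10, 15, 30]),
    }

def total_grade(g):
    core = [g[d] for d in ('average single reading duration',
                           'outdoor duration', 'average reading distance')]
    others = [g[d] for d in ('average head inclination angle',
                             'average single screen-watching duration')]
    if all(x == 'excellent' for x in core) and all(x in ('excellent', 'good') for x in others):
        return 'excellent'
    if all(x in ('excellent', 'good') for x in core) and 'extremely poor' not in others:
        return 'good'
    if all(x in ('excellent', 'good', 'poor') for x in core) \
            and core.count('poor') <= 1 and 'extremely poor' not in others:
        return 'poor'
    return 'extremely poor'   # fall-through; assumed, not stated in the text
```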
The first and second evaluation criteria above are only some possible implementations proposed in the embodiments of the application; the application is not limited to them, and other possible implementations may also be used to set the first and second evaluation criteria.
Further, when the user is not wearing the glasses (e.g., the glasses have been put on a table or in a bag) or is not wearing the monitor correctly (e.g., the monitor has been taken off the ear or put on a table), the monitor cannot normally monitor the user's eye use data, and its state in such cases can be regarded as a non-working state. Correspondingly, the server can determine that the monitor is in a non-working state when, in the judgment of the first preset condition, the eye use distance is smaller than a fourth preset distance threshold and the duration is greater than a fifth preset time threshold; when the illumination intensity is lower than a preset light intensity threshold and the duration is greater than the fifth preset time threshold; when the monitor's battery level is below a preset threshold; and in similar cases. The illumination intensity can be obtained through the monitor's camera.
Further, when the monitor is in a non-working state, the user's eye use behavior cannot be monitored normally. The server can therefore identify, within the acquired eye use data, the data recorded while the monitor was in a non-working state, remove them, and analyze the user's eye use behavior with the remaining data. In this way, the range of the user's real eye use data can be determined accurately, improving the accuracy and soundness of the eye use behavior analysis.
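A sketch of this exclusion step; the record format and every threshold value here are assumptions, since the text names the thresholds but not their values.

```python
def working_records(records, dist_thresh_cm=5, light_thresh_lux=5, time_thresh_s=300):
    """records: dicts with 'distance_cm', 'duration_s', 'lux' keys (assumed)."""
    kept = []
    for r in records:
        # Non-working tests described above: sensor blocked at near-zero
        # distance, or darkness, each persisting past the time threshold.
        idle_by_distance = r['distance_cm'] < dist_thresh_cm and r['duration_s'] > time_thresh_s
        idle_by_light = r['lux'] < light_thresh_lux and r['duration_s'] > time_thresh_s
        if not (idle_by_distance or idle_by_light):
            kept.append(r)
    return kept
```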
After analyzing the user's eye use behavior, the server may correct it according to the sub-evaluation grades and the total evaluation grade. Specifically, the server may determine the points the user earns in each evaluation dimension according to the user's sub-evaluation grades within a first preset time period and a preset score corresponding to each grade. Then, for each evaluation dimension, the server can compare the points earned with a preset standard score for that dimension, and judge that the user meets the standard in that dimension when the points are greater than or equal to the standard score; the standard score for each dimension is determined from the sub-evaluation grade of that dimension within a second preset time period, where the second preset time period precedes the first. Finally, the server can select reward behaviors for the user from several preset rewards according to which evaluation dimensions meet the standard and the preset weight of each dimension, and send the selected rewards to the terminal corresponding to the monitor for display.
In one possible implementation, the user's poor eye use behavior can be corrected through an electronic game preset in the server. Specifically, for the electronic game played by the user, game levels of different difficulty (corresponding to the standard scores above) can be set according to the user's sub-evaluation grades within the second preset time period. The server may then determine rewards in the game (e.g., a speed-up card, double points) according to the sub-evaluation grades within the first preset time period, with each sub-evaluation grade positively correlated with the reward obtained: the higher the grade, the more valuable the in-game reward. This game mechanism encourages the user to pay attention to their own eye use behavior in order to earn better rewards and clear levels faster, thereby correcting poor eye use behavior and forming good eye use habits.
For example, suppose the preset scores corresponding to the grades are: excellent 30 points, good 20 points, poor 10 points, and extremely poor 0 points. The first preset time period is July 8 to July 15, and the second preset time period is July 1 to July 7.
The user's sub-evaluation grades for each evaluation dimension from July 1 to July 7 are: average single reading duration excellent, outdoor duration good, average reading distance good, average head inclination angle poor, and average single screen-watching duration poor. Based on these data and the user's settings, the server determines that the standard scores for the average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration from July 8 to July 15 are 10, 20, 20, 30, and 30 points respectively.
The server then determines the user's sub-evaluation grades from July 8 to July 15 as: average single reading duration excellent, outdoor duration good, average reading distance excellent, average head inclination angle excellent, and average single screen-watching duration good. The user's points in the evaluation dimensions are therefore 30, 20, 30, 30, and 20 respectively.
Comparing the points earned in each evaluation dimension with the corresponding standard scores for July 8 to July 15 shows that the user meets the standard for the average single reading duration, the outdoor duration, the average reading distance, and the average head inclination angle, but not for the average single screen-watching duration.
The server can then determine reward and punishment behaviors for the user according to which evaluation dimensions meet the standard, the eye use behaviors corresponding to the dimensions that do not, the behaviors in which the user has progressed or regressed, and the degree of that progress or regression.
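The sketch below reproduces the worked example. The mapping from last period's grade to this period's standard score is inferred from the numbers in the example (excellent gives 10, good gives 20, poor gives 30) and is an assumption, not a formula stated in the source.

```python
SCORES = {'excellent': 30, 'good': 20, 'poor': 10, 'extremely poor': 0}

def standard_scores(previous_grades):
    # Inferred rule: the worse last period's grade, the higher this period's
    # bar. A previous grade of extremely poor would give 40, which no single
    # grade reaches; how the patent handles that case is not stated.
    return {dim: 40 - SCORES[g] for dim, g in previous_grades.items()}

def meets_standard(previous_grades, current_grades):
    standards = standard_scores(previous_grades)
    return {dim: SCORES[current_grades[dim]] >= standards[dim]
            for dim in current_grades}

# July 1-7 grades set the July 8-15 standards; July 8-15 grades earn points.
prev = {'reading': 'excellent', 'outdoor': 'good', 'distance': 'good',
        'head angle': 'poor', 'screen': 'poor'}
curr = {'reading': 'excellent', 'outdoor': 'good', 'distance': 'excellent',
        'head angle': 'excellent', 'screen': 'good'}
print(meets_standard(prev, curr))  # every dimension True except 'screen'
```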
In addition, the server can re-analyze the eye using behaviors of the user according to a preset time interval (such as one week), and obtain a new sub-evaluation level and a new total evaluation level of each eye using behavior of the user, so as to update the correction scheme of the user, and be beneficial to performing targeted correction on poor eye using habits of the user.
Further, as shown in fig. 3, an eye image capture device 310 may be provided at the nose bridge of the user's glasses. The monitor 320 may determine that the user is watching the electronic screen even though the user's gaze point is not actually on the screen, since the user may be dazed, distracted, or the like while using the eyes. Therefore, to determine whether the user is actually gazing at the electronic screen, the server may collect eye images of the user through the eye image capture device 310 positioned in front of the user's eyes together with a separate infrared light source device (not shown in the figure), determine the pupil position data corresponding to the eye images, and judge from those data whether the user is gazing at the electronic screen.
The monitor 320 includes an image capture device, such as a camera, for capturing the eye use image of the scene directly in front of the user, so that the server can determine the user's eye use state. The eye image capture device 310 is used to capture images of the user's eyes; it can be fixed to the nose bridge of the user's glasses and includes a capture portion 311 and a support portion 312. The support portion 312 supports the capture portion 311 and is connected to the nose bridge of the glasses; it is retractable and direction-adjustable, and can capture images of both eyes or of one eye. The infrared light source device is detachably fixed to a terminal used by the user, such as a computer or mobile phone, and emits infrared light toward the user's eyes, so that the server can determine pupil position data from the eye images acquired by the eye image capture device 310.
Specifically, when the monitor 320 monitors that the eye use state of the user is in the electronic screen state, in order to determine whether the user is watching the electronic screen, the eye image capture device 310 may be invoked according to a preset condition, so that the eye image capture device 310 captures an eye image of the user, and sends the eye image of the user to the server.
The server can segment the eye image at different thresholds according to preset stepped gray-level thresholds, obtain the segmented regions and the mutual wrapping characteristics among them, and extract the eyeball region. The mutual wrapping characteristic is the spatial property that, within the eyeball region wrapped by the upper and lower eyelids, the sclera, iris, and pupil are nested in order from outside to inside, with their gray levels decreasing in the same order. Therefore, using this stepped gray-level distribution of the sclera, iris, and pupil and their nesting relationship, the eyeball region can be extracted by setting suitable stepped gray-level thresholds and checking the wrapping relationships among the regions segmented at different thresholds.
After the eyeball region is extracted, the point with the lowest gray value in it is selected as the seed point for extracting the pupil region, and the complete pupil region is then obtained with a preset growth threshold and boundary condition using any existing region-growing algorithm. The center coordinate of the pupil region is then computed from it; this is the pupil center coordinate.
Second, since the corneal glint is generally the brightest area in the eye image, the server can binarize the eye image with a suitable preset threshold to completely separate the corneal glint, obtaining a binarized image containing the glint. The server may then determine the pupil position data as the relative offset between the pupil center position and the corneal glint position in the eye image.
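A sketch of the pupil-center and glint extraction with OpenCV follows. Simple global thresholds stand in for the patent's stepped gray-level thresholds and region growing, and every numeric value here is an assumption; the area filter anticipates the eyeglass-reflection removal described further below.

```python
import cv2
import numpy as np

def pupil_position_data(eye_gray: np.ndarray):
    # Pupil: the darkest region; take the largest dark blob as the pupil.
    _, dark = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (px, py), _ = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))

    # Corneal glint: the brightest region; keep blobs whose area lies in a
    # preset range to reject eyeglass reflections.
    _, bright = cv2.threshold(eye_gray, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    glints = [g for g in contours if 2 < cv2.contourArea(g) < 50]
    if not glints:
        return None
    (gx, gy), _ = cv2.minEnclosingCircle(max(glints, key=cv2.contourArea))

    # Pupil position data: relative offset of pupil center to the glint.
    return (px - gx, py - gy)
```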
Finally, the server can determine the offset of the gazing point of the user on the electronic screen according to a preset polynomial fitting algorithm and the determined pupil position data (namely the relative offset of the pupil), so as to determine whether the user gazes at the electronic screen.
Specifically, the server may prompt the user when the user uses the monitor 320 and the eye image capture device 310 for the first time, and perform calibration between the pupil position and the initial position of the electronic screen with a corresponding electronic product according to a mobile phone mode, a computer mode, and a television mode in a pre-divided electronic screen state, so as to form a corresponding relationship between the pupil position of the user and each position of the electronic screen. And the server can determine a corresponding polynomial fitting algorithm according to the calibration process, and further determine whether the user is watching the electronic screen.
When determining whether the user is watching the electronic screen, according to the current pupil position information of the user and the determined polynomial fitting algorithm, the position information of the gazing point of the user on the electronic screen, namely the offset between the current gazing point of the user and the predetermined initial position, can be determined. Then, according to the determined position information of the user's gaze point and the size information of the electronic screen, the server can determine whether the user's gaze point is on the electronic screen, that is, whether the user is watching the electronic screen.
For example, the polynomial fitting algorithm may be
[fitting polynomial, shown only as an image in the original]
where (x_P, y_P) is the pupil position data (i.e., the relative pupil offset), (X_P, Y_P) is the gaze point position data (i.e., the gaze point offset on the electronic screen), and a_0 to a_11 are unknown coefficients determined through the calibration procedure. The server determines (x_P, y_P) from the user's eye image and substitutes it into the polynomial to obtain (X_P, Y_P), i.e., the position information of the user's gaze point.
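Since the fitting polynomial itself appears only as an image in the source, and 12 unknown coefficients a_0 to a_11 naturally fit a second-order polynomial in (x_P, y_P) for each screen coordinate, the sketch below adopts that reading as an assumption and solves for the coefficients by least squares during calibration.

```python
import numpy as np

def design_matrix(xp, yp):
    return np.column_stack([np.ones_like(xp), xp, yp, xp * yp, xp**2, yp**2])

def calibrate(pupil_xy, screen_xy):
    """pupil_xy: (N, 2) relative pupil offsets; screen_xy: (N, 2) gaze targets."""
    A = design_matrix(pupil_xy[:, 0], pupil_xy[:, 1])
    coeff_x, *_ = np.linalg.lstsq(A, screen_xy[:, 0], rcond=None)  # a0..a5
    coeff_y, *_ = np.linalg.lstsq(A, screen_xy[:, 1], rcond=None)  # a6..a11
    return coeff_x, coeff_y

def gaze_point(coeffs, xp, yp):
    coeff_x, coeff_y = coeffs
    a = design_matrix(np.atleast_1d(float(xp)), np.atleast_1d(float(yp)))
    return float(a @ coeff_x), float(a @ coeff_y)  # (X_P, Y_P) on the screen
```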
In addition, the binarized glint image determined by the server may include glints formed by reflections from the user's glasses. To remove this interference, the server can compute the area of every glint in the binarized image and keep only the glints whose area falls within a preset range as corneal glints, eliminating the influence of the eyeglass reflections.
As shown in fig. 4, a schematic structural diagram of a server provided in the embodiment of the present application includes:
a receiver 210, configured to receive the eye use data of a user acquired by a monitor, where the eye use data at least include an eye use distance between the user's eyes and a fixation object, a duration corresponding to the eye use distance, and a head inclination angle of the user, the monitor being arranged on the frame of the glasses worn by the user;
a processor 220, configured to determine, according to the eye use data, whether a first preset condition is met, where the first preset condition at least includes the eye use distance reaching a preset distance threshold; to determine the user's eye use state according to the eye use image, the eye use state being at least one of a reading state, an electronic screen state, and an outdoor state; and to analyze the user's eye use behavior according to the eye use data, the eye use state, and preset evaluation dimensions and determine a grade of the eye use behavior, where the preset evaluation dimensions include any one or more of: average single reading duration, outdoor duration, average reading distance, average head inclination angle, and average single screen-watching duration;
a transmitter 230, configured to send the monitor an instruction to acquire an eye use image of the user, the eye use image being an image of the scene directly in front of the user.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A method of monitoring eye usage, comprising:
the method comprises the steps that a server receives eye use data of a user, wherein the eye use data are acquired by a monitor, and at least comprise an eye use distance between eyes of the user and a fixation object, duration corresponding to the eye use distance and a head inclination angle of the user;
according to the eye use data, when a first preset condition is met, an instruction for acquiring an eye use image of the user is sent to the monitor; the eye using image is an image of the scene in front of the user, and the first preset condition at least comprises that the eye using distance reaches a preset distance threshold;
determining the eye using state of a user according to the eye using image, wherein the eye using state is at least one of a reading state, an electronic screen state and an outdoor state;
Analyzing the eye using behavior of the user according to the eye using data, the eye using state and a preset evaluation dimension, and determining the grade of the eye using behavior of the user; wherein the preset evaluation dimension comprises any one or more of: average single reading time, outdoor time, average reading distance, average head inclination angle and average single electronic screen watching time;
determining the eye using state of the user according to the eye using image, including,
the server determines a label corresponding to the eye using image according to a preset neural network model, wherein the label at least comprises one of an electronic screen, a non-electronic screen, outdoors and non-outdoors;
determining the eye using state of the user according to the eye using data and the label corresponding to the eye using image and a second preset condition;
determining the eye using state of the user according to a second preset condition, wherein the determining at least comprises the following steps:
when the eye using distance is smaller than a second preset distance threshold value, the duration time is larger than a third preset time threshold value, and the label corresponding to the eye using image is a non-electronic screen and is not outdoors, the eye using state of the user is a reading state;
when the eye using distance is smaller than a second preset distance threshold value, the duration time is larger than a third preset time threshold value, and the label corresponding to the eye using image is an electronic screen, the eye using state of the user is the electronic screen state;
When the eye using distance is larger than a third preset distance threshold, the duration time is larger than a fourth preset time threshold, and the label corresponding to the eye using image is outdoor, the eye using state of the user is an outdoor state;
the neural network model comprises a plurality of convolution parts, wherein one convolution part comprises three convolution layers, the three convolution layers respectively adopt a 3x3 convolution kernel, a 1x1 convolution kernel and a 3x3 convolution kernel, the step size of each convolution layer is 1 and the output is 256 feature maps, a 2x2 pooling kernel with a step size of 2 is adopted after each convolution layer, and the expression of the pooling algorithm is

P_{lk} = \sigma\left( \frac{1}{a} \sum_{i=1}^{c} \sum_{j=1}^{c} F_{ij} \right) + b_x

wherein P_{lk} is the downsampled feature value obtained from a pooling kernel of size c×c, F_{ij} is the corresponding element within the c×c pooling kernel of the convolution feature map F, a is the sum of all elements in the pooling kernel, σ is the standard deviation, b_x is the bias term, l is a first preset value, and k is a second preset value.
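For illustration only, the decision rules of the second preset condition in claim 1 could be expressed as follows; the concrete threshold values and label strings are placeholders, since the claim leaves them as preset parameters.

```python
def eye_state(distance_cm, duration_s, labels,
              d2=40.0, d3=100.0, t3=5.0, t4=5.0):
    """Decision rules of the second preset condition. The distance thresholds
    d2/d3 (cm) and time thresholds t3/t4 (seconds) are illustrative
    placeholders; `labels` is the tag set predicted by the neural network."""
    if distance_cm < d2 and duration_s > t3:
        if "electronic screen" in labels:
            return "electronic screen state"
        if "non-electronic screen" in labels and "non-outdoor" in labels:
            return "reading state"
    if distance_cm > d3 and duration_s > t4 and "outdoor" in labels:
        return "outdoor state"
    return "undetermined"
```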
2. The method of claim 1, further comprising:
when the server receives that the eye using distance of the user sent by the monitor is smaller than a first preset distance threshold value and the duration time that the eye using distance of the user is smaller than the first preset distance threshold value is larger than a first preset time threshold value;
Or when the server receives that the eye distance of the user sent by the monitor is less than or equal to a second preset distance threshold value and the duration time that the eye distance of the user is less than or equal to the second preset distance threshold value is greater than a second preset time threshold value;
or when the server receives that the head inclination angle of the user sent by the monitor is larger than a preset angle threshold value, the server sends a reminding instruction to the monitor so that the monitor can remind the user.
3. The method of claim 1, wherein the eye use data further comprises illumination intensity;
analyzing the eye using behavior of the user according to the eye using data, the eye using state and a preset evaluation dimension, wherein the analyzing specifically comprises the following steps:
when the eye using distance is smaller than a fourth preset distance threshold value, the duration time is larger than a fifth preset time threshold value, or the illumination intensity is smaller than a preset light intensity threshold value, the duration time is larger than a fifth preset time threshold value, or the electric quantity of the monitor is smaller than a preset electric quantity threshold value, the server determines that the monitor is in a non-working state;
the server determines the eye use data acquired by the monitor in the working state by excluding, from the received eye use data, the data acquired while the monitor is in the non-working state;
And analyzing the eye using behavior of the user according to the eye using data acquired by the monitor in the working state, the eye using state of the user and a preset evaluation dimension.
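A minimal sketch of the working-state filtering in claim 3 follows, assuming per-sample records of distance, illumination and battery level; every threshold below is an illustrative placeholder.

```python
def working_state_data(samples, d4=5.0, t5=30.0, lux_min=50.0, batt_min=0.1):
    """Drop samples recorded while the monitor was in a non-working state:
    implausibly small distance or too-dark illumination sustained too long,
    or a low battery. Thresholds and field names are assumptions."""
    kept, run = [], 0.0
    for s in samples:  # each sample: dict with distance_cm, lux, battery, dt
        abnormal = s["distance_cm"] < d4 or s["lux"] < lux_min
        run = run + s["dt"] if abnormal else 0.0
        non_working = (abnormal and run > t5) or s["battery"] < batt_min
        if not non_working:
            kept.append(s)
    return kept
```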
4. The method according to claim 1, wherein analyzing the eye-using behavior of the user according to the eye-using data, the eye-using state and the preset evaluation dimension to determine the level of the eye-using behavior of the user specifically comprises:
the server determines sub-evaluation levels of the user for each evaluation dimension according to eye use data of the user in a first preset time period, a plurality of evaluation dimensions and a preset first evaluation standard; the sub-evaluation grades comprise one of excellent, good, poor and extremely poor;
determining a total evaluation level of the eye using behavior of the user according to a plurality of sub-evaluation levels of the user in each evaluation dimension and a preset second evaluation standard; the total evaluation grade comprises one of excellent, good, poor and extremely poor;
the first evaluation criterion is determined based on eye use data of a user and preset threshold values of evaluation dimensions;
the second evaluation criterion is determined based on the contents and number of the respective sub-evaluation levels.
5. The method according to claim 4, characterized in that said first evaluation criterion comprises at least:
When the average single-reading time is less than or equal to 40 minutes, the corresponding sub-evaluation grade is excellent; when the average single-reading time length is more than 40 minutes and less than or equal to 60 minutes, the corresponding sub-evaluation grade is good; when the average single-reading time length is more than 60 minutes and less than or equal to 100 minutes, the corresponding sub-evaluation grade is poor, and when the average single-reading time length is more than 100 minutes, the corresponding sub-evaluation grade is extremely poor;
when the outdoor time length is more than or equal to 2 hours, the corresponding sub-evaluation grade is excellent; when the outdoor time length is more than or equal to 1 hour and less than 2 hours, the corresponding sub-evaluation grade is good; when the outdoor time length is more than or equal to 0.5 hour and less than 1 hour, the corresponding sub-evaluation grade is poor; when the outdoor time length is less than 0.5 hour, the corresponding sub-evaluation grade is extremely poor;
when the average reading distance is more than or equal to 33 cm, the corresponding sub-evaluation grade is excellent; when the average reading distance is more than or equal to 25 cm and less than 33 cm, the corresponding sub-evaluation grade is good; when the average reading distance is more than or equal to 20 cm and less than 25 cm, the corresponding sub-evaluation grade is poor; when the average reading distance is less than 20 cm, the corresponding sub-evaluation grade is extremely poor;
when the average head inclination angle is less than or equal to 5 degrees, the corresponding sub-evaluation grade is excellent; when the average head inclination angle is greater than 5 degrees and less than or equal to 10 degrees, the corresponding sub-evaluation grade is good; when the average head inclination angle is larger than 10 degrees and smaller than or equal to 15 degrees, the corresponding sub-evaluation grade is poor, and when the average head inclination angle is larger than 15 degrees, the corresponding sub-evaluation grade is extremely poor;
When the average time for watching the electronic screen once is less than or equal to 10 minutes, the corresponding sub-evaluation grade is excellent; when the average time for watching the electronic screen once is more than 10 minutes and less than or equal to 15 minutes, the corresponding sub-evaluation grade is good; when the average time length of watching the electronic screen once is more than 15 minutes and less than or equal to 30 minutes, the corresponding sub-evaluation grade is poor; when the average time length of a single fixation on the electronic screen is more than 30 minutes, the corresponding sub-evaluation grade is extremely poor.
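The grade bands of the first evaluation criterion translate directly into a lookup; the following sketch encodes the thresholds of claim 5, with the dimension names being assumed identifiers.

```python
def sub_grade(dimension, value):
    """Map a metric value to a sub-evaluation grade using the bands of the
    first evaluation criterion. Units: minutes, hours, cm, degrees."""
    lower_better = {
        # dimension: [(upper bound, grade), ...] checked in order
        "avg_single_reading_min": [(40, "excellent"), (60, "good"), (100, "poor")],
        "avg_head_tilt_deg":      [(5, "excellent"), (10, "good"), (15, "poor")],
        "avg_single_screen_min":  [(10, "excellent"), (15, "good"), (30, "poor")],
    }
    higher_better = {
        # dimension: [(lower bound, grade), ...] checked in order
        "outdoor_hours":  [(2, "excellent"), (1, "good"), (0.5, "poor")],
        "avg_reading_cm": [(33, "excellent"), (25, "good"), (20, "poor")],
    }
    if dimension in lower_better:
        for bound, grade in lower_better[dimension]:
            if value <= bound:
                return grade
        return "extremely poor"
    for bound, grade in higher_better[dimension]:
        if value >= bound:
            return grade
    return "extremely poor"
```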
6. The method according to claim 5, wherein the second evaluation criterion comprises at least any one or more of:
if the sub-evaluation grades of the average single-time reading time length, the outdoor time length and the average reading distance are all excellent, and the other sub-evaluation grades are excellent or good, the total evaluation grade is excellent;

if the sub-evaluation grades of the average single-time reading time length, the outdoor time length and the average reading distance are excellent or good, and none of the other sub-evaluation grades is extremely poor, the total evaluation grade is good;

if the sub-evaluation grades of the average single-time reading time length, the outdoor time length and the average reading distance are good or poor with at most one being poor, and none of the other sub-evaluation grades is extremely poor, the total evaluation grade is poor.
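The aggregation rules of claim 6 could be sketched as below; the final fallback to "extremely poor" is an assumption, since the claim only states that the criterion comprises at least the three listed rules.

```python
def total_grade(sub):
    """Aggregate sub-grades per the second evaluation criterion. `sub` maps
    dimension name -> grade; the three key dimensions are single reading
    time, outdoor time and reading distance (names are assumed)."""
    key = [sub["reading_time"], sub["outdoor_time"], sub["reading_distance"]]
    rest = [g for d, g in sub.items()
            if d not in ("reading_time", "outdoor_time", "reading_distance")]
    if all(g == "excellent" for g in key) and \
       all(g in ("excellent", "good") for g in rest):
        return "excellent"
    if all(g in ("excellent", "good") for g in key) and \
       "extremely poor" not in rest:
        return "good"
    if all(g in ("good", "poor") for g in key) and \
       key.count("poor") <= 1 and "extremely poor" not in rest:
        return "poor"
    return "extremely poor"  # assumed fallback, not stated in the claim
```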
7. The method of claim 4, further comprising:
determining the points earned by the user in each evaluation dimension according to the user's sub-evaluation grades within the first preset time period and a preset score corresponding to each sub-evaluation grade;

for each evaluation dimension, when the user's points in the evaluation dimension are greater than or equal to a preset standard score corresponding to that dimension, determining that the user reaches the standard in the evaluation dimension; the standard score corresponding to each evaluation dimension is determined based on the sub-evaluation grades corresponding to that dimension in a second preset time period, and the second preset time period is before the first preset time period;

determining the reward behavior corresponding to the user from a plurality of preset reward behaviors according to the evaluation dimensions in which the user reaches the standard and the preset weights of the evaluation dimensions;
and sending the reward behavior to a terminal corresponding to the monitor for display.
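A sketch of the points-and-reward logic of claim 7, assuming a simple grade-to-score table and a tiered reward table; the claim leaves the scores, standard scores, weights and reward behaviors as preset values, so all tables here are illustrative.

```python
# Assumed mapping from sub-evaluation grade to a score.
GRADE_POINTS = {"excellent": 3, "good": 2, "poor": 1, "extremely poor": 0}

def pick_reward(sub_grades_by_dim, standard_scores, weights, reward_table):
    """Sum points per dimension from its sub-grades, compare against each
    dimension's standard score, then pick the highest-tier reward whose
    required weighted total of standard-reaching dimensions is met."""
    points = {d: sum(GRADE_POINTS[g] for g in grades)
              for d, grades in sub_grades_by_dim.items()}
    reached = {d for d, p in points.items() if p >= standard_scores[d]}
    score = sum(weights[d] for d in reached)
    for need, behavior in sorted(reward_table, reverse=True):
        if score >= need:
            return behavior
    return None
```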
8. The method of claim 1, wherein prior to determining the eye-using state of the user from the eye-using image, the method further comprises:
according to

g(i,j) = \sqrt{g_x(i,j)^2 + g_y(i,j)^2}

and

S = \frac{1}{G_{num}} \sum_{i=1}^{m} \sum_{j=1}^{n} g(i,j),

when S < 7, judging the eye use image to be a blurred image, wherein g_x(i,j) and g_y(i,j) are the gradient maps of the image f in the x and y directions respectively, m and n are the numbers of rows and columns of the image f in the x and y directions respectively, and G_{num} is the sum of the numbers of non-zero gradient values in the x-direction and y-direction gradient maps;

according to

m(x,y) = \frac{1}{N_h} \sum_{(s,t) \in h(x,y)} I(s,t),

\sigma(x,y) = \sqrt{\frac{1}{N_h} \sum_{(s,t) \in h(x,y)} \left( I(s,t) - m(x,y) \right)^2}

and

q(x,y) = \begin{cases} 1, & I(x,y) > m(x,y)\left[ 1 + c\left( \frac{\sigma(x,y)}{d} - 1 \right) \right] \\ 0, & \text{otherwise} \end{cases}

determining the foreground blurred image in the blurred image, wherein q(x,y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of pixel (x,y) in the blurred image, h(x,y) is the set of pixel points in the neighborhood of pixel (x,y) in the blurred image, I(s,t) is the gray value of a pixel in the neighborhood of pixel (x,y) in the blurred image, and m(x,y) is the mean value of I(s,t);
and processing the determined foreground blurred image by adopting Gaussian filtering to obtain a foreground clear image which is used as an eye use image after image deblurring.
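Assuming the average-gradient reading of the reconstructed sharpness measure above, the blur judgment could be sketched as follows; the forward-difference gradient operator is an implementation choice that the claim does not specify.

```python
import numpy as np

def is_blurred(gray, s_thresh=7.0):
    """Average gradient magnitude over the count of non-zero gradient
    entries, following the reconstructed measure; blurred when S < 7."""
    g = gray.astype(float)
    gx = np.diff(g, axis=1, append=g[:, -1:])  # x-direction gradient map
    gy = np.diff(g, axis=0, append=g[-1:, :])  # y-direction gradient map
    g_num = np.count_nonzero(gx) + np.count_nonzero(gy)
    s = np.hypot(gx, gy).sum() / max(g_num, 1)
    return s < s_thresh
```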
9. The method of claim 1, further comprising:
the server receives an eye image of a user acquired by eye image acquisition equipment; the eye image acquisition equipment is arranged on the nose bridge of the glasses of the user;
Determining pupil position data corresponding to the eye image; the pupil position data is the relative offset between the pupil center position and the cornea reflecting point position in the eye image;
and determining whether the user watches the electronic screen or not according to the pupil position data.
10. The method of claim 9, wherein the pupil center position is determined by:
segmenting the eye image according to a preset gray threshold, and acquiring the eyeball area corresponding to the eye image according to the resulting segmentation areas and the mutual wrapping characteristics among them;
selecting a point with the lowest gray value as a seed point in the eyeball area, and obtaining a pupil area through a preset growth threshold value, a boundary condition and an area growth algorithm;
and determining the center position of the pupil according to the pupil area.
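A sketch of the claim-10 pupil localization, with the seed-point selection and four-connected region growing made explicit; the growth threshold is a placeholder and the boundary condition is simplified to the image bounds.

```python
from collections import deque
import numpy as np

def pupil_center(gray, grow_thresh=15):
    """Seed at the darkest pixel, region-grow over pixels whose gray value
    stays within grow_thresh of the seed, then take the region centroid."""
    seed = np.unravel_index(np.argmin(gray), gray.shape)
    h, w = gray.shape
    seen = np.zeros(gray.shape, dtype=bool)
    seen[seed] = True
    region, queue = [], deque([seed])
    base = int(gray[seed])
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
               and abs(int(gray[ny, nx]) - base) <= grow_thresh:
                seen[ny, nx] = True
                queue.append((ny, nx))
    ys, xs = zip(*region)
    return (sum(xs) / len(xs), sum(ys) / len(ys))  # (x, y) centroid
```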
11. The method of claim 9, wherein the corneal reflection point is determined by:
carrying out binarization processing on the eye image according to a preset gray threshold value to obtain a binarization image containing reflective points;
calculating the areas of all the reflective points in the binary image;
and taking the reflective points whose areas fall within a preset value range as corneal reflection points.
12. A server, comprising:
an eye use data acquisition device, configured to acquire eye use data of a user, wherein the eye use data at least comprises an eye use distance between the user's eyes and a fixation object, a duration corresponding to the eye use distance, and a head inclination angle of the user;
a processor, configured to determine, according to the eye use data, that a first preset condition is met, and to cause an instruction for acquiring an eye use image of the user to be sent to the monitor; the eye use image is an image of the scene in front of the user, and the first preset condition at least comprises that the eye use distance reaches a preset distance threshold; determine the eye using state of the user according to the eye use image, wherein the eye using state is at least one of a reading state, an electronic screen state and an outdoor state; and analyze the eye using behavior of the user according to the eye use data, the eye using state and a preset evaluation dimension, and determine the grade of the eye using behavior of the user; wherein the preset evaluation dimension comprises any one or more of: average single-time reading time, outdoor time, average reading distance, average head inclination angle and average single-time watching time of the electronic screen;
the processor is further configured to determine a label corresponding to the eye use image according to a preset neural network model, wherein the label at least comprises one of an electronic screen, a non-electronic screen, outdoors and non-outdoors; determine the eye using state of the user according to the eye use data, the label corresponding to the eye use image and a second preset condition; determining the eye using state of the user according to the second preset condition at least comprises: when the eye using distance is smaller than a second preset distance threshold, the duration is larger than a third preset time threshold, and the label corresponding to the eye using image is a non-electronic screen and not outdoors, the eye using state of the user is the reading state; when the eye using distance is smaller than the second preset distance threshold, the duration is larger than the third preset time threshold, and the label corresponding to the eye using image is an electronic screen, the eye using state of the user is the electronic screen state; when the eye using distance is larger than a third preset distance threshold, the duration is larger than a fourth preset time threshold, and the label corresponding to the eye using image is outdoors, the eye using state of the user is the outdoor state; the neural network model comprises a plurality of convolution parts, wherein one convolution part comprises three convolution layers, the three convolution layers respectively adopt a 3x3 convolution kernel, a 1x1 convolution kernel and a 3x3 convolution kernel, the step size of each convolution layer is 1 and the output is 256 feature maps, a 2x2 pooling kernel with a step size of 2 is adopted after each convolution layer, and the expression of the pooling algorithm is

P_{lk} = \sigma\left( \frac{1}{a} \sum_{i=1}^{c} \sum_{j=1}^{c} F_{ij} \right) + b_x

wherein P_{lk} is the downsampled feature value obtained from a pooling kernel of size c×c, F_{ij} is the corresponding element within the c×c pooling kernel of the convolution feature map F, a is the sum of all elements in the pooling kernel, σ is the standard deviation, b_x is the bias term, l is a first preset value, and k is a second preset value.
CN201910704752.9A 2019-07-31 2019-07-31 Eye use behavior monitoring method and server Active CN110610768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910704752.9A CN110610768B (en) 2019-07-31 2019-07-31 Eye use behavior monitoring method and server


Publications (2)

Publication Number Publication Date
CN110610768A CN110610768A (en) 2019-12-24
CN110610768B (en) 2022-06-28

Family

ID=68890381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910704752.9A Active CN110610768B (en) 2019-07-31 2019-07-31 Eye use behavior monitoring method and server

Country Status (1)

Country Link
CN (1) CN110610768B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243742B (en) * 2020-01-14 2023-08-25 中科海微(北京)科技有限公司 Intelligent glasses capable of analyzing eye habit of children
CN114947726B (en) * 2022-05-10 2023-02-28 北京神光少年科技有限公司 Calculation method for analyzing eye use habit and eye use strength
CN115414033B (en) * 2022-11-03 2023-02-24 京东方艺云(杭州)科技有限公司 Method and device for determining abnormal eye using behavior of user
CN117334023A (en) * 2023-12-01 2024-01-02 四川省医学科学院·四川省人民医院 Eye behavior monitoring method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102520796A (en) * 2011-12-08 2012-06-27 华南理工大学 Sight tracking method based on stepwise regression analysis mapping model
CN109920548A (en) * 2019-03-15 2019-06-21 北京艾索健康科技有限公司 A kind of management method collected and analyzed and device with eye data
CN110012220A (en) * 2019-02-25 2019-07-12 深圳市赛亿科技开发有限公司 A kind of pre- myopic-preventing method, intelligent glasses and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151156A (en) * 2017-06-27 2019-01-04 富泰华工业(深圳)有限公司 Electronic device and method with myopia prevention function


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"指纹图像奇异点提取的一种鲁棒方法",;沈伟等;《计算机工程》;20030228;第29卷(第2期);第16-17页,第197页 *
"运动模糊图像的去模糊算法研究";周菲;《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》;20180415(第4期);正文第16-17页 *


Similar Documents

Publication Publication Date Title
CN110610768B (en) Eye use behavior monitoring method and server
CN110623629B (en) Visual attention detection method and system based on eyeball motion
CN103026367B System and method for rendering a display to compensate for a viewer's vision impairment
KR101890542B1 (en) System and method for display enhancement
US10706281B2 (en) Controlling focal parameters of a head mounted display based on estimated user age
CN109191802B (en) Method, device, system and storage medium for eyesight protection prompt
US10788684B2 (en) Method for adapting the optical function of an adaptive ophthalmic lenses system
CN105518416A (en) Navigation method based on see-through head-mounted device
CN105164576A (en) A method of controlling a head mounted electro-optical device adapted to a wearer
CN112732071B (en) Calibration-free eye movement tracking system and application
CN110222597B (en) Method and device for adjusting screen display based on micro-expressions
WO2019153927A1 (en) Screen display method, device having display screen, apparatus, and storage medium
WO2022137603A1 (en) Determination method, determination device, and determination program
CN109194952B (en) Head-mounted eye movement tracking device and eye movement tracking method thereof
CN109299645A Method, apparatus, system and storage medium for sight protection prompt
CN115116088A (en) Myopia prediction method, apparatus, storage medium, and program product
CN114816065A (en) Screen backlight adjusting method, virtual reality device and readable storage medium
CN109977836A (en) A kind of information collecting method and terminal
CN116473501B (en) Automatic recording method, device and system for inserting-sheet type subjective refraction result
US20230142618A1 (en) Eye Tracking System for Determining User Activity
US11156831B2 (en) Eye-tracking system and method for pupil detection, associated systems and computer programs
CN108523840B (en) Eye state detection system for tablet personal computer
CN116781823A (en) Control method for mobile phone screen brightness and display terminal thereof
CN117472315A (en) Eye protection system and method for tablet personal computer
CN116994323A (en) Eye fatigue information detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20220114
Address after: Room 914, building 3, Minghu Plaza, Tianqiao District, Jinan City, Shandong Province
Applicant after: Jinan Tongxing Intelligent Technology Co.,Ltd.
Address before: 250014 No. 48, xiongshan Road, Shizhong District, Jinan City, Shandong Province
Applicant before: Bi Hongsheng
GR01 Patent grant