CN117334023A - Eye behavior monitoring method and system - Google Patents

Eye behavior monitoring method and system

Info

Publication number
CN117334023A
CN117334023A (application CN202311628837.6A)
Authority
CN
China
Prior art keywords
eye
pixel point
layer
face image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311628837.6A
Other languages
Chinese (zh)
Inventor
张允�
曾喻
罗勇
刘青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Original Assignee
Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences filed Critical Sichuan Peoples Hospital of Sichuan Academy of Medical Sciences
Priority to CN202311628837.6A priority Critical patent/CN117334023A/en
Publication of CN117334023A publication Critical patent/CN117334023A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an eye behavior monitoring method and system belonging to the technical field of image processing. The method comprises the following steps: acquiring the real-time distance between a user and an electronic device, and acquiring a face image of the user when the real-time distance is smaller than or equal to a preset distance; constructing and correcting an eye region extraction model to generate an eye region correction model, and generating the eye use area of the user face image; and determining the eye use behavior coefficient of the eye use area and reminding the user about the eye use behavior. The invention collects the face image of the user in time and uses the eye region extraction model to accurately identify the area where the eyes are located in the face image, so that the subsequent steps can monitor the eye use behavior accurately; meanwhile, the eye use behavior coefficient of the eye use area is calculated from the eye use duration and the distance to the electronic device, so that whether a reminder about the eye use behavior is needed is judged accurately, and the user is reminded in time when eye fatigue is likely.

Description

Eye behavior monitoring method and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an eye behavior monitoring method and system.
Background
In an age when electronic products are increasingly popular, people depend on them more and more, and the damage to the eyes caused by using electronic products for long periods at short distances is becoming more serious, so a scheme capable of monitoring eye use behavior is needed. At present, whether the eyes are fatigued and need rest is judged mainly from the length of the eye use time. Under a heavy workload, judging fatigue from eye use time alone cannot achieve real-time, accurate detection, so the user cannot be reminded of the eye use behavior in time and the monitoring effect is not achieved.
Disclosure of Invention
In order to solve the problems, the invention provides an eye behavior monitoring method and system.
The technical scheme of the invention is as follows: an eye behavior monitoring method comprises the following steps:
acquiring a real-time distance between a user and electronic equipment, and acquiring a face image of the user when the real-time distance is smaller than or equal to a preset distance;
constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
and determining an eye use behavior coefficient of the eye use region according to the real-time distance between the user and the electronic equipment and the eye use time length of the user, and reminding the eye use behavior of the user according to a comparison result of the eye use behavior coefficient and the eye use behavior threshold value.
Further, the method for extracting and correcting the eye area of the face image of the user comprises the following substeps:
constructing an eye region extraction model;
correcting the eye region extraction model to generate an eye region correction model;
and extracting the eye area of the face image of the user by using the eye area correction model.
Further, the eye region extraction model comprises a face image input layer, a first region splitting layer, a second region splitting layer, a first key pixel point set extraction layer, a second key pixel point set extraction layer, a first pixel point weight generation layer, a second pixel point weight generation layer and an eye region output layer;
the input end of the face image input layer is used as the input end of the eye region extraction model, the first output end of the face image input layer is connected with the input end of the first region splitting layer, and the second output end of the face image input layer is connected with the input end of the second region splitting layer;
the output end of the first region splitting layer, the first key pixel point set extraction layer and the input end of the first pixel point weight generation layer are sequentially connected; the output end of the second region splitting layer, the second key pixel point set extraction layer and the input end of the second pixel point weight generation layer are sequentially connected;
the output end of the first pixel point weight generation layer is connected with the first input end of the eye area output layer; the output end of the second pixel point weight generation layer is connected with the second input end of the eye area output layer;
the output end of the eye region output layer is used as the output end of the eye region extraction model.
Further, the face image input layer is used for inputting the face image of the user into the eye area extraction model;
the first region splitting layer and the second region splitting layer are used for splitting the face image of the user into a first face image and a second face image uniformly;
the first key pixel point set extraction layer is used for extracting all key pixel points of the first face image and generating a first key pixel point set; the second key pixel point set extraction layer is used for extracting all key pixel points of the second face image to generate a second key pixel point set;
the first pixel point weight generation layer is used for generating pixel weights for all key pixel points in the first key pixel point set; the second pixel point weight generation layer is used for generating pixel weights for all key pixel points in the second key pixel point set;
the eye area output layer is used for connecting all key pixel points in the first key pixel point set and the second key pixel point set according to the sequence of the pixel weights from large to small to generate an eye area.
Further, the loss function Loss_1 of the first key pixel point set extraction layer is expressed as:
wherein M represents the number of pixel points of the first face image, R_m represents the red component value of the m-th pixel point in the first face image, G_m represents the green component value of the m-th pixel point in the first face image, B_m represents the blue component value of the m-th pixel point in the first face image, A represents the length of the first face image, B represents the width of the first face image, and log(·) represents a logarithmic function;
the loss function Loss_2 of the second key pixel point set extraction layer is expressed as:
wherein N represents the number of pixel points of the second face image, r_n represents the red component value of the n-th pixel point in the second face image, g_n represents the green component value of the n-th pixel point in the second face image, b_n represents the blue component value of the n-th pixel point in the second face image, a represents the length of the second face image, and b represents the width of the second face image.
Further, the expression of the first pixel point weight generation layer is:
wherein X represents the output of the first pixel point weight generation layer, u = 1, 2, …, U, x_u represents the pixel weight of the u-th key pixel point in the first pixel point weight generation layer, U represents the number of key pixel points in the first pixel point weight generation layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o_u represents the gray value of the u-th key pixel point in the first pixel point weight generation layer, c represents a constant, e represents the exponential base, and Loss_1 represents the loss function of the first key pixel point set extraction layer;
the expression of the second pixel point weight generation layer is:
wherein Y represents the output of the second pixel point weight generation layer, v = 1, 2, …, V, y_v represents the pixel weight of the v-th key pixel point in the second pixel point weight generation layer, V represents the number of key pixel points in the second pixel point weight generation layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, p_v represents the gray value of the v-th key pixel point in the second pixel point weight generation layer, and Loss_2 represents the loss function of the second key pixel point set extraction layer.
Further, the method for generating the eye region correction model comprises the following steps: and correcting the first pixel point weight generation layer and the second pixel point weight generation layer to generate a corresponding first pixel point weight correction layer and second pixel point weight correction layer.
Further, the expression of the first pixel point weight correction layer is:
wherein X′ represents the output of the first pixel point weight correction layer, u′ = 1, 2, …, U′, x′_{u′} represents the pixel weight of the u′-th key pixel point in the first pixel point weight correction layer, U′ represents the number of pixel points in the first pixel point weight correction layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o′_{u′} represents the gray value of the u′-th key pixel point in the first pixel point weight correction layer, c represents a constant, and e represents the exponential base;
the expression of the second pixel point weight correction layer is:
wherein Y′ represents the output of the second pixel point weight correction layer, v′ = 1, 2, …, V′, y′_{v′} represents the pixel weight of the v′-th key pixel point in the second pixel point weight correction layer, V′ represents the number of pixel points in the second pixel point weight correction layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, and p′_{v′} represents the gray value of the v′-th key pixel point in the second pixel point weight correction layer.
Further, according to the real-time distance between the user and the electronic device, determining an eye-using behavior coefficient of the eye-using area, and reminding the user of the eye-using behavior according to the eye-using behavior coefficient, including the following substeps:
according to the real-time distance between the user and the electronic equipment, calculating the eye use behavior coefficient θ of the eye use area, wherein T represents the eye use duration of the user, s represents the area of the eye use area, S represents the area of the user face image, and l represents the real-time distance between the user and the electronic equipment;
setting an eye behavior threshold;
when the eye use behavior coefficient of the eye use area is larger than or equal to the eye use behavior threshold value, the electronic equipment is utilized to send out prompt sound, and reminding of the eye use behavior of the user is completed.
The beneficial effects of the invention are as follows: when the distance between the user and the electronic equipment is too short, the invention collects the face image of the user in time and uses the eye region extraction model to accurately identify the area where the eyes are located in the face image, which facilitates accurate monitoring of the eye use behavior in the subsequent steps; meanwhile, the eye use behavior coefficient of the eye use area is calculated from the eye use duration and the distance to the electronic equipment, so that whether a reminder about the eye use behavior is needed is judged accurately, and the user is reminded in time when eye fatigue is likely.
Based on the above method, the invention also provides an eye behavior monitoring system which comprises a face image acquisition unit, an eye region generation unit and an eye behavior prompting unit;
the face image acquisition unit is used for acquiring the real-time distance between the user and the electronic equipment, and acquiring the face image of the user when the real-time distance is smaller than or equal to the preset distance;
the eye area generating unit is used for constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
the eye-using behavior prompting unit is used for determining an eye-using behavior coefficient of an eye-using area according to the real-time distance between the user and the electronic equipment and the eye-using time length of the user, and prompting the eye-using behavior of the user according to a comparison result of the eye-using behavior coefficient and the eye-using behavior threshold value.
The beneficial effects of the invention are as follows: the invention can accurately judge whether the user needs to be reminded about the eye use behavior, and remind the user in time when eye fatigue is likely.
Drawings
FIG. 1 is a flow chart of a method of eye behavior monitoring;
FIG. 2 is a schematic diagram of a structure of an eye region extraction model;
fig. 3 is a schematic diagram of the eye behavior monitoring method.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
As shown in fig. 1, the present invention provides an eye behavior monitoring method, which includes the following steps:
acquiring a real-time distance between a user and electronic equipment, and acquiring a face image of the user when the real-time distance is smaller than or equal to a preset distance;
constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
and determining an eye use behavior coefficient of the eye use region according to the real-time distance between the user and the electronic equipment and the eye use time length of the user, and reminding the eye use behavior of the user according to a comparison result of the eye use behavior coefficient and the eye use behavior threshold value.
In the present invention, a distance sensor may be installed on the electronic device to acquire the real-time distance between the user and the electronic device. The preset distance may be set manually according to the actual situation. When the real-time distance is smaller than or equal to the preset distance, the user is using the electronic device at a relatively short distance, so the face image of the user needs to be collected and the eye use behavior monitored, which facilitates real-time reminding.
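For illustration only, the following Python sketch shows one way this trigger logic could look on a device. It assumes a hypothetical read_distance_cm() wrapper around the distance sensor and grabs the face image with an OpenCV camera call; the preset-distance value and all function names are assumptions, not taken from the patent.

```python
import cv2  # OpenCV is assumed to be available on the monitoring device

PRESET_DISTANCE_CM = 40.0  # preset distance; the patent leaves the actual value to manual setting


def read_distance_cm() -> float:
    """Hypothetical wrapper around the device's distance-sensor driver."""
    raise NotImplementedError("replace with the actual distance-sensor driver")


def capture_face_image():
    """Grab one frame from the front camera once the user is too close."""
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    return frame if ok else None


def monitor_once():
    """Return (distance, face image or None) for a single monitoring tick."""
    distance = read_distance_cm()
    if distance <= PRESET_DISTANCE_CM:
        # the user is close to the device, so the face image is collected
        return distance, capture_face_image()
    return distance, None
```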
In the embodiment of the invention, the eye-using area of the face image of the user is extracted and corrected, and the method comprises the following substeps:
constructing an eye region extraction model;
correcting the eye region extraction model to generate an eye region correction model;
and extracting the eye area of the face image of the user by using the eye area correction model.
In the embodiment of the invention, as shown in fig. 2, the eye region extraction model includes a face image input layer, a first region splitting layer, a second region splitting layer, a first key pixel point set extraction layer, a second key pixel point set extraction layer, a first pixel point weight generation layer, a second pixel point weight generation layer and an eye region output layer;
the input end of the face image input layer is used as the input end of the eye region extraction model, the first output end of the face image input layer is connected with the input end of the first region splitting layer, and the second output end of the face image input layer is connected with the input end of the second region splitting layer;
the output end of the first region splitting layer, the first key pixel point set extraction layer and the input end of the first pixel point weight generation layer are sequentially connected; the output end of the second region splitting layer, the second key pixel point set extraction layer and the input end of the second pixel point weight generation layer are sequentially connected;
the output end of the first pixel point weight generation layer is connected with the first input end of the eye area output layer; the output end of the second pixel point weight generation layer is connected with the second input end of the eye area output layer;
the output end of the eye region output layer is used as the output end of the eye region extraction model.
In the embodiment of the invention, the face image input layer is used for inputting the face image of the user into the eye area extraction model;
the first region splitting layer and the second region splitting layer are used for splitting the face image of the user into a first face image and a second face image uniformly;
the first key pixel point set extraction layer is used for extracting all key pixel points of the first face image and generating a first key pixel point set; the second key pixel point set extraction layer is used for extracting all key pixel points of the second face image to generate a second key pixel point set;
the first pixel point weight generation layer is used for generating pixel weights for all key pixel points in the first key pixel point set; the second pixel point weight generation layer is used for generating pixel weights for all key pixel points in the second key pixel point set;
the eye area output layer is used for connecting all key pixel points in the first key pixel point set and the second key pixel point set according to the sequence of the pixel weights from large to small to generate an eye area.
In the present invention, the user face image contains the left and right sides of the user's face, so the first region splitting layer and the second region splitting layer vertically split the user face image into two face images, which facilitates the subsequent extraction of key pixel points from each. The first key pixel point set extraction layer and the second key pixel point set extraction layer extract the key pixel points of the two face images through RGB operations, and the area covered by all key pixel points is the approximate eye use area. The first pixel point weight generation layer and the second pixel point weight generation layer generate pixel weights for all key pixel points in the approximate eye use areas of the two images; connecting the key pixel points in order of descending pixel weight forms the accurate eye use area.
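As a rough sketch of this splitting and key-pixel pipeline (not the patent's actual implementation), the Python code below splits the face image vertically into two halves, selects key pixel points with a simple stand-in RGB criterion, and orders them by descending weight. The RGB criterion, its threshold, and the weight function are assumptions, since the patent gives its exact formulas only in the figures.

```python
import numpy as np


def split_face(image: np.ndarray):
    """Vertically split the user face image into the first and second face images."""
    h, w, _ = image.shape
    return image[:, : w // 2], image[:, w // 2:]


def extract_key_pixels(half: np.ndarray) -> np.ndarray:
    """Return (row, col) coordinates of key pixel points for one half image.

    The patent selects key pixels with an RGB operation shown only in its
    figures; the dominance test below is a stand-in criterion used purely for
    illustration (OpenCV-style BGR channel order is assumed).
    """
    b, g, r = (half[..., 0].astype(int), half[..., 1].astype(int),
               half[..., 2].astype(int))
    mask = (r - g > 20) & (r - b > 20)  # assumed threshold, not from the patent
    return np.argwhere(mask)


def order_by_weight(half: np.ndarray, keys: np.ndarray, weight_fn) -> np.ndarray:
    """Sort key pixel points by descending pixel weight, as the output layer does
    before connecting them into the eye use area."""
    weights = np.array([weight_fn(half, tuple(rc)) for rc in keys])
    return keys[np.argsort(-weights)]
```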
In the embodiment of the invention, the loss function Loss_1 of the first key pixel point set extraction layer is expressed as:
wherein M represents the number of pixel points of the first face image, R_m represents the red component value of the m-th pixel point in the first face image, G_m represents the green component value of the m-th pixel point in the first face image, B_m represents the blue component value of the m-th pixel point in the first face image, A represents the length of the first face image, B represents the width of the first face image, and log(·) represents a logarithmic function;
the loss function Loss_2 of the second key pixel point set extraction layer is expressed as:
wherein N represents the number of pixel points of the second face image, r_n represents the red component value of the n-th pixel point in the second face image, g_n represents the green component value of the n-th pixel point in the second face image, b_n represents the blue component value of the n-th pixel point in the second face image, a represents the length of the second face image, and b represents the width of the second face image.
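The formula images for Loss_1 and Loss_2 are not reproduced in this text, so the sketch below is only an illustrative stand-in built from the listed symbols (pixel count, R/G/B component values, image size, and a logarithm); the actual expressions in the patent may differ.

```python
import numpy as np


def illustrative_half_image_loss(half: np.ndarray) -> float:
    """Stand-in for Loss_1 / Loss_2 over one half of the face image.

    It only reuses the symbols the patent names (pixel count, R/G/B component
    values, image length and width, and a logarithm); the way they are combined
    here is an assumption, since the actual formula is given only as a figure.
    """
    h, w, _ = half.shape                                 # width and length of the half image
    m = h * w                                            # number of pixel points (M or N)
    b, g, r = half[..., 0], half[..., 1], half[..., 2]   # OpenCV-style BGR order assumed
    eps = 1e-6
    mean_rgb = (r.astype(float) + g + b) / 3.0
    # mean log-intensity of the three colour components, normalised by image size
    return float(np.sum(np.log(mean_rgb + eps)) / (m * np.log(m + eps)))
```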
In the embodiment of the present invention, the expression of the first pixel point weight generation layer is:
wherein X represents the output of the first pixel point weight generation layer, u = 1, 2, …, U, x_u represents the pixel weight of the u-th key pixel point in the first pixel point weight generation layer, U represents the number of key pixel points in the first pixel point weight generation layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o_u represents the gray value of the u-th key pixel point in the first pixel point weight generation layer, c represents a constant, e represents the exponential base, and Loss_1 represents the loss function of the first key pixel point set extraction layer;
the expression of the second pixel point weight generation layer is:
wherein Y represents the output of the second pixel point weight generation layer, v = 1, 2, …, V, y_v represents the pixel weight of the v-th key pixel point in the second pixel point weight generation layer, V represents the number of key pixel points in the second pixel point weight generation layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, p_v represents the gray value of the v-th key pixel point in the second pixel point weight generation layer, and Loss_2 represents the loss function of the second key pixel point set extraction layer.
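Likewise, the weight-generation expressions themselves are shown only in the patent figures; the following sketch combines the named quantities (gray value, learning rate, super-parameter, constant c, the exponential, and the layer loss) in one plausible but assumed form.

```python
import math


def pixel_weight(gray_value: float, layer_loss: float,
                 learning_rate: float = 0.01, hyper_b: float = 1.0,
                 c: float = 1.0) -> float:
    """Illustrative weight for one key pixel point.

    It combines the quantities the patent names (gray value o_u / p_v, learning
    rate gamma, super-parameter b, constant c, the exponential e and the layer
    loss), but this particular functional form is an assumption.
    """
    return learning_rate * gray_value * math.exp(-c * layer_loss) + hyper_b


def weight_layer(gray_values, layer_loss, **params):
    """Apply the assumed weight function to every key pixel point of one half."""
    return [pixel_weight(g, layer_loss, **params) for g in gray_values]
```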
In the embodiment of the invention, the generation method of the eye area correction model comprises the following steps: and correcting the first pixel point weight generation layer and the second pixel point weight generation layer to generate a corresponding first pixel point weight correction layer and second pixel point weight correction layer.
In the embodiment of the present invention, the expression of the first pixel point weight correction layer is:
wherein X′ represents the output of the first pixel point weight correction layer, u′ = 1, 2, …, U′, x′_{u′} represents the pixel weight of the u′-th key pixel point in the first pixel point weight correction layer, U′ represents the number of pixel points in the first pixel point weight correction layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o′_{u′} represents the gray value of the u′-th key pixel point in the first pixel point weight correction layer, c represents a constant, and e represents the exponential base;
the expression of the second pixel point weight correction layer is:
wherein Y′ represents the output of the second pixel point weight correction layer, v′ = 1, 2, …, V′, y′_{v′} represents the pixel weight of the v′-th key pixel point in the second pixel point weight correction layer, V′ represents the number of pixel points in the second pixel point weight correction layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, and p′_{v′} represents the gray value of the v′-th key pixel point in the second pixel point weight correction layer.
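For completeness, a correspondingly hedged sketch of the correction layer: its symbol list matches the generation layer minus the loss term, so the stand-in below simply drops that term; the true corrected expression is again given only in the figures.

```python
import math


def corrected_pixel_weight(gray_value: float, learning_rate: float = 0.01,
                           hyper_b: float = 1.0, c: float = 1.0) -> float:
    """Illustrative corrected weight for one key pixel point.

    Same assumed ingredients as the generation-layer sketch, without the loss
    term that the correction layer's symbol list omits.
    """
    return learning_rate * gray_value * math.exp(-c) + hyper_b
```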
In the embodiment of the invention, the eye use behavior coefficient of the eye use area is determined according to the real-time distance between the user and the electronic equipment, and the eye use behavior of the user is reminded according to the eye use behavior coefficient, and the method comprises the following substeps:
the eye use behavior coefficient θ of the eye use area is calculated according to the real-time distance between the user and the electronic equipment, wherein T represents the eye use duration of the user, s represents the area of the eye use area, S represents the area of the user face image, and l represents the real-time distance between the user and the electronic equipment;
setting an eye behavior threshold;
when the eye use behavior coefficient of the eye use area is larger than or equal to the eye use behavior threshold value, the electronic equipment is utilized to send out prompt sound, and reminding of the eye use behavior of the user is completed.
In the present invention, the eye use behavior threshold may be set based on historical data or set manually. After the accurate eye use area is obtained, whether the user needs to be reminded about the eye use behavior is determined from the user's eye use duration, the real-time distance between the user and the electronic equipment, and so on.
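Since the formula for θ is likewise shown only as a figure, the sketch below assumes one plausible monotone form (θ grows with the eye use duration T and the eye-area share s/S, and shrinks with the distance l) and uses a terminal bell plus an arbitrary threshold as stand-ins for the device prompt; all of these are assumptions for illustration.

```python
def eye_behavior_coefficient(duration_s: float, eye_area: float,
                             face_area: float, distance_cm: float) -> float:
    """Assumed form of theta: grows with the eye use duration T and the ratio
    s/S of the eye use area to the face image area, and shrinks with the
    real-time distance l. The patent's exact formula is not reproduced here."""
    return duration_s * (eye_area / face_area) / max(distance_cm, 1e-6)


def remind_if_needed(theta: float, threshold: float) -> bool:
    """Compare theta against the eye use behavior threshold and prompt the user."""
    if theta >= threshold:
        print("\a")  # terminal bell as a stand-in for the device prompt sound
        return True
    return False


# example: 30 minutes of use, eye area 8% of the face image, 25 cm from the screen
theta = eye_behavior_coefficient(30 * 60, eye_area=0.08, face_area=1.0, distance_cm=25.0)
remind_if_needed(theta, threshold=5.0)  # threshold value is an arbitrary illustration
```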
Based on the above method, the invention also provides an eye behavior monitoring system, as shown in fig. 3, comprising a face image acquisition unit, an eye region generation unit and an eye behavior prompting unit;
the face image acquisition unit is used for acquiring the real-time distance between the user and the electronic equipment, and acquiring the face image of the user when the real-time distance is smaller than or equal to the preset distance;
the eye area generating unit is used for constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
the eye-using behavior prompting unit is used for determining an eye-using behavior coefficient of an eye-using area according to the real-time distance between the user and the electronic equipment and the eye-using time length of the user, and prompting the eye-using behavior of the user according to a comparison result of the eye-using behavior coefficient and the eye-using behavior threshold value.
Those of ordinary skill in the art will recognize that the embodiments described herein are for the purpose of aiding the reader in understanding the principles of the present invention and should be understood that the scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from the spirit thereof, and such modifications and combinations remain within the scope of the present disclosure.

Claims (10)

1. A method of eye behavior monitoring comprising the steps of:
acquiring a real-time distance between a user and electronic equipment, and acquiring a face image of the user when the real-time distance is smaller than or equal to a preset distance;
constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
and determining an eye use behavior coefficient of the eye use region according to the real-time distance between the user and the electronic equipment and the eye use time length of the user, and reminding the eye use behavior of the user according to a comparison result of the eye use behavior coefficient and the eye use behavior threshold value.
2. The eye-use behavior monitoring method according to claim 1, wherein the extracting and correcting the eye-use area of the face image of the user comprises the sub-steps of:
constructing an eye region extraction model;
correcting the eye region extraction model to generate an eye region correction model;
and extracting the eye area of the face image of the user by using the eye area correction model.
3. The eye-using behavior monitoring method according to claim 2, wherein the eye-using region extraction model includes a face image input layer, a first region split layer, a second region split layer, a first key pixel point set extraction layer, a second key pixel point set extraction layer, a first pixel point weight generation layer, a second pixel point weight generation layer, and an eye-using region output layer;
the input end of the face image input layer is used as the input end of the eye region extraction model, the first output end of the face image input layer is connected with the input end of the first region splitting layer, and the second output end of the face image input layer is connected with the input end of the second region splitting layer;
the output end of the first region splitting layer, the first key pixel point set extraction layer and the input end of the first pixel point weight generation layer are sequentially connected; the output end of the second region splitting layer, the second key pixel point set extraction layer and the input end of the second pixel point weight generation layer are sequentially connected;
the output end of the first pixel point weight generation layer is connected with the first input end of the eye area output layer; the output end of the second pixel point weight generation layer is connected with the second input end of the eye area output layer;
the output end of the eye-using region output layer is used as the output end of the eye-using region extraction model.
4. The eye-use behavior monitoring method according to claim 3, wherein the face image input layer is configured to input a face image of a user into the eye-use region extraction model;
the first region splitting layer and the second region splitting layer are used for splitting the face image of the user into a first face image and a second face image uniformly;
the first key pixel point set extraction layer is used for extracting all key pixel points of the first face image and generating a first key pixel point set; the second key pixel point set extraction layer is used for extracting all key pixel points of the second face image to generate a second key pixel point set;
the first pixel point weight generation layer is used for generating pixel weights for all key pixel points in the first key pixel point set; the second pixel point weight generation layer is used for generating pixel weights for all key pixel points in the second key pixel point set;
the eye-using region output layer is used for connecting all key pixel points in the first key pixel point set and the second key pixel point set according to the sequence of the pixel weights from large to small to generate an eye-using region.
5. The eye-use behavior monitoring method according to claim 3, wherein the loss function Loss_1 of the first key pixel point set extraction layer is expressed as:
wherein M represents the number of pixel points of the first face image, R_m represents the red component value of the m-th pixel point in the first face image, G_m represents the green component value of the m-th pixel point in the first face image, B_m represents the blue component value of the m-th pixel point in the first face image, A represents the length of the first face image, B represents the width of the first face image, and log(·) represents a logarithmic function;
the loss function Loss_2 of the second key pixel point set extraction layer is expressed as:
wherein N represents the number of pixel points of the second face image, r_n represents the red component value of the n-th pixel point in the second face image, g_n represents the green component value of the n-th pixel point in the second face image, b_n represents the blue component value of the n-th pixel point in the second face image, a represents the length of the second face image, and b represents the width of the second face image.
6. The eye-use behavior monitoring method according to claim 3, wherein the expression of the first pixel point weight generation layer is:
wherein X represents the output of the first pixel point weight generation layer, u = 1, 2, …, U, x_u represents the pixel weight of the u-th key pixel point in the first pixel point weight generation layer, U represents the number of key pixel points in the first pixel point weight generation layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o_u represents the gray value of the u-th key pixel point in the first pixel point weight generation layer, c represents a constant, e represents the exponential base, and Loss_1 represents the loss function of the first key pixel point set extraction layer;
the expression of the second pixel point weight generation layer is:
wherein Y represents the output of the second pixel point weight generation layer, v = 1, 2, …, V, y_v represents the pixel weight of the v-th key pixel point in the second pixel point weight generation layer, V represents the number of key pixel points in the second pixel point weight generation layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, p_v represents the gray value of the v-th key pixel point in the second pixel point weight generation layer, and Loss_2 represents the loss function of the second key pixel point set extraction layer.
7. The eye-using behavior monitoring method according to claim 2, wherein the generating method of the eye-using region correction model is as follows: and correcting the first pixel point weight generation layer and the second pixel point weight generation layer to generate a corresponding first pixel point weight correction layer and second pixel point weight correction layer.
8. The eye-use behavior monitoring method according to claim 7, wherein the expression of the first pixel point weight correction layer is:
wherein X′ represents the output of the first pixel point weight correction layer, u′ = 1, 2, …, U′, x′_{u′} represents the pixel weight of the u′-th key pixel point in the first pixel point weight correction layer, U′ represents the number of pixel points in the first pixel point weight correction layer, γ_1 represents the learning rate of the first pixel point weight generation layer, b_1 represents the super-parameter of the first pixel point weight generation layer, o′_{u′} represents the gray value of the u′-th key pixel point in the first pixel point weight correction layer, c represents a constant, and e represents the exponential base;
the expression of the second pixel point weight correction layer is:
wherein Y′ represents the output of the second pixel point weight correction layer, v′ = 1, 2, …, V′, y′_{v′} represents the pixel weight of the v′-th key pixel point in the second pixel point weight correction layer, V′ represents the number of pixel points in the second pixel point weight correction layer, γ_2 represents the learning rate of the second pixel point weight generation layer, b_2 represents the super-parameter of the second pixel point weight generation layer, and p′_{v′} represents the gray value of the v′-th key pixel point in the second pixel point weight correction layer.
9. The eye-use behavior monitoring method according to claim 1, wherein the determining the eye-use behavior coefficient of the eye-use area according to the real-time distance between the user and the electronic device, and reminding the user of the eye-use behavior according to the eye-use behavior coefficient, comprises the following substeps:
according to the real-time distance between the user and the electronic equipment, calculating the eye use behavior coefficient θ of the eye use area, wherein T represents the eye use duration of the user, s represents the area of the eye use area, S represents the area of the user face image, and l represents the real-time distance between the user and the electronic equipment;
setting an eye behavior threshold;
when the eye use behavior coefficient of the eye use area is larger than or equal to the eye use behavior threshold value, the electronic equipment is utilized to send out prompt sound, and reminding of the eye use behavior of the user is completed.
10. The eye behavior monitoring system is characterized by comprising a face image acquisition unit, an eye region generation unit and an eye behavior prompting unit;
the face image acquisition unit is used for acquiring the real-time distance between the user and the electronic equipment, and acquiring the face image of the user when the real-time distance is smaller than or equal to the preset distance;
the eye area generating unit is used for constructing and correcting an eye area extraction model, generating an eye area correction model, splitting a user face image by using the eye area correction model, generating a first face image and a second face image, and generating an eye area of the user face image according to the pixel points of the first face image and the pixel points of the second face image;
the eye-using behavior prompting unit is used for determining an eye-using behavior coefficient of an eye-using area according to the real-time distance between the user and the electronic equipment and the eye-using time length of the user, and prompting the eye-using behavior of the user according to a comparison result of the eye-using behavior coefficient and the eye-using behavior threshold value.
CN202311628837.6A 2023-12-01 2023-12-01 Eye behavior monitoring method and system Pending CN117334023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311628837.6A CN117334023A (en) 2023-12-01 2023-12-01 Eye behavior monitoring method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311628837.6A CN117334023A (en) 2023-12-01 2023-12-01 Eye behavior monitoring method and system

Publications (1)

Publication Number Publication Date
CN117334023A true CN117334023A (en) 2024-01-02

Family

ID=89293871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311628837.6A Pending CN117334023A (en) 2023-12-01 2023-12-01 Eye behavior monitoring method and system

Country Status (1)

Country Link
CN (1) CN117334023A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426773A * 2017-08-24 2019-03-05 浙江宇视科技有限公司 A road recognition method and device
CN110251070A * 2019-06-13 2019-09-20 苏毅 An eye health condition monitoring method and system
CN110610768A * 2019-07-31 2019-12-24 毕宏生 Eye use behavior monitoring method and server
CN114596602A * 2020-12-03 2022-06-07 北京新氧科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN116824621A * 2023-01-10 2023-09-29 大连理工大学 Pedestrian re-identification method based on multi-granularity visual Transformer

Similar Documents

Publication Publication Date Title
CN109726652B (en) Method for detecting sleeping behavior of person on duty based on convolutional neural network
DE102010034450B4 (en) Apparatus and method for communicating with visible light with image processing
CN109559362B (en) Image subject face replacing method and device
CN111091109B (en) Method, system and equipment for predicting age and gender based on face image
CN110045656B (en) Heating equipment fault monitoring system based on cloud computing
CN113191699A (en) Power distribution construction site safety supervision method
CN106815560A (en) It is a kind of to be applied to the face identification method that self adaptation drives seat
CN111126366B (en) Method, device, equipment and storage medium for distinguishing living human face
CN108596041A (en) A kind of human face in-vivo detection method based on video
CN109568123B (en) Acupuncture point positioning method based on YOLO target detection
CN107464260A (en) A kind of rice canopy image processing method using unmanned plane
CN115337044A (en) Nucleic acid sampling monitoring method, device, system and computer readable storage medium
CN104505089B (en) Spoken error correction method and equipment
CN108664886A (en) A kind of fast face recognition method adapting to substation's disengaging monitoring demand
CN114721403A (en) Automatic driving control method and device based on OpenCV and storage medium
CN109191341B (en) Classroom video frequency point arrival method based on face recognition and Bayesian learning
CN117334023A (en) Eye behavior monitoring method and system
CN112700568B (en) Identity authentication method, equipment and computer readable storage medium
CN113011345B (en) Image quality detection method, image quality detection device, electronic equipment and readable storage medium
CN110135274B (en) Face recognition-based people flow statistics method
CN110796068A (en) Drowning detection method and system for community swimming pool
JP2020121022A5 (en)
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator
CN110991307B (en) Face recognition method, device, equipment and storage medium
CN111899846A (en) Tongue picture detector based on end cloud cooperation and detection method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination