CN113029018A - Eye protection prompting method and device, terminal equipment and computer readable storage medium - Google Patents

Eye protection prompting method and device, terminal equipment and computer readable storage medium Download PDF

Info

Publication number
CN113029018A
CN113029018A CN202110150780.8A CN202110150780A CN113029018A
Authority
CN
China
Prior art keywords
distance
face
terminal equipment
determining
target face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110150780.8A
Other languages
Chinese (zh)
Inventor
王玥
程骏
刘业鹏
庞建新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202110150780.8A priority Critical patent/CN113029018A/en
Publication of CN113029018A publication Critical patent/CN113029018A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/14Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1072Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1079Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms

Abstract

The application is applicable to the technical field of terminals, and particularly relates to an eye protection prompting method and device, a terminal device and a computer readable storage medium. According to the method, when a user uses the terminal device, a captured image is obtained through the front camera of the terminal device, a target face in the captured image is determined, and the inner canthus distance of the target face is acquired. The distance between the target face and the terminal device is then determined according to the inner canthus distance and the captured image, and an eye protection prompt is given when that distance is smaller than a preset threshold value. Because the distance between the user's eyes and the terminal device can be determined accurately using only the front camera and the user's inner canthus distance, accurate eye protection prompts can be given and the user's eyesight protected without increasing the hardware cost of the terminal device, so the method can be widely applied to existing terminal devices and improves user experience.

Description

Eye protection prompting method and device, terminal equipment and computer readable storage medium
Technical Field
The application belongs to the technical field of terminals, and particularly relates to an eye protection prompting method and device, terminal equipment and a computer readable storage medium.
Background
As terminal devices become richer in functionality, people spend more and more time using them in daily life. When using a terminal device, the user often watches its display screen at a short distance, and watching the display screen at a short distance for a long time inevitably damages the user's eyes and degrades eyesight.
In order to solve this problem, the prior art generally adds an infrared ranging sensor to the terminal device, measures the distance between the user's eyes and the terminal device through the infrared ranging sensor, and gives the user an eye protection prompt accordingly. Giving eye protection prompts by adding an infrared ranging sensor, however, increases the hardware cost of the terminal device.
Disclosure of Invention
The embodiment of the application provides an eye protection prompting method and device, terminal equipment and a computer readable storage medium, and the distance between eyes of a user and the terminal equipment can be determined on the basis of not increasing hardware cost, so that the eyes of the user can be prompted to protect the vision of the user.
In a first aspect, an embodiment of the present application provides an eye protection prompting method, which is applied to a terminal device, and the eye protection prompting method may include:
acquiring a shot image through a front camera of the terminal equipment;
determining a target face in the shot image, and acquiring the inner canthus distance of the target face;
determining the distance between the target face and the terminal equipment according to the inner canthus distance and the shot image;
and when the distance is smaller than a preset threshold value, eye protection prompting is carried out.
For example, the determining the distance between the target face and the terminal device according to the inner canthus distance and the captured image may include:
determining a first position of a left inner eye corner point of the target face and a second position of a right inner eye corner point of the target face according to the shot image;
acquiring the focal length of the front camera;
and determining the distance between the target face and the terminal equipment according to the inner canthus distance, the first position, the second position and the focal length.
Specifically, the determining the distance between the target face and the terminal device according to the inner canthus distance, the first position, the second position and the focal length may include:
determining the distance between the target face and the terminal equipment according to the following formula:
Distance = (f · S) / sqrt(((x_l − x_r) · dx)² + ((y_l − y_r) · dy)²)
wherein Distance is the distance between the target face and the terminal device, f is the focal length, S is the inner canthus distance, (x_l, y_l) is the first position, (x_r, y_r) is the second position, dx is the physical size of one pixel of the captured image in the x-axis direction, and dy is the physical size of one pixel of the captured image in the y-axis direction.
In a possible implementation manner of the first aspect, the determining a target face in the captured image may include:
carrying out face detection on the shot image to obtain a face contained in the shot image;
and when the shot images contain a plurality of faces, determining the face with the largest face area as the target face.
For example, the acquiring the inner canthus distance of the target face may include:
extracting the features of the target face to obtain a first feature vector of the target face;
acquiring a second feature vector matched with the first feature vector, and determining the inner canthus distance corresponding to the second feature vector as the inner canthus distance of the target face;
and when no second feature vector matched with the first feature vector exists, determining a preset inner canthus distance as the inner canthus distance of the target face.
It should be understood that, before the obtaining of the second feature vector matching the first feature vector, the method may include:
acquiring a registration image and an inner canthus distance corresponding to the registration image;
extracting the characteristics of a preset face in the registered image to obtain a third characteristic vector of the preset face;
and storing the third feature vector in association with the inner canthus distance corresponding to the registration image, wherein the third feature vector comprises the second feature vector.
In a second aspect, an embodiment of the present application provides an eye protection prompting device, applied to a terminal device, and the eye protection prompting device may include:
the shot image acquisition module is used for acquiring a shot image through a front camera of the terminal equipment;
the target face determining module is used for determining a target face in the shot image and acquiring the inner canthus distance of the target face;
the distance determining module is used for determining the distance between the target face and the terminal equipment according to the inner canthus distance and the shot image;
and the eye protection prompting module is used for carrying out eye protection prompting when the distance is smaller than a preset threshold value.
Illustratively, the distance determining module may include:
the position determining unit is used for determining a first position of a left inner eye corner point of the target face and a second position of a right inner eye corner point of the target face according to the shot image;
the focal length acquisition unit is used for acquiring the focal length of the front camera;
a distance determining unit, configured to determine a distance between the target face and the terminal device according to the inner canthus distance, the first position, the second position, and the focal length.
Specifically, the distance determining unit is configured to determine the distance between the target face and the terminal device according to the following formula:
Distance = (f · S) / sqrt(((x_l − x_r) · dx)² + ((y_l − y_r) · dy)²)
wherein Distance is the distance between the target face and the terminal device, f is the focal length, S is the inner canthus distance, (x_l, y_l) is the first position, (x_r, y_r) is the second position, dx is the physical size of one pixel of the captured image in the x-axis direction, and dy is the physical size of one pixel of the captured image in the y-axis direction.
In a possible implementation manner of the second aspect, the target face determining module may include:
the face detection unit is used for carrying out face detection on the shot image to obtain a face contained in the shot image;
and the face determining unit is used for determining the face with the largest face area as the target face when the shot image contains a plurality of faces.
Illustratively, the target face determining module may further include:
the characteristic extraction unit is used for extracting the characteristics of the target face to obtain a first characteristic vector of the target face;
a first inner canthus distance determining unit, configured to obtain a second feature vector matched with the first feature vector and determine the inner canthus distance corresponding to the second feature vector as the inner canthus distance of the target face;
and a second inner canthus distance determining unit, configured to determine a preset inner canthus distance as the inner canthus distance of the target face when there is no second feature vector matched with the first feature vector.
It should be understood that the device may further include:
a registered image acquisition module for acquiring a registered image and the inner canthus distance corresponding to the registered image;
the feature extraction module is used for extracting features of a preset face in the registered image to obtain a third feature vector of the preset face;
and an association storage module, configured to store the third feature vector in association with the inner canthus distance corresponding to the registration image, where the third feature vector includes the second feature vector.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the eye protection prompting method according to any one of the foregoing first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the eye protection prompting method according to any one of the foregoing first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when running on a terminal device, causes the terminal device to execute the eye protection prompting method according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
in the embodiment of the application, when a user uses the terminal device, the terminal device can acquire a shot image through the front camera, determine a target face in the shot image, and acquire the angular distance between inner canthus of the target face. Then, the distance between the target face and the terminal equipment can be determined according to the inner canthus distance and the shot image; and when the distance is smaller than a preset threshold value, eye protection prompting is carried out. The distance between the eyes of the user and the terminal equipment can be accurately determined only through the front camera of the terminal equipment and the inner canthus distance of the user, so that accurate eye protection prompting can be carried out on the user according to the distance, the eyesight of the user is protected, the hardware cost of the terminal equipment does not need to be increased, the method and the device can be widely applied to the existing terminal equipment, and user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of an eye protection prompting method according to an embodiment of the present application;
fig. 2 is a schematic view of the inner canthus distance;
fig. 3 is a schematic flowchart of determining a distance between a target face and a terminal device according to an embodiment of the present application;
fig. 4 is a schematic view of a front camera projection scene;
fig. 5 is a schematic structural view of an eye protection prompting device provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In addition, the references to "a plurality" in the embodiments of the present application should be interpreted as two or more.
At present, an infrared ranging sensor is generally added to the terminal device to measure the distance between the user's eyes and the terminal device, or a binocular front camera is provided in the terminal device so that the distance between the user's eyes and the terminal device can be obtained through depth detection, and an eye protection prompt can then be given to the user according to that distance. Measuring distance by adding an infrared ranging sensor or providing a binocular front camera necessarily increases the hardware cost of the terminal device; moreover, existing terminal devices that have neither an infrared ranging sensor nor a binocular front camera cannot give eye protection prompts at all, which impairs user experience.
In order to solve the above problem, an embodiment of the present application provides an eye protection prompting method which directly uses the front camera of the terminal device to obtain a captured image, determines the distance between the target face and the terminal device according to the captured image and the inner canthus distance of the target face, and then gives the user an eye protection prompt according to that distance. Because the distance between the user's eyes and the terminal device can be determined accurately using only the front camera and the user's inner canthus distance, accurate eye protection prompts can be given and the user's eyesight protected without increasing the hardware cost of the terminal device, so the method can be widely applied to existing terminal devices and improves user experience.
The eye protection prompting method provided by the embodiment of the application can be an optional function in the terminal device; the optional function can be a system function of the terminal device or an application function of an application program in the terminal device, and the user can enable it as needed to receive eye protection prompts. For example, when the optional function is an application function of an application program, if the user wants eye protection prompts, the user may start the application program to trigger the terminal device to start the front camera to acquire a captured image and determine the distance between the user's eyes and the terminal device according to the captured image and the user's inner canthus distance, so as to prompt the user according to that distance.
It should be understood that when the optional function is an application function of an application program, the application program may run in a background after being started, and the application program may have a right to start a front camera of the terminal device, a right to store and read data in the terminal device, and a right to run in a floating window manner.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an eye protection prompting method according to an embodiment of the present disclosure. By way of example and not limitation, the eye protection prompting method can be applied to terminal equipment such as a mobile phone, a tablet computer, a wearable device and the like with a display screen and a front camera. As shown in fig. 1, the eye protection prompting method may include:
and S101, acquiring a shot image through a front camera of the terminal equipment.
In the embodiment of the application, in the process that a user uses the terminal equipment, the front-facing camera of the terminal equipment can shoot images at intervals of preset time to obtain shot images. The preset time can be specifically set according to actual conditions, for example, the preset time can be set to 10 seconds, that is, the front-facing camera can perform image shooting every 10 seconds.
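The interval-based capture described above can be sketched as follows; this is an illustrative assumption rather than part of the disclosure, the 10-second value mirrors the example given, and the helper name `due_for_capture` is hypothetical:

```python
CAPTURE_INTERVAL_S = 10  # the "preset time" from the description; 10 s is the example value

def due_for_capture(last_capture_ts, now, interval=CAPTURE_INTERVAL_S):
    """Return True when at least `interval` seconds have elapsed since the last shot."""
    return (now - last_capture_ts) >= interval

# A capture loop would then poll the front camera only when due, e.g.:
#   while running:
#       if due_for_capture(last_ts, time.time()):
#           frame = front_camera.read()   # hypothetical camera handle
#           last_ts = time.time()
```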
And S102, determining a target face in the shot image, and acquiring the inner canthus distance of the target face.
Specifically, after the shot image is obtained, the terminal device may perform face detection on the shot image through a face detection network to determine whether the shot image contains a face. When the face is included in the captured image, the terminal device may determine a target face in the captured image. When the shot image does not contain the human face, the terminal equipment can continue to shoot the image through the front camera.
It is understood that, when only one face is included in the captured image, the terminal device may directly determine the face as the target face. When a plurality of faces are included in the captured image, the terminal device may determine a largest face among the plurality of faces as a target face. The size of the face can be determined by the face area selected by the face detection network frame, that is, the larger the face area is, the larger the face corresponding to the face area is.
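The largest-face rule above can be sketched in a few lines; the (x, y, w, h) box layout is an assumption about the face detection network's output format, made only for illustration:

```python
def select_target_face(face_boxes):
    """Pick the target face as the detection box with the largest area.

    `face_boxes` holds (x, y, w, h) boxes; the layout is illustrative.
    Returns None when no face was detected, so the caller can keep capturing.
    """
    if not face_boxes:
        return None
    return max(face_boxes, key=lambda b: b[2] * b[3])  # area = width * height
```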
In the embodiment of the application, after the target face is determined, the terminal device can acquire the inner canthus distance of the target face. As shown in fig. 2, the inner canthus distance of the target face is the distance between the left inner eye corner point and the right inner eye corner point of the target face.
Specifically, the terminal device may perform a face alignment operation on the target face. Then, the terminal device can extract the face features of the aligned target face through a face feature extraction network to obtain a first feature vector corresponding to the target face. Next, the terminal device may acquire a second feature vector matched with the first feature vector and determine the inner canthus distance corresponding to the second feature vector as the inner canthus distance of the target face. The second feature vector and its corresponding inner canthus distance may be stored in the terminal device, or in a third-party device in communication connection with the terminal device, for example a cloud server communicating with the terminal device.
When no second feature vector matching the first feature vector exists, the terminal device may determine a preset inner canthus distance as the inner canthus distance of the target face. The preset inner canthus distance may be a system default of the terminal device; for example, the average or standard adult inner canthus distance may be used as the preset value.
It is to be understood that whether the second feature vector matches the first feature vector may be determined by calculating the similarity between the two. For example, when the similarity between the second feature vector and the first feature vector is greater than or equal to a preset similarity, it may be determined that the second feature vector matches the first feature vector; when the similarity is less than the preset similarity, it may be determined that they do not match. The preset similarity may be determined according to the actual situation, for example as 80% or 90%.
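The matching step can be sketched as follows. The description does not name a similarity measure, so cosine similarity is used here as an assumption, with the 80% example value as the preset similarity; the function names and the dict-based store are hypothetical:

```python
import math

PRESET_SIMILARITY = 0.8  # the "preset similarity" from the description (80% example)

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_matching_canthus_distance(first_vec, registered, default_distance):
    """Return the inner canthus distance stored with the best-matching
    (second) feature vector, or `default_distance` (the preset inner
    canthus distance) when no stored vector is similar enough.

    `registered` maps each stored feature vector, as a tuple, to the
    inner canthus distance saved with it at registration time.
    """
    best_vec, best_sim = None, PRESET_SIMILARITY
    for vec, dist in registered.items():
        sim = cosine_similarity(first_vec, vec)
        if sim >= best_sim:
            best_vec, best_sim = vec, sim
    return registered[best_vec] if best_vec is not None else default_distance
```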
The second feature vector may be any one of the third feature vectors, and a third feature vector is a feature vector corresponding to a registered face. The third feature vector and the inner canthus distance corresponding to it are detailed below.
In the embodiment of the application, when the user needs to use the eye protection prompting function, the user first needs to register, so that the terminal device can match the target face against the registered information and thereby obtain the inner canthus distance. Specifically, during registration, the user may upload to the terminal device a registration image and the inner canthus distance of the preset face corresponding to that image. After obtaining the registration image and its corresponding inner canthus distance, the terminal device may extract features of the preset face in the registration image to obtain a third feature vector corresponding to the preset face, and store the third feature vector in association with the inner canthus distance, either on the terminal device itself or on a third-party device in communication connection with it. The registration image may be captured in real time by the terminal device during registration or taken in advance, and should contain a clear frontal view of the user's face.
It should be noted that, to avoid invalid registrations, after acquiring the registration image the terminal device may perform face detection on it through the face detection network. When the registration image is detected to contain exactly one face, that face is taken as the preset face to be registered; when it contains several faces, the largest of them is taken as the preset face. In either case, the terminal device then performs a face alignment operation on the preset face, extracts the face features of the aligned preset face through the face feature extraction network to obtain the third feature vector corresponding to the preset face, and stores that vector in association with the inner canthus distance submitted by the user with the registration image.
Therefore, when determining the inner canthus distance corresponding to the target face, the terminal device may match the first feature vector corresponding to the target face against each stored third feature vector to find a third feature vector matched with the first feature vector. The found third feature vector is the second feature vector, and its associated inner canthus distance is the inner canthus distance corresponding to the second feature vector.
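A minimal sketch of the registration-time storage, under the assumption that a dict stands in for the associative store and that `extract_features` abstracts the face feature extraction network (both names are hypothetical):

```python
def register_face(face_boxes, extract_features, canthus_distance, db):
    """Store the third feature vector of the preset face together with the
    submitted inner canthus distance. When the registration image contains
    several faces, the largest one is taken as the preset face; with no
    face at all the registration is rejected as invalid.
    """
    if not face_boxes:
        return False
    preset = max(face_boxes, key=lambda b: b[2] * b[3])  # largest (x, y, w, h) box
    db[tuple(extract_features(preset))] = canthus_distance
    return True
```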
S103, determining the distance between the target face and the terminal device according to the inner canthus distance and the shot image.
specifically, as shown in fig. 3, the determining a distance between the target human face and the terminal device according to the angular distance and the captured image may include:
S301, determining a first position of a left inner eye corner point of the target face and a second position of a right inner eye corner point of the target face according to the shot image;
in the embodiment of the present application, the first position of the left inner eye corner point refers to a first position of the left inner eye corner point in an image plane coordinate system, and the second position of the right inner eye corner point refers to a second position of the right inner eye corner point in the image plane coordinate system.
S302, acquiring the focal length of the front camera;
And S303, determining the distance between the target face and the terminal equipment according to the inner canthus distance, the first position, the second position and the focal length.
Specifically, the determining the distance between the target face and the terminal device according to the inner canthus distance, the first position, the second position and the focal length may include:
determining the distance between the target face and the terminal equipment according to the following formula:
Distance = (f × S) / √((x_l − x_r)² · dx² + (y_l − y_r)² · dy²)
wherein Distance is the distance between the target face and the terminal device, f is the focal length, S is the inner canthus distance, (x_l, y_l) is the first position, (x_r, y_r) is the second position, dx is the physical size of one pixel of the captured image in the x-axis direction, and dy is the physical size of one pixel of the captured image in the y-axis direction.
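As a sketch, the formula can be evaluated directly (all physical lengths in millimeters, positions in pixels):

```python
import math

def eye_screen_distance(focal_mm, canthus_mm, first_pos, second_pos, dx_mm, dy_mm):
    """Distance = f * S / sqrt((xl - xr)^2 * dx^2 + (yl - yr)^2 * dy^2).
    first_pos and second_pos are the pixel coordinates (xl, yl) and (xr, yr)."""
    xl, yl = first_pos
    xr, yr = second_pos
    # physical length of the inter-canthus segment on the image plane, in mm
    sensor_mm = math.sqrt(((xl - xr) * dx_mm) ** 2 + ((yl - yr) * dy_mm) ** 2)
    return focal_mm * canthus_mm / sensor_mm
```

With an illustrative focal length of 4 mm, an inner canthus distance of 30 mm, a pixel pitch of 0.004 mm and inner eye corner points 100 pixels apart, this yields 4 × 30 / 0.4 = 300 mm; these numbers are assumptions for illustration only.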
The derivation of the above formula is explained in detail below.
Referring to fig. 4, fig. 4 is a schematic view of the projection performed by the front camera. As shown in fig. 4, two coordinate systems are involved when a captured image is obtained by the camera: the camera coordinate system (O_c-X_cY_cZ_c) and the image plane coordinate system (o-xy). O_c is the origin of the camera coordinate system, i.e. the center point of the front camera (indicated as "camera center" in fig. 4), and (X_c, Y_c, Z_c) denotes coordinates in the camera coordinate system; o is the origin of the image plane coordinate system, i.e. the center point of the captured image, and (x, y) denotes coordinates in the image plane coordinate system.
It should be understood that, in fig. 4, P_R(X_R, Y_R, Z_R) is the coordinate point of the right inner eye corner point of the target face in the camera coordinate system, and P_L(X_L, Y_L, Z_L) is the coordinate point of the left inner eye corner point in the camera coordinate system; p_r(x_r, y_r) is the coordinate point of the right inner eye corner point in the image plane coordinate system, i.e. in the captured image, and p_l(x_l, y_l) is the coordinate point of the left inner eye corner point in the image plane coordinate system, i.e. in the captured image. A is the foot of the perpendicular from the front camera center O_c to the plane where the target face is located; B is the foot of the perpendicular from P_R, and D is the foot of the perpendicular from P_L, onto the horizontal line through A in that plane; C is the coordinate point of B in the image plane coordinate system, and E is the coordinate point of D in the image plane coordinate system. Since the user's left and right eyes are not necessarily on the same horizontal line at the time of shooting, the X_c axis of the camera can be assumed to be horizontal, so the angle between the line segment P_RP_L and the horizontal may be α.
The eye protection prompting method provided by the embodiment of the application is applied to the scene in which the plane where the target face is located is parallel to the plane where the terminal device is located, so that the distance between the eyes of the target face and the terminal device is taken as the distance between these two planes, i.e. the length of the line segment O_cA.
The length of the line segment O_c o is the focal length f of the front camera of the terminal device, in millimeters, i.e. O_c o = f. The focal length f of the front camera can be obtained directly from the terminal device, for example from the attribute information of the terminal device. As can be seen from fig. 2, the length of the line segment P_RP_L is the inner canthus distance S, also in millimeters. The second position p_r(x_r, y_r) of the right inner eye corner point and the first position p_l(x_l, y_l) of the left inner eye corner point in the image plane coordinate system can be determined from the captured image. Since coordinates in the image plane coordinate system are in units of pixels, the physical size dx of one pixel in the x-axis direction and the physical size dy of one pixel in the y-axis direction need to be determined in advance.
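The pixel sizes dx and dy can be computed once from the sensor's physical dimensions and the image resolution; the 4.8 mm × 3.6 mm sensor in the usage note below is an illustrative assumption:

```python
def pixel_pitch(sensor_width_mm, sensor_height_mm, image_width_px, image_height_px):
    """Physical size (mm) of one pixel of the captured image along each axis."""
    dx = sensor_width_mm / image_width_px
    dy = sensor_height_mm / image_height_px
    return dx, dy
```

For example, pixel_pitch(4.8, 3.6, 1200, 900) gives dx = dy = 0.004 mm per pixel.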
It can be understood that, according to the principle of similar triangles, O_c o / O_c A = Co / AB, i.e. O_c A = O_c o × AB / Co. By the same principle, Co / CE = AB / BD. Since BD = P_RP_L × cos α and CE = p_r p_l × cos α, it follows that AB / Co = BD / CE = P_RP_L / p_r p_l, where the physical length of the image segment p_r p_l is

p_r p_l = √((x_l − x_r)² · dx² + (y_l − y_r)² · dy²)

Therefore

O_c A = O_c o × P_RP_L / p_r p_l = (f × S) / √((x_l − x_r)² · dx² + (y_l − y_r)² · dy²)

which is exactly the Distance formula given above.
And S104, when the distance is smaller than a preset threshold value, carrying out eye protection prompting.
The preset threshold may be determined according to actual conditions. For example, the terminal device may determine the preset threshold according to the optimal reading distance, and the threshold may be set to any value such as 30 cm, 35 cm or 40 cm.
In the embodiment of the application, when it is determined that the distance between the target face and the terminal device is smaller than the preset threshold, the terminal device can conclude that the eyes of the user are too close to the display screen of the terminal device, and at this time the terminal device can give the user an eye protection prompt. For example, the prompt may be displayed in the display interface of the terminal device: a floating window may pop up in the display interface showing information such as "You are currently too close to the display screen, please adjust your distance". Alternatively, the eye protection prompt may be played by voice, and so on. The embodiment of the application does not limit the manner of the eye protection prompt.
It can be understood that when it is determined that the distance between the target face and the terminal device is greater than or equal to the preset threshold, the terminal device may determine that the distance between the eyes of the user and the display screen of the terminal device is appropriate, and at this time, the terminal device may not perform eye protection prompting, and may continue to acquire the next captured image, so as to determine whether to perform eye protection prompting on the user according to the next captured image.
As can be seen from the foregoing description, when the target face does not correspond to a registered user, that is, when there is no second feature vector matched with the first feature vector, the terminal device may determine the preset inner canthus distance as the inner canthus distance of the target face. In this case, when the distance between the target face and the terminal device is smaller than the preset threshold, the terminal device can give the user an eye protection prompt and can additionally prompt the unregistered user to perform information registration, so that when the user subsequently uses the terminal device, the distance between the user's eyes and the terminal device can be determined according to the user's own inner canthus distance, allowing accurate eye protection prompts and improving user experience.
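The per-frame decision described above can be sketched as follows; the 350 mm default threshold and the prompt strings are illustrative assumptions:

```python
def eye_protection_step(distance_mm, registered, threshold_mm=350.0):
    """Return the prompts to show for one captured image; an empty list means
    the distance is appropriate and the next captured image is processed."""
    prompts = []
    if distance_mm < threshold_mm:
        prompts.append("Too close to the display screen, please adjust your distance")
        if not registered:
            # the distance was estimated with the preset inner canthus distance,
            # so also invite the unregistered user to register
            prompts.append("Unregistered user: please register for accurate prompts")
    return prompts
```

A caller would loop over captured images, compute the distance as above, and display or voice-play whatever this returns.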
In the embodiment of the application, when a user uses the terminal device, the terminal device can acquire a shot image through the front camera, determine a target face in the shot image, and obtain the inner canthus distance of the target face. Then, the distance between the target face and the terminal device can be determined according to the inner canthus distance and the shot image, and when the distance is smaller than the preset threshold, an eye protection prompt is given. Since the distance between the user's eyes and the terminal device can be accurately determined using only the front camera of the terminal device and the user's inner canthus distance, accurate eye protection prompts can be given to protect the user's eyesight without increasing the hardware cost of the terminal device, so the method can be widely applied to existing terminal devices and improves user experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 5 is a block diagram of an eye protection prompting device according to an embodiment of the present application, which corresponds to the eye protection prompting method described in the above embodiments, and only the portions related to the embodiment of the present application are shown for convenience of description.
Referring to fig. 5, the eye protection prompting device is applied to a terminal device, and the eye protection prompting device may include:
a shot image obtaining module 501, configured to obtain a shot image through a front camera of the terminal device;
a target face determining module 502, configured to determine a target face in the captured image, and obtain the inner canthus distance of the target face;
a distance determining module 503, configured to determine a distance between the target face and the terminal device according to the inner canthus distance and the captured image;
and an eye protection prompting module 504, configured to perform eye protection prompting when the distance is smaller than a preset threshold.
For example, the distance determining module 503 may include:
the position determining unit is used for determining a first position of a left inner eye corner point of the target face and a second position of a right inner eye corner point of the target face according to the shot image;
the focal length acquisition unit is used for acquiring the focal length of the front camera;
a distance determining unit, configured to determine a distance between the target face and the terminal device according to the inner canthus distance, the first position, the second position, and the focal length.
Specifically, the distance determining unit is specifically configured to determine the distance between the target face and the terminal device according to the following formula:
Distance = (f × S) / √((x_l − x_r)² · dx² + (y_l − y_r)² · dy²)
wherein Distance is the distance between the target face and the terminal device, f is the focal length, S is the inner canthus distance, (x_l, y_l) is the first position, (x_r, y_r) is the second position, dx is the physical size of one pixel of the captured image in the x-axis direction, and dy is the physical size of one pixel of the captured image in the y-axis direction.
In a possible implementation manner, the target face determining module 502 may include:
the face detection unit is used for carrying out face detection on the shot image to obtain a face contained in the shot image;
and the face determining unit is used for determining the face with the largest face area as the target face when the shot image contains a plurality of faces.
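The largest-face selection performed by the face determining unit can be sketched as a one-liner over the detected bounding boxes:

```python
def pick_target_face(face_boxes):
    """face_boxes: list of (x, y, w, h) detections in the shot image.
    Returns the box with the largest face area as the target face,
    or None when the shot image contains no face."""
    if not face_boxes:
        return None
    return max(face_boxes, key=lambda box: box[2] * box[3])
```

When the shot image contains a single face, the maximum is simply that face, so the same code covers both cases.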
Illustratively, the target face determining module 502 may further include:
the characteristic extraction unit is used for extracting the characteristics of the target face to obtain a first characteristic vector of the target face;
a first inner canthus distance determining unit, configured to obtain a second feature vector matched with the first feature vector, and determine the inner canthus distance corresponding to the second feature vector as the inner canthus distance of the target face;
and a second inner canthus distance determining unit, configured to determine a preset inner canthus distance as the inner canthus distance of the target face when there is no second feature vector matched with the first feature vector.
It should be understood that the device may further include:
a registered image acquisition module for acquiring a registered image and the inner canthus distance corresponding to the registered image;
the feature extraction module is used for extracting features of a preset face in the registered image to obtain a third feature vector of the preset face;
and an association storage module, configured to store the third feature vector in association with the inner canthus distance corresponding to the registration image, where the third feature vector includes the second feature vector.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: at least one processor 60 (only one shown in fig. 6), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60, wherein the processor 60 executes the computer program 62 to implement the steps of any of the various eye-protection alert method embodiments described above.
The terminal device 6 may be a mobile phone, a notebook, a palm computer, or other terminal devices. The terminal device may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 6 is only an example of the terminal device 6, and does not constitute a limitation to the terminal device 6, and may include more or less components than those shown, or combine some components, or different components, such as an input/output device, a network access device, and the like.
The processor 60 may be a Central Processing Unit (CPU), and the processor 60 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may in some embodiments be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. In other embodiments, the memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash memory card (flash card), and the like, which are equipped on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 61 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps in the above-mentioned method embodiments may be implemented.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable storage medium may include at least: any entity or device capable of carrying computer program code to the apparatus/terminal device, recording medium, computer memory, read-only memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, and software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable storage media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and proprietary practices.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. The eye protection prompting method is applied to terminal equipment and is characterized by comprising the following steps:
acquiring a shot image through a front camera of the terminal equipment;
determining a target face in the shot image, and acquiring the inner canthus distance of the target face;
determining the distance between the target face and the terminal equipment according to the inner canthus distance and the shot image;
and when the distance is smaller than a preset threshold value, eye protection prompting is carried out.
2. The eye protection prompting method according to claim 1, wherein the determining the distance between the target face and the terminal equipment according to the inner canthus distance and the shot image comprises:
determining a first position of a left inner eye corner point of the target face and a second position of a right inner eye corner point of the target face according to the shot image;
acquiring the focal length of the front camera;
and determining the distance between the target face and the terminal equipment according to the inner canthus distance, the first position, the second position and the focal length.
3. The eye protection prompting method according to claim 2, wherein the determining the distance between the target face and the terminal equipment according to the inner canthus distance, the first position, the second position and the focal length comprises:
determining the distance between the target face and the terminal equipment according to the following formula:
Distance = (f × S) / √((x_l − x_r)² · dx² + (y_l − y_r)² · dy²)
wherein Distance is the distance between the target face and the terminal device, f is the focal length, S is the inner canthus distance, (x_l, y_l) is the first position, (x_r, y_r) is the second position, dx is the physical size of one pixel of the captured image in the x-axis direction, and dy is the physical size of one pixel of the captured image in the y-axis direction.
4. The eye-protection prompting method of claim 1, wherein the determining the target face in the captured image comprises:
carrying out face detection on the shot image to obtain a face contained in the shot image;
and when the shot images contain a plurality of faces, determining the face with the largest face area as the target face.
5. The eye protection prompting method according to claim 1, wherein the obtaining the inner canthus distance of the target face comprises:
extracting the features of the target face to obtain a first feature vector of the target face;
and acquiring a second feature vector matched with the first feature vector, and determining the inner canthus distance corresponding to the second feature vector as the inner canthus distance of the target face.
6. The eye protection prompting method according to claim 5, further comprising:
determining a preset inner canthus distance as the inner canthus distance of the target face when there is no second feature vector matched with the first feature vector.
7. The eye-protection prompting method of claim 5 or 6, wherein before the obtaining the second feature vector matched with the first feature vector, the method comprises:
acquiring a registration image and an inner canthus distance corresponding to the registration image;
extracting the characteristics of a preset face in the registered image to obtain a third characteristic vector of the preset face;
and storing the third feature vector in association with the inner canthus distance corresponding to the registration image, wherein the third feature vector comprises the second feature vector.
8. An eye protection prompting device, applied to a terminal device, wherein the eye protection prompting device comprises:
the shot image acquisition module is used for acquiring a shot image through a front camera of the terminal equipment;
the target face determining module is used for determining a target face in the shot image and acquiring the inner canthus distance of the target face;
the distance determining module is used for determining the distance between the target human face and the terminal equipment according to the inner canthus distance and the shot image;
and the eye protection prompting module is used for carrying out eye protection prompting when the distance is smaller than a preset threshold value.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the eye-protection alert method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the eye-protection alert method according to any one of claims 1 to 7.
CN202110150780.8A 2021-02-03 2021-02-03 Eye protection prompting method and device, terminal equipment and computer readable storage medium Pending CN113029018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110150780.8A CN113029018A (en) 2021-02-03 2021-02-03 Eye protection prompting method and device, terminal equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113029018A true CN113029018A (en) 2021-06-25

Family

ID=76459843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110150780.8A Pending CN113029018A (en) 2021-02-03 2021-02-03 Eye protection prompting method and device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113029018A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2812743C1 (en) * 2023-04-24 2024-02-01 Андрей Анатольевич Тарасов Method for determining safe distance from mobile phone screen to user's eyes

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101192270A (en) * 2007-11-08 2008-06-04 北京中星微电子有限公司 Display, device and method for accomplishing sight protection
US20120127325A1 (en) * 2010-11-23 2012-05-24 Inventec Corporation Web Camera Device and Operating Method thereof
CN103063193A (en) * 2012-11-30 2013-04-24 青岛海信电器股份有限公司 Method and device for ranging by camera and television
CN103793719A (en) * 2014-01-26 2014-05-14 深圳大学 Monocular distance-measuring method and system based on human eye positioning
CN104076925A (en) * 2014-06-30 2014-10-01 天马微电子股份有限公司 Method for reminding user of distance between eyes and screen
CN105759971A (en) * 2016-03-08 2016-07-13 珠海全志科技股份有限公司 Method and system for automatically prompting distance from human eyes to screen
CN109116655A (en) * 2018-08-02 2019-01-01 Oppo广东移动通信有限公司 Apparatus control method, device, storage medium and electronic equipment
CN109191802A (en) * 2018-07-20 2019-01-11 北京旷视科技有限公司 Method, apparatus, system and storage medium for sight protectio prompt
CN109492590A (en) * 2018-11-13 2019-03-19 广东小天才科技有限公司 A kind of distance detection method, distance detection device and terminal device
CN109683703A (en) * 2018-10-30 2019-04-26 努比亚技术有限公司 A kind of display control method, terminal and computer readable storage medium
CN109816718A (en) * 2017-11-22 2019-05-28 腾讯科技(深圳)有限公司 A kind of method, apparatus and storage medium of play cuing
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Sight protectio method, apparatus, storage medium and mobile terminal
CN110162232A (en) * 2018-02-11 2019-08-23 中国移动通信集团终端有限公司 Screen display method, device, equipment and storage medium with display screen
CN110784600A (en) * 2019-11-28 2020-02-11 西南民族大学 Device and method for detecting distance between human eyes and mobile phone
CN110913074A (en) * 2019-11-28 2020-03-24 北京小米移动软件有限公司 Sight distance adjusting method and device, mobile equipment and storage medium
CN111679736A (en) * 2020-05-22 2020-09-18 中国长城科技集团股份有限公司 Method and device for adjusting screen brightness of electronic equipment and terminal equipment
CN112070021A (en) * 2020-09-09 2020-12-11 深圳数联天下智能科技有限公司 Distance measurement method, distance measurement system, distance measurement equipment and storage medium based on face detection

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101192270A (en) * 2007-11-08 2008-06-04 北京中星微电子有限公司 Display, device and method for accomplishing sight protection
US20120127325A1 (en) * 2010-11-23 2012-05-24 Inventec Corporation Web Camera Device and Operating Method thereof
CN103063193A (en) * 2012-11-30 2013-04-24 青岛海信电器股份有限公司 Method and device for ranging by camera and television
CN103793719A (en) * 2014-01-26 2014-05-14 深圳大学 Monocular distance-measuring method and system based on human eye positioning
CN104076925A (en) * 2014-06-30 2014-10-01 天马微电子股份有限公司 Method for reminding user of distance between eyes and screen
CN105759971A (en) * 2016-03-08 2016-07-13 珠海全志科技股份有限公司 Method and system for automatically prompting distance from human eyes to screen
CN109816718A (en) * 2017-11-22 2019-05-28 腾讯科技(深圳)有限公司 Playback prompting method, apparatus and storage medium
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Eyesight protection method, apparatus, storage medium and mobile terminal
CN110162232A (en) * 2018-02-11 2019-08-23 中国移动通信集团终端有限公司 Screen display method, apparatus and device with display screen, and storage medium
CN109191802A (en) * 2018-07-20 2019-01-11 北京旷视科技有限公司 Method, apparatus, system and storage medium for eyesight protection prompting
CN109116655A (en) * 2018-08-02 2019-01-01 Oppo广东移动通信有限公司 Device control method and apparatus, storage medium and electronic equipment
CN109683703A (en) * 2018-10-30 2019-04-26 努比亚技术有限公司 Display control method, terminal and computer-readable storage medium
CN109492590A (en) * 2018-11-13 2019-03-19 广东小天才科技有限公司 Distance detection method, distance detection device and terminal device
CN110784600A (en) * 2019-11-28 2020-02-11 西南民族大学 Device and method for detecting distance between human eyes and mobile phone
CN110913074A (en) * 2019-11-28 2020-03-24 北京小米移动软件有限公司 Sight distance adjusting method and device, mobile equipment and storage medium
CN111679736A (en) * 2020-05-22 2020-09-18 中国长城科技集团股份有限公司 Method and device for adjusting screen brightness of electronic equipment and terminal equipment
CN112070021A (en) * 2020-09-09 2020-12-11 深圳数联天下智能科技有限公司 Face detection-based distance measurement method, system, device and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2812743C1 (en) * 2023-04-24 2024-02-01 Андрей Анатольевич Тарасов Method for determining safe distance from mobile phone screen to user's eyes

Similar Documents

Publication Publication Date Title
US8315443B2 (en) Viewpoint detector based on skin color area and face area
CN106250894B (en) Card information identification method and device
EP2991027B1 (en) Image processing program, image processing method and information terminal
CN113408403A (en) Living body detection method, living body detection device, and computer-readable storage medium
CN108965835B (en) Image processing method, image processing device and terminal equipment
CN107944367B (en) Face key point detection method and device
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN113126937B (en) Display terminal adjusting method and display terminal
CN110706283B (en) Calibration method and device for sight tracking, mobile terminal and storage medium
CN108605087A (en) Photographing method and photographing device of terminal, and terminal
CN110708463B (en) Focusing method, focusing device, storage medium and electronic equipment
CN112330715A (en) Tracking method, tracking device, terminal equipment and readable storage medium
CN112348686B (en) Claim settlement picture acquisition method and device and communication equipment
CN116582653B (en) Intelligent video monitoring method and system based on multi-camera data fusion
CN112291473B (en) Focusing method and device and electronic equipment
CN113012407A (en) Machine vision-based eye-to-screen distance prompting system for myopia prevention
CN114910052A (en) Camera-based distance measurement method, control method and device and electronic equipment
US11017557B2 (en) Detection method and device thereof
CN108604128B (en) Processing method and mobile device
CN112200002A (en) Body temperature measuring method and device, terminal equipment and storage medium
CN109934168B (en) Face image mapping method and device
US20220360707A1 (en) Photographing method, photographing device, storage medium and electronic device
CN113029018A (en) Eye protection prompting method and device, terminal equipment and computer readable storage medium
CN111402391A (en) User face image display method, display device and corresponding storage medium
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-06-25