CN111915667A - Sight line identification method, sight line identification device, terminal equipment and readable storage medium - Google Patents


Info

Publication number
CN111915667A
CN111915667A (application CN202010730277.5A)
Authority
CN
China
Prior art keywords
shape
vision
user
reference object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010730277.5A
Other languages
Chinese (zh)
Inventor
周鲁平 (Zhou Luping)
胡晓华 (Hu Xiaohua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010730277.5A priority Critical patent/CN111915667A/en
Publication of CN111915667A publication Critical patent/CN111915667A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032Devices for presenting test symbols or characters, e.g. test chart projectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/60Rotation of a whole image or part thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The application is applicable to the technical field of vision testing, and provides a sight line identification method, an identification device, a terminal device and a readable storage medium. The method comprises the following steps: acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision test user; identifying the shape of the preset reference object in the first image to obtain a first shape; and if the first shape is different from a second shape, determining that the sight line of the vision test user is not perpendicular to the optotype, wherein the second shape is the shape of the preset reference object when the sight line of the vision test user is perpendicular to the optotype. The application can solve the problem that current vision testing equipment cannot identify whether a vision test user's line of sight is perpendicular to the optotype.

Description

Sight line identification method, sight line identification device, terminal equipment and readable storage medium
Technical Field
The present application belongs to the field of vision detection, and in particular, to a sight line identification method, an identification apparatus, a terminal device, and a readable storage medium.
Background
With the development of science and technology, electronic products are used ever more widely in daily life. However, prolonged use of electronic products and unhealthy eye-use habits have led to a growing number of people with myopia. It is therefore necessary to test vision periodically, so that vision problems can be found and corrected in time and myopia prevented.
During vision testing, the user's line of sight generally needs to be perpendicular to the optotype for the obtained result to be accurate. In practice, however, the vision test user does not necessarily stand directly in front of the testing device, and current vision testing devices are generally fixed to a wall surface, so the line of sight may not be perpendicular to the optotype. When the line of sight is not perpendicular to the optotype, the testing device cannot recognize this, and the obtained vision test result is inaccurate.
Current vision testing equipment therefore has the problem that it cannot identify whether the vision test user's line of sight is perpendicular to the optotype.
Disclosure of Invention
The embodiment of the application provides a sight line identification method, a sight line identification device, a terminal device and a readable storage medium, and can solve the problem that the existing vision detection equipment cannot identify whether the sight line of a vision detection user is perpendicular to a sighting target or not.
In a first aspect, an embodiment of the present application provides a line of sight identification method, including:
acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user;
identifying the shape of the preset reference object in the first image to obtain a first shape;
and if the first shape is different from a second shape, determining that the sight line of the vision test user is not perpendicular to the visual target, wherein the second shape is the shape of the preset reference object when the sight line of the vision test user is perpendicular to the visual target.
In a second aspect, an embodiment of the present application provides a line of sight recognition apparatus, including:
the first image acquisition module is used for acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user;
the identification module is used for identifying the shape of the preset reference object in the first image to obtain a first shape;
and a determining module, configured to determine that the sight line of the vision testing user is not perpendicular to the visual target if the first shape is different from a second shape, where the second shape is a shape of the preset reference object when the sight line of the vision testing user is perpendicular to the visual target.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program implements the steps of the method according to the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the line-of-sight identification method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
in view of the above, the present application provides a sight line identification method which first acquires a first image, where the first image includes a preset reference object, the preset reference object being an article carried by the vision test user. The shape of the preset reference object in the first image is then recognized to obtain a first shape. When the user's line of sight is not perpendicular to the optotype, the imaged shape of the preset reference object carried by the user changes. For example, when the preset reference object is actually circular and the line of sight is not perpendicular to the optotype, the imaged shape of the reference object is elliptical. Therefore, if the first shape differs from the second shape, where the second shape is the shape of the preset reference object when the line of sight is perpendicular to the optotype, it can be determined that the user's line of sight is not perpendicular to the optotype. Thus, in the present application, whether the line of sight of the vision test user is perpendicular to the optotype can be judged through the preset reference object carried by the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flowchart of a line-of-sight recognition method according to an embodiment of the present application;
FIG. 2 is a top view of a predetermined reference object and a display screen displaying a visual target according to an embodiment of the present application;
FIG. 3 is a front view of a default reference object provided in accordance with an embodiment of the present application;
fig. 4 is a schematic diagram of a three-dimensional optotype provided by an embodiment of the present application before and after rotation;
FIG. 5 is a diagram illustrating a relationship between a test distance and a vision correction value according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a method for determining a type of test eye provided by an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a method for determining an eye-shielding state according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a line of sight recognition apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The sight line identification method provided by the embodiment of the application can be applied to terminal devices such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, Personal Digital Assistants (PDAs), and the like, and the embodiment of the application does not limit the specific types of the terminal devices.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
A line of sight recognition method provided in an embodiment of the present application is described below, with reference to fig. 1, where the method includes:
step S101, a first image is obtained, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user.
In step S101, the first image is an image captured before the vision test user begins the test, or an image captured during the test. It should be understood that the first image may be acquired by the terminal device's own camera when it performs the vision testing operation, or acquired by another device and then sent to the terminal device. The user may select the source of the first image according to actual requirements, which is not specifically limited in the present application.
The preset reference object may be a circular icon or a square icon, and a pattern is required on the icon so that it can be recognized. The user may select the preset reference object according to actual requirements, and the present application is not specifically limited herein. When the vision test user carries the preset reference object, it may be attached to an eye occluder or to another object worn by the user. The user may choose where to attach the preset reference object according to actual requirements, and the present application is not specifically limited herein.
And S102, recognizing the shape of a preset reference object in the first image to obtain a first shape.
In step S102, methods for identifying the shape of the preset reference object in the first image include the Fourier descriptor method, the wavelet moment method, deep learning algorithms, and the like. The user may select the method for recognizing the shape of the preset reference object according to actual requirements, which is not specifically limited herein.
Step S103, if the first shape is different from the second shape, the sight line of the vision detection user is judged not to be perpendicular to the sighting target, and the second shape is the shape of a preset reference object when the sight line of the vision detection user is perpendicular to the sighting target.
In step S103, when the user's line of sight is not perpendicular to the optotype, the imaged shape of the preset reference object carried by the user changes. The shape of the preset reference object when the line of sight is not perpendicular to the optotype can therefore be compared with its shape when the line of sight is perpendicular, i.e., the first shape is compared with the second shape. When the first shape differs from the second shape, the line of sight is not perpendicular to the optotype.
In some embodiments, if the first shape is different from the second shape, determining that the gaze of the vision testing user is not perpendicular to the optotype includes: when the second shape is a circle, if the first shape is an ellipse, the sight line of the vision detection user is judged not to be perpendicular to the sighting target.
In this embodiment, the actual shape of the preset reference object is a circle, that is, the second shape is a circle, so that when the sight line of the vision detection user is not perpendicular to the visual target, the imaged shape of the preset reference object is an ellipse, and at this time, the first shape is an ellipse. Therefore, when the second shape is a circle, if the first shape is an ellipse, it is determined that the sight line of the vision test user is not perpendicular to the optotype.
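The circle-versus-ellipse decision described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes an upstream shape-fitting step has already produced the minor- and major-axis lengths of the imaged reference object, and the tolerance `tol` for treating a near-circle as a circle is an assumed parameter.

```python
def is_gaze_perpendicular(minor_len: float, major_len: float, tol: float = 0.02) -> bool:
    """Decide whether the line of sight is perpendicular to the optotype
    when the true (second) shape is a circle: the imaged first shape is
    treated as a circle when its axis lengths differ by at most `tol`
    (relative), otherwise as an ellipse, i.e. gaze not perpendicular.

    `tol` is an assumed parameter of this sketch, not from the patent."""
    if major_len <= 0 or minor_len <= 0:
        raise ValueError("axis lengths must be positive")
    return (major_len - minor_len) / major_len <= tol

# Near-circular image: line of sight treated as perpendicular.
assert is_gaze_perpendicular(99.0, 100.0)
# Clearly elliptical image: line of sight not perpendicular.
assert not is_gaze_perpendicular(80.0, 100.0)
```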
In some embodiments, if the first shape is the same as the second shape, the vision testing user's line of sight is determined to be perpendicular to the optotype. In this embodiment, the second shape is a shape of a preset reference object when the sight line of the vision test user is perpendicular to the optotype. Therefore, when the first shape is the same as the second shape, the vision detecting device can determine that the sight line of the vision detecting user is perpendicular to the optotype.
In some embodiments, after determining that the user's line of sight is not perpendicular to the optotype, the method further comprises: calculating the similarity between the first image and each preset image, where each preset image is a pre-stored image formed by imaging the preset reference object after rotating it in a given direction; determining the rotation direction corresponding to the preset image with the highest similarity as a first rotation direction; and determining a first rotation angle from the first shape, and performing an angle adjustment operation on the optotype according to the first rotation angle and the first rotation direction so that the optotype is perpendicular to the user's line of sight.
In this embodiment, the preset images are obtained in advance by rotating the preset reference object in each candidate direction and imaging it, and each preset image is associated with its rotation direction. After the first image is acquired, its similarity to each preset image is calculated; the rotation direction corresponding to the preset image with the highest similarity is the rotation direction of the preset reference object, i.e., the first rotation direction.
The method for calculating the similarity between the first image and each preset image can be selected or designed according to actual requirements; for example, a structural similarity (SSIM) measure, a cosine similarity measure, or a histogram-based method may be adopted. The present application is not specifically limited herein.
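As an illustration of the histogram-based option mentioned above, the following toy sketch compares 16-bin grayscale histograms by cosine similarity and picks the rotation direction of the best-matching preset image. The flat-list image model and the direction keys are assumptions of this illustration; a real implementation would more likely compare full images with SSIM or a learned descriptor.

```python
import math

def histogram(pixels, bins: int = 16):
    """16-bin grayscale histogram of an iterable of 0-255 pixel values."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    return h

def cosine_similarity(h1, h2) -> float:
    """Cosine similarity between two histograms (0.0 if either is empty)."""
    dot = sum(a * b for a, b in zip(h1, h2))
    norm = math.sqrt(sum(a * a for a in h1)) * math.sqrt(sum(b * b for b in h2))
    return dot / norm if norm else 0.0

def best_rotation_direction(first_image, preset_images):
    """Return the rotation direction whose preset image best matches the
    first image, i.e. the patent's first rotation direction.
    `preset_images` maps direction label -> iterable of pixel values."""
    h0 = histogram(first_image)
    return max(preset_images,
               key=lambda d: cosine_similarity(h0, histogram(preset_images[d])))
```

For instance, a first image whose pixel distribution matches the preset image stored for a leftward rotation would yield that direction as the first rotation direction.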
Therefore, when the first shape is different from the second shape, a first rotation direction of the preset reference object is obtained, a first rotation angle is determined according to the first shape, and then an angle adjustment operation is performed on the sighting target according to the first rotation angle and the first rotation direction, so that the sighting target is perpendicular to the sight line of the vision detection user.
In some possible implementations, when the first shape is an ellipse, determining the first rotation angle from the first shape includes: acquiring a first length of a short axis and a second length of a long axis in an ellipse; and calculating an inverse cosine value according to the first length and the second length to obtain a first rotation angle.
In this embodiment, the process of calculating an inverse cosine from the first length and the second length to obtain the first rotation angle is described with reference to fig. 2 and 3. Fig. 2 is a top view of the preset reference object and the display screen 201 on which the optotype is displayed. The actual diameter of the preset reference object has length d. When the vision test user stands at position 202, i.e., when the preset reference object is at position 202, the user's line of sight is perpendicular to the optotype on the display screen 201. When the user moves to position 203, there is a horizontal offset angle ∠1 between the line of sight and the optotype on the display screen 201; that is, the first rotation angle is ∠1.
According to geometric principles, ∠1 = ∠2 = ∠3 = ∠4 = ∠5 = ∠6, where ∠6 is the angle between the plane of the preset reference object and the plane of the display screen displaying the optotype. Since the plane of the preset reference object at position 202 is parallel to the plane of the display screen, ∠6 is also the angle between the plane of the preset reference object at position 203 and its plane at position 202, as shown in fig. 3. 301 is a front view of the preset reference object at position 203 (it should be understood that the reference object in fig. 3 is only used to illustrate solving for the rotation angle; in practice, a pattern should be attached to the reference object so that it can be identified), and 302 is a front view of the preset reference object at position 202. The cosine of ∠6 is the ratio of the projection length c of the actual radius 3013 onto the plane of 302 to the length d/2 of the actual radius 3013.
After the vision testing device images the preset reference object at position 203, the actual diameter 204 (the diameter containing the actual radius 3013) becomes 205, which is the minor axis of the ellipse; its first length is 2c (the plane of the preset reference object at position 202 is parallel to the plane of the display screen, i.e., the plane of 302 is parallel to the plane of the display screen). The actual diameter 3014 of the preset reference object yields the major axis of the ellipse after imaging. Since the actual diameter 3014 is perpendicular to the actual diameter 204 and parallel to the plane of the display screen, its length after imaging is still d, so the second length of the major axis is d. Thus the resulting ellipse has a minor axis of first length 2c and a major axis of second length d, and ∠6 is obtained from the inverse cosine of their ratio:
∠6=arccos(2c/d)
Since ∠6 = ∠1, the magnitude of the first rotation angle ∠1 is thereby obtained.
After the first rotation angle ∠1 is obtained, the angle adjustment operation is performed on the optotype according to the first rotation angle and the first rotation direction.
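The arccosine relation derived above can be checked numerically with a small sketch. The function name and the choice of degree units are conventions of this illustration, not the patent's:

```python
import math

def first_rotation_angle(minor_len: float, major_len: float) -> float:
    """Return the offset angle (in degrees) between the user's line of
    sight and the optotype, given the minor-axis length 2c and the
    major-axis length d of the imaged elliptical reference object.

    Implements the patent's relation  angle = arccos(2c / d)."""
    if not 0 < minor_len <= major_len:
        raise ValueError("expect 0 < minor axis <= major axis")
    return math.degrees(math.acos(minor_len / major_len))

# A circular reference object of diameter d = 10 imaged with minor axis
# 2c = 5 corresponds to an offset of about 60 degrees (cos 60 deg = 0.5);
# equal axes mean the gaze is already perpendicular (0 degrees).
angle = first_rotation_angle(5.0, 10.0)
```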
In other possible implementations, performing the angle adjustment operation on the optotype according to the first rotation angle and the first rotation direction includes: controlling the display screen displaying the optotype to rotate by the first rotation angle in the first rotation direction. In this embodiment, the vision testing device rotates the display screen displaying the optotype by the first rotation angle in the first rotation direction, so that the optotype becomes perpendicular to the user's line of sight and the subsequently obtained vision test result is more accurate.
In other possible implementations, the optotype is a three-dimensional optotype, and accordingly performing the angle adjustment operation on the optotype according to the first rotation angle and the first rotation direction includes: controlling the three-dimensional optotype to rotate by the first rotation angle in the first rotation direction. Because the optotype is three-dimensional, the vision testing device may rotate the optotype itself directly, without controlling the display screen to rotate. As shown in fig. 4, 401 denotes the optotype displayed before rotation, and 402 denotes the optotype displayed after rotation. Rotating the three-dimensional optotype avoids rotating the display screen, making it more convenient to bring the optotype perpendicular to the user's line of sight.
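To make the three-dimensional rotation concrete, here is a minimal sketch. It assumes the optotype is modeled as a set of 3-D points and that the horizontal offset is corrected by rotating about the vertical (y) axis; both are assumptions of this illustration rather than details given in the patent:

```python
import math

def rotate_y(points, angle_deg: float):
    """Rotate a list of (x, y, z) points by angle_deg about the vertical
    (y) axis -- a toy model of turning a three-dimensional optotype
    toward the user by the first rotation angle."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

# Rotating the point (1, 0, 0) by 90 degrees carries it to (0, 0, -1).
rotated = rotate_y([(1.0, 0.0, 0.0)], 90.0)
```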
In other embodiments, after determining that the user's line of sight is not perpendicular to the optotype, the method further comprises: performing a prompt operation to prompt the vision test user to stand in the vision testing area, so that the line of sight becomes perpendicular to the optotype. Prompting modes include, but are not limited to, a voice prompt, prompt information displayed on the display screen, and the like. The user may select the prompting mode according to actual requirements, and the present application is not specifically limited herein.
To sum up, the application provides a sight line identification method which acquires a first image, where the first image includes a preset reference object, the preset reference object being an article carried by the vision test user. The shape of the preset reference object in the first image is then recognized to obtain a first shape. When the user's line of sight is not perpendicular to the optotype, the imaged shape of the preset reference object carried by the user changes. For example, when the preset reference object is actually circular and the line of sight is not perpendicular to the optotype, the imaged shape of the reference object is elliptical. Therefore, if the first shape differs from the second shape, where the second shape is the shape of the preset reference object when the line of sight is perpendicular to the optotype, it can be determined that the user's line of sight is not perpendicular to the optotype. Thus, in the present application, whether the line of sight of the vision test user is perpendicular to the optotype can be judged through the preset reference object carried by the user.
In some embodiments, after the user's line of sight is perpendicular to the optotype, the vision testing device obtains optotype feedback information and determines a target vision test value based on the optotype feedback information and the optotype information, where the optotype feedback information represents the recognition result fed back by the user for the opening orientation of the optotype, and the optotype information comprises the opening orientation information and the vision value information corresponding to the optotype.
In the present embodiment, after the sight line of the vision test user is perpendicular to the optotype, the vision test user starts the vision test. The vision test user identifies the opening orientation of the displayed optotype and feeds back the identification result to the vision test device in a first preset manner. After receiving the identification result, the vision test device obtains the optotype feedback information from it, and then judges, according to the opening orientation information of the displayed optotype, whether the vision test user correctly identified the displayed optotype. For example, if the displayed optotype opens upward, it is judged whether the judgment result for the displayed optotype in the optotype feedback information is also upward. If so, it is determined that the vision test user judged the displayed optotype correctly, and therefore that the vision test user can see the displayed optotype clearly. If the vision test user can clearly see the displayed optotype, the vision test device displays the optotype of the next higher level. For example, if the vision test user can clearly see the optotype corresponding to a vision value of 4.9, the optotype corresponding to a vision value of 5.0 is displayed.
It should be noted that, in order to determine more accurately whether the vision test user can clearly see the optotype at a given level, a plurality of optotypes at the same level may be shown to the vision test user for judgment and the number of correct judgments accumulated; if the number of correct judgments equals a first threshold, it is determined that the vision test user can clearly see the optotype at that level. Furthermore, in order to obtain a more accurate vision test result, the target vision test value is output only when the number of correct judgments on the optotype of a level equals the first threshold and the number of erroneous judgments on the optotype of the adjacent higher level equals a second threshold. For example, when the vision test user correctly judges the optotype corresponding to vision 4.8 four times and incorrectly judges the optotype corresponding to vision 4.9 three times, the test result vision 4.8 is output. It should be understood that the first threshold and the second threshold may each be a fixed value or a range of values. The user can set them according to actual needs, and the application is not specifically limited herein.
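The output condition can be sketched as a small predicate; the default thresholds 4 and 3 are taken from the 4.8/4.9 example and are configurable, not fixed by the application:

```python
def output_vision_value(correct_at_level: int, wrong_at_next_level: int,
                        first_threshold: int = 4,
                        second_threshold: int = 3) -> bool:
    """Decide whether the current level's vision value is the final result.

    The target vision value is output only when the user judged optotypes
    at this level correctly `first_threshold` times and misjudged the
    optotypes of the adjacent higher level `second_threshold` times.
    """
    return (correct_at_level == first_threshold
            and wrong_at_next_level == second_threshold)
```

With the example above, `output_vision_value(4, 3)` holds at level 4.8, so 4.8 is output as the test result.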
In some possible implementation manners, the step of feeding back, by the vision testing user, the recognition result of the vision testing user on the opening orientation of the optotype to the vision testing device through the first preset manner may include: and the vision detection user sends the recognition result of the vision detection user on the opening orientation of the sighting target to the vision detection equipment through the eye shade. For example, 4 keys of up, down, left and right are arranged on the eye shade, when the opening of the displayed sighting target faces upwards, the vision detection user sends the recognition result of the vision detection user on the opening of the sighting target to the terminal equipment by clicking the up key, and when the opening of the displayed sighting target faces leftwards, the vision detection user sends the recognition result of the vision detection user on the opening of the sighting target to the terminal equipment by clicking the left key.
In other possible implementation manners, the step of feeding back, by the vision test user, the recognition result of the opening orientation of the optotype to the vision test device in the first preset manner may include: the vision test user indicates the recognition result of the opening orientation of the optotype by the orientation of a finger. The vision test device then acquires an image of the finger of the vision test user and obtains the optotype feedback information from that image. During the test, the vision test user uses a finger to indicate the opening direction of the optotype; the vision test device captures the finger image with the camera and identifies the direction the finger points in, thereby obtaining the optotype feedback information. The manner of acquiring the vision test user's recognition result of the opening orientation of the optotype can be set according to actual requirements, and the application is not specifically limited herein.
It should be noted that the number of optotypes displayed each time is 1, and if no optotype feedback information is acquired within a preset time, the next optotype is shown. If the number of consecutive optotypes for which no optotype feedback information is acquired reaches a preset threshold, the test is stopped.
In some embodiments, before starting the vision test, target identity information of the vision test user may be further acquired, a historical vision test value corresponding to the target identity information is acquired, a first level of the displayed optotype is determined according to the historical vision test value, and the level of the displayed optotype is adjusted to the first level.
In this embodiment, the target identity information may be obtained by performing face recognition on a face image collected by a camera, or may be obtained by collecting fingerprint information of a vision detection user, or may be obtained by collecting identity card information of the vision detection user. It should be appreciated that if the vision testing user is a student, the target identity information may also be obtained by collecting campus card information. In the present application, the obtaining manner of the target identity information may be selected according to an actual situation, and the present application is not specifically limited herein.
It should be noted that, when target identity information is obtained by performing face recognition on an image acquired by a camera, if a plurality of faces are obtained by recognition, a non-detection user is prompted to leave a shooting range. Meanwhile, the vision detection equipment can periodically collect images through the camera, carries out face recognition on the periodically collected images, and obtains the historical vision detection value corresponding to the target identity information when only one face is obtained through recognition.
And after the target identity information is acquired, acquiring a historical vision detection value corresponding to the target identity information. Acquiring historical vision detection values corresponding to the target identity information comprises the following steps: matching the target identity information with identity information in a preset identity information database; and if the preset identity information database has identity information matched with the target identity information, acquiring a historical vision detection value corresponding to the target identity information.
In this embodiment, the identity of the vision detection user may be verified. The target identity information is matched with identity information in a preset identity information database, if the identity information matched with the target identity information exists in the preset identity information database, the verification is passed, and then the historical vision detection value corresponding to the target identity information is obtained.
It should be understood that the identity information in the preset identity information database may be generated after the user registers, or may be generated after the administrator of the school uniformly inputs the identity information of the students. The generation mode of the identity information in the preset identity information database can be set according to actual conditions, and the method is not specifically limited herein.
In still other embodiments, if the eye shade is stored on the vision test device, the identity of the vision test user may also need to be verified when the vision test user takes the eye shade from the vision test device. If the identity verification passes, the vision test user can take out the eye shade. After the eye shade is taken out, when the target identity information is acquired in the preset manner, the identity of the vision test user can be verified again, so as to confirm the identity of the vision test user and prevent the identity of the user who took out the eye shade from being inconsistent with the identity of the vision test user. It should be noted that the two verifications may use the same or different verification modes. The user may select according to the actual situation, and the application is not specifically limited herein.
And after the historical vision detection value is obtained, determining a first grade of the displayed visual target according to the historical vision detection value and adjusting the grade of the displayed visual target to the first grade.
The visual target refers to a preset detection pattern. The specific shape of the sighting mark can be set according to actual conditions. For example, the visual target may be the letter E on the international standard visual acuity chart, or the visual target may be a C-shaped ring on the blue annular visual acuity chart. And after the target identity information is obtained, obtaining a historical vision detection value corresponding to the target identity information, then determining the grade of the first displayed sighting target according to the historical vision detection value, and displaying the sighting target corresponding to the grade.
In other embodiments, if the target identity information does not have a corresponding historical vision detection value, the grade of the first displayed optotype is determined according to a preset grade and the optotype corresponding to the grade is displayed.
In other embodiments, the vision testing device may perform an eye-covering prompting operation to prompt the vision testing user to cover the eyes before beginning vision testing. In some possible implementations, the eye-covering prompting operation includes prompting the vision testing user to cover the left eye or the right eye. In other possible implementations, the eye-covering prompting operation may simply prompt the vision testing user to cover the eyes.
In other embodiments, after determining the target vision detection value based on the optotype feedback information and the information of the optotype, the method further comprises: acquiring the latest vision detection value corresponding to the target identity information; calculating a deviation value between the target vision detection value and the latest vision detection value; and if the deviation value is greater than or equal to the preset deviation threshold value, executing the re-detection prompting operation.
In this embodiment, the deviation value between the target vision test value and the last vision test value corresponding to the target identity information is calculated. If the deviation value is greater than or equal to a preset deviation threshold, it indicates that the vision of the vision test user has dropped sharply and the vision test may be wrong. Therefore, the vision test device performs the retest prompting operation to prompt the vision test user to test again. Alternatively, when this session's left-eye vision test value equals the last session's right-eye value, and the right-eye value equals the last session's left-eye value, the vision test may also be wrong. In that case the vision test device can likewise perform the retest prompting operation to prompt the user to test again. It should be noted that the above listed conditions for triggering the retest prompting operation are only examples; the retest prompting operation may also be triggered when other abnormal situations occur.
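A minimal sketch of the two trigger conditions; the 0.3 deviation threshold and the exact-equality swap check are illustrative assumptions, since the application leaves the preset threshold open:

```python
def needs_retest(target_value: float, last_value: float,
                 deviation_threshold: float = 0.3) -> bool:
    """Retest when the deviation from the last recorded value is too large.

    The 0.3 default is a hypothetical preset deviation threshold.
    """
    return abs(target_value - last_value) >= deviation_threshold


def eyes_possibly_swapped(left_now: float, right_now: float,
                          left_last: float, right_last: float) -> bool:
    """Flag the case where this session's left/right values exactly mirror
    the previous session's right/left values (and the two eyes differ)."""
    return (left_now == right_last and right_now == left_last
            and left_now != right_now)
```

Either condition holding would trigger the retest prompting operation.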
In some examples, after obtaining the target vision test value, the target vision test value may be directly displayed, or the target vision test value may be sent to a server for storage, so that the target vision test value may be analyzed and suggested according to the test data of the vision test user. It should be noted that, if the user performing the vision test is a student, the target vision test value may also be sent to the parent, so that the parent can know the vision condition of the student in real time.
In other embodiments, the technical solution of the present application further includes: identifying a test distance of a vision test user, and determining a vision correction value according to the test distance, wherein the test distance refers to the distance between the vision test user and the sighting target during vision test; accordingly, determining the target vision detection value based on the optotype feedback information and the information of the optotype includes: determining a preliminary vision detection value based on the visual target feedback information and the visual target information; and adding the preliminary vision detection value and the vision correction value to obtain a target vision detection value.
In a vision test, the vision test user is typically required to stand at a standard position. For example, the international standard eye chart requires that the distance between the vision test user and the optotype is 5 meters. In an actual test, however, it is difficult for the vision test user to stand exactly 5 meters from the optotype, and if the user does not stand 5 meters from the optotype, the vision test value contains an error. Therefore, before determining the target vision test value, the present application identifies the test distance of the vision test user and determines a vision correction value according to the test distance, then determines a preliminary vision test value based on the optotype feedback information and the information of the optotype, and finally adds the preliminary vision test value and the vision correction value to obtain the target vision test value, so that the finally obtained target vision test value is more accurate. In some embodiments, the correction value may be calculated according to the following formula:
e = lg(L / m)
wherein e is a correction value, L is a test distance of the vision detection user, and m is a standard distance. It will be appreciated that the accuracy of the vision correction value can be set according to the actual situation. For example, the accuracy of the vision correction value may be set to 0.1.
Referring to fig. 5, fig. 5 illustrates some examples of vision correction values corresponding to test distances; the unit of the test distance in fig. 5 is meters. Assuming that the standard distance is 5 meters and calculating with the above vision correction formula: when the test distance is 1 meter, the vision correction value is -0.7; when the test distance is 1.2 meters, the vision correction value is -0.6; when the test distance is 1.5 meters, the vision correction value is -0.5; when the test distance is 2 meters, the vision correction value is -0.4; when the test distance is 2.5 meters, the vision correction value is -0.3; when the test distance is 3 meters, the vision correction value is -0.2; when the test distance is 4 meters, the vision correction value is -0.1; when the test distance is 5 meters, the vision correction value is 0; when the test distance is 6.3 meters, the vision correction value is 0.1; when the test distance is 8 meters, the vision correction value is 0.2; when the test distance is 10 meters, the vision correction value is 0.3.
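Under the assumption that the correction formula is the base-10 logarithm of the distance ratio, rounded to 0.1, the values listed above for a 5-meter standard distance can all be reproduced:

```python
import math

def vision_correction(test_distance_m: float,
                      standard_distance_m: float = 5.0,
                      precision: int = 1) -> float:
    """Correction value e = lg(L / m), rounded to the chosen precision.

    On the 5-point logarithmic chart each 0.1 step of the vision value
    corresponds to a factor of 10**0.1 in viewing distance, which is why
    a base-10 log of the distance ratio reproduces the table in fig. 5.
    """
    if test_distance_m <= 0 or standard_distance_m <= 0:
        raise ValueError("distances must be positive")
    return round(math.log10(test_distance_m / standard_distance_m), precision)
```

For instance, `vision_correction(1.0)` gives -0.7 and `vision_correction(10.0)` gives 0.3, matching fig. 5.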
In some possible implementations, the test distance of the vision testing user may be identified by:
L=Df/d
wherein, L is the test distance of the vision detection user, f is the focal length of the camera, D is the actual diameter of the reference object, and D is the calculated length of the first reference object.
In this implementation, the first reference object is a circular icon, and the circular icon needs to carry a certain pattern so that it can be identified. It should be noted that the first reference object may be the same as, or different from, the preset reference object. The user can select according to actual needs, and the application is not specifically limited herein. The circular icon may be attached to the eye shade, or to the vision test user or other items carried by the user during the vision test. The user may select the attachment position of the first reference object according to actual requirements, and the application is not specifically limited herein.
Next, a process of calculating the calculated length of the first reference object will be described.
First, a first reference object image is obtained, and the first reference object image is identified to obtain an outer contour point of the first reference object. Since the first reference object is a circular icon, the shape of the outer contour point of the first reference object is circular or elliptical. When the shape formed by the outer contour points is circular, the length of any first line segment is calculated, the first line segment passes through the center of the circle, and the two end points of the first line segment are the outer contour points. At this time, the length of the first line segment is the calculated length of the first reference object. When the shape formed by the outer contour points is an ellipse, the distance between two end points on the major axis of the ellipse is calculated, and at the moment, the distance is the calculated length of the first reference object.
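Both cases reduce to one computation, since a circle's diameter and an ellipse's major axis are each the largest distance between two outer contour points. A minimal pure-Python sketch, assuming the contour points are given as (x, y) pixel tuples:

```python
import itertools
import math

def calculated_length(contour_points):
    """Calculated length d of the first reference object.

    For a circular contour the maximum pairwise distance is the diameter;
    for an elliptical contour it is the major-axis length, so one
    max-over-pairs pass covers both cases described above.
    """
    return max(math.dist(p, q)
               for p, q in itertools.combinations(contour_points, 2))
```

The brute-force pairwise scan is O(n^2) in the number of contour points; for dense contours a convex-hull pass first would be the usual optimization.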
The length of the first line segment, or the distance between the two end points on the major axis of the ellipse, is calculated from the position coordinates of the outer contour points together with the width (or height) of the photosensitive chip. The calculation formula of the calculated length of the first reference object is:

d = ws/a

or

d = hs/b
it should be noted that, if the preliminary vision test value and the vision correction value are added to obtain the target vision test value, when determining the level of the optotype to be displayed according to the historical vision test value, it is necessary to subtract the correction value corresponding to the historical vision test value from the historical vision test value to obtain the first vision test value, and then obtain the level of the optotype to be displayed for the first time according to the first vision test value.
In other embodiments, the technical solution of the present application further includes: acquiring a first face image acquired by a camera, wherein the first face image is a face image for vision detection of a vision detection user; extracting first face characteristic points on a first face image to obtain position information of each first face characteristic point, determining a target type of a test eye of a vision detection user according to the position information of the first face characteristic points, wherein the target type comprises one of the left side or the right side, and correspondingly, determining a target vision detection value based on visual target feedback information and visual target information, and the method comprises the following steps: and determining a target vision detection value of a target eye of the vision detection user based on the visual target feedback information and the visual target information, wherein the target eye is the eye with the type of the target type.
In this embodiment, the extracted first face feature points include: eye feature points 601, nose tip feature points 602, and mouth corner feature points 603. The present embodiment determines the target type of the test eye before determining the vision test result, so that it can be determined whether the test data is for the left eye or the right eye. And the target type of the tested eye is judged according to the position information of the first face characteristic point, and the specific process is as follows:
as shown in fig. 6, a coordinate axis is established with a preset point at the lower left corner in the first face image as a coordinate origin, so as to obtain the position information of each first face feature point. For example, the coordinates of the eye feature point 601 are (x)1,y1) The coordinates of the nose tip feature point 602 are (x)2,y2) The coordinates of the mouth corner feature point 603 are (x)3,y3)、(x4,y4)。
After the position information of each feature point is obtained, a straight line is drawn through the mouth corner feature points 603, and then the perpendicular line x = x2 of that straight line is drawn through the nose tip feature point 602 and extended. After the perpendicular line is obtained, the abscissa x2 of the perpendicular line is compared with the abscissa x1 of the eye feature point 601. If the abscissa x1 of the eye feature point is smaller than the abscissa x2 of the perpendicular line, the target type of the tested eye is determined to be the right side. If the abscissa x1 of the eye feature point is greater than the abscissa x2 of the perpendicular line, the target type of the tested eye is determined to be the left side. Alternatively, after the perpendicular line is obtained, the symmetry point 604 of the eye feature point 601 about the symmetry axis is determined using the perpendicular line as the symmetry axis, the coordinates of the symmetry point 604 being (x5, y5). After the symmetry point 604 is obtained, the abscissa x1 of the eye feature point is compared with the abscissa x5 of the symmetry point. If x1 is less than x5, the target type of the tested eye is determined to be the right side; if x1 is greater than x5, the target type of the tested eye is determined to be the left side.
In this embodiment, if the operation of the eye-shielding prompt includes prompting the vision test user whether to shield the left eye or the right eye, the vision test device verifies again whether the test eye is the left eye or the right eye in case the vision test user does not operate according to the prompt, so that it is possible to more accurately determine whether the test data belongs to the right eye or the left eye. Or the eye-shading prompting operation only comprises prompting the vision detection user to shade eyes, at the moment, the vision detection user can select to test the left eye or the right eye firstly according to the own requirement, and then the vision detection device judges whether the tested eye is the left eye or the right eye, so that the user can select to test the left eye or test the right eye firstly according to the own requirement.
In another embodiment, the technical solution of the present application further includes: acquiring a second face image acquired by a camera, wherein the second face image is a face image for vision detection of a vision detection user holding an eye shade, the eye shade is used for shielding eyes of the vision detection user during the vision detection, identifying the eye shade on the second face image, determining central position information of the eye shade, extracting second face characteristic points on the second face image to obtain position information of each second face characteristic point, and determining the shielding state of the eyes according to the position information of the second face characteristic points and the central position information of the eye shade, wherein the shielding state comprises a correct shielding state and an incorrect shielding state; and if the shielding state is an incorrect shielding state, executing re-shielding prompt operation.
In this embodiment, the second face feature point may include: eye feature points 701, nose tip feature points 702, and mouth corner feature points 703. In fig. 7, 705 is an eye mask.
Monitoring the shielding state of the eyes, and judging whether the eyes of the vision detection user are correctly shielded in the vision detection process. If the occlusion is incorrect, the vision detection can be suspended, and the user is reminded to occlude again until the occlusion is correct, and then the vision detection is started. In some embodiments, if the vision testing user has incorrect eye occlusion during vision testing, the data obtained when the occlusion was incorrect may also be flagged and then removed when the vision testing results are determined.
The calculation process for determining the occlusion state of the eyes according to the position information of the second face feature points and the center position information of the eye shade is as follows. First, a coordinate axis is established with a preset point at the lower left corner of the second face image as the coordinate origin, and the position information of each second face feature point is obtained. For example, the coordinates of the eye feature point 701 are (x6, y6), the coordinates of the nose tip feature point 702 are (x7, y7), the coordinates of the mouth corner feature points 703 are (x8, y8) and (x9, y9), and the coordinates of the center 7051 of the eye shade are (x10, y10).
After the position information of each second face feature point is obtained, the mouth corner feature points are connected into a straight line, and the perpendicular line of that straight line is drawn through the nose tip feature point and extended. After the perpendicular line is obtained, the symmetry point 704 of the eye feature point 701 about the symmetry axis is determined using the perpendicular line as the symmetry axis, the coordinates of the symmetry point 704 being (x11, y11). The Euclidean distance between the symmetry point 704 and the center 7051 of the eye shade is then calculated. If the Euclidean distance is smaller than a preset threshold, the occlusion is judged to be correct; if the Euclidean distance is larger than the preset threshold, the occlusion is judged to be incorrect.
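A minimal sketch of the reflection-and-distance check; the 15-pixel threshold is an illustrative assumption, since the application leaves the preset threshold open:

```python
import math

def occlusion_is_correct(eye_point, axis_x, mask_center,
                         threshold: float = 15.0) -> bool:
    """Check occlusion by reflecting the visible eye across x = axis_x.

    The mirror point approximates where the covered eye should sit; if
    the eye-shade center lies within `threshold` (Euclidean distance,
    pixels) of it, the occlusion is judged correct.
    """
    mirror = (2 * axis_x - eye_point[0], eye_point[1])
    return math.dist(mirror, mask_center) < threshold
```

Here `axis_x` is the abscissa of the perpendicular through the nose tip, i.e. x7 in the notation above.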
In this embodiment, whether the eyes of the user are correctly occluded is monitored during the vision test. If the occlusion is incorrect, the vision test can be paused and the user reminded to occlude again; the vision test is restarted once the occlusion is correct, so that the vision test result of the user can be determined more accurately.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two
Fig. 8 shows an example of a line-of-sight recognition apparatus, and for convenience of explanation, only the portions related to the embodiments of the present application are shown. The apparatus 800 comprises:
the first image obtaining module 801 is configured to obtain a first image, where the first image includes a preset reference object, and the preset reference object is an article carried by a vision detection user.
The identifying module 802 is configured to identify the shape of the preset reference object in the first image to obtain a first shape.
The determining module 803 is configured to determine that the sight line of the vision testing user is not perpendicular to the visual target if the first shape is different from the second shape, where the second shape is the shape of the preset reference object when the sight line of the vision testing user is perpendicular to the visual target.
Optionally, the determining module 803 is configured to perform:
and when the second shape is a circle, if the first shape is an ellipse, judging that the sight line of the vision detection user is not vertical to the sighting target.
Optionally, the apparatus 800 further comprises:
and the calculating module is used for calculating the similarity between the first image and each preset image, and the preset image is an image formed by rotating a preset reference object in a first direction and is stored in advance.
And the rotation direction determining module is used for determining the rotation direction corresponding to the preset image with the highest similarity as the first rotation direction.
And the rotation angle determining module is used for determining a first rotation angle according to the first shape.
And the angle adjusting module is used for executing angle adjusting operation on the sighting target according to the first rotating angle and the first rotating direction so that the sighting target is perpendicular to the sight of the vision detection user.
Optionally, when the first shape is an ellipse, the rotation angle determination module includes:
the acquisition unit is used for acquiring a first length of a short axis and a second length of a long axis in the ellipse.
And the calculating unit is used for calculating the inverse cosine value according to the first length and the second length to obtain the first rotating angle.
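The inverse-cosine step can be sketched as follows: a circle of radius r viewed at tilt angle theta images as an ellipse whose minor/major axis ratio is cos(theta), so theta = arccos(b/a):

```python
import math

def first_rotation_angle_deg(minor_len: float, major_len: float) -> float:
    """Tilt angle (degrees) recovered from the imaged ellipse.

    A circle viewed at angle theta foreshortens to an ellipse with
    minor/major axis ratio cos(theta), hence theta = arccos(b/a).
    """
    if not 0 < minor_len <= major_len:
        raise ValueError("require 0 < minor <= major")
    return math.degrees(math.acos(minor_len / major_len))
```

A ratio of 1 yields 0 degrees (already perpendicular), and a 2:1 ellipse yields a 60-degree first rotation angle.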
Optionally, the angle adjusting module is configured to perform:
and controlling the display screen for displaying the sighting target to rotate by a first rotation angle towards a first rotation direction.
Optionally, the visual target is a three-dimensional visual target.
Accordingly, the angle adjustment module is configured to perform:
and controlling the three-dimensional sighting mark to rotate by a first rotation angle in the first rotation direction.
Optionally, the apparatus 800 further comprises:
and the prompting module is used for executing prompting operation so as to prompt the vision test user station to reach the vision test area.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the method embodiment of the present application, and specific reference may be made to a part of the method embodiment, which is not described herein again.
EXAMPLE III
Fig. 9 is a schematic diagram of a terminal device provided in the third embodiment of the present application. As shown in fig. 9, the terminal apparatus 900 of this embodiment includes: a processor 901, a memory 902 and a computer program 903 stored in the memory 902 and operable on the processor 901. The processor 901 implements the steps in the various method embodiments described above when executing the computer program 903. Alternatively, the processor 901 implements the functions of the modules/units in the device embodiments when executing the computer program 903.
Illustratively, the computer program 903 may be divided into one or more modules/units, which are stored in the memory 902 and executed by the processor 901 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 903 in the terminal device 900. For example, the computer program 903 may be divided into a first image acquisition module, a recognition module, and a determination module, and the specific functions of each module are as follows:
acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user;
identifying the shape of the preset reference object in the first image to obtain a first shape;
and if the first shape is different from the second shape, judging that the sight line of the vision detection user is not perpendicular to the sighting target, wherein the second shape is the shape of the preset reference object when the sight line of the vision detection user is perpendicular to the sighting target.
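The three modules above amount to a simple shape-comparison check: recognize the shape of the reference object in the captured image and compare it with the shape expected under a perpendicular sight line. A minimal sketch, with the shape-recognition step stubbed out as a string label and all names being illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class GazeCheck:
    first_shape: str      # shape recognized in the captured first image
    perpendicular: bool   # whether the sight line is perpendicular to the sighting target

def check_sight_line(first_shape: str, second_shape: str) -> GazeCheck:
    """Compare the recognized shape of the preset reference object (first
    shape) with its expected shape when the user's sight line is
    perpendicular to the sighting target (second shape); a mismatch means
    the gaze is tilted."""
    return GazeCheck(first_shape, first_shape == second_shape)
```

With a circular reference object, for instance, an elliptical first shape indicates a tilted line of sight.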
The terminal device may include, but is not limited to, the processor 901 and the memory 902. Those skilled in the art will appreciate that fig. 9 is merely an example of the terminal device 900 and does not limit it; the terminal device 900 may include more or fewer components than those shown, combine some components, or use different components. For example, the terminal device may also include input and output devices, network access devices, buses, and the like.
The processor 901 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 902 may be an internal storage unit of the terminal device 900, such as a hard disk or a memory of the terminal device 900. The memory 902 may also be an external storage device of the terminal device 900, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 900. Further, the memory 902 may include both an internal storage unit and an external storage device of the terminal device 900. The memory 902 is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative: the division of the modules or units is only one logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in an electrical, mechanical, or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A sight line recognition method, characterized by comprising:
acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user;
identifying the shape of the preset reference object in the first image to obtain a first shape;
and if the first shape is different from the second shape, judging that the sight line of the vision detection user is not perpendicular to the sighting target, wherein the second shape is the shape of the preset reference object when the sight line of the vision detection user is perpendicular to the sighting target.
2. The sight line recognition method according to claim 1, wherein the determining that the sight line of the vision detection user is not perpendicular to the sighting target if the first shape is different from the second shape comprises:
when the second shape is a circle, if the first shape is an ellipse, determining that the sight line of the vision detection user is not perpendicular to the sighting target.
3. The sight line recognition method according to claim 1, further comprising, after the determining that the sight line of the vision detection user is not perpendicular to the sighting target:
calculating a similarity between the first image and each preset image, wherein each preset image is a pre-stored image formed by rotating the preset reference object in a corresponding rotation direction;
determining the rotation direction corresponding to the preset image with the highest similarity as a first rotation direction;
determining a first rotation angle according to the first shape;
and according to the first rotation angle and the first rotation direction, performing angle adjustment operation on the sighting target, so that the sighting target is perpendicular to the sight line of the vision detection user.
4. The sight line recognition method according to claim 3, wherein when the first shape is an ellipse, the determining a first rotation angle according to the first shape comprises:
acquiring a first length of a short axis and a second length of a long axis of the ellipse;
and calculating an inverse cosine value according to the first length and the second length to obtain the first rotation angle.
5. The sight line recognition method according to claim 3, wherein the performing an angle adjustment operation on the sighting target according to the first rotation angle and the first rotation direction comprises:
controlling a display screen that displays the sighting target to rotate in the first rotation direction by the first rotation angle.
6. The sight line recognition method according to claim 3, wherein the optotype is a three-dimensional optotype;
the performing an angle adjustment operation on the sighting target according to the first rotation angle and the first rotation direction comprises:
controlling the three-dimensional sighting target to rotate in the first rotation direction by the first rotation angle.
7. The sight line recognition method according to claim 1, further comprising, after the determining that the sight line of the vision detection user is not perpendicular to the sighting target:
and executing prompting operation to prompt the vision testing user to stand in a vision testing area.
8. A sight line recognition apparatus, comprising:
the first image acquisition module is used for acquiring a first image, wherein the first image comprises a preset reference object, and the preset reference object is an article carried by a vision detection user;
the recognition module is used for recognizing the shape of the preset reference object in the first image to obtain a first shape;
the judging module is used for judging that the sight line of the vision detection user is not perpendicular to the sighting target if the first shape is different from the second shape, and the second shape is the shape of the preset reference object when the sight line of the vision detection user is perpendicular to the sighting target.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010730277.5A 2020-07-27 2020-07-27 Sight line identification method, sight line identification device, terminal equipment and readable storage medium Pending CN111915667A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010730277.5A CN111915667A (en) 2020-07-27 2020-07-27 Sight line identification method, sight line identification device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010730277.5A CN111915667A (en) 2020-07-27 2020-07-27 Sight line identification method, sight line identification device, terminal equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111915667A true CN111915667A (en) 2020-11-10

Family

ID=73281827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010730277.5A Pending CN111915667A (en) 2020-07-27 2020-07-27 Sight line identification method, sight line identification device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111915667A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112890761A (en) * 2020-11-27 2021-06-04 成都怡康科技有限公司 Vision test prompting method and wearable device
CN113397471A (en) * 2021-06-30 2021-09-17 重庆电子工程职业学院 Vision data acquisition system based on Internet of things
CN113509136A (en) * 2021-04-29 2021-10-19 京东方艺云(北京)科技有限公司 Detection method, vision detection method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111803022A (en) Vision detection method, detection device, terminal equipment and readable storage medium
CN111915667A (en) Sight line identification method, sight line identification device, terminal equipment and readable storage medium
JP6762380B2 (en) Identification method and equipment
US9367677B1 (en) Systems and methods for user authentication using eye movement and pupil size change matching
US8254647B1 (en) Facial image quality assessment
US9373047B2 (en) Biometric authentication device, biometric authentication system, and biometric authentication method
JP2020507836A (en) Tracking surgical items that predicted duplicate imaging
US20110142297A1 (en) Camera Angle Compensation in Iris Identification
CN110123257A (en) A kind of vision testing method, device, sight tester and computer storage medium
JP2014067102A (en) Visual line detection device, computer program for visual line detection and display device
EP2991027A1 (en) Image processing program, image processing method and information terminal
WO2019153927A1 (en) Screen display method, device having display screen, apparatus, and storage medium
EP2544147A1 (en) Biological information management device and method
KR102089498B1 (en) Measuring apparatus for analogue gauge and method thereof
US10146306B2 (en) Gaze position detection apparatus and gaze position detection method
US20230080861A1 (en) Automatic Iris Capturing Method And Apparatus, Computer-Readable Storage Medium, And Computer Device
CN111553266A (en) Identification verification method and device and electronic equipment
CN106997447A (en) Face identification system and face identification method
CN112712053A (en) Sitting posture information generation method and device, terminal equipment and storage medium
CN113348431A (en) Multi-factor authentication for virtual reality
US8971592B2 (en) Method for determining eye location on a frontal face digital image to validate the frontal face and determine points of reference
CN111803023A (en) Vision value correction method, correction device, terminal equipment and storage medium
JP2018101212A (en) On-vehicle device and method for calculating degree of face directed to front side
CN109241892B (en) Instrument panel reading method, instrument panel reading device and electronic equipment
CN113760123A (en) Screen touch optimization method and device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination