CN111803022A - Vision detection method, detection device, terminal equipment and readable storage medium - Google Patents


Info

Publication number
CN111803022A
CN111803022A (application CN202010585427.8A)
Authority
CN
China
Prior art keywords
target
vision
vision detection
information
visual target
Prior art date
Legal status
Pending
Application number
CN202010585427.8A
Other languages
Chinese (zh)
Inventor
周鲁平
胡晓华
Current Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010585427.8A
Publication of CN111803022A

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032 Devices for presenting test symbols or characters, e.g. test chart projectors

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application belongs to the technical field of detection and provides a vision detection method, a detection device, a terminal device, and a readable storage medium. The method comprises the following steps: when a vision detection instruction is detected, acquiring target identity information in a preset manner; acquiring historical vision detection values corresponding to the target identity information, determining the grade of the visual target to be displayed according to the historical vision detection values, and displaying the visual target corresponding to that grade; and acquiring visual target feedback information, and determining a target vision detection value based on the visual target feedback information and the visual target information. The technical scheme of the application addresses the problems that existing vision detection methods are cumbersome and inefficient.

Description

Vision detection method, detection device, terminal equipment and readable storage medium
Technical Field
The present application belongs to the field of detection, and in particular, to a vision detection method, a detection apparatus, a terminal device, and a readable storage medium.
Background
With the development of science and technology, electronic products have become increasingly popular among primary and secondary school students. However, long-term use of electronic products and unhealthy eye-use habits cause a sharp decline in these students' vision and lead to myopia. Their eyesight therefore needs to be tested regularly so that myopia can be detected and prevented early.
At present, schools usually organize doctors at regular intervals to test students' eyesight. This imposes a heavy workload on the doctors and is inefficient. Moreover, a doctor has to point at each optotype and record the results manually, which is cumbersome.
Current eyesight testing methods are therefore cumbersome and inefficient.
Disclosure of Invention
The embodiments of the present application provide a vision detection method, a detection device, a terminal device, and a readable storage medium, which can solve the problem that existing vision detection methods are cumbersome and inefficient.
In a first aspect, an embodiment of the present application provides a vision testing method, including:
when a vision detection instruction is detected, acquiring target identity information of a vision detection user according to a preset mode;
acquiring historical vision detection values corresponding to the target identity information, determining the grade of a visual target to be displayed according to the historical vision detection values, and displaying the visual target corresponding to the grade;
and acquiring visual target feedback information, and determining a target vision detection value based on the visual target feedback information and the visual target information, wherein the visual target feedback information is used for representing a recognition result fed back by the vision detection user based on the opening direction of the visual target, and the visual target information comprises the opening direction information corresponding to the visual target and the vision value information corresponding to the visual target.
In a second aspect, an embodiment of the present application provides a vision testing apparatus, including:
the vision detection instruction detection module is used for acquiring target identity information of a vision detection user according to a preset mode when a vision detection instruction is detected;
the visual target display module is used for acquiring historical vision detection values corresponding to the target identity information, determining the grade of a visual target to be displayed according to the historical vision detection values and displaying the visual target corresponding to the grade;
and the target vision detection value determining module is used for acquiring visual target feedback information and determining the target vision detection value based on the visual target feedback information and the visual target information, wherein the visual target feedback information is used for representing the recognition result fed back by the vision detection user based on the opening direction of the visual target, and the visual target information comprises the opening direction information corresponding to the visual target and the vision value information corresponding to the visual target.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and the computer program implements the steps of the method according to the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the eyesight detecting method according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
in view of the above, the present application provides a vision detection method. First, when a vision detection instruction is detected, target identity information of the vision detection user is acquired in a preset manner. Then, historical vision detection values corresponding to the target identity information are acquired, the grade of the first displayed visual target is determined according to the historical vision detection values, and the visual target corresponding to that grade is displayed. Finally, visual target feedback information is acquired, and a target vision detection value is determined based on the visual target feedback information and the visual target information, where the visual target feedback information represents the recognition result fed back by the vision detection user for the opening orientation of the visual target, and the visual target information includes the opening orientation information and the vision value information corresponding to the visual target. Because the visual targets are displayed automatically, the user can perform the vision test independently, which is simple, convenient, and efficient. In addition, because the grade of the visual target to be displayed is determined from the user's historical vision detection value, the test can start near the visual target corresponding to the user's actual acuity, which shortens the test and further improves efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a vision testing method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a relationship between a test distance and a vision correction value according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a reference provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a method for determining a type of test eye provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a method for determining an eye-shielding state according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a vision testing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
Referring to FIG. 1, a vision detection method provided by an embodiment of the present application is described below. The method includes:
and S101, when the vision detection instruction is detected, acquiring target identity information of a vision detection user according to a preset mode.
In step S101, when a user wants to take a vision test, the user can click a test button on the vision detection device so that the device generates a vision detection instruction; alternatively, the user can issue a voice command to make the device generate the instruction. The way the vision detection instruction is generated can be chosen according to the actual situation and is not specifically limited here. After detecting the vision detection instruction, the vision detection device acquires the target identity information of the vision detection user in a preset manner.
In some embodiments, the preset manner of obtaining the target identity information includes at least one of the following: recognizing a face image of the vision detection user, collecting fingerprint information of the vision detection user, and collecting identity card information of the vision detection user.
In this embodiment, the target identity information may be obtained by performing face recognition on a face image collected by a camera, or may be obtained by collecting fingerprint information of a vision detection user, or may be obtained by collecting identity card information of the vision detection user. It should be appreciated that if the vision testing user is a student, the target identity information may also be obtained by collecting campus card information. In the present application, the obtaining manner of the target identity information may be selected according to an actual situation, and the present application is not specifically limited herein.
It should be noted that when the target identity information is obtained by face recognition on an image captured by the camera, and multiple faces are recognized, users other than the test subject are prompted to leave the shooting range. The vision detection device can then periodically capture images through the camera and perform face recognition on them, and acquire the historical vision detection value corresponding to the target identity information once only one face is recognized.
In other embodiments, obtaining the historical vision detection value corresponding to the target identity information includes: matching the target identity information with identity information in a preset identity information database; and if the preset identity information database has identity information matched with the target identity information, acquiring a historical vision detection value corresponding to the target identity information.
In this embodiment, the identity of the vision detection user can be verified: the target identity information is matched against the identity information in a preset identity information database, and if matching identity information exists there, verification passes and the historical vision detection value corresponding to the target identity information is acquired.
It should be understood that the identity information in the preset identity information database may be generated when users register, or entered in bulk by a school administrator for the students. The way the identity information in the database is generated can be set according to the actual situation and is not specifically limited here.
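As a minimal, hypothetical sketch of this matching step (the dictionary-backed database and all names are illustrative assumptions, not part of the patent):

```python
# Hypothetical stand-in for the preset identity information database,
# e.g. populated by user registration or by a school administrator.
IDENTITY_DB = {
    "student-001": {"history": [4.8, 4.9]},
}

def history_for(target_identity):
    """Match the target identity against the database; return the stored
    vision-test history if verification passes, otherwise None."""
    record = IDENTITY_DB.get(target_identity)
    return record["history"] if record is not None else None
```

In a real deployment the lookup would hit persistent storage; the dictionary only illustrates the match-then-fetch flow.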
In still other embodiments, if an eye occluder is stored in the vision detection device, the user's identity may also need to be verified when the user takes the occluder out of the device; the occluder can be removed only after the verification passes. After the occluder has been taken out, the user's identity can be verified again when the target identity information is acquired in the preset manner, to confirm that the person taking the test is the same person who took out the occluder. The two verifications may use the same method or different methods, which can be chosen according to the actual situation and is not specifically limited here.
Step S102: acquire historical vision detection values corresponding to the target identity information, determine the grade of the visual target to be displayed according to the historical vision detection values, and display the visual target corresponding to that grade.
In step S102, the optotype to be displayed is the optotype shown to a user during a vision test, and the first displayed optotype is the one shown when the test starts. An optotype is a preset test pattern whose specific shape can be set according to the actual situation: for example, it may be the letter E on the international standard visual acuity chart, or a C-shaped ring on the Landolt ring visual acuity chart. After the target identity information is acquired, the historical vision detection value corresponding to it is acquired, the grade of the first displayed optotype is determined from that historical value, and the optotype corresponding to that grade is displayed.
In some embodiments, if the target identity information has no corresponding historical vision detection value, the grade of the first displayed optotype is determined according to a preset grade, and the optotype corresponding to that grade is displayed.
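The grade-selection rule above can be sketched as follows (the preset grade value and the function name are illustrative assumptions; the patent does not specify them):

```python
PRESET_GRADE = 4.5  # illustrative default starting grade, not from the patent

def first_optotype_grade(history):
    """Start near the user's most recent recorded vision value when
    history exists; otherwise fall back to the preset grade."""
    return history[-1] if history else PRESET_GRADE
```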
In still other embodiments, the vision detection device performs a detection-area prompting operation before displaying the optotype. If the vision detection user stands too close to or too far from the terminal device, the accuracy of the test result is affected; therefore, before displaying the optotype, the device can prompt the user to stand in the detection area.
After reaching the detection area, the user can send confirmation feedback to the vision detection device in a first preset manner to inform it that the user is in position. The first preset manner may be clicking a button on the eye occluder or giving voice feedback; it can be chosen according to the actual situation and is not specifically limited here. Alternatively, the device can periodically capture images of the user through the camera and judge from the images whether the user is standing in the detection area. When it detects that the user is standing in the detection area, the optotype is displayed.
In other embodiments, before displaying the optotypes, the vision detection device may perform an eye-covering prompting operation to prompt the user to cover one eye. In some possible implementations, the prompt specifies whether to cover the left eye or the right eye; in others, it may simply prompt the user to cover an eye.
Step S103: acquire optotype feedback information, and determine the target vision detection value based on the optotype feedback information and the optotype information, where the optotype feedback information represents the recognition result fed back by the vision detection user for the opening orientation of the optotype, and the optotype information includes the opening orientation information and the vision value information corresponding to the optotype.
In step S103, after the vision detection device displays an optotype, the user identifies the opening orientation of the displayed optotype and feeds the recognition result back to the device in the first preset manner. The device then acquires the optotype feedback information from this result and judges, using the opening orientation information of the displayed optotype, whether the user identified it correctly. For example, if the displayed optotype opens upward, the device checks whether the judgment in the feedback information is also upward; if so, the user judged the optotype correctly and can therefore see it clearly. If the user can clearly see the displayed optotype, the device displays the optotype of the next grade up. For example, if the user can clearly see the optotype corresponding to a vision value of 4.9, the optotype corresponding to 5.0 is displayed.
It should be noted that, to judge more reliably whether the user can clearly see the optotypes of a given grade, the user may be asked to judge several optotypes of the same grade while the number of correct judgments is accumulated; if that number equals a first threshold, the user is deemed able to see that grade clearly. Further, to obtain the result more accurately, the target vision detection value is output only when the number of correct judgments at a grade equals the first threshold and the number of wrong judgments at the next grade up equals a second threshold. For example, when the user correctly judges the optotype corresponding to vision 4.8 four times, and wrongly judges the optotype corresponding to vision 4.9 three times, the test result 4.8 is output. The first and second thresholds may each be a fixed value or a range of values, set according to actual needs; they are not specifically limited here.
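The stopping rule above can be expressed as a short sketch; the threshold values (4 correct, 3 wrong) come from the example in the text, while the function name is a hypothetical label:

```python
FIRST_THRESHOLD = 4   # correct judgments required at the candidate grade
SECOND_THRESHOLD = 3  # wrong judgments required at the next grade up

def should_output_result(correct_at_grade, wrong_at_next_grade):
    """Return True when the candidate grade's vision value should be
    output as the target vision detection value."""
    return (correct_at_grade == FIRST_THRESHOLD
            and wrong_at_next_grade == SECOND_THRESHOLD)
```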
In some possible implementations, feeding back the recognition result in the first preset manner may include sending it to the vision detection device through the eye occluder. For example, four keys (up, down, left, right) are arranged on the occluder: when the displayed optotype opens upward, the user clicks the up key to send the recognition result to the terminal device; when it opens to the left, the user clicks the left key.
In other possible implementations, the user may feed back the recognition result by pointing a finger in the recognized direction. The vision detection device captures images of the user's finger through the camera, recognizes the direction the finger indicates, and thereby obtains the optotype feedback information. The way the recognition result is acquired can be set according to actual needs and is not specifically limited here.
It should be noted that one optotype is displayed at a time; if no optotype feedback information is acquired within a preset time, the next optotype is shown. If the number of consecutive optotypes for which no feedback is acquired reaches a preset threshold, the test is stopped.
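The timeout-and-stop behaviour can be sketched as below; the miss limit of 3 and all names are illustrative assumptions, and `None` stands in for "no feedback within the preset time":

```python
def show_optotypes(feedback_stream, miss_limit=3):
    """Show optotypes one at a time; a None entry means no feedback
    arrived within the preset time. Stop once `miss_limit` consecutive
    optotypes get no feedback; return how many optotypes were shown."""
    consecutive_misses = 0
    shown = 0
    for feedback in feedback_stream:
        shown += 1
        if feedback is None:
            consecutive_misses += 1
            if consecutive_misses >= miss_limit:
                break  # stop the test
        else:
            consecutive_misses = 0  # any feedback resets the run of misses
    return shown
```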
In some embodiments, after the target vision detection value is determined based on the optotype feedback information and the optotype information, the method further includes: acquiring the latest vision detection value corresponding to the target identity information; calculating the deviation between the target vision detection value and the latest vision detection value; and, if the deviation is greater than or equal to a preset deviation threshold, performing a re-test prompting operation.
In this embodiment, the deviation between the target vision detection value and the latest vision detection value corresponding to the target identity information is calculated. If the deviation is greater than or equal to the preset deviation threshold, the user's vision has apparently dropped sharply and the test may be wrong, so the vision detection device performs a re-test prompting operation. Alternatively, when the new left-eye value matches the previous right-eye value and the new right-eye value matches the previous left-eye value, the eyes may have been swapped during the test, and the device can likewise prompt a re-test. The conditions listed above are only examples; other abnormal situations can also trigger the re-test prompt.
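These two anomaly checks can be sketched as follows (the threshold value of 0.3 and the function names are illustrative assumptions, not from the patent; the swap check is deliberately naive):

```python
DEVIATION_THRESHOLD = 0.3  # illustrative preset deviation threshold

def needs_retest(target_value, latest_value, threshold=DEVIATION_THRESHOLD):
    """Prompt a re-test when the new result deviates too much from the
    latest recorded value for this identity."""
    return abs(target_value - latest_value) >= threshold

def eyes_possibly_swapped(left, right, prev_left, prev_right):
    """Flag the case where the new left/right results exactly match the
    previous test's opposite eyes."""
    return left == prev_right and right == prev_left
```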
In some examples, after the target vision detection value is obtained, it may be displayed directly, or sent to a server for storage so that the user's test data can be analyzed and recommendations given. If the user is a student, the value may also be sent to the student's parents so that they can follow the student's vision condition in real time.
In other embodiments, the technical solution of the present application further includes: recognizing the test distance of the vision detection user, i.e. the distance between the user and the optotype during the test, and determining a vision correction value from that distance. Accordingly, determining the target vision detection value based on the optotype feedback information and the optotype information includes: determining a preliminary vision detection value based on the optotype feedback information and the optotype information, and adding the preliminary vision detection value and the vision correction value to obtain the target vision detection value.
In vision testing, the vision detection user is typically required to stand at a standard position. For example, the standard international eye chart requires a distance of 5 meters between the vision detection user and the optotype. In practice, however, it is difficult for the user to stand exactly 5 meters from the optotype, and a vision value measured from any other distance contains an error. Therefore, before determining the target vision detection value, the present application identifies the user's test distance, determines a vision correction value from that distance, determines a preliminary vision detection value based on the optotype feedback information and the optotype information, and finally adds the preliminary vision detection value and the vision correction value to obtain the target vision detection value, making the final result more accurate. In some embodiments, the correction value may be calculated according to the following formula:
e = lg(L / m)
where e is the vision correction value, L is the test distance of the vision detection user, and m is the standard distance. It will be appreciated that the accuracy of the vision correction value can be set according to the actual situation; for example, it may be set to 0.1.
Referring to fig. 2, fig. 2 lists examples of vision correction values corresponding to test distances (test distances in meters). Assuming a standard distance of 5 meters and applying the above formula: at a test distance of 1 meter the vision correction value is -0.7; at 1.2 meters, -0.6; at 1.5 meters, -0.5; at 2 meters, -0.4; at 2.5 meters, -0.3; at 3 meters, -0.2; at 4 meters, -0.1; at 5 meters, 0; at 6.3 meters, 0.1; at 8 meters, 0.2; and at 10 meters, 0.3.
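Reading the correction formula as e = lg(L/m) reproduces every value listed for fig. 2, and the computation can be sketched as follows. The function name and the rounding step are illustrative assumptions.

```python
import math

STANDARD_DISTANCE_M = 5.0  # standard distance m of the eye chart

def vision_correction(test_distance_m, standard_m=STANDARD_DISTANCE_M):
    """e = lg(L / m), rounded to the 0.1 accuracy mentioned in the text."""
    return round(math.log10(test_distance_m / standard_m), 1)
```

For example, `vision_correction(1)` gives -0.7 and `vision_correction(10)` gives 0.3, matching the fig. 2 values.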
In some possible implementations, the test distance of the vision testing user may be identified by:
L=Df/d
where L is the test distance of the vision detection user, f is the focal length of the camera, D is the actual diameter of the reference object, and d is the calculated length of the reference object.
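The relation L = D·f/d is the similar-triangles (pinhole camera) model and can be sketched as follows; the names are illustrative, and f and d must be expressed in the same units.

```python
def estimate_test_distance(actual_diameter, focal_length, calculated_length):
    """L = D * f / d: by similar triangles, the image of a reference object
    of known diameter D shrinks in proportion to its distance from the camera."""
    return actual_diameter * focal_length / calculated_length
```

For example, a 0.05 m reference object imaged at 10 units with a focal length of 1000 units yields a distance of 5 m.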
In this implementation, the reference object is a circular icon, which needs to carry a certain pattern so that it can be recognized. Fig. 3 shows such a circular icon. It should be understood that the circular icon shown in fig. 3 is only an example; in practical applications, any circular icon that can be recognized may serve as the reference object. During vision detection, the circular icon may be attached to the eye shield, to the user, or to other items carried by the user. The user may choose the attachment position of the reference object according to actual requirements, which is not specifically limited in the present application.
Next, a process of calculating the calculated length of the reference object will be described.
First, a reference object image is acquired and recognized to obtain the outer contour points of the reference object. Since the reference object is a circular icon, the shape formed by the outer contour points is a circle or an ellipse. When the shape is a circle, the length of any first line segment that passes through the center of the circle and has outer contour points as its two end points is calculated; this length is the calculated length of the reference object. When the shape is an ellipse, the distance between the two end points of the major axis of the ellipse is calculated; this distance is the calculated length of the reference object.
The length of the first line segment, or the distance between the two end points of the major axis of the ellipse, may be calculated from the position coordinates of the outer contour points. Alternatively, it may be calculated from the number of pixels s between the two contour points (the end points of the first line segment, or of the major axis of the ellipse), the camera resolution a × b, and the width w (or height h) of the light-sensing chip, using the following formula:
d=ws/a
alternatively, the first and second electrodes may be,
d=hs/b
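The pixel-to-length conversion d = w·s/a (or d = h·s/b) maps a span of s pixels onto the physical width w (or height h) of the light-sensing chip; a minimal sketch, with assumed names:

```python
def calculated_length_from_pixels(pixel_span_s, sensor_size, pixel_count):
    """d = w*s/a (or d = h*s/b): the physical length on the light-sensing
    chip covered by a span of pixel_span_s pixels, given that pixel_count
    pixels cover the sensor dimension sensor_size."""
    return sensor_size * pixel_span_s / pixel_count
```

For example, 100 pixels on a 4000-pixel-wide sensor of width 8 mm correspond to d = 0.2 mm, which can then be substituted into L = D·f/d.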
it should be noted that, if the preliminary vision test value and the vision correction value are added to obtain the target vision test value, when determining the level of the optotype to be displayed according to the historical vision test value, it is necessary to subtract the correction value corresponding to the historical vision test value from the historical vision test value to obtain the first vision test value, and then obtain the level of the optotype to be displayed for the first time according to the first vision test value.
In other embodiments, the technical solution of the present application further includes: acquiring a first face image collected by a camera, where the first face image is a face image of the vision detection user during vision detection; extracting first face feature points from the first face image to obtain the position information of each first face feature point; and determining the target type of the test eye of the vision detection user according to the position information of the first face feature points, where the target type is one of left or right. Accordingly, determining the target vision detection value based on the optotype feedback information and the optotype information includes: determining a target vision detection value for a target eye of the vision detection user based on the optotype feedback information and the optotype information, where the target eye is the eye whose type is the target type.
In this embodiment, the extracted first face feature points include: eye feature point 401, nose tip feature point 402, and mouth corner feature points 403. The present embodiment determines the target type of the test eye before determining the vision detection result, so that it can be established whether the detection data belongs to the left eye or the right eye. The target type of the test eye is determined from the position information of the first face feature points, as follows:
As shown in fig. 4, a coordinate axis is established with a preset point at the lower left corner of the first face image as the coordinate origin, so as to obtain the position information of each first face feature point. For example, the coordinates of the eye feature point 401 are (x1, y1), the coordinates of the nose tip feature point 402 are (x2, y2), and the coordinates of the mouth corner feature points 403 are (x3, y3) and (x4, y4).
After the position information of each feature point is obtained, a straight line is drawn through the mouth corner feature points 403, and the perpendicular to this line, x = x2, is drawn through the nose tip feature point 402 and extended. The abscissa x2 of the perpendicular is then compared with the abscissa x1 of the eye feature point 401. If x1 is smaller than x2, the target type of the test eye is determined to be right; if x1 is greater than x2, the target type of the test eye is determined to be left. Alternatively, after the perpendicular is obtained, it is taken as the axis of symmetry and the symmetric point 404 of the eye feature point 401 about this axis is computed; the coordinates of the symmetric point 404 are (x5, y5). The abscissa x1 of the eye feature point is then compared with the abscissa x5 of the symmetric point: if x1 is smaller than x5, the target type of the test eye is determined to be right; if x1 is greater than x5, it is determined to be left.
In this embodiment, if the eye-shielding prompting operation includes prompting the vision detection user to shield the left eye or the right eye, the vision detection device verifies once more whether the test eye is the left eye or the right eye, in case the user did not follow the prompt; the detection data can thus be attributed to the correct eye more reliably. Alternatively, the eye-shielding prompting operation may only prompt the user to shield an eye; the user then chooses which eye to test first according to their own needs, and the vision detection device determines whether the test eye is the left or the right eye, so the user can test the eyes in either order.
In another embodiment, the technical solution of the present application further includes: acquiring a second face image collected by a camera, where the second face image is a face image of the vision detection user holding an eye shield during vision detection, and the eye shield is used to shield one of the user's eyes during vision detection; recognizing the eye shield in the second face image and determining the center position information of the eye shield; extracting second face feature points from the second face image to obtain the position information of each second face feature point; and determining the shielding state of the eyes according to the position information of the second face feature points and the center position information of the eye shield, where the shielding state is either a correct shielding state or an incorrect shielding state. If the shielding state is an incorrect shielding state, a re-shielding prompting operation is performed.
In this embodiment, the second face feature points may include: eye feature point 501, nose tip feature point 502, and mouth corner feature points 503. In fig. 5, reference numeral 505 denotes the eye shield.
By monitoring the shielding state of the eyes, it can be determined whether the vision detection user's eye is correctly shielded during vision detection. If the shielding is incorrect, vision detection can be suspended and the user reminded to shield again; detection resumes once the shielding is correct. In some embodiments, if the user's eye is incorrectly shielded during vision detection, the data obtained while the shielding was incorrect may also be flagged and then excluded when the vision detection result is determined.
The calculation process for determining the shielding state of the eyes from the position information of the second face feature points and the center position information of the eye shield is as follows. First, a coordinate axis is established with a preset point at the lower left corner of the second face image as the coordinate origin, and the position information of each second face feature point is obtained. For example, the coordinates of the eye feature point 501 are (x6, y6), the coordinates of the nose tip feature point 502 are (x7, y7), the coordinates of the mouth corner feature points 503 are (x8, y8) and (x9, y9), and the coordinates of the center point 5051 of the eye shield are (x10, y10).
After the position information of each second face feature point is obtained, the mouth corner feature points are connected into a straight line, and the perpendicular to this line is drawn through the nose tip feature point and extended. With the perpendicular as the axis of symmetry, the symmetric point 504 of the eye feature point 501 about this axis is computed; the coordinates of the symmetric point 504 are (x11, y11). The Euclidean distance between the symmetric point 504 and the center point 5051 of the eye shield is then calculated: if this distance is smaller than a preset threshold, the shielding is judged to be correct; otherwise, it is judged to be incorrect.
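The symmetric-point occlusion check can be sketched as follows. A vertical symmetry axis x = axis_x is assumed for simplicity, and the threshold value and all names are illustrative assumptions.

```python
import math

def occlusion_state(eye_point, axis_x, shield_center, threshold):
    """Reflect the visible eye point across the axis x = axis_x, then compare
    the Euclidean distance from the reflected point (i.e. the covered eye's
    expected position) to the eye-shield centre against the threshold."""
    symmetric = (2 * axis_x - eye_point[0], eye_point[1])
    distance = math.dist(symmetric, shield_center)
    return "correct" if distance < threshold else "incorrect"
```

Intuitively, if the shield's centre sits close to where the covered eye should be (the mirror image of the visible eye), the eye is considered correctly shielded.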
In this embodiment, whether the user's eye is correctly shielded is monitored throughout vision detection. If the shielding is incorrect, vision detection can be suspended and the user reminded to shield again; detection restarts once the shielding is correct, so that the user's vision detection result can be determined more accurately.
In summary, the present application provides a vision detection method. When a vision detection instruction is detected, target identity information of the vision detection user is first obtained according to a preset mode. Then, the historical vision detection value corresponding to the target identity information is obtained, the grade of the optotype to be displayed first is determined according to the historical vision detection value, and the optotype corresponding to that grade is displayed. Finally, optotype feedback information is obtained, and a target vision detection value is determined based on the optotype feedback information and the optotype information, where the optotype feedback information represents the recognition result fed back by the user based on the opening orientation of the optotype, and the optotype information includes the opening orientation information and the vision value information corresponding to the optotype. That is, by automatically displaying optotypes, the present application enables users to carry out vision detection independently, which is simple, convenient, and efficient. In addition, because the grade of the first displayed optotype is determined from the user's historical vision detection value, detection can start near the optotype corresponding to the user's actual vision value, shortening the detection time.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two
Fig. 6 shows an example of a vision inspection apparatus, and only a part related to an embodiment of the present application is shown for convenience of explanation. The apparatus 600 comprises:
the vision detection instruction detection module 601 is configured to, when a vision detection instruction is detected, obtain target identity information of a vision detection user according to a preset mode;
and the visual target display module 602 is configured to obtain a historical vision detection value corresponding to the target identity information, determine a grade of a visual target to be displayed according to the historical vision detection value, and display the visual target corresponding to the grade.
And a target vision detection value determining module 603, configured to obtain visual target feedback information, and determine a target vision detection value based on the visual target feedback information and the visual target information, where the visual target feedback information is used to indicate the recognition result fed back by the vision detection user based on the opening orientation of the visual target, and the visual target information includes the opening orientation information corresponding to the visual target and the vision value information corresponding to the visual target.
Optionally, the apparatus 600 further comprises:
and the test distance identification module is used for identifying the test distance of the vision detection user and determining a vision correction value according to the test distance, wherein the test distance refers to the distance between the vision detection user and the sighting target during vision detection.
Accordingly, the target vision detection value determination module 603 includes:
and a preliminary vision test value determination unit for determining a preliminary vision test value based on the visual target feedback information and the visual target information.
And the adding unit is used for adding the preliminary vision detection value and the vision correction value to obtain a target vision detection value.
Optionally, the apparatus 600 further comprises:
the first face image acquisition module is used for acquiring a first face image acquired by the camera, and the first face image is a face image for vision detection of a vision detection user.
And the position information obtaining module is used for extracting the first face characteristic points on the first face image to obtain the position information of each first face characteristic point.
And the eye type determining module is used for determining the target type of the test eye of the vision detection user according to the position information of the first face characteristic point, wherein the target type is one of the left side or the right side.
Accordingly, the target vision detection value determination module 603 comprises means for performing:
and determining a target vision detection value of a target eye of the vision detection user based on the visual target feedback information and the visual target information, wherein the target eye is the eye with the type of the target type.
Optionally, the apparatus 600 further comprises:
and the second face image acquisition module is used for acquiring a second face image acquired by the camera, the second face image is a face image for vision detection of a vision detection user by holding the eye shielding device, and the eye shielding device is used for shielding eyes of the vision detection user during vision detection.
And the eye mask recognition module is used for recognizing the eye mask on the second face image and determining the center position information of the eye mask.
And the position information obtaining module is used for extracting the second face characteristic points on the second face image to obtain the position information of each second face characteristic point.
The occlusion state determining module is used for determining the occlusion state of the test eyes of the vision detection user according to the position information of the second face characteristic point and the central position information of the eye shielding device, wherein the occlusion state comprises a correct occlusion state and an incorrect occlusion state;
and the re-shielding prompt operation prompt module is used for executing re-shielding prompt operation if the shielding state is an incorrect shielding state.
Optionally, the apparatus 600 further comprises:
and the matching module is used for matching the target identity information with the identity information in the preset identity information database.
And the query module is used for acquiring the historical vision detection value corresponding to the target identity information if the identity information matched with the target identity information exists in the preset identity information database.
Optionally, the apparatus 600 further comprises:
and the vision detection value acquisition module is used for acquiring the latest vision detection value corresponding to the target identity information.
And the calculating module is used for calculating the deviation value of the latest vision detection value corresponding to the target vision detection value and the target identity information.
And the prompt operation module is used for executing the re-detection prompt operation if the deviation value is greater than or equal to the preset deviation threshold value.
Optionally, the preset manner includes:
the method comprises the steps of identifying at least one of a face image of a vision detection user, collecting fingerprint information of the vision detection user and collecting identity card information of the vision detection user.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the method embodiment of the present application, and specific reference may be made to a part of the method embodiment, which is not described herein again.
EXAMPLE III
Fig. 7 is a schematic diagram of a terminal device provided in the third embodiment of the present application. As shown in fig. 7, the terminal device 700 of this embodiment includes: a processor 701, a memory 702, and a computer program 703 stored in the memory 702 and executable on the processor 701. The steps in the various method embodiments described above are implemented when the processor 701 executes the computer program 703 described above. Alternatively, the processor 701 implements the functions of the modules/units in the device embodiments when executing the computer program 703.
Illustratively, the computer program 703 may be divided into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used for describing the execution process of the computer program 703 in the terminal device 700. For example, the computer program 703 may be divided into a vision test instruction detection module, a visual target display module, and a target vision test value determination module, and the functions of the modules are as follows:
when a vision detection instruction is detected, acquiring target identity information of a vision detection user according to a preset mode;
obtaining historical vision detection values corresponding to the target identity information, determining the grade of a visual target to be displayed according to the historical vision detection values, and displaying the visual target corresponding to the grade;
the method comprises the steps of obtaining visual target feedback information, and determining a target vision detection value based on the visual target feedback information and the visual target information, wherein the visual target feedback information is used for representing a recognition result fed back by a vision detection user based on the opening direction of the visual target, and the visual target information comprises opening direction information corresponding to the visual target and vision value information corresponding to the visual target.
The terminal device may include, but is not limited to, a processor 701 and a memory 702. Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal device 700 and does not constitute a limitation of terminal device 700 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 701 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 702 may be an internal storage unit of the terminal device 700, such as a hard disk or a memory of the terminal device 700. The memory 702 may also be an external storage device of the terminal device 700, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the terminal device 700. Further, the memory 702 may include both an internal storage unit and an external storage device of the terminal device 700. The memory 702 is used to store the computer program and other programs and data required by the terminal device. The memory 702 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative; the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a processor, so as to implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-mentioned computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method of vision testing, comprising:
when a vision detection instruction is detected, acquiring target identity information of a vision detection user according to a preset mode;
obtaining historical vision detection values corresponding to the target identity information, determining the grade of a visual target to be displayed according to the historical vision detection values, and displaying the visual target corresponding to the grade;
the method comprises the steps of obtaining visual target feedback information, and determining a target vision detection value based on the visual target feedback information and the visual target information, wherein the visual target feedback information is used for representing a recognition result fed back by a vision detection user based on the opening direction of the visual target, and the visual target information comprises opening direction information corresponding to the visual target and vision value information corresponding to the visual target.
2. The vision testing method of claim 1, further comprising:
recognizing a test distance of the vision detection user, and determining a vision correction value according to the test distance, wherein the test distance refers to the distance between the vision detection user and the visual target during vision detection;
wherein the determining a target vision detection value based on the visual target feedback information and the visual target information comprises:
determining a preliminary vision detection value based on the visual target feedback information and the visual target information; and
adding the preliminary vision detection value and the vision correction value to obtain the target vision detection value.
3. The vision testing method of claim 1, further comprising:
acquiring a first face image captured by a camera, wherein the first face image is a face image of the vision detection user during vision detection;
extracting first face feature points from the first face image to obtain position information of each first face feature point; and
determining a target type of a test eye of the vision detection user according to the position information of the first face feature points, wherein the target type is either the left side or the right side;
wherein the determining a target vision detection value based on the visual target feedback information and the visual target information comprises:
determining a target vision detection value of a target eye of the vision detection user based on the visual target feedback information and the visual target information, wherein the target eye is the eye whose type is the target type.
4. The vision testing method of claim 1, further comprising:
acquiring a second face image captured by a camera, wherein the second face image is a face image of a vision detection user holding an eye occluder during vision detection, and the eye occluder is used for covering an eye of the vision detection user during vision detection;
recognizing the eye occluder in the second face image, and determining center position information of the eye occluder;
extracting second face feature points from the second face image to obtain position information of the second face feature points;
determining an occlusion state of the test eye of the vision detection user according to the position information of the second face feature points and the center position information of the eye occluder, wherein the occlusion state is either a correct occlusion state or an incorrect occlusion state; and
if the occlusion state is the incorrect occlusion state, performing a re-occlusion prompt operation.
5. The vision testing method of claim 1, wherein the obtaining a historical vision detection value corresponding to the target identity information comprises:
matching the target identity information against identity information in a preset identity information database; and
if identity information matching the target identity information exists in the preset identity information database, acquiring the historical vision detection value corresponding to the target identity information.
6. The vision testing method of claim 1, further comprising, after the determining a target vision detection value based on the visual target feedback information and the visual target information:
acquiring the most recent vision detection value corresponding to the target identity information;
calculating a deviation value between the target vision detection value and the most recent vision detection value; and
if the deviation value is greater than or equal to a preset deviation threshold, performing a re-detection prompt operation.
7. The vision testing method of claim 1, wherein the preset manner comprises at least one of:
recognizing a face image of the vision detection user, collecting fingerprint information of the vision detection user, or collecting identity card information of the vision detection user.
8. A vision testing device, comprising:
a vision detection instruction detection module, configured to acquire target identity information of a vision detection user in a preset manner when a vision detection instruction is detected;
a visual target display module, configured to obtain a historical vision detection value corresponding to the target identity information, determine the level of a visual target to be displayed according to the historical vision detection value, and display the visual target corresponding to the level; and
a target vision detection value determining module, configured to acquire visual target feedback information and determine a target vision detection value based on the visual target feedback information and visual target information, wherein the visual target feedback information represents a recognition result fed back by the vision detection user based on the opening direction of the visual target, and the visual target information comprises opening direction information corresponding to the visual target and vision value information corresponding to the visual target.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
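The numeric flow of claims 1, 2, and 5 (look up a historical value by identity, start the test at a level derived from it, then apply a distance-based correction to obtain the target value) can be sketched as follows. This is a minimal, hypothetical Python sketch: the function names, the 5-grade decimal chart values, the 4.0 default starting level, and the logarithmic correction rule with a 5 m standard distance are illustrative assumptions, not the patented implementation.

```python
import math

def lookup_history(identity_db, target_id):
    """Claim 5: return the historical vision value for a matched identity, else None."""
    return identity_db.get(target_id)

def initial_level(history_value, default_level=4.0):
    """Claim 1: choose the visual-target level to display first.

    Starting near the user's last result avoids walking the whole chart."""
    return history_value if history_value is not None else default_level

def distance_correction(test_distance_m, standard_distance_m=5.0):
    """Claim 2: derive a vision correction value from the measured test distance.

    Assumed rule: logarithmic correction on a 5-grade chart, zero at the
    standard distance."""
    return round(math.log10(test_distance_m / standard_distance_m), 1)

def target_vision_value(preliminary_value, correction):
    """Claim 2: target value = preliminary value + correction."""
    return round(preliminary_value + correction, 1)

# Example: a user whose last result was 4.8 stands 2.5 m from the screen.
db = {"user-001": 4.8}
level = initial_level(lookup_history(db, "user-001"))          # start at 4.8
corrected = target_vision_value(4.9, distance_correction(2.5)) # 4.9 - 0.3 = 4.6
```

Under these assumptions, halving the test distance subtracts 0.3 from the preliminary value, and testing at the standard distance leaves it unchanged, which is why claim 2 can express the target value as a simple sum of the preliminary value and the correction.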
CN202010585427.8A 2020-06-24 2020-06-24 Vision detection method, detection device, terminal equipment and readable storage medium Pending CN111803022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010585427.8A CN111803022A (en) 2020-06-24 2020-06-24 Vision detection method, detection device, terminal equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111803022A true CN111803022A (en) 2020-10-23

Family

ID=72844865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010585427.8A Pending CN111803022A (en) 2020-06-24 2020-06-24 Vision detection method, detection device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111803022A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112842249A (en) * 2021-03-09 2021-05-28 京东方科技集团股份有限公司 Vision detection method, device, equipment and storage medium
CN112914494A (en) * 2020-11-27 2021-06-08 成都怡康科技有限公司 Vision test method based on visual target self-adaptive adjustment and wearable device
CN112957002A (en) * 2021-02-01 2021-06-15 江苏盖睿健康科技有限公司 Self-help eyesight detection method and device and computer readable storage medium
CN113397471A (en) * 2021-06-30 2021-09-17 重庆电子工程职业学院 Vision data acquisition system based on Internet of things
CN113456022A (en) * 2021-07-01 2021-10-01 河北北方学院 Vision monitoring file management system and method based on big data fitting
CN113951810A (en) * 2021-10-31 2022-01-21 广西秒看科技有限公司 Calibration method and vision detection system for terminal screen display sighting target
CN114668365A (en) * 2022-04-25 2022-06-28 深圳市迪佳医疗智能科技有限公司 Vision detection method
CN115054198A (en) * 2022-06-10 2022-09-16 广州视域光学科技股份有限公司 Remote intelligent vision detection method, system and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105832283A (en) * 2016-05-20 2016-08-10 上海市浦东新区眼病牙病防治所 Intelligent vision testing system and an intelligent vision testing method
CN106060142A (en) * 2016-06-17 2016-10-26 杨斌 Mobile phone capable of checking eyesight, and method for checking eyesight by using mobile phone
CN109363620A (en) * 2018-10-22 2019-02-22 深圳和而泰数据资源与云技术有限公司 A kind of vision testing method, device, electronic equipment and computer storage media
CN109700423A (en) * 2018-12-29 2019-05-03 杭州瞳创医疗科技有限公司 A kind of the Intelligent eyesight detection method and device of automatic perceived distance
CN110222608A (en) * 2019-05-24 2019-09-10 山东海博科技信息系统股份有限公司 A kind of self-service examination machine eyesight detection intelligent processing method
CN111191616A (en) * 2020-01-02 2020-05-22 广州织点智能科技有限公司 Face shielding detection method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN111803022A (en) Vision detection method, detection device, terminal equipment and readable storage medium
JP6762380B2 (en) Identification method and equipment
CN111915667A (en) Sight line identification method, sight line identification device, terminal equipment and readable storage medium
EP4137038A1 (en) Reliability of gaze tracking data for left and right eye
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
CN110826372B (en) Face feature point detection method and device
CN111462381A (en) Access control method based on face temperature identification, electronic device and storage medium
CN112712053A (en) Sitting posture information generation method and device, terminal equipment and storage medium
CN113760123A (en) Screen touch optimization method and device, terminal device and storage medium
CN111803023A (en) Vision value correction method, correction device, terminal equipment and storage medium
CN110140166A (en) For providing the system for exempting to manually enter to computer
CN114360043B (en) Model parameter calibration method, sight tracking method, device, medium and equipment
CN114092985A (en) Terminal control method, device, terminal and storage medium
JP2018101212A (en) On-vehicle device and method for calculating degree of face directed to front side
CN111723754B (en) Left-right eye identification method, identification device, terminal equipment and storage medium
CN112051920B (en) Sight line falling point determining method and device
CN108596127B (en) Fingerprint identification method, identity verification method and device and identity verification machine
US20180374208A1 (en) Interface operating method and related mobile device
KR20200109977A (en) Smartphone-based identity verification method using fingerprints and facial images
CN114305316A (en) Vision detection method and system, vision detection all-in-one machine and storage medium
CN112633143A (en) Image processing system, method, head-mounted device, processing device, and storage medium
CN113486891A (en) Screw image processing method and device, electronic equipment and storage medium
CN113820018A (en) Temperature measurement method, device, system and medium based on infrared imaging
CN117576023A (en) Spliced image verification method and device and X-ray photographing system
US20220187910A1 (en) Information processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination