CN110123257A - A kind of vision testing method, device, sight tester and computer storage medium - Google Patents


Info

Publication number
CN110123257A
CN110123257A (Application CN201910250573.2A)
Authority
CN
China
Prior art keywords
user
vision
judgment result
grade
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910250573.2A
Other languages
Chinese (zh)
Inventor
马啸
王宏
汪显方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd filed Critical Shenzhen Heertai Home Furnishing Online Network Technology Co Ltd
Priority to CN201910250573.2A priority Critical patent/CN110123257A/en
Publication of CN110123257A publication Critical patent/CN110123257A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/028: Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B 3/032: Devices for presenting test symbols or characters, e.g. test chart projectors

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Embodiments of the present invention relate to the technical field of vision testing and disclose a vision testing method, a vision testing apparatus, a vision tester and a computer storage medium. The method comprises: acquiring an environment image in front of the vision tester; judging whether the environment image contains a user; if so, determining the distance between the user and the vision tester; and, in combination with that distance, controlling the vision tester to display optotypes and determining the user's vision grade based on the user's judgment results. In this way, the embodiments of the present invention start vision testing automatically and carry it out automatically without manual participation, saving labor cost.

Description

Vision detection method and device, vision detector and computer storage medium
Technical Field
The embodiment of the invention relates to the technical field of vision detection, in particular to a vision detection method, a vision detection device, a vision detector and a computer storage medium.
Background
The traditional vision test proceeds as follows: a doctor points to an optotype, the testee answers verbally or indicates with a gesture the direction of the optotype the doctor pointed to, and the doctor records the testee's result and chooses the next optotype accordingly. Although this traditional method is simple, a doctor must accompany the whole test; in scenarios where many people are tested together, this consumes a large amount of human resources and time.
In the process of implementing the invention, the inventors found that no technical scheme for automatic vision testing currently exists.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a vision testing method, apparatus, vision testing apparatus and computer storage medium, which overcome or at least partially solve the above problems.
According to an aspect of the embodiments of the present invention, there is provided a vision testing method applied to a vision tester, the method including: acquiring an environment image in front of the vision detector; judging whether the environment image contains a user or not; if yes, determining the distance between the user and the vision detector; and controlling the vision detector to display the sighting marks by combining the distance, and determining the vision grade of the user based on the judgment result of the user.
In an alternative form, prior to the step of determining the distance between the user and the vision tester, the method further comprises: extracting facial features of the user from the environment image; according to the facial features, performing identity verification on the user; upon passing the authentication, performing the step of determining the distance between the user and the vision tester; and outputting a cheating alarm prompt when the identity authentication is not passed.
In an alternative mode, the authenticating the user according to the facial features includes: judging whether the facial features are matched with the facial features of the currently specified person to be tested, if so, determining that the user passes the identity authentication, if not, determining that the user does not pass the identity authentication, or judging whether the facial features are matched with the facial features of the user in a preset person to be tested library, if so, determining that the user passes the identity authentication, and if not, determining that the user does not pass the identity authentication.
In an alternative manner, the extracting of facial features of the user from the environment image further includes: identifying whether the user's face in the environment image is a frontal face; if it is not a frontal face, correcting the user's face to a frontal face according to a preset frontal-face correction algorithm and extracting the user's facial features from the corrected environment image; and if it is a frontal face, extracting the user's facial features directly from the environment image.
In an alternative mode, the controlling of the vision tester to display the optotype in combination with the distance includes: acquiring the vision test mode currently selected on the vision tester and the standard distance corresponding to that mode; judging whether the distance is the same as the standard distance; if they are the same, extracting an optotype from a preset standard visual acuity chart and controlling the vision tester to display it; if not, either outputting prompt information prompting the user to adjust position until the distance between the user and the vision tester equals the standard distance, or calculating the ratio of the distance to the standard distance, scaling the optotype extracted from the preset standard visual acuity chart according to that ratio, and controlling the vision tester to display the scaled optotype.
In an optional manner, before the obtaining of the determination result of the user for the optotype, the method further includes: identifying whether the user has a cheating action; if yes, outputting a cheating alarm prompt, re-extracting the sighting target, and returning to the step of controlling the vision detector to display the sighting target; and if not, executing the step of obtaining the judgment result of the user aiming at the sighting mark.
In an alternative approach, the cheating action comprises: the user's upper body leaning forward; the user wearing glasses while the current vision test mode is the naked-eye mode; the user failing to cover one eye; and/or the eye covered by the user being the same as the eye currently under test.
In an alternative mode, the controlling of the vision tester to display an optotype, obtaining the user's judgment result for the optotype, and determining the user's vision grade based on that judgment result includes: controlling the vision tester to display an optotype; acquiring the user's judgment result for the current optotype; judging whether that judgment result is consistent with the user's judgment results for all previous optotypes; if consistent, adjusting the preset step length, and, if the adjusted preset step length is greater than or equal to a preset step-length threshold, updating the optotype according to the user's judgment result for the current optotype, the vision grade of the current optotype and the preset step length; if not consistent, keeping the preset step length unchanged and performing the same optotype-updating step; judging whether the vision grade of the updated optotype exceeds a preset vision-grade range; and, if it exceeds the range, taking the vision grade of the optotype for which the user's last judgment was correct as the user's estimated grade.
In an optional manner, when the adjusted preset step length is smaller than the preset step length threshold, the vision grade to which the visual target with the correct last judgment result of the user belongs is used as the estimated grade of the user.
In an optional manner, the method further comprises: recording a judgment result of the user on at least one visual target in the visual target list corresponding to the estimated grade; when the judgment result meets the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the higher vision grade of the user until the judgment result does not meet the vision grade test condition; taking the vision grade corresponding to the judgment result which finally meets the vision grade test condition as the vision grade of the user; when the judgment result does not meet the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the lower vision grade of the user until the judgment result meets the vision grade test condition; and taking the vision grade corresponding to the judgment result which finally meets the vision grade test condition as the vision grade of the user.
According to another aspect of the embodiments of the present invention, there is provided a vision testing apparatus including: an acquisition module for acquiring an environment image in front of the vision tester; a judging module for judging whether the environment image contains a user; a first determining module for determining the distance between the user and the vision tester if it does; a control module for controlling the vision tester, in combination with the distance, to display optotypes and for acquiring the user's judgment results; and a second determining module for determining the user's vision grade based on the user's judgment results.
According to another aspect of embodiments of the present invention, there is provided a vision tester including: a camera, a display, a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus; the camera is used for acquiring an environment image in front of the vision tester; the display is used for displaying optotypes; and the memory is used for storing at least one executable instruction, the executable instruction causing the processor to execute the steps of the vision testing method described above.
According to a further aspect of the embodiments of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing the processor to execute the steps of the vision detecting method described above.
In the embodiment of the invention, when the environment image in front of the vision tester is determined to contain a user, the distance between the user and the vision tester is determined and, in combination with that distance, the vision tester is controlled to display optotypes and test the user's vision. Vision testing thus starts automatically, and the whole process is self-service: no auxiliary personnel are needed, which saves labor cost.
The foregoing is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be more clearly understood and implemented according to this description, and that the above and other objects, features and advantages may become more readily apparent, a detailed description of the invention is provided below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram illustrating an environment in which the method of vision testing of the present invention operates;
FIG. 2 shows a schematic view of a vision tester of the present invention;
FIG. 3 shows a flow chart of a method embodiment of vision testing of the present invention;
FIG. 4 is a flow chart illustrating control of the displayed optotype in an embodiment of a method of vision testing of the present invention;
FIG. 5 is a schematic view showing the angle of the line of motion with the axis in an embodiment of the method of vision testing of the present invention;
FIG. 6 is a schematic diagram illustrating the shape of a gesture in an embodiment of the method of vision detection of the present invention;
FIG. 7 is a flow chart illustrating the determination of the vision level based on the pre-estimated level after the pre-estimated level is determined in an embodiment of the vision testing method of the present invention;
FIG. 8 is a flow chart illustrating another embodiment of a method of vision testing of the present invention;
FIG. 9 is a flow chart illustrating a method for verifying whether a face of a user in an environmental image is a frontal face according to another embodiment of the vision detection method of the present invention;
FIG. 10 is a flow chart of yet another embodiment of the method of vision testing of the present invention;
FIG. 11 shows a schematic view of an embodiment of the vision testing apparatus of the present invention;
fig. 12 shows a schematic structural view of an embodiment of the vision tester of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Referring to fig. 1, which is a schematic diagram of an operating environment of the vision testing method of the present invention, the operating environment 10 includes a vision tester 11, a cloud server 12 and an intelligent terminal 13;
the vision tester 11 includes a communication module 111, a camera 112, a display 113 and a controller 114, and the communication module 111, the camera 112 and the display 113 are all connected to the controller 114.
The display 113 provides a user interactive software interface to display optotypes, detection results, and the like during the vision examination. Optionally, the display 113 is a touch display device.
The camera 112 is used for capturing an image of the environment in front of the display 113. When the user needs to perform vision detection, the user needs to stand in front of the display 113 and face the display 113, and the environment image captured by the camera 112 includes the user. Optionally, the camera 112 is a camera with USB, a mesh camera, or a depth camera, etc.
The controller 114 serves as the control center. When the controller 114 determines from the environment image acquired by the camera 112 that a user is present, it starts vision testing, acquires the user's judgment results, and then derives the user's vision grade from those results. After obtaining the vision grade, the controller 114 may control the display to show it, informing the user of the test result. Of course, in other embodiments the vision tester 11 may further include a speaker connected to the communication bus, and the controller 114 controls the speaker to announce the user's vision grade. After obtaining the user's vision grade, the controller 114 may also bind the grade to the user's identity and upload it through the communication module 111 to the cloud server 12 for storage. A video of the user's vision test process may likewise be collected and uploaded together with the test result and stored by the cloud server 12, so that the user's vision test can be traced and reviewed later. When a user needs to query his or her vision grade, the user can access the cloud server 12 through the intelligent terminal 13 and retrieve the grade from it. The communication module 111 may be a wireless communicator, for example a 5G, WIFI or Zigbee wireless communicator; a wireless communicator avoids the trouble of running cables.
Further, a data analysis program may be deployed in the cloud server 12 to analyze the vision data of a particular user or group. For example, if the program, combining historical data, finds that the average vision grade of a certain class has declined, it notifies the person responsible for that class; or, if it finds that a user's vision grade has declined, it sends a reminder to the intelligent terminal 13 corresponding to that user, prompting the user to pay attention to his or her eyesight.
It can be understood that: in other embodiments, the controller 114 may also store the user's vision level and/or the user's progress video during the vision test in a local memory, and the user may read the local memory to obtain his or her own vision level and progress video.
Referring to fig. 3, fig. 3 is a flowchart of a vision testing method according to an embodiment of the present invention, the method is applied to the vision testing apparatus 11 in the operating environment 10, and specifically, the method includes:
step 201: acquiring an environment image in front of the vision detector;
the environmental image in front of the vision tester is an image in front of the optotype position shown on the display of the vision tester. In general, a camera of the vision tester is fixed in advance, and a viewing lens of the camera is directly opposite to the front of a visual target displayed by a display, so that when a user stands in front of the visual target, the user can enter a viewing range of the camera of the vision tester.
Step 202: judging whether the environment image contains a user, if so, executing step 203, otherwise, returning to execute step 201;
the recognition algorithm for recognizing the user from the environment image can be implemented by the prior art, and is not limited herein.
Whether the vision detection is started or not is determined by judging whether the environment image contains the user or not, so that the automatic starting of the vision detection can be realized, auxiliary personnel are not needed to participate in the operation, and the labor cost is saved.
Of course, in some embodiments, when it is determined that the environment image does not include the user and the duration is greater than the preset time, the elements of the vision tester except the controller and the camera may be controlled to enter a standby state or an off state to save the power consumed by the vision tester, and after it is determined that the environment image includes the user, the elements are activated to perform subsequent vision tests.
Step 203: determining a distance between the user and the vision tester;
the distance between the user and the vision tester is the distance between the user and the position of the optotype displayed by the vision tester. The distance between the user and the vision detector can be obtained by detecting through a distance sensor arranged on the vision detector, or the coordinate of the camera and the coordinate of the position where the sighting mark is located are obtained in advance, when the camera shoots an obtained environment image, the coordinate of the user is calculated according to the position of the user in the environment image, the focal length of the camera and the coordinate of the camera, and then the distance between the user and the vision detector is calculated according to the coordinate of the user and the coordinate of the position where the sighting mark is located.
Step 204: controlling the vision detector to display the sighting marks by combining the distance;
Because different vision test modes require different distances between the user and the vision tester, in order to ensure the accuracy of the test, whether the distance between the user and the vision tester is correct may be verified before the optotype is formally displayed. Specifically, as shown in fig. 4, step 204 may further include:
step 2041: acquiring a vision examination mode currently selected by the vision detector;
the vision examination mode comprises a naked eye vision examination mode and a correction vision examination mode, wherein the naked eye vision examination mode refers to vision examination performed by a user on the premise that the user does not wear any vision correction tool, and the correction vision examination mode refers to vision examination performed by the user on the premise that the user wears the vision correction tool.
Step 2042: acquiring a standard distance corresponding to the vision examination mode;
step 2043: judging whether the distance is the same as the standard distance, if so, executing a step 2044, otherwise, executing a step 2045;
step 2044: extracting optotypes from a preset standard visual chart, and controlling the vision detector to display the extracted optotypes;
step 2045: outputting prompt information for prompting the user to adjust the position until the distance between the user and the vision tester is equal to the standard distance;
of course, in order to remind the user of the moving direction more clearly, the magnitude relationship between the distance between the user and the vision detector and the standard distance may also be determined, and according to the magnitude relationship, the moving direction is carried in the output prompt information, for example: when the distance between the user and the vision detector is 3 meters greater than the standard distance, the prompt message of 'moving forward by 3 meters' is output, and when the distance between the user and the vision detector is 1 meter less than the standard distance, the prompt message of 'moving backward by 1 meter' is output.
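The prompt-with-direction logic just described can be sketched as follows; the message strings and the tolerance are illustrative assumptions rather than values from the patent:

```python
def movement_prompt(distance_m: float, standard_m: float,
                    tolerance_m: float = 0.05) -> str:
    """Compare the measured distance with the standard distance and
    build a prompt message that carries the required moving direction."""
    diff = distance_m - standard_m
    if abs(diff) <= tolerance_m:
        return "position OK"
    if diff > 0:
        return f"move forward {diff:.1f} m"   # user is too far away
    return f"move backward {-diff:.1f} m"     # user is too close
```

For example, `movement_prompt(8.0, 5.0)` reproduces the "move forward 3 meters" case in the text.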
In some embodiments, the corresponding detection area may be marked in front of the vision tester in advance according to the standard distance corresponding to each vision testing mode, after determining the vision testing mode of the vision tester, it is identified whether the user is located in the corresponding detection area, and when the user is not located in the corresponding detection area, the user is prompted to enter the corresponding detection area, for example: a circular detection area 1 and a circular detection area 2 are arranged in front of the vision detector, the detection area 1 corresponds to a naked eye vision detection mode, the detection area 2 corresponds to a correction vision detection mode, and if the selected vision detection mode is the naked eye vision detection mode, but the user is not in the detection area 1, the user is reminded to enter the detection area 1.
It can be understood that: in other embodiments, when the distance between the user and the vision tester is not equal to the corresponding standard distance, the user may not be prompted to move, but a ratio of the distance to the standard distance is calculated, the optotypes extracted from the preset standard visual acuity chart are scaled according to the ratio, and the vision tester is controlled to display the scaled optotypes, so as to ensure the accuracy of vision testing.
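The scaling alternative rests on the fact that, for small angles, the visual angle an optotype subtends is proportional to its size divided by the viewing distance, so keeping that ratio constant preserves the test's difficulty. A one-line sketch:

```python
def scaled_optotype_size(standard_size_mm: float,
                         actual_distance_m: float,
                         standard_distance_m: float) -> float:
    """Scale the standard-chart optotype so it subtends the same
    visual angle at the user's actual distance as at the standard one."""
    if standard_distance_m <= 0 or actual_distance_m <= 0:
        raise ValueError("distances must be positive")
    return standard_size_mm * (actual_distance_m / standard_distance_m)
```

For instance, at half the standard distance the optotype is drawn at half its standard size.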
Step 205: acquiring a judgment result of the user;
The user's judgment result refers to the indication the user gives for the displayed optotype. In some embodiments the user can give the judgment result through a body movement; the result can then be identified by acquiring an image of the user and determining the user's body movement from that image. Several ways of determining the user's judgment result from body movement are given below:
(1) The motion trail of the user's hand is recognized from the user image, the trail is fitted into a motion straight line, and the included angles between that line and preset leftward, rightward, upward and downward axes are determined. The axis whose included angle is smaller than a first preset value is selected, and the direction corresponding to that axis is taken as the pointing direction of the hand. As shown in FIG. 5, the leftward, rightward, upward and downward axes indicate left, right, up and down respectively; the included angles of the motion straight line with the rightward, upward, leftward and downward axes are a1, a2, a3 and a4 respectively, where a1 is less than 45 degrees while a2, a3 and a4 are all greater than 45 degrees, so the rightward axis is selected.
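The angle test can be sketched as below, assuming image coordinates with y increasing downward (so the upward axis is (0, -1)); the 45-degree threshold matches the figure:

```python
import math

# Axis direction vectors in image coordinates (y grows downward).
AXES = {"right": (1, 0), "left": (-1, 0), "up": (0, -1), "down": (0, 1)}

def pointing_direction(dx: float, dy: float,
                       max_angle_deg: float = 45.0):
    """Return the axis whose included angle with the fitted motion
    vector (dx, dy) is smallest and below the threshold, else None."""
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None  # no motion, no direction
    best, best_angle = None, max_angle_deg
    for name, (ax, ay) in AXES.items():
        cos_a = (dx * ax + dy * ay) / norm  # axis vectors are unit length
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```

A motion vector such as (1.0, 0.1) makes an angle of roughly 6 degrees with the rightward axis and more than 45 degrees with the other three, so "right" is selected, mirroring the a1 case in FIG. 5.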
(2) A gesture shape of the user is recognized from the user image, and the direction indicated by the gesture shape is taken as the pointing direction of the user's indication action. As shown in fig. 6, when the thumb points left, the user's pointing direction is left; when it points right, the direction is right; when it points up, the direction is up; and when it points down, the direction is down.
It can be understood that: in other embodiments, the pointing direction of the user may also be represented by the pointing direction of other fingers of the user, for example: index finger, middle finger, etc.; or, four preset gesture shapes representing upward, downward, leftward and rightward are predefined, when the gesture shape of the user is recognized, a preset gesture shape matched with the gesture shape is found, and the direction represented by the matched preset gesture shape serves as the indication direction of the user.
(3) And constructing four virtual frames in the user image, wherein the four virtual frames are respectively in four directions and are in up-down symmetry and left-right symmetry, and one virtual frame corresponds to one direction. The virtual frame where the hand of the user is located is identified, and the direction corresponding to the virtual frame where the hand of the user is located is used as the indication direction of the hand of the user.
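A minimal sketch of the virtual-frame scheme, assuming the four symmetric frames are centred at the midpoints of the four image edges (the patent does not fix their exact placement, so this layout is an assumption):

```python
def frame_direction(x: float, y: float,
                    width: float, height: float) -> str:
    """Map the hand's position (x, y) in the user image to the nearest
    of four virtual frames, one per direction, placed symmetrically
    at the midpoints of the image edges."""
    centres = {
        "up": (width / 2, 0),
        "down": (width / 2, height),
        "left": (0, height / 2),
        "right": (width, height / 2),
    }
    # The frame whose centre is closest to the hand wins.
    return min(centres,
               key=lambda d: (x - centres[d][0]) ** 2
                           + (y - centres[d][1]) ** 2)
```

In a 640x480 image, a hand detected near the top edge maps to "up", one near the right edge to "right", and so on.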
Step 206: and determining the vision grade of the user based on the judgment result of the user.
The vision grade represents the vision value, for example: when the vision grade is 1.0, the user's vision value is 1.0, and when the vision grade is 5.0, the user's vision value is 5.0.
Further, after the user's vision grade is determined, it may be reported to the user by sound or image so that the user obtains the test result. Of course, the vision grade may also be associated directly with the user; for example, if the user is taking an organized examination, the grade can be filled in directly on the user's electronic test form. The associated data can be uploaded to the cloud server for storage and backup, or stored in the local memory of the vision tester.
In order to improve the efficiency of vision detection, the rough vision measurement may be performed on the user to obtain an estimated level, and then the fine vision measurement is performed from the estimated level to obtain a final vision level, specifically, as shown in fig. 7, the vision detecting apparatus is controlled to display a visual target, and a determination result of the user for the visual target is obtained, and based on the determination result of the user for the visual target, the determining the vision level of the user includes:
step 2061: controlling the vision detector to display a visual target;
when the historical vision data of the user is stored, the first visual target displayed by the vision detector can be selected according to the historical vision data of the user, so that the grade of the visual target is closer to the current vision grade of the user, and the detection efficiency is improved. When the historical vision data of the user is not stored, the first visual target can be randomly selected, or the vision grade of the user is estimated by combining a big data analysis technology, and then the visual target is randomly selected from the estimated vision grade.
Step 2062: acquiring a judgment result of the user on the current sighting target;
the current visual target is the visual target currently displayed by the vision detector.
Step 2063: judging whether the judgment result of the user on the current sighting target is consistent with the judgment results of the user on all previous sighting targets, if so, executing a step 2064, otherwise, executing a step 2067;
all previous optotypes are the optotypes previously displayed by the vision tester.
Step 2064: adjusting a preset step length;
step 2065: judging whether the adjusted preset step length is greater than or equal to the preset step length threshold value, if so, executing a step 2066, and if not, executing a step 2069;
it should be noted that before each vision test, the preset step length needs to be reset to a fixed value, so that the starting value of the preset step length is the same in every vision test. Of course, in other embodiments, the preset step length may instead be reset to the fixed value at the end of each vision test, so that the next test starts from the fixed value; this likewise ensures that the starting values of the preset step length are the same in every vision test.
Step 2066: updating the sighting target according to the judgment result of the user on the current sighting target, the vision grade of the current sighting target and the preset step length;
step 2067: keeping the preset step length unchanged, and performing step 2066;
step 2068: judging whether the vision grade of the updated sighting target exceeds a preset vision grade range, if so, executing a step 2069, and if not, executing a step 2061;
step 2069: taking the vision grade of the last sighting target that the user judged correctly as the estimated grade of the user.
Step 2160: recording a judgment result of the user on at least one visual target in the visual target list corresponding to the estimated grade;
step 2161: judging whether the judgment result meets the vision grade test condition;
step 2162: when the judgment result meets the vision grade test condition, acquiring the judgment result of the user on at least one visual target in the visual target list corresponding to the next higher vision grade, until the judgment result does not meet the vision grade test condition;
step 2163: when the judgment result does not meet the vision grade test condition, acquiring the judgment result of the user on at least one visual target in the visual target list corresponding to the next lower vision grade, until the judgment result meets the vision grade test condition;
step 2164: taking the vision grade corresponding to the judgment result that finally meets the vision grade test condition as the vision grade of the user;
it can be understood that, in other embodiments, when the vision of the user does not need to be detected accurately, steps 2160-2164 may be skipped and the estimated grade used directly as the vision grade of the user.
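The coarse-estimation loop of steps 2061-2069 can be sketched as follows. This is a minimal illustration, not the embodiment's exact procedure: the halving rule for adjusting the preset step length, the grade range, the starting values, and all function and parameter names are assumptions.

```python
def coarse_estimate(judge, start_grade, step=0.4, step_threshold=0.1,
                    grade_range=(0.1, 1.5), max_rounds=50):
    """Coarse vision estimation (steps 2061-2069).

    judge(grade) -> bool: True if the user judges the optotype of that
    grade correctly.  All constants here are illustrative assumptions.
    """
    grade = start_grade
    results = []                                    # (grade, correct) history
    for _ in range(max_rounds):                     # guard against oscillation
        correct = judge(grade)                      # steps 2061-2062
        results.append((grade, correct))
        # Step 2063: is this judgment consistent with all previous ones?
        if len(results) > 1 and all(c == correct for _, c in results[:-1]):
            step /= 2                               # step 2064 (assumed rule)
            if step < step_threshold:               # step 2065 -> 2069
                break
        # Steps 2066/2067: move up after a correct answer, down otherwise.
        grade = round(grade + (step if correct else -step), 2)
        if not grade_range[0] <= grade <= grade_range[1]:   # step 2068
            break
    # Step 2069: grade of the last optotype the user judged correctly.
    correct_grades = [g for g, c in results if c]
    return correct_grades[-1] if correct_grades else grade_range[0]
```

For example, a user whose true acuity is 0.8, probed from a starting grade of 0.1, converges to an estimated grade of 0.8 in four rounds under these assumptions.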
In some embodiments, in order to improve the accuracy of detecting the eyesight of the user, a time limit may also be added when determining whether the judgment result of the user matches the visual target: the user must give the judgment result within a preset time period, and if the user fails to do so, the recognition result of the user is determined to be incorrect.
In the embodiment of the invention, when it is determined that the environment image in front of the vision detector contains a user, the distance between the user and the vision detector is determined, and the vision detector is controlled to display the visual target in combination with the distance, so as to perform vision detection on the user. Vision detection is thus started automatically, and the whole process is performed by the user in a self-service manner without operation by auxiliary personnel, which saves labor cost.
Referring to fig. 8, fig. 8 is a flowchart illustrating a method for vision testing according to another embodiment of the present invention, the method includes:
step 201: acquiring an environment image in front of the vision detector;
step 202: judging whether the environment image contains a user, if so, executing step 207, otherwise, returning to execute step 201;
step 207: extracting facial features of the user from the environment image;
the facial features include features such as the shape, position, etc. of the user's facial organs.
In order to improve the accuracy of extracting the facial features of the user from the environment image, the face of the user in the environment image may first be corrected to a front face, and the facial features then extracted from the corrected environment image. Specifically, as shown in fig. 9, step 207 includes:
step 2071: identifying whether the face of the user in the environment image is a front face; if not, executing step 2072, and if so, executing step 2074;
specifically, recognizing whether the face of the user in the environment image is a front face includes: first recognizing feature information of the face parts of the user in the environment image; then calculating the face deflection angle, the face turning coefficient, and the face lifting coefficient of the face according to the feature information of the face parts; and then judging whether the face deflection angle is within a preset frontal deflection angle range, whether the face turning coefficient is within a preset frontal turning coefficient range, and whether the face lifting coefficient is within a preset frontal lifting coefficient range. If all three conditions hold, the face in the environment image is determined to be a front face; otherwise, it is determined not to be a front face.
The face deflection angle is used to represent the angle by which the user's face is tilted sideways. Calculating the face deflection angle according to the feature information of the face parts specifically includes: first constructing the central axis of the environment image, then constructing the face central axis according to the feature information of the face parts, calculating the included angle between the face central axis and the image central axis, and taking this included angle as the face deflection angle.
The face turning coefficient is used to represent the angle by which the user's face is rotated. Calculating the face turning coefficient of the face according to the feature information of the face parts specifically includes: constructing the face central axis of the face according to the feature information of the face parts; dividing the face in the environment image into a left face region and a right face region based on the face central axis; and calculating the turning coefficient from the left width of the left face region and the right width of the right face region, or from the left area of the left face region and the right area of the right face region, or from the left width and right width of the same face part in the left and right face regions, with the calculation formula:

cp = El / Er

where cp is the face turning coefficient, and El and Er are, respectively, the left width of the left face region and the right width of the right face region, or the left area and the right area of the two regions, or the left width and right width of the same kind of face part in the left face region and the right face region.
The face lifting coefficient is used to represent the angle by which the user's face is raised or lowered. Calculating the face lifting coefficient according to the feature information of the face parts specifically includes: determining a first distance between a first part and a second part, determining a second distance between the second part and a third part, and calculating the face lifting coefficient from the two distances, with the calculation formula:

cr = H1 / H2

where cr is the face lifting coefficient, H1 is the first distance, and H2 is the second distance. The first part, the second part, and the third part are all face parts located on the face; the first part is above the second part, and the second part is above the third part. For example: the first part is the eyes, the second part is the nose, and the third part is the mandible; the first distance is then the distance, along the face central axis, from the line connecting the left and right eyes to the tip of the nose, and the second distance is the distance, along the face central axis, from the tip of the nose to the lowest point of the mandible.
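For illustration, the three frontal-face measures described above can be computed as follows. The ratio forms for cp and cr, the landmark inputs, and the frontal ranges in `is_frontal` are assumptions for this sketch; the embodiment leaves the exact thresholds to the preset ranges.

```python
import math

def turning_coefficient(left_width, right_width):
    """cp = El / Er: near 1.0 when the face is frontal (assumed ratio form)."""
    return left_width / right_width

def lifting_coefficient(h1, h2):
    """cr = H1 / H2: eyes-to-nose distance over nose-to-chin distance."""
    return h1 / h2

def deflection_angle(axis_top, axis_bottom):
    """Angle (degrees) between the face central axis, given by two (x, y)
    points with axis_top above axis_bottom, and the vertical image axis."""
    dx = axis_bottom[0] - axis_top[0]
    dy = axis_bottom[1] - axis_top[1]
    return math.degrees(math.atan2(dx, dy))

def is_frontal(cp, cr, angle,
               cp_range=(0.8, 1.25), cr_range=(0.6, 1.1), max_angle=10.0):
    # All three measures must fall inside their (illustrative) frontal ranges.
    return (cp_range[0] <= cp <= cp_range[1]
            and cr_range[0] <= cr <= cr_range[1]
            and abs(angle) <= max_angle)
```

A perfectly vertical face axis with equal half-face widths and a typical eyes-nose/nose-chin ratio passes the check; a face turned so the visible half-widths differ strongly fails it.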
Step 2072: correcting the face of the user into a front face according to a preset front face correction algorithm;
in some embodiments, correcting the face of the user to a front face according to the preset front-face correction algorithm includes: locating at least three first key points of the face in the environment image; calculating the affine transformation parameters of an affine transformation matrix according to the coordinates of the at least three first key points and the coordinates of the corresponding second key points in a preset standard frontal face image; and finally performing coordinate transformation on each pixel point of the face image according to the affine transformation parameters and the affine transformation matrix, so that the face in the face image is corrected to a front face.
The first key points are pixel points on the face of the user; their specific positions are not limited. The second key points are pixel points of the preset standard frontal face image. The first key points of the face in the face image correspond one-to-one to the second key points of the face in the standard frontal face image. From the coordinates of the at least three first key points and the coordinates of the corresponding second key points in the standard frontal face image, the affine transformation parameters can be calculated with the calculation formula:

xi = a1·xi' + b1·yi' + c1
yi = a2·xi' + b2·yi' + c2,  i = 1, …, n

where (x1, y1), (x2, y2), …, (xn, yn) are the coordinates of the n second key points of the standard frontal face image, (x1', y1'), (x2', y2'), …, (xn', yn') are the coordinates of the n first key points of the face in the face image, n is equal to or greater than 3, and a1, b1, a2, b2, c1, and c2 are the affine transformation parameters.
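With n >= 3 corresponding key points, the six affine parameters can be solved by least squares. The sketch below assumes the standard affine form x = a1*x' + b1*y' + c1, y = a2*x' + b2*y' + c2 implied by the parameter names, and uses NumPy:

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Solve a1, b1, c1 and a2, b2, c2 mapping src (face-image key points)
    onto dst (standard frontal key points) by least squares, n >= 3."""
    src = np.asarray(src_pts, dtype=float)          # (n, 2): (x', y')
    dst = np.asarray(dst_pts, dtype=float)          # (n, 2): (x, y)
    A = np.hstack([src, np.ones((len(src), 1))])    # rows [x', y', 1]
    params_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # a1, b1, c1
    params_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # a2, b2, c2
    return params_x, params_y

def apply_affine(params_x, params_y, pt):
    """Transform one pixel coordinate with the solved parameters."""
    v = np.array([pt[0], pt[1], 1.0])
    return float(v @ params_x), float(v @ params_y)
```

For a pure translation between the two point sets, the solver recovers an identity rotation with the translation in c1, c2, and `apply_affine` reproduces the shifted coordinates.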
Step 2073: extracting facial features of the user from the corrected environment image;
step 2074: facial features of the user are extracted directly from the environmental image.
Step 208: according to the facial features, performing identity authentication on the user, and if the user passes the identity authentication, executing step 203, otherwise executing step 209;
performing identity verification on the user prevents impersonation, confirms the identity of the user, and avoids the situation in which a detection result does not correspond to the right user.
In some embodiments, authenticating the user comprises: judging whether the facial features match the facial features of the currently specified person to be tested; if so, determining that the user passes the identity verification, and if not, determining that the user does not pass. The currently specified person to be tested is the user currently selected by the vision tester, similar to number calling in a hospital: the vision tester specifies a person to be tested, the specified person performs the vision test in front of the vision tester, and the obtained vision test result is recorded under that person's name.
In some embodiments, authenticating the user may instead comprise: judging whether the facial features match the facial features of a user in a preset testee library; if so, determining that the user passes the identity verification, and if not, determining that the user does not pass. In this embodiment, the vision tester does not use the hospital-style number-calling method but a walk-up, test-on-arrival method; however, only users in the preset testee library are legitimate, and any other user is considered illegitimate.
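Either verification variant reduces to comparing the extracted facial feature vector with one or more stored reference vectors. A minimal sketch, assuming cosine similarity over embedding vectors and an illustrative 0.6 acceptance threshold (the embodiment does not specify the matching metric):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(feature, testee_library, threshold=0.6):
    """Return the name of the best-matching testee whose similarity
    reaches the threshold, or None if verification fails."""
    best_name, best_sim = None, threshold
    for name, ref in testee_library.items():
        sim = cosine_similarity(feature, ref)
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name
```

Verifying against a single specified person is the same call with a one-entry library; a walk-up tester passes the whole library.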
Further, in order to prevent others from cheating the identity check with a picture of a real user, it can first be verified, before identity verification, that the user in front of the vision detector is a living body; subsequent identity verification is performed only if so, and otherwise the user is determined to be cheating. The living-body detection method is not limited. For example, a multispectral camera may be configured to capture a multispectral image, from which it is determined whether the user is a living body; or the vision detector may output a motion instruction, and the user is determined to be a living body when the user performs the corresponding action. For example, the vision detector outputs the motion instruction "raise the right arm"; if it is detected that the user raises the right arm, the user is determined to be a living body, and otherwise not.
Step 209: outputting a cheating alarm prompt;
the cheating alert prompt is used to alert the user and prompt the user to leave the vision tester.
Step 203: determining a distance between the user and the vision tester;
step 204: controlling the vision detector to display the sighting marks by combining the distance;
step 205: acquiring a judgment result of the user;
step 206: and determining the vision grade of the user based on the judgment result of the user.
After the vision grade of the user is obtained, the vision grade can be directly bound to the user and saved, or the vision grade can be sent, together with the user identity obtained by the previous identity verification, to the cloud server, which performs the binding; when the user needs to query the vision grade, the cloud server can be accessed directly.
Of course, if the identity of the user is not sensitive, identity verification may be omitted. After the vision grade of the user is determined, the user enters identity information on the vision detector, and the vision detector sends the identity information and the vision grade to the cloud server for binding. Alternatively, the vision detector sends the vision grade to the cloud server and then displays a two-dimensional code containing an identifier of the vision grade; the user scans the two-dimensional code with an intelligent terminal to obtain the identifier, the identity information and identifier entered on the intelligent terminal are sent to the cloud server, and the cloud server then binds the identity information to the vision grade result.
In the embodiment of the invention, when the user is detected to be in front of the vision detector, the user is authenticated first, and the subsequent vision detection process is executed after the user passes the authentication.
Referring to fig. 10, fig. 10 is a flowchart of a vision testing method according to a third embodiment of the present invention, the method includes:
step 201: acquiring an environment image in front of the vision detector;
step 202: judging whether the environment image contains a user, if so, executing step 203, otherwise, returning to execute step 201;
step 203: determining a distance between the user and the vision tester;
step 204: controlling the vision detector to display the sighting marks by combining the distance;
step 210: identifying whether the user has a cheating action, if not, executing step 205, otherwise, executing step 211;
if the user cheats during vision detection, the detected vision result is inaccurate; therefore, during vision detection, attention needs to be paid to whether the user shows cheating behavior, for example: the upper body of the user leans forward, the user wears glasses while the current vision inspection mode is the naked-eye mode, the user does not cover an eye, and/or the eye covered by the user is the same as the eye currently under test. An image of the user can be captured by the camera on the vision detector, and the cheating behavior can be recognized from the image; the recognition algorithm may employ a neural network algorithm.
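The listed cheating behaviours lend themselves to simple rule-based checks once the image-recognition stage has produced per-frame attributes. A sketch, in which all attribute and parameter names are illustrative assumptions rather than the embodiment's API:

```python
def detect_cheating(observation, mode, eye_under_test):
    """Return a description of the detected cheating behaviour, or None.

    `observation` is a dict of attributes assumed to come from the
    image-recognition stage; `mode` is the current inspection mode and
    `eye_under_test` is "left" or "right".
    """
    if observation.get("upper_body_leaning_forward"):
        return "leaning forward"
    if mode == "naked_eye" and observation.get("wearing_glasses"):
        return "wearing glasses in naked-eye mode"
    covered = observation.get("covered_eye")      # "left", "right" or None
    if covered is None:
        return "no eye covered"
    if covered == eye_under_test:
        return "covered eye is the eye under test"
    return None                                   # no cheating detected
```

A non-None return would trigger step 211 (output the cheating alarm prompt and re-extract the optotype); None allows step 205 to record the judgment result.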
Step 211: outputting a cheating alarm prompt, re-extracting the sighting target, and returning to the step 204;
step 205: acquiring a judgment result of the user;
step 206: and determining the vision grade of the user based on the judgment result of the user.
In the embodiment of the invention, during detection of the user's eyesight, when the user shows cheating behavior, the judgment result of the user is not recorded and a cheating alarm prompt is output, so that the user can correct their own actions, which ensures the accuracy of the vision detection.
The invention further provides an embodiment of the device for detecting eyesight. As shown in fig. 11, the apparatus for eyesight test 30 includes an obtaining module 301, a judging module 302, a first determining module 303, a control module 304, and a second determining module 305.
The acquisition module 301 is configured to acquire an environment image in front of the vision tester. The judging module 302 is configured to judge whether the environment image contains a user. The first determining module 303 is configured to determine the distance between the user and the vision tester if a user is contained. The control module 304 is configured to control the vision tester to display the optotype in combination with the distance and to obtain the judgment result of the user. The second determining module 305 is configured to determine the vision grade of the user based on the judgment result of the user.
In some embodiments, the control module 304 is further specifically configured to: acquire the vision inspection mode currently selected on the vision detector and the standard distance corresponding to that mode; judge whether the distance is the same as the standard distance; if they are the same, extract optotypes from a preset standard visual acuity chart and control the vision detector to display the extracted optotypes; if not, either output prompt information prompting the user to adjust position until the distance between the user and the vision detector equals the standard distance, or calculate the ratio of the distance to the standard distance, zoom the optotypes extracted from the preset standard visual acuity chart according to the ratio, and control the vision detector to display the zoomed optotypes.
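The distance-compensation branch of the control module can be sketched as follows; the 5 m standard distance, the tolerance, and the linear size-with-distance scaling (which keeps the visual angle of the optotype constant) are assumptions for the illustration:

```python
def displayed_optotype_size(standard_size_mm, distance_m,
                            standard_distance_m=5.0, tolerance_m=0.05):
    """Return the optotype size to display.

    If the user stands at (approximately) the standard distance, the
    standard chart size is used; otherwise the optotype is zoomed by
    the ratio distance / standard distance, so its visual angle matches
    the standard chart.
    """
    if abs(distance_m - standard_distance_m) <= tolerance_m:
        return standard_size_mm                  # same distance: no zoom
    ratio = distance_m / standard_distance_m     # zoom factor
    return standard_size_mm * ratio
```

For example, a user standing at half the standard distance would be shown optotypes at half the standard size, preserving the subtended angle.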
In some embodiments, the second determining module 305 is specifically configured to: acquire the judgment result of the user on the current optotype; judge whether the judgment result of the user on the current optotype is consistent with the judgment results of the user on all previous optotypes; if so, adjust the preset step length, and if the adjusted preset step length is greater than or equal to the preset step length threshold, update the optotype according to the judgment result of the user on the current optotype, the vision grade of the current optotype, and the preset step length; if not consistent, keep the preset step length unchanged and perform the step of updating the optotype according to the judgment result of the user on the current optotype, the vision grade of the current optotype, and the preset step length; judge whether the vision grade of the updated optotype exceeds the preset vision grade range; if it exceeds the range, take the vision grade of the last optotype that the user judged correctly as the estimated grade of the user; when the adjusted preset step length is smaller than the preset step length threshold, likewise take the vision grade of the last optotype that the user judged correctly as the estimated grade of the user; record the judgment result of the user on at least one optotype in the optotype list corresponding to the estimated grade; when the judgment result meets the vision grade test condition, acquire the judgment result of the user on at least one optotype in the optotype list corresponding to the next higher vision grade, until the judgment result does not meet the vision grade test condition; when the judgment result does not meet the vision grade test condition, acquire the judgment result of the user on at least one optotype in the optotype list corresponding to the next lower vision grade, until the judgment result meets the vision grade test condition; and take the vision grade corresponding to the judgment result that finally meets the vision grade test condition as the vision grade of the user.
The apparatus for vision testing 30 may also include a first identification module 306, a correction module 307, an extraction module 308, a verification module 309, and a first output module 310.
The first recognition module 306 is configured to recognize whether the face of the user in the environment image is a front face. The correction module 307 is configured to correct the face of the user to a front face according to the preset front-face correction algorithm if it is not a front face. The extraction module 308 is configured to extract the facial features of the user from the corrected environment image when the first recognition module 306 recognizes that the face in the environment image is not a front face, and to extract the facial features of the user directly from the environment image when the first recognition module 306 recognizes that the face is a front face. The verification module 309 is configured to perform identity verification on the user according to the facial features, and to execute the first determining module 303 when the identity verification passes. The first output module 310 is configured to output a cheating alarm prompt when the identity verification does not pass.
In some embodiments, the verification module 309 is further specifically configured to judge whether the facial features match the facial features of the currently specified person to be tested, determine that the user passes the identity verification if they match, and determine that the user does not pass if they do not; or to judge whether the facial features match the facial features of a user in a preset testee library, determine that the user passes the identity verification if they match, and determine that the user does not pass the identity verification if they do not.
Further, the apparatus for eyesight test 30 further includes a second identification module 311 and a second output module 312.
The second recognition module 311 is configured to recognize whether the user shows a cheating action. The second output module 312 is configured to output a cheating alarm prompt if a cheating action exists, re-extract the optotype, and return to the control module 304; if no cheating action exists, the second determining module 305 is executed. In some embodiments, the cheating action comprises: the upper body of the user leaning forward, the user wearing glasses while the current vision inspection mode is the naked-eye mode, the user not covering an eye, and/or the eye covered by the user being the same as the eye currently under test.
In the embodiment of the present invention, when the judging module 302 determines that the environment image in front of the vision tester contains a user, the first determining module 303 determines the distance between the user and the vision tester, and the control module 304 controls the vision tester to display the optotype in combination with the distance, so as to perform vision testing on the user. Vision testing is thus started automatically, and the whole process is performed by the user without auxiliary personnel operating alongside, which saves labor cost.
An embodiment of the present invention provides a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the computer executable instruction may execute the method for detecting eyesight in any method embodiment described above.
Fig. 12 is a schematic structural diagram of an embodiment of the vision tester of the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the vision tester.
As shown in fig. 12, the vision tester may include: a processor (processor)402, a communication Interface 404, a memory 406, a communication bus 408, a camera 409, a display 410, and a communication module 411.
Wherein the processor 402, the communication interface 404, the memory 406, the camera 409, the display 410, and the communication module 411 communicate with one another through the communication bus 408. The processor 402, the communication interface 404, and the memory 406 constitute the controller shown in fig. 2, and the camera 409, the display 410, and the communication module 411 are the same as those shown in fig. 2.
The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically execute the relevant steps of the vision testing method embodiments described above.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The vision tester includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 406 for storing a program 410. Memory 406 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations:
acquiring an environment image in front of the vision detector;
judging whether the environment image contains a user or not;
if yes, determining the distance between the user and the vision detector;
and controlling the vision detector to display the sighting marks by combining the distance, and determining the vision grade of the user based on the judgment result of the user.
In an alternative manner, the program 410 causes the processor to perform the following further operations prior to the step of determining the distance between the user and the vision tester:
extracting facial features of the user from the environment image;
according to the facial features, performing identity verification on the user;
upon passing the authentication, performing the step of determining the distance between the user and the vision tester;
and outputting a cheating alarm prompt when the identity authentication is not passed.
In an alternative manner, program 410 causes the processor to perform operations for authenticating the user based on the facial features, including:
judging whether the facial features are matched with the facial features of the currently specified person to be tested, if so, determining that the user passes the identity authentication, if not, determining that the user does not pass the identity authentication,
or,
and judging whether the facial features are matched with the facial features of the user in a preset testee library, if so, determining that the user passes the identity authentication, and if not, determining that the user does not pass the identity authentication.
In an alternative manner, program 410 causes the processor to perform operations of extracting facial features of the user from the environmental image, including:
identifying whether the face of the user in the environment image is a front face;
if the face is not the front face, correcting the face of the user into the front face according to a preset front face correction algorithm;
extracting facial features of the user from the corrected environment image;
and if the face is the front face, extracting the facial features of the user directly from the environment image.
In an alternative manner, program 410 causes the processor to perform the operation of controlling the vision tester to display the optotype in conjunction with the distance, including:
acquiring a vision examination mode currently selected by the vision detector and a standard distance corresponding to the vision examination mode;
judging whether the distance is the same as the standard distance;
if the distance is the same as the standard distance, extracting visual targets from a preset standard visual acuity chart, and controlling the vision detector to display the extracted visual targets;
if not, outputting prompt information for prompting the user to adjust the position until the distance between the user and the vision detector is equal to a standard distance, or calculating the proportion of the distance to the standard distance, zooming the visual target extracted from the preset standard visual acuity chart according to the proportion, and controlling the vision detector to display the zoomed visual target.
In an alternative manner, the program 410 causes the processor to execute the following operations before the obtaining of the judgment result of the user for the optotype:
identifying whether the user has a cheating action;
if yes, outputting a cheating alarm prompt, re-extracting the sighting target, and returning to the step of controlling the vision detector to display the sighting target;
and if not, executing the step of obtaining the judgment result of the user aiming at the sighting mark.
In an alternative approach, the cheating action comprises: the upper body of the user inclines forwards, the user wears glasses when the current vision inspection mode is the naked eye vision inspection mode, the user does not shield eyes, and/or the eyes shielded by the user are the same as the current eyes to be detected.
In an alternative approach, the program 410 causes the processor to perform the following operations when controlling the vision detector to display an optotype, obtaining the user's judgment result for the optotype, and determining the vision grade of the user based on that judgment result:
controlling the vision detector to display an optotype;
acquiring the user's judgment result for the current optotype;
judging whether the user's judgment result for the current optotype is consistent with the user's judgment results for all previous optotypes, and if not, adjusting the preset step length;
if the adjusted preset step length is greater than or equal to a preset step length threshold, updating the optotype according to the user's judgment result for the current optotype, the vision grade of the current optotype and the adjusted preset step length;
if consistent, keeping the preset step length unchanged, and executing the step of updating the optotype according to the user's judgment result for the current optotype, the vision grade of the current optotype and the preset step length;
judging whether the vision grade of the updated optotype exceeds a preset vision grade range;
if it exceeds the range, taking the vision grade of the optotype that the user last judged correctly as the estimated grade of the user;
when the adjusted preset step length is smaller than the preset step length threshold, taking the vision grade of the optotype that the user last judged correctly as the estimated grade of the user;
recording the user's judgment results for at least one optotype in the optotype list corresponding to the estimated grade;
when the judgment results meet a vision grade test condition, acquiring the user's judgment results for at least one optotype in the optotype list corresponding to the next higher vision grade, until the judgment results do not meet the vision grade test condition;
taking the vision grade corresponding to the last judgment results that meet the vision grade test condition as the vision grade of the user;
when the judgment results do not meet the vision grade test condition, acquiring the user's judgment results for at least one optotype in the optotype list corresponding to the next lower vision grade, until the judgment results meet the vision grade test condition;
and taking the vision grade corresponding to the last judgment results that meet the vision grade test condition as the vision grade of the user.
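One plausible reading of the two-phase procedure above: a coarse staircase that moves by a preset step, shrinks the step on each reversal of the user's results, and stops when the step falls below the threshold or the grade leaves the preset range; then a fine pass over the optotype lists around the estimated grade. The `judge` and `passes` callbacks are hypothetical stand-ins for the user's responses, and halving as the specific step adjustment is an assumption (the patent says only that the step is adjusted).

```python
def coarse_estimate(judge, n_grades, start, step=4, step_threshold=1):
    """Coarse phase: judge(i) is True when the user reads the optotype at
    grade index i correctly.  Returns the index of the last correctly
    judged grade (the estimated grade), or None if nothing was correct."""
    idx, last_correct, prev = start, None, None
    while 0 <= idx < n_grades:
        correct = judge(idx)
        if correct:
            last_correct = idx
        if prev is not None and correct != prev:
            step //= 2                      # reversal: shrink the step
            if step < step_threshold:       # finest step reached: stop
                return last_correct
        prev = correct
        idx = idx + step if correct else idx - step
    return last_correct                     # left the preset grade range

def fine_grade(passes, grades, est):
    """Fine phase: passes(i) is True when the user's judgments on the
    optotype list for grades[i] meet the vision grade test condition."""
    idx = est
    if passes(idx):
        while idx + 1 < len(grades) and passes(idx + 1):
            idx += 1                        # climb while the condition holds
    else:
        while idx > 0 and not passes(idx):
            idx -= 1                        # descend until it holds
    return grades[idx]
```

The coarse phase narrows in logarithmically on the user's level; the fine phase then confirms it grade by grade against the full optotype lists.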
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera, does not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (13)

1. A vision testing method, applied to a vision detector, characterized by comprising the following steps:
acquiring an environment image in front of the vision detector;
judging whether the environment image contains a user or not;
if yes, determining the distance between the user and the vision detector;
and controlling, in combination with the distance, the vision detector to display visual targets, and determining the vision grade of the user based on the user's judgment results.
2. The method of claim 1, wherein prior to the step of determining the distance between the user and the vision tester, the method further comprises:
extracting facial features of the user from the environment image;
according to the facial features, performing identity verification on the user;
upon passing the authentication, performing the step of determining the distance between the user and the vision tester;
and outputting a cheating alarm prompt when the identity authentication is not passed.
3. The method of claim 2, wherein the authenticating the user according to the facial features comprises:
judging whether the facial features match the facial features of the currently specified person to be tested; if so, determining that the user passes the identity authentication; if not, determining that the user does not pass the identity authentication,
or,
and judging whether the facial features are matched with the facial features of the user in a preset testee library, if so, determining that the user passes the identity authentication, and if not, determining that the user does not pass the identity authentication.
4. The method of claim 2 or 3, wherein extracting facial features of the user from the environmental image further comprises:
identifying whether the face of the user in the environment image is a frontal face;
if it is not a frontal face, correcting the face of the user to a frontal view according to a preset frontal-face correction algorithm;
extracting the facial features of the user from the corrected environment image;
and if it is a frontal face, extracting the facial features of the user directly from the environment image.
5. The method of any of claims 1-3, wherein said controlling the vision tester to display a visual target in conjunction with the distance comprises:
acquiring a vision examination mode currently selected by the vision detector and a standard distance corresponding to the vision examination mode;
judging whether the distance is the same as the standard distance;
if they are the same, extracting a visual target from a preset standard visual acuity chart, and controlling the vision detector to display the extracted visual target;
if not, either outputting prompt information prompting the user to adjust position until the distance between the user and the vision detector equals the standard distance, or calculating the ratio of the distance to the standard distance, scaling the visual target extracted from the preset standard visual acuity chart according to the ratio, and controlling the vision detector to display the scaled visual target.
6. The method according to claim 1, 2 or 3, wherein before the obtaining of the judgment result of the user for the optotype, the method further comprises:
identifying whether the user has a cheating action;
if so, outputting a cheating alarm prompt, re-extracting a visual target, and returning to the step of controlling the vision detector to display the visual target;
and if not, executing the step of obtaining the user's judgment result for the visual target.
7. The method of claim 6, wherein the cheating action comprises: the user leaning the upper body forward; the user wearing glasses while the current vision examination mode is the naked-eye examination mode; the user failing to cover one eye; and/or the eye covered by the user being the same as the eye currently under examination.
8. The method of claim 1, 2 or 3, wherein controlling the vision detector to display a visual target, obtaining the user's judgment result for the visual target, and determining the vision grade of the user based on the user's judgment result for the visual target comprises:
controlling the vision detector to display a visual target;
acquiring a judgment result of the user on the current sighting target;
judging whether the user's judgment result for the current visual target is consistent with the user's judgment results for all previous visual targets, and if not, adjusting the preset step length;
if the adjusted preset step length is greater than or equal to a preset step length threshold, updating the visual target according to the user's judgment result for the current visual target, the vision grade of the current visual target and the adjusted preset step length;
if consistent, keeping the preset step length unchanged, and executing the step of updating the visual target according to the user's judgment result for the current visual target, the vision grade of the current visual target and the preset step length;
judging whether the vision grade of the updated visual target exceeds a preset vision grade range;
and if it exceeds the range, taking the vision grade of the visual target that the user last judged correctly as the estimated grade of the user.
9. The method of claim 8,
and when the adjusted preset step length is smaller than the preset step length threshold, taking the vision grade of the visual target that the user last judged correctly as the estimated grade of the user.
10. The method of claim 8, further comprising:
recording the user's judgment results for at least one visual target in the visual target list corresponding to the estimated grade;
when the judgment results meet a vision grade test condition, acquiring the user's judgment results for at least one visual target in the visual target list corresponding to the next higher vision grade, until the judgment results do not meet the vision grade test condition;
taking the vision grade corresponding to the last judgment results that meet the vision grade test condition as the vision grade of the user;
when the judgment results do not meet the vision grade test condition, acquiring the user's judgment results for at least one visual target in the visual target list corresponding to the next lower vision grade, until the judgment results meet the vision grade test condition;
and taking the vision grade corresponding to the last judgment results that meet the vision grade test condition as the vision grade of the user.
11. A vision testing device, comprising:
the acquisition module is used for acquiring an environment image in front of the vision detector;
the judging module is used for judging whether the environment image contains the user or not;
the first determining module is used for determining the distance between the user and the vision detector if the user is included;
the control module is used for controlling, in combination with the distance, the vision detector to display visual targets and acquiring the user's judgment results;
and the second determining module is used for determining the vision grade of the user based on the judgment result of the user.
12. A vision testing apparatus, comprising: a camera, a display, a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the camera is used for acquiring an environment image in front of the vision detector;
the display is used for displaying the sighting target;
the memory is configured to store at least one executable instruction that causes the processor to perform the steps of the vision testing method of any one of claims 1-10.
13. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the steps of the vision testing method of any one of claims 1-10.
CN201910250573.2A 2019-03-29 2019-03-29 A kind of vision testing method, device, sight tester and computer storage medium Pending CN110123257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910250573.2A CN110123257A (en) 2019-03-29 2019-03-29 A kind of vision testing method, device, sight tester and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910250573.2A CN110123257A (en) 2019-03-29 2019-03-29 A kind of vision testing method, device, sight tester and computer storage medium

Publications (1)

Publication Number Publication Date
CN110123257A true CN110123257A (en) 2019-08-16

Family

ID=67568788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910250573.2A Pending CN110123257A (en) 2019-03-29 2019-03-29 A kind of vision testing method, device, sight tester and computer storage medium

Country Status (1)

Country Link
CN (1) CN110123257A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090656A (en) * 2014-06-30 2014-10-08 潘晓丰 Eyesight protecting method and system for smart device
CN105955477A (en) * 2016-04-29 2016-09-21 乐视控股(北京)有限公司 Method and apparatus for adjusting display image of VR device and corresponding VR device
CN105996975A (en) * 2016-05-31 2016-10-12 乐视控股(北京)有限公司 Method, device and terminal for testing vision
CN106060142A (en) * 2016-06-17 2016-10-26 杨斌 Mobile phone capable of checking eyesight, and method for checking eyesight by using mobile phone
CN106941562A (en) * 2017-02-24 2017-07-11 上海与德信息技术有限公司 The method and device given a test of one's eyesight
CN109363620A (en) * 2018-10-22 2019-02-22 深圳和而泰数据资源与云技术有限公司 A kind of vision testing method, device, electronic equipment and computer storage media

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021068486A1 (en) * 2019-10-12 2021-04-15 深圳壹账通智能科技有限公司 Image recognition-based vision detection method and apparatus, and computer device
CN110852185A (en) * 2019-10-21 2020-02-28 西南民族大学 Vision detection equipment and method based on human skeleton key point identification
CN113807375B (en) * 2020-06-12 2024-05-31 海信视像科技股份有限公司 Display equipment
CN113807375A (en) * 2020-06-12 2021-12-17 海信视像科技股份有限公司 Display device
WO2021248671A1 (en) * 2020-06-12 2021-12-16 海信视像科技股份有限公司 Display device
CN111803022A (en) * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision detection method, detection device, terminal equipment and readable storage medium
CN111803023A (en) * 2020-06-24 2020-10-23 深圳数联天下智能科技有限公司 Vision value correction method, correction device, terminal equipment and storage medium
CN111700584A (en) * 2020-06-24 2020-09-25 张家鼎 Intelligent eyesight detection system
CN112842249A (en) * 2021-03-09 2021-05-28 京东方科技集团股份有限公司 Vision detection method, device, equipment and storage medium
CN112842249B (en) * 2021-03-09 2024-04-19 京东方科技集团股份有限公司 Vision detection method, device, equipment and storage medium
CN112998642A (en) * 2021-03-12 2021-06-22 湖南亮视嘉生物科技有限公司 Visual chart instrument for intelligently detecting eyesight and detection method thereof
CN113197542A (en) * 2021-04-30 2021-08-03 武汉特斯雷信息技术有限公司 Online self-service vision detection system, mobile terminal and storage medium
CN113197542B (en) * 2021-04-30 2024-01-30 武汉特斯雷信息技术有限公司 Online self-service vision detection system, mobile terminal and storage medium
CN113456016A (en) * 2021-06-09 2021-10-01 武汉艾格眼科医院有限公司 Isolated automatic achromatopsia detecting system based on VR glasses
CN114305317B (en) * 2021-12-23 2023-05-12 广州视域光学科技股份有限公司 Method and system for intelligently distinguishing user feedback optotype
CN114305317A (en) * 2021-12-23 2022-04-12 广州视域光学科技股份有限公司 Method and system for intelligently distinguishing user feedback optotypes
CN114468973B (en) * 2022-01-21 2023-08-11 广州视域光学科技股份有限公司 Intelligent vision detection system
CN114468973A (en) * 2022-01-21 2022-05-13 广州视域光学科技股份有限公司 Intelligent vision detection system
CN115553709A (en) * 2022-10-28 2023-01-03 南京尚哲智能科技有限公司 Automatic intelligent vision detection system and method

Similar Documents

Publication Publication Date Title
CN110123257A (en) A kind of vision testing method, device, sight tester and computer storage medium
CN107609383B (en) 3D face identity authentication method and device
CN107748869B (en) 3D face identity authentication method and device
CN107633165B (en) 3D face identity authentication method and device
JP6598617B2 (en) Information processing apparatus, information processing method, and program
JP2020194608A (en) Living body detection device, living body detection method, and living body detection program
US9154739B1 (en) Physical training assistant system
JP6052399B2 (en) Image processing program, image processing method, and information terminal
US10254831B2 (en) System and method for detecting a gaze of a viewer
CN111488775B (en) Device and method for judging degree of visibility
WO2020042542A1 (en) Method and apparatus for acquiring eye movement control calibration data
CN108478184A (en) Eyesight measurement method and device, VR equipment based on VR
CN108875468B (en) Living body detection method, living body detection system, and storage medium
TWI557601B (en) A puppil positioning system, method, computer program product and computer readable recording medium
CN110751728B (en) Virtual reality equipment with BIM building model mixed reality function and method
EP4095744A1 (en) Automatic iris capturing method and apparatus, computer-readable storage medium, and computer device
CN104850842A (en) Mobile terminal iris identification man-machine interaction method
CN108090463A (en) Object control method, apparatus, storage medium and computer equipment
CN112597785A (en) Method and system for guiding image acquisition of target object
KR101700120B1 (en) Apparatus and method for object recognition, and system inculding the same
WO2010058927A3 (en) Device for photographing face
JP2011150497A (en) Person identification device, person identification method, and software program thereof
WO2017000217A1 (en) Living-body detection method and device and computer program product
CN114092985A (en) Terminal control method, device, terminal and storage medium
JP2019046239A (en) Image processing apparatus, image processing method, program, and image data for synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200407

Address after: 1706, Fangda building, No. 011, Keji South 12th Road, high tech Zone, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen shuliantianxia Intelligent Technology Co.,Ltd.

Address before: 518000, building 10, building ten, building D, Shenzhen Institute of Aerospace Science and technology, 6 hi tech Southern District, Nanshan District, Shenzhen, Guangdong 1003, China

Applicant before: SHENZHEN H & T HOME ONLINE NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190816
