Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Referring to fig. 1, which is a schematic diagram of the operating environment of the vision testing method of the present invention, the operating environment 10 includes a vision tester 11, a cloud server 12 and an intelligent terminal 13;
the vision tester 11 includes a communication module 111, a camera 112, a display 113 and a controller 114, and the communication module 111, the camera 112 and the display 113 are all connected to the controller 114.
The display 113 provides a user interactive software interface to display optotypes, detection results, and the like during the vision examination. Optionally, the display 113 is a touch display device.
The camera 112 is used for capturing an image of the environment in front of the display 113. When the user needs to perform a vision test, the user stands in front of the display 113 and faces the display 113, so that the environment image captured by the camera 112 contains the user. Optionally, the camera 112 is a USB camera, a network camera, a depth camera, or the like.
The controller 114 serves as the control center. When the controller 114 determines, from the environment image acquired by the camera 112, that a user is present, it starts the vision test, acquires the judgment results of the user, and then obtains the vision grade of the user based on those judgment results. After obtaining the vision grade of the user, the controller 114 may control the display to show the vision grade so as to inform the user of the test result. Of course, in other embodiments, the vision tester 11 may further include a speaker connected to the communication bus, and the controller 114 controls the speaker to announce the user's vision grade. After obtaining the vision grade of the user, the controller 114 may also bind the vision grade with the identity of the user and upload it to the cloud server 12 through the communication module 111 for storage. Of course, a process video of the user's vision test may also be collected and uploaded to the cloud server 12 together with the test result, and stored by the cloud server 12, so that the user's vision test can be traced back later. When a user needs to query his or her vision grade, the user can access the cloud server 12 through the intelligent terminal 13 and retrieve the vision grade from the cloud server 12. The communication module 111 may be a wireless communicator, for example a 5G wireless communicator, a WIFI wireless communicator, a Zigbee wireless communicator, or the like; a wireless communicator avoids the trouble of running cables.
Further, a data analysis program may be deployed in the cloud server 12 to analyze the vision data of a certain user or group. For example, the data analysis program, combined with historical data, may find that the average vision grade of a certain class has declined and then notify the person responsible for that class; or, when the data analysis program finds that a particular user's vision grade has declined, it may send a prompt to the intelligent terminal 13 corresponding to that user to remind the user to pay attention to his or her vision.
It can be understood that: in other embodiments, the controller 114 may also store the user's vision level and/or the user's progress video during the vision test in a local memory, and the user may read the local memory to obtain his or her own vision level and progress video.
Referring to fig. 3, fig. 3 is a flowchart of a vision testing method according to an embodiment of the present invention. The method is applied to the vision tester 11 in the operating environment 10, and specifically includes:
step 201: acquiring an environment image in front of the vision detector;
The environment image in front of the vision tester is an image of the area in front of the position where the optotype is displayed on the display of the vision tester. In general, the camera of the vision tester is fixed in advance, with its lens facing the area in front of the displayed optotype, so that when a user stands in front of the optotype, the user falls within the field of view of the camera of the vision tester.
Step 202: judging whether the environment image contains a user, if so, executing step 203, otherwise, returning to execute step 201;
the recognition algorithm for recognizing the user from the environment image can be implemented by the prior art, and is not limited herein.
By judging whether the environment image contains a user, the vision tester decides whether to start the vision test. This allows the vision test to start automatically, without assisting personnel having to participate in the operation, which saves labor cost.
Of course, in some embodiments, when it is determined that the environment image has not contained a user for longer than a preset time, the components of the vision tester other than the controller and the camera may be switched to a standby or off state to save power; once it is determined that the environment image contains a user again, these components are activated to perform the subsequent vision test.
Step 203: determining a distance between the user and the vision tester;
The distance between the user and the vision tester is the distance between the user and the position of the optotype displayed by the vision tester. This distance may be measured by a distance sensor arranged on the vision tester. Alternatively, the coordinates of the camera and of the optotype position are obtained in advance; when the camera captures the environment image, the coordinates of the user are calculated from the user's position in the environment image, the focal length of the camera and the coordinates of the camera, and the distance between the user and the vision tester is then calculated from the coordinates of the user and the coordinates of the optotype position.
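As a rough illustration of the image-based alternative, the sketch below estimates the user's depth with a simple pinhole-camera relation and then takes the Euclidean distance to the optotype position. It is only an illustrative sketch, not the claimed method: the assumed average face height, the pre-calibrated focal length in pixels, and the assumption that the optical axis points along the world z-axis are all hypothetical choices.

```python
import math

def estimate_user_distance(bbox_height_px, focal_length_px,
                           camera_pos, optotype_pos,
                           real_face_height_m=0.24):
    """Pinhole sketch: depth = focal_length * real_size / pixel_size.

    bbox_height_px : height of the detected face in the image (pixels)
    focal_length_px: camera focal length expressed in pixels
    camera_pos, optotype_pos : (x, y, z) coordinates in the same world frame
    real_face_height_m : assumed average face height (illustrative constant)
    """
    # Depth of the user along the camera's optical axis.
    depth = focal_length_px * real_face_height_m / bbox_height_px

    # Assume the optical axis points along +z of the world frame,
    # so the user sits "depth" metres in front of the camera.
    user_pos = (camera_pos[0], camera_pos[1], camera_pos[2] + depth)

    # Euclidean distance between the user and the displayed optotype.
    return math.dist(user_pos, optotype_pos)

# Example: camera at the origin, optotype 0.1 m above the camera.
print(estimate_user_distance(bbox_height_px=120, focal_length_px=1500,
                             camera_pos=(0.0, 0.0, 0.0),
                             optotype_pos=(0.0, 0.1, 0.0)))
```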
Step 204: controlling the vision detector to display the sighting marks by combining the distance;
Because different vision examination modes require different distances between the user and the vision tester, in order to ensure the accuracy of the vision test, whether the distance between the user and the vision tester is correct may be verified before the optotype is formally displayed. Specifically, as shown in fig. 4, step 204 may further include:
step 2041: acquiring a vision examination mode currently selected by the vision detector;
the vision examination mode comprises a naked eye vision examination mode and a correction vision examination mode, wherein the naked eye vision examination mode refers to vision examination performed by a user on the premise that the user does not wear any vision correction tool, and the correction vision examination mode refers to vision examination performed by the user on the premise that the user wears the vision correction tool.
Step 2042: acquiring a standard distance corresponding to the vision examination mode;
step 2043: judging whether the distance is the same as the standard distance, if so, executing a step 2044, otherwise, executing a step 2045;
step 2044: extracting optotypes from a preset standard visual chart, and controlling the vision detector to display the extracted optotypes;
Step 2045: outputting prompt information for prompting the user to adjust the position until the distance between the user and the vision detector is equal to the standard distance;
Of course, in order to indicate the moving direction more clearly, the relationship between the distance between the user and the vision detector and the standard distance may also be determined, and the moving direction included in the output prompt information according to that relationship. For example, when the distance between the user and the vision detector is 3 meters greater than the standard distance, the prompt message "move forward 3 meters" is output; when the distance is 1 meter less than the standard distance, the prompt message "move backward 1 meter" is output.
In some embodiments, detection areas may be marked in advance in front of the vision tester according to the standard distance corresponding to each vision examination mode. After the vision examination mode of the vision tester is determined, it is identified whether the user is located in the corresponding detection area; if not, the user is prompted to enter it. For example, a circular detection area 1 and a circular detection area 2 are arranged in front of the vision tester, detection area 1 corresponding to the naked-eye vision examination mode and detection area 2 to the corrected vision examination mode; if the selected mode is the naked-eye vision examination mode but the user is not in detection area 1, the user is reminded to enter detection area 1.
It can be understood that: in other embodiments, when the distance between the user and the vision tester is not equal to the corresponding standard distance, the user may not be prompted to move, but a ratio of the distance to the standard distance is calculated, the optotypes extracted from the preset standard visual acuity chart are scaled according to the ratio, and the vision tester is controlled to display the scaled optotypes, so as to ensure the accuracy of vision testing.
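The scaling alternative described above amounts to computing a scale factor from the measured distance and the standard distance. The sketch below, assuming a simple linear scaling rule (an illustrative assumption), shows the arithmetic:

```python
def scaled_optotype_size(base_size_mm, measured_distance_m, standard_distance_m):
    """Scale the optotype proportionally to the user's actual distance.

    If the user stands closer than the standard distance the optotype is
    shrunk; if farther, it is enlarged, so the visual angle it subtends
    stays roughly the same (simple linear approximation).
    """
    ratio = measured_distance_m / standard_distance_m
    return base_size_mm * ratio

# A 7.27 mm optotype defined for 5 m, shown to a user standing at 3 m.
print(scaled_optotype_size(7.27, measured_distance_m=3.0, standard_distance_m=5.0))
```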
Step 205: acquiring a judgment result of the user;
The judgment result of the user is the indication given by the user for the displayed optotype. In some embodiments, the user can give the judgment result through a body movement; the judgment result can then be recognized by capturing a user image and determining the user's body movement from it. Several ways of determining the judgment result from the user's body movement are given below:
(1) The motion trajectory of the user's hand is recognized from the user image, the trajectory is fitted to a motion straight line, and the included angles between this line and the preset leftward, rightward, upward and downward axes are determined. The axis whose included angle is smaller than a first preset value is selected, and the direction corresponding to that axis is taken as the pointing direction of the hand. As shown in fig. 5, the leftward, rightward, upward and downward axes indicate left, right, up and down respectively; the included angles between the motion straight line and the rightward, upward, leftward and downward axes are a1, a2, a3 and a4 respectively, and since a1 is less than 45 degrees while a2, a3 and a4 are all greater than 45 degrees, the rightward axis is the selected axis (see the sketch after this list).
(2) A gesture shape of the user is recognized from the user image, and the direction indicated by the gesture shape is taken as the pointing direction of the user's indication action. As shown in fig. 6, when the thumb points to the left, the pointing direction of the user is left; when the thumb points to the right, the pointing direction is right; when the thumb points up, the pointing direction is up; and when the thumb points down, the pointing direction is down.
It can be understood that: in other embodiments, the pointing direction of the user may also be indicated by the pointing direction of another finger, for example the index finger or the middle finger; or four preset gesture shapes representing up, down, left and right are predefined, and when the user's gesture shape is recognized, the preset gesture shape that matches it is found and the direction represented by that matched shape is taken as the user's pointing direction.
(3) Four virtual frames are constructed in the user image, one for each of the four directions, arranged symmetrically up-down and left-right, each virtual frame corresponding to one direction. The virtual frame in which the user's hand is located is identified, and the direction corresponding to that virtual frame is taken as the pointing direction of the user's hand.
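The following is a minimal sketch of approach (1): the hand trajectory is reduced to a direction vector and the axis whose included angle is below 45 degrees is selected. The trajectory points are assumed to be pixel coordinates already produced by an upstream hand tracker (a hypothetical input), and the simple first-to-last-point fit stands in for a least-squares line fit.

```python
import numpy as np

# Unit vectors for the four preset axes in image coordinates
# (x grows to the right, y grows downward).
AXES = {
    "right": np.array([1.0, 0.0]),
    "left":  np.array([-1.0, 0.0]),
    "down":  np.array([0.0, 1.0]),
    "up":    np.array([0.0, -1.0]),
}

def hand_direction(trajectory, angle_threshold_deg=45.0):
    """trajectory: list of (x, y) hand positions over consecutive frames."""
    pts = np.asarray(trajectory, dtype=float)
    # Overall motion vector from the first to the last tracked position;
    # a least-squares line fit could be used instead for noisy tracks.
    motion = pts[-1] - pts[0]
    norm = np.linalg.norm(motion)
    if norm < 1e-6:
        return None  # the hand barely moved, no decision
    motion /= norm

    for name, axis in AXES.items():
        cos_angle = float(np.clip(np.dot(motion, axis), -1.0, 1.0))
        angle = np.degrees(np.arccos(cos_angle))
        if angle < angle_threshold_deg:
            return name
    return None

print(hand_direction([(100, 200), (130, 205), (180, 198)]))  # -> "right"
```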
Step 206: and determining the vision grade of the user based on the judgment result of the user.
The vision grade represents the vision value. For example, when the vision grade is 1.0 the user's vision value is 1.0, and when the vision grade is 5.0 the user's vision value is 5.0.
Further, after the vision grade of the user is determined, the vision grade may be announced to the user by sound or image so that the user learns the test result. Of course, the vision grade may also be directly associated with the user; for example, if the user is undergoing a physical examination, the vision grade may be filled in directly on the user's electronic examination form. The associated data can be uploaded to the cloud server for storage and backup, or stored in the local memory of the vision tester.
To improve the efficiency of vision testing, a coarse vision measurement may first be performed on the user to obtain an estimated grade, and a fine measurement is then performed starting from that estimated grade to obtain the final vision grade. Specifically, as shown in fig. 7, controlling the vision tester to display an optotype, obtaining the user's judgment result for the optotype, and determining the user's vision grade based on that judgment result includes:
step 2061: controlling the vision detector to display a visual target;
If historical vision data of the user is stored, the first optotype displayed by the vision tester can be selected according to that historical data, so that its grade is closer to the user's current vision grade and the test is more efficient. If no historical vision data is stored, the first optotype can be selected at random, or the user's vision grade can be estimated with the help of big-data analysis and an optotype then selected at random from the estimated grade.
Step 2062: acquiring a judgment result of the user on the current sighting target;
the current visual target is the visual target currently displayed by the vision detector.
Step 2063: judging whether the judgment result of the user on the current sighting target is consistent with the judgment results of the user on all previous sighting targets, if so, executing a step 2064, otherwise, executing a step 2067;
All previous optotypes are all the optotypes previously displayed by the vision tester.
Step 2064: adjusting a preset step length;
step 2065: judging whether the adjusted preset step length is greater than or equal to the preset step length threshold value, if so, executing a step 2066, and if not, executing a step 2069;
It should be noted that before each vision test, the preset step length needs to be given a fixed value, so that its starting value is the same in every vision test. Of course, in other embodiments, the fixed value may instead be assigned to the preset step length when each vision test ends, so that its starting value is this fixed value when the next test is performed, which likewise ensures that the starting value of the preset step length is the same in every test.
Step 2066: updating the sighting target according to the judgment result of the user on the current sighting target, the vision grade of the current sighting target and the preset step length;
step 2067: keeping the preset step size unchanged, and performing step 2066;
step 2068: judging whether the vision grade of the updated sighting target exceeds a preset vision grade range, if so, executing a step 2069, and if not, executing a step 2061;
Step 2069: taking the vision grade of the last optotype that the user judged correctly as the estimated grade of the user.
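The coarse search of steps 2061 to 2069 can be read as the loop sketched below. This is only a literal transcription of the flowchart under stated assumptions: the halving rule used for "adjusting" the step, the up/down update rule, the 4.0-5.3 grade range and the helper show_optotype_and_get_result (which stands for displaying the optotype and recognizing the user's gesture) are all hypothetical choices, not the disclosed implementation.

```python
def coarse_vision_estimate(show_optotype_and_get_result,
                           start_level=4.5, start_step=0.4,
                           step_threshold=0.1,
                           level_range=(4.0, 5.3)):
    """Sketch of steps 2061-2069.

    show_optotype_and_get_result(level) -> True if the user judged the
    optotype of that grade correctly, False otherwise (hypothetical helper).
    """
    step = start_step
    level = start_level
    history = []            # judgment results for all previous optotypes
    last_correct = None     # grade of the last correctly judged optotype

    while True:
        result = show_optotype_and_get_result(level)    # steps 2061/2062
        if result:
            last_correct = level

        # Step 2063: is the current result consistent with all previous ones?
        if history and all(r == result for r in history):
            step = step / 2.0                           # step 2064 (assumed rule)
            if step < step_threshold:                   # step 2065
                return last_correct                     # step 2069
        # otherwise step 2067: keep the step length unchanged

        history.append(result)

        # Step 2066: move up a grade when the user was right, down when wrong.
        level = level + step if result else level - step

        # Step 2068: stop when the grade leaves the preset range.
        if not (level_range[0] <= level <= level_range[1]):
            return last_correct                         # step 2069
```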
Step 2160: recording a judgment result of the user on at least one visual target in the visual target list corresponding to the estimated grade;
step 2161: judging whether the judgment result meets the vision grade test condition;
step 2162: when the judgment result meets the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the higher vision grade of the user until the judgment result does not meet the vision grade test condition;
step 2163: when the judgment result does not meet the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the lower vision grade of the user until the judgment result meets the vision grade test condition;
step 2164: taking the vision grade corresponding to the judgment result which finally meets the vision grade test condition as the vision grade of the user;
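The refinement of steps 2160 to 2164 can be sketched in the same spirit. The grade spacing of 0.1 and the example test condition ("at least 3 of 5 optotypes judged correctly") are assumptions for illustration; test_one_grade is a hypothetical helper that presents the optotype list of one grade and checks the condition.

```python
def refine_vision_level(test_one_grade, estimated_level,
                        grade_step=0.1, level_range=(4.0, 5.3)):
    """Sketch of steps 2160-2164.

    test_one_grade(level) -> True when the user's judgments on the optotype
    list of that grade meet the vision grade test condition, e.g. at least
    3 out of 5 optotypes judged correctly (hypothetical helper).
    """
    level = estimated_level
    if test_one_grade(level):
        # Condition met: probe higher grades until one fails; the last
        # grade that still met the condition is the user's vision grade.
        nxt = round(level + grade_step, 1)
        while nxt <= level_range[1] and test_one_grade(nxt):
            level, nxt = nxt, round(nxt + grade_step, 1)
        return level
    # Condition not met: probe lower grades until one is met.
    while level > level_range[0]:
        level = round(level - grade_step, 1)
        if test_one_grade(level):
            return level
    return level  # reached the bottom of the preset range
```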
It can be understood that: in other embodiments, when the user's vision does not need to be measured precisely, steps 2160 to 2164 may be skipped and the estimated grade used directly as the user's vision grade.
In some embodiments, in order to improve the accuracy of the vision test, a time limit may also be added when determining whether the user's judgment result matches the optotype: the user must give the judgment result within a preset time period, and if the user fails to do so, the user's recognition result is deemed incorrect.
In the embodiment of the present invention, when it is determined that the environment image in front of the vision tester contains a user, the distance between the user and the vision tester is determined and the vision tester is controlled to display the optotype in combination with that distance, so that the vision test is performed on the user. The vision test thus starts automatically and the whole test is carried out by the user alone, without assisting personnel, which saves labor cost.
Referring to fig. 8, fig. 8 is a flowchart illustrating a method for vision testing according to another embodiment of the present invention, the method includes:
step 201: acquiring an environment image in front of the vision detector;
step 202: judging whether the environment image contains a user, if so, executing step 207, otherwise, returning to execute step 201;
step 207: extracting facial features of the user from the environment image;
the facial features include features such as the shape, position, etc. of the user's facial organs.
In order to improve the accuracy of extracting the user's facial features from the environment image, the user's face in the environment image may first be corrected to a front face, and the facial features then extracted from the corrected environment image. Specifically, as shown in fig. 9, step 207 includes:
Step 2071: identifying whether the face of the user in the environment image is a front face; if not, executing step 2072, and if so, executing step 2074;
Specifically, recognizing whether the face of the user in the environment image is a front face includes: first recognizing feature information of the face parts of the user in the environment image; then calculating a face deflection angle, a face turning coefficient and a face lifting coefficient of the face from that feature information; and then judging whether the face deflection angle lies within a preset front-face deflection angle range, whether the face turning coefficient lies within a preset front-face turning coefficient range, and whether the face lifting coefficient lies within a preset front-face lifting coefficient range. If all three conditions hold, the face in the environment image is determined to be a front face; otherwise it is not.
The face deflection angle represents the angle by which the user's face is tilted. Calculating the face deflection angle from the feature information of the face parts specifically includes: constructing the central axis of the environment image, constructing the central axis of the face from the feature information of the face parts, calculating the included angle between the face central axis and the image central axis, and taking that included angle as the face deflection angle.
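A small sketch of that computation is given below, assuming the face central axis is approximated by the line from the midpoint between the eyes to the chin point; the landmark choice is an illustrative assumption, not mandated by the disclosure.

```python
import math

def face_deflection_angle(left_eye, right_eye, chin):
    """Angle (degrees) between the face central axis and the vertical
    image central axis; 0 means the head is not tilted sideways.

    left_eye, right_eye, chin: (x, y) landmark coordinates in the image,
    with y growing downward.
    """
    eye_mid = ((left_eye[0] + right_eye[0]) / 2.0,
               (left_eye[1] + right_eye[1]) / 2.0)
    dx = chin[0] - eye_mid[0]
    dy = chin[1] - eye_mid[1]
    # The image central axis is vertical, so the deflection is the angle
    # of the (dx, dy) vector measured from the downward vertical direction.
    return abs(math.degrees(math.atan2(dx, dy)))

print(face_deflection_angle(left_eye=(110, 120), right_eye=(150, 122), chin=(131, 200)))
```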
The face turning coefficient represents the angle by which the user's face is rotated to the side. Calculating the face turning coefficient from the feature information of the face parts specifically includes: constructing the central axis of the face from the feature information of the face parts; dividing the face in the environment image into a left face region and a right face region based on that central axis; and calculating the turning coefficient from the left width of the left face region and the right width of the right face region, or from the left area of the left face region and the right area of the right face region, or from the left width and right width of the same face part in the left and right face regions, the calculation formula being as follows:
cp is the face turning coefficient, and El and Er may be the left width of the left face region and the right width of the right face region respectively, or the left area of the left face region and the right area of the right face region, or the left width and right width of the same face part in the left and right face regions.
The face lifting coefficient represents the angle by which the user's face is raised or lowered. Calculating the face lifting coefficient from the feature information of the face parts specifically includes: determining a first distance between the first part and the second part, determining a second distance between the second part and the third part, and calculating the face lifting coefficient from the first distance and the second distance, the calculation formula being as follows:
cr is the face lifting coefficient, H1 is the first distance, and H2 is the second distance. The first part, the second part and the third part are all face parts located on the face, the first part lying above the second part and the second part lying above the third part. For example, the first part is the eyes, the second part is the nose and the third part is the mandible; the first distance is then the distance, along the face central axis, from the line connecting the left and right eyes to the tip of the nose, and the second distance is the distance, along the face central axis, from the tip of the nose to the lowest point of the mandible.
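The underlying formulas for cp and cr are not reproduced in this text. Purely for illustration, the sketch below assumes the simplest plausible reading, a ratio of the two measurements in each case; this is an assumption, not the formula of the original disclosure.

```python
def face_turning_coefficient(left_measure, right_measure):
    """cp from El and Er (left/right widths or areas).

    The ratio form below is an assumption; a value near 1 would indicate
    that the two half-faces look symmetric, i.e. the face is not turned.
    """
    return left_measure / right_measure

def face_lifting_coefficient(h1, h2):
    """cr from H1 (eyes-to-nose distance) and H2 (nose-to-chin distance),
    both measured along the face central axis; the ratio form is again an
    assumed, illustrative choice to be checked against a preset range.
    """
    return h1 / h2

print(face_turning_coefficient(82.0, 85.0))   # ~0.96, roughly frontal
print(face_lifting_coefficient(40.0, 55.0))   # compared against a preset range
```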
Step 2072: correcting the face of the user into a front face according to a preset front face correction algorithm;
In some embodiments, correcting the face of the user to a front face according to a preset front-face correction algorithm includes: locating at least three first key points of the face in the environment image; calculating the affine transformation parameters of an affine transformation matrix from the coordinates of the at least three first key points and the coordinates of the corresponding second key points in a preset standard front-face image; and finally performing a coordinate transformation on each pixel of the face image according to the affine transformation parameters and the affine transformation matrix, so that the face in the face image is corrected to a front face.
A first key point is a pixel point of the user's face, and its specific position is not limited. A second key point is a pixel point of the preset standard front-face image. The first key points of the face in the face image correspond one-to-one to the second key points of the face in the standard front-face image. From the coordinates of the at least three first key points and the coordinates of the corresponding second key points in the standard front-face image, the affine transformation parameters can be calculated using the following calculation formula:
wherein (x1, y1), (x2, y2), ..., (xn, yn) are the coordinates of the n second key points of the standard front-face image, (x1', y1'), (x2', y2'), ..., (xn', yn') are the coordinates of the n corresponding first key points of the face in the face image, n is equal to or greater than 3, and a1, b1, a2, b2, c1 and c2 are the affine transformation parameters.
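With at least three correspondences, the six parameters can be estimated by a small least-squares fit. The sketch below assumes the conventional 6-parameter affine form x = a1·x' + b1·y' + c1, y = a2·x' + b2·y' + c2, which is consistent with the parameters named above but is an assumed reading of the omitted formula; OpenCV's cv2.getAffineTransform or cv2.estimateAffine2D could be used for the same purpose.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine fit mapping src_pts (face-image key points)
    onto dst_pts (standard front-face key points).

    src_pts, dst_pts: arrays of shape (n, 2), n >= 3.
    Returns the 2x3 matrix [[a1, b1, c1], [a2, b2, c2]].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    # Design matrix [x', y', 1] so that A @ [a, b, c]^T = target coordinate.
    A = np.hstack([src, np.ones((n, 1))])
    params_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # a1, b1, c1
    params_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # a2, b2, c2
    return np.vstack([params_x, params_y])

# Three corresponding key points (e.g. two eye corners and the nose tip).
src = [(120, 140), (180, 138), (150, 190)]
dst = [(110, 130), (190, 130), (150, 200)]
print(estimate_affine(src, dst))
```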
Step 2073: extracting facial features of the user from the corrected environment image;
step 2074: facial features of the user are extracted directly from the environmental image.
Step 208: according to the facial features, performing identity authentication on the user, and if the user passes the identity authentication, executing step 203, otherwise executing step 209;
By authenticating the user's identity, impersonation can be effectively avoided, the identity of the user can be confirmed, and the situation in which a test result does not correspond to the right user is prevented.
In some embodiments, authenticating the user comprises: judging whether the facial features match the facial features of the currently specified person to be tested; if they match, the user passes the identity authentication, and if not, the user does not pass. The currently specified person to be tested is the user currently designated for the vision tester, for example in a hospital queue-calling scenario: a person to be tested is designated for the vision tester, the designated person walks up to the vision tester and performs the vision test, and the obtained result is recorded under that person's name.
In some embodiments, authenticating the user may further comprise: judging whether the facial features match the facial features of a user in a preset testee library; if a match is found, the user passes the identity authentication, and if not, the user does not pass. In this embodiment the vision tester does not rely on hospital queue-calling but adopts a walk-up testing approach; under walk-up testing, however, only users in the preset testee library are legitimate, and any other user is regarded as an illegal user.
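Both verification modes reduce to comparing the extracted facial feature against one or more stored features. One common way to do that, sketched below with cosine similarity on feature vectors, is an assumption for illustration rather than the specific matching method of this disclosure; the threshold value is likewise hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_against_person(feature, person_feature, threshold=0.6):
    """Mode 1: compare against the currently specified person to be tested."""
    return cosine_similarity(feature, person_feature) >= threshold

def verify_against_library(feature, library, threshold=0.6):
    """Mode 2 (walk-up testing): compare against a preset testee library.

    library: dict mapping user id -> stored facial feature vector.
    Returns the matched user id, or None if no entry passes the threshold.
    """
    best_id, best_score = None, threshold
    for user_id, stored in library.items():
        score = cosine_similarity(feature, stored)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id
```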
Further, in order to prevent someone from cheating with a picture of a real user, it may be verified, before the identity verification, whether the user in front of the vision tester is a living body; the subsequent identity verification is performed only if the user is a living body, and otherwise the user is determined to be cheating. The manner of liveness detection is not limited: a multispectral camera may be configured to capture a multispectral image from which it is determined whether the user is a living body, or the vision tester may output an action instruction and determine that the user is a living body when the user performs the corresponding action. For example, the vision tester outputs the action instruction "raise your right arm"; if it is detected that the user raises the right arm, the user is determined to be a living body, and otherwise not.
Step 209: outputting a cheating alarm prompt;
the cheating alert prompt is used to alert the user and prompt the user to leave the vision tester.
Step 203: determining a distance between the user and the vision tester;
step 204: controlling the vision detector to display the sighting marks by combining the distance;
step 205: acquiring a judgment result of the user;
step 206: and determining the vision grade of the user based on the judgment result of the user.
After the user's vision grade is obtained, the vision grade can be bound to the user and stored directly, or the vision grade and the user identity obtained by the earlier identity verification can be sent to the cloud server, which performs the binding; when the user needs to query the vision grade, the cloud server is simply accessed.
Certainly, if the user's identity is not sensitive, the identity verification can be omitted. After the user's vision grade is determined, the user enters his or her identity information on the vision tester and the vision tester sends the identity information and the vision grade to the cloud server for binding; or the vision tester sends the vision grade to the cloud server and then displays a two-dimensional code containing an identifier of the vision grade, the user scans the code with his or her intelligent terminal to obtain the identifier, the identity information and the identifier entered on the intelligent terminal are sent to the cloud server, and the cloud server then binds the identity information to the vision grade result.
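The two-dimensional-code alternative can be sketched with the widely used qrcode package. The identifier format, the output file name and the comment about uploading are placeholders for illustration only, not part of the disclosure.

```python
import uuid
import qrcode  # third-party package: pip install qrcode[pil]

def publish_result_qr(vision_level, output_path="result_qr.png"):
    """Generate a QR code carrying an identifier of the vision result.

    The vision tester would upload (identifier, vision_level) to the cloud
    server; the user scans the code with the intelligent terminal and sends
    the identifier together with his or her identity information, so the
    server can bind the two. The identifier format here is a placeholder.
    """
    result_id = uuid.uuid4().hex
    # In a real deployment the pair (result_id, vision_level) would be
    # uploaded to the cloud server at this point.
    img = qrcode.make(result_id)
    img.save(output_path)
    return result_id

print(publish_result_qr(5.0))
```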
In the embodiment of the present invention, when it is detected that a user is in front of the vision tester, the user is first authenticated, and the subsequent vision test is performed only after the user passes the authentication.
Referring to fig. 10, fig. 10 is a flowchart of a vision testing method according to a third embodiment of the present invention. The method includes:
step 201: acquiring an environment image in front of the vision detector;
step 202: judging whether the environment image contains a user, if so, executing step 203, otherwise, returning to execute step 201;
step 203: determining a distance between the user and the vision tester;
step 204: controlling the vision detector to display the sighting marks by combining the distance;
step 210: identifying whether the user has a cheating action, if not, executing step 205, otherwise, executing step 211;
If the user cheats during the vision test, the measured result is not accurate; therefore, during the vision test it is necessary to watch for cheating behaviors, for example: the user's upper body leans forward, the user wears glasses while the current mode is the naked-eye vision examination mode, the user does not cover either eye, and/or the eye covered by the user is the same as the eye currently being tested. Images for recognizing the user's cheating behavior can be acquired by the camera on the vision tester and the behavior recognized from those images; the recognition algorithm may employ a neural network algorithm.
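A purely rule-based reading of the listed cheating behaviors is sketched below. It assumes that per-frame detections (posture, glasses, occluded eye) are produced by an upstream model; the field names and the input format are hypothetical.

```python
def detect_cheating(frame_info, exam_mode, eye_under_test):
    """frame_info: dict of detections for the current camera frame, e.g.
    {"upper_body_leaning_forward": False, "wearing_glasses": True,
     "occluded_eye": "left"}  -- field names are illustrative only.

    exam_mode: "naked_eye" or "corrected"
    eye_under_test: "left" or "right" (the eye currently being tested,
    so the *other* eye should be the one covered).
    """
    if frame_info.get("upper_body_leaning_forward"):
        return True
    if exam_mode == "naked_eye" and frame_info.get("wearing_glasses"):
        return True
    occluded = frame_info.get("occluded_eye")  # None if no eye is covered
    if occluded is None or occluded == eye_under_test:
        return True
    return False
```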
Step 211: outputting a cheating alarm prompt, re-extracting the sighting target, and returning to the step 204;
step 205: acquiring a judgment result of the user;
step 206: and determining the vision grade of the user based on the judgment result of the user.
In the embodiment of the present invention, during the vision test, when the user exhibits a cheating behavior, the user's judgment result is not recorded and a cheating alarm prompt is output, so that the user can correct his or her actions and the accuracy of the vision test is ensured.
The invention further provides an embodiment of the device for detecting eyesight. As shown in fig. 11, the apparatus for eyesight test 30 includes an obtaining module 301, a judging module 302, a first determining module 303, a control module 304, and a second determining module 305.
The acquisition module 301 is configured to acquire an environment image in front of the vision tester. The determining module 302 is configured to determine whether the environment image contains a user. The first determining module 303 is configured to determine the distance between the user and the vision tester if the environment image contains a user. The control module 304 is configured to control the vision tester to display the optotype in combination with the distance and obtain the user's judgment result. The second determining module 305 is configured to determine the vision grade of the user based on the user's judgment result.
In some embodiments, the control module 304 is further specifically configured to: acquiring a vision examination mode currently selected by the vision detector and a standard distance corresponding to the vision examination mode; judging whether the distance is the same as the standard distance; if the visual targets are the same, extracting the visual targets from a preset standard visual chart, and controlling the vision detector to display the extracted visual targets; if not, outputting prompt information for prompting the user to adjust the position until the distance between the user and the vision detector is equal to a standard distance or calculating the proportion of the distance to the standard distance, zooming the optotypes extracted from the preset standard visual acuity chart according to the proportion, and controlling the vision detector to display the zoomed optotypes.
In some embodiments, the second determining module 305 is specifically configured to: acquire the user's judgment result for the current optotype; judge whether the user's judgment result for the current optotype is consistent with the user's judgment results for all previous optotypes, and if so, adjust the preset step length; if the adjusted preset step length is greater than or equal to the preset step length threshold, update the optotype according to the user's judgment result for the current optotype, the vision grade of the current optotype and the preset step length; if the results are not consistent, keep the preset step length unchanged and perform the step of updating the optotype according to the user's judgment result for the current optotype, the vision grade of the current optotype and the preset step length; judge whether the vision grade of the updated optotype exceeds the preset vision grade range; if it does, take the vision grade of the last optotype that the user judged correctly as the estimated grade of the user; when the adjusted preset step length is smaller than the preset step length threshold, take the vision grade of the last optotype that the user judged correctly as the estimated grade of the user; record the user's judgment result for at least one optotype in the optotype list corresponding to the estimated grade; when the judgment result meets the vision grade test condition, acquire the user's judgment result for at least one optotype in the optotype list corresponding to the next higher vision grade until the judgment result no longer meets the condition, and take the vision grade corresponding to the last judgment result that meets the condition as the user's vision grade; when the judgment result does not meet the vision grade test condition, acquire the user's judgment result for at least one optotype in the optotype list corresponding to the next lower vision grade until the judgment result meets the condition, and take the vision grade corresponding to the final judgment result that meets the condition as the user's vision grade.
The apparatus for vision testing 30 may also include a first identification module 306, a correction module 307, an extraction module 308, a verification module 309, and a first output module 310.
The first recognition module 306 is configured to recognize whether the face of the user in the environment image is a front face. The correcting module 307 is configured to correct the face of the user into a front face according to a preset front-face correction algorithm if it is not a front face. The extracting module 308 is configured to extract the facial features of the user from the corrected environment image when the first recognition module 306 recognizes that the face in the environment image is not a front face, and to extract the facial features of the user directly from the environment image when the first recognition module 306 recognizes that the face is a front face. The verification module 309 is configured to authenticate the user according to the facial features and to trigger the first determining module 303 when the authentication passes. The first output module 310 is configured to output a cheating alarm prompt when the authentication does not pass.
In some embodiments, the verification module 309 is further specifically configured to: judge whether the facial features match the facial features of the currently specified person to be tested, determine that the user passes the identity authentication if they match, and determine that the user does not pass if they do not; or judge whether the facial features match the facial features of a user in a preset testee library, determine that the user passes the identity authentication if a match is found, and determine that the user does not pass otherwise.
Further, the apparatus for eyesight test 30 further includes a second identification module 311 and a second output module 312.
The second identification module 311 is used for identifying whether the user exhibits a cheating action. The second output module 312 is configured to output a cheating alarm prompt if a cheating action exists, re-extract the optotype and return to the control module 304; if no cheating action exists, the second determining module 305 is executed. In some embodiments, the cheating action comprises: the user's upper body leans forward, the user wears glasses while the current mode is the naked-eye vision examination mode, the user does not cover either eye, and/or the eye covered by the user is the same as the eye currently being tested.
In the embodiment of the present invention, when the determining module 302 determines that the environment image in front of the vision tester contains a user, the first determining module 303 determines the distance between the user and the vision tester, and the control module 304 controls the vision tester to display the optotype in combination with that distance, so that the vision test is performed on the user. The vision test thus starts automatically and the whole test is carried out without assisting personnel, which saves labor cost.
An embodiment of the present invention provides a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the computer executable instruction may execute the method for detecting eyesight in any method embodiment described above.
Fig. 12 is a schematic structural diagram of an embodiment of the vision tester of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the vision tester.
As shown in fig. 12, the vision tester may include: a processor (processor)402, a communication Interface 404, a memory 406, a communication bus 408, a camera 409, a display 410, and a communication module 411.
Wherein the processor 402, the communication interface 404, the memory 406, the camera 409, the display 410 and the communication module 411 communicate with one another through the communication bus 408. The processor 402, the memory 406 and the communication interface 404 constitute the controller shown in fig. 2, and the camera 409, the display 410 and the communication module 411 correspond to those shown in fig. 2.
The communication interface 404 is used for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically execute the relevant steps of the vision testing method embodiments described above.
In particular, program 410 may include program code comprising computer operating instructions.
The processor 402 may be a central processing unit (CPU), or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The vision tester includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is used for storing the program 410. The memory 406 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 410 may specifically be configured to cause the processor 402 to perform the following operations:
acquiring an environment image in front of the vision detector;
judging whether the environment image contains a user or not;
if yes, determining the distance between the user and the vision detector;
and controlling the vision detector to display the sighting marks by combining the distance, and determining the vision grade of the user based on the judgment result of the user.
In an alternative manner, the program 410 causes the processor to perform the following further operations prior to the step of determining the distance between the user and the vision tester:
extracting facial features of the user from the environment image;
according to the facial features, performing identity verification on the user;
upon passing the authentication, performing the step of determining the distance between the user and the vision tester;
and outputting a cheating alarm prompt when the identity authentication is not passed.
In an alternative manner, program 410 causes the processor to perform operations for authenticating the user based on the facial features, including:
judging whether the facial features are matched with the facial features of the currently specified person to be tested, if so, determining that the user passes the identity authentication, if not, determining that the user does not pass the identity authentication,
or,
and judging whether the facial features are matched with the facial features of the user in a preset testee library, if so, determining that the user passes the identity authentication, and if not, determining that the user does not pass the identity authentication.
In an alternative manner, program 410 causes the processor to perform operations of extracting facial features of the user from the environmental image, including:
identifying whether the face of the user in the environment image is a positive face;
if the face is not the front face, correcting the face of the user into the front face according to a preset front face correction algorithm;
extracting facial features of the user from the corrected environment image;
and if the face is the front face, extracting the facial features of the user directly from the environment image.
In an alternative manner, program 410 causes the processor to perform the operation of controlling the vision tester to display the optotype in conjunction with the distance, including:
acquiring a vision examination mode currently selected by the vision detector and a standard distance corresponding to the vision examination mode;
judging whether the distance is the same as the standard distance;
if the visual targets are the same, extracting the visual targets from a preset standard visual chart, and controlling the vision detector to display the extracted visual targets;
if not, outputting prompt information for prompting the user to adjust the position until the distance between the user and the vision detector is equal to a standard distance, or calculating the proportion of the distance to the standard distance, zooming the visual target extracted from the preset standard visual acuity chart according to the proportion, and controlling the vision detector to display the zoomed visual target.
In an alternative manner, the program 410 causes the processor to execute the following operations before the obtaining of the judgment result of the user for the optotype:
identifying whether the user has a cheating action;
if yes, outputting a cheating alarm prompt, re-extracting the sighting target, and returning to the step of controlling the vision detector to display the sighting target;
and if not, executing the step of obtaining the judgment result of the user aiming at the sighting mark.
In an alternative approach, the cheating action comprises: the user's upper body leans forward, the user wears glasses while the current mode is the naked-eye vision examination mode, the user does not cover either eye, and/or the eye covered by the user is the same as the eye currently being tested.
In an alternative approach, the program 410 causes the processor to perform: controlling the vision testing instrument to display a visual target and obtain a judgment result of the user for the visual target, and determining the vision grade of the user based on the judgment result of the user for the visual target comprises the following operations:
controlling the vision detector to display a visual target;
acquiring a judgment result of the user on the current sighting target;
judging whether the judgment result of the user on the current sighting target is consistent with the judgment results of the user on all previous sighting targets, and if so, adjusting the preset step length;
if the adjusted preset step length is larger than or equal to the preset step length threshold value, updating the sighting target according to the judgment result of the user on the current sighting target, the vision grade of the current sighting target and the preset step length;
if the judgment results are not consistent, keeping the preset step length unchanged, and executing the step of updating the sighting target according to the judgment result of the user on the current sighting target, the vision grade of the current sighting target and the preset step length;
judging whether the vision grade of the updated sighting target exceeds a preset vision grade range or not;
if it exceeds the preset vision grade range, taking the vision grade of the last sighting target that the user judged correctly as the estimated grade of the user;
when the adjusted preset step length is smaller than the preset step length threshold value, taking the vision grade of the sighting target with the correct last judgment result of the user as the estimated grade of the user;
recording a judgment result of the user on at least one visual target in the visual target list corresponding to the estimated grade;
when the judgment result meets the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the higher vision grade of the user until the judgment result does not meet the vision grade test condition;
taking the vision grade corresponding to the judgment result which finally meets the vision grade test condition as the vision grade of the user;
when the judgment result does not meet the vision grade test condition, acquiring the judgment result of at least one visual target in the visual target list corresponding to the lower vision grade of the user until the judgment result meets the vision grade test condition;
and taking the vision grade corresponding to the judgment result which finally meets the vision grade test condition as the vision grade of the user.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.