CN113197542A - Online self-service vision detection system, mobile terminal and storage medium - Google Patents


Publication number: CN113197542A
Authority: CN (China)
Prior art keywords: coordinate, coordinates, display screen, shaped, eye
Legal status: Granted (assumption, not a legal conclusion)
Application number: CN202110483570.0A
Other languages: Chinese (zh)
Other versions: CN113197542B (en)
Inventor: 邝瑰琦
Current Assignee: Wuhan Tesley Information Technology Co., Ltd.
Original Assignee: Wuhan Tesley Information Technology Co., Ltd.
Application filed by Wuhan Tesley Information Technology Co., Ltd.; priority to CN202110483570.0A
Publication of CN113197542A; application granted; publication of granted patent CN113197542B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/028: Subjective types for testing visual acuity; for determination of refraction, e.g. phoropters
    • A61B3/032: Devices for presenting test symbols or characters, e.g. test chart projectors
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses an online self-service vision detection system, a mobile terminal and a storage medium. The system comprises a distance calculation module, a size calculation module, a display adjustment module, a vision judgment module and a cheating detection module. The invention enables online self-service vision detection and improves its convenience: the distance between the human eye node and the center of the E-shaped optotype, together with the angle and direction from the eyes to the display screen, is calculated in real time, and the size of the E-shaped optotype displayed on the display screen and the positions of its four vertexes are adjusted in real time, which reduces measurement error; a series of cheating detection measures prevents testers from cheating with foreign objects, guaranteeing that online self-service vision test results are authentic and valid.

Description

Online self-service vision detection system, mobile terminal and storage medium
Technical Field
The invention relates to the field of self-service physical examination systems, and in particular to an online self-service vision detection system, a mobile terminal and a storage medium.
Background
Vision detection is an important index of physical condition; for example, applicants for a motor vehicle driving licence must meet different vision requirements depending on the class of vehicle applied for.
At present, most vision tests use the standard logarithmic visual acuity chart meeting national standard GB 11533-2011 and must be performed on site, at a fixed location or in a fixed space, under the guidance of medical staff, which is inconvenient. Some vision detection methods based on mobile terminals have appeared in the prior art, but their accuracy still needs improvement, they are unsuitable for groups such as motor vehicle drivers whose vision values must be measured accurately, and they cannot adequately prevent cheating when a third party requires authentic, objective and valid vision test results.
Disclosure of Invention
In view of the above, the invention discloses an online self-service vision detection system to solve the poor applicability of existing vision detection methods to groups whose vision values must be measured accurately.
In a first aspect of the present invention, an online self-help vision detection system is disclosed, the system comprising:
a distance calculation module: used for acquiring the center position of an E-shaped optotype randomly generated on the display screen and calculating in real time the distance between the human eye node and that center position;
a size calculation module: used for calculating in real time, based on the PPI pixel density of the display screen and the distance between the eye node and the E-shaped optotype center, the number of pixels that the side length of the square E-shaped optotype must occupy on the display screen;
a display adjustment module: used for adjusting in real time the size of the E-shaped optotype and the positions of its four vertexes, based on the number of pixels occupied by its side length and on the angle and direction from the eyes to the display screen, so that an optotype that always appears square is virtually displayed along the viewing direction of the eyes;
a vision judgment module: used for acquiring the direction judgment result given for each group of E-shaped optotypes as displayed and performing the vision judgment;
a cheating detection module: used for performing cheating detection on the human body image data collected by the front camera module.
Preferably, the distance calculation module specifically includes:
a depth extraction unit: used for extracting depth structure information of the upper half of the human body through the front camera module, establishing a three-dimensional world coordinate system with the front 3D depth-sensing camera as its origin, and drawing a 3D face mesh in real time;
a position acquisition unit: used for establishing a display screen coordinate system with the first pixel in the upper-left corner of the display screen as its origin, and acquiring the coordinate position on the display screen of the E-shaped optotype randomly generated by the cheating detection module, i.e. the position of the center point of the E-shaped optotype to be displayed;
a distance calculation unit: used for calculating, from the position coordinates of the corneal vertex on the optical axis of the eyeball acquired from the 3D face mesh, the distance between the human eye node and the E-shaped optotype center, i.e. the eye distance, by the formula:
$$d = d_0 + \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$$
where $d_0$ is the distance from the optical center of the eyeball's refractive system to the corneal vertex on the optical axis of the eyeball; $(x_1, y_1, z_1)$ are the position coordinates of the corneal vertex on the optical axis of the eyeball in the world coordinate system; and $(x_2, y_2, z_2)$ are the position coordinates of the randomly generated E-shaped optotype center in the world coordinate system.
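A minimal sketch of this calculation (the function name and sample coordinates are illustrative; the 7 mm default for $d_0$ is the fixed value given in the detailed description below):

```python
import math

def eye_distance(cornea_xyz, target_xyz, d0=7.0):
    """Eye distance d (mm): Euclidean distance from the corneal vertex to
    the optotype centre in world coordinates, plus the offset d0 from the
    optical centre of the eye's refractive system to the corneal vertex."""
    x1, y1, z1 = cornea_xyz
    x2, y2, z2 = target_xyz
    return d0 + math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)

# Hypothetical values: cornea tracked at (10, -20, 380) mm, optotype at (0, -60, 0) mm
d = eye_distance((10.0, -20.0, 380.0), (0.0, -60.0, 0.0))
```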
Preferably, the calculation formula of the size calculation module is as follows:
$$n = \frac{5 \cdot 2\,d \tan(\alpha/2)}{25.4} \cdot \mathrm{PPI}, \qquad \alpha = 10^{\,5-L}\ \text{arcminutes}$$
where n is the number of pixels that the side length of the square E-shaped optotype must occupy on the display screen; d is the distance between the human eye node and the E-shaped optotype center, i.e. the eye distance, in mm; PPI is the pixel density of the display screen; L is the 5-point (five-minute) record value of the row being displayed; and 25.4 converts millimetres to inches.
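A sketch of this sizing step under the stated definitions. The visual-angle relation $\alpha = 10^{5-L}$ arcminutes and the isosceles-triangle relation of FIGS. 8-9 are reconstructions, not quoted from the original formula image; the 458 PPI value matches the iPhone 12 Pro Max named in FIG. 10.

```python
import math

def optotype_side_pixels(d_mm, ppi, L):
    """Pixels for the side length of the square E optotype at eye
    distance d_mm (mm) on a display of the given PPI, for the row with
    5-point record value L (GB 11533-2011 logarithmic chart)."""
    alpha_arcmin = 10 ** (5 - L)                     # visual angle of one stroke
    alpha_rad = math.radians(alpha_arcmin / 60.0)
    stroke_mm = 2 * d_mm * math.tan(alpha_rad / 2)   # isosceles-triangle relation
    side_mm = 5 * stroke_mm                          # the E optotype spans 5 strokes
    return side_mm * ppi / 25.4                      # mm -> pixels (25.4 mm per inch)

n = optotype_side_pixels(400.0, 458, 4.9)            # ~13 px at 40 cm on a 458-PPI screen
```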
Preferably, the cheating detection module specifically includes:
a visual target transformation unit: used for randomly changing the display position and opening direction of the next group of E-shaped optotypes on the display screen after the direction of each group has been judged, and recording in sequence the position coordinates of each group's center in the display screen coordinate system;
a data acquisition unit: used for acquiring in real time the distance d between the human eye node and the E-shaped optotype center and, if the distance is smaller than a preset first distance threshold, issuing a too-close prompt; if the distance is greater than or equal to the first distance threshold, capturing through the front camera module the human posture image during each group's direction judgment, extracting in real time the depth structure information of the upper body from the posture image, and extracting the upper-body joint coordinates from that depth information;
a standard posture detection unit: used for identifying the human posture from the upper-body joint coordinates, comparing it with the standard vision-test posture prompted on the display screen, and judging the posture standard degree;
a cheating posture detection unit: used for judging, by combining the randomly changed E-shaped optotype positions with the corresponding human postures, whether one or more cheating suspicions exist, including an arm extended forward, a foreign object held in the hand, and following of the E-shaped optotype position, and deciding whether cheating has occurred.
Preferably, the standard posture detection unit is specifically configured to: acquire the human posture image and upper-body joint coordinates corresponding to each group's E-shaped optotype judgment result, calculate the limb angles formed by the joint coordinates, calculate the similarity between each limb angle and the corresponding limb angle of the standard posture image, and take the weighted sum of these similarities as the final posture similarity; the posture similarity serves as the posture standard degree, and if it is greater than or equal to a first similarity threshold the posture is deemed standard.
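A sketch of the weighted posture-similarity check. The patent does not specify the per-angle similarity function, the weights or the threshold value; the linear falloff over 180° and the 0.85 threshold below are illustrative assumptions.

```python
def posture_standard_degree(angles_deg, standard_angles_deg, weights):
    """Weighted sum of per-angle similarities between the measured limb
    angles and the standard-posture limb angles."""
    sims = [max(0.0, 1.0 - abs(a - s) / 180.0)       # assumed similarity metric
            for a, s in zip(angles_deg, standard_angles_deg)]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

FIRST_SIMILARITY_THRESHOLD = 0.85                     # hypothetical value
degree = posture_standard_degree([158, 42, 95, 30], [160, 45, 90, 35],
                                 [1.0, 1.0, 1.5, 1.5])
posture_is_standard = degree >= FIRST_SIMILARITY_THRESHOLD
```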
Preferably, acquiring the human posture image and upper-body joint coordinates corresponding to each group's E-shaped optotype judgment result and calculating the limb angles formed by the joint coordinates specifically comprises the following, with a worked angle computation sketched after this passage:
during left-eye testing the tester is guided to cover the right eye with the right hand; the left eye, left shoulder, left elbow, left wrist, right elbow, right wrist and right-hand four-finger fingertip coordinates are extracted from the posture image corresponding to the judgment result, and the right eye and right shoulder coordinates are computed from the distance and angle between the left eye and the display screen together with the bilateral symmetry of the human body; the limb angle between the lines joining the left eye, left shoulder and left elbow coordinates and the limb angle between the lines joining the left shoulder, left elbow and left wrist coordinates are calculated; the limb angle between the lines joining the right shoulder, right elbow and right wrist coordinates is calculated, as well as the limb angles between the line from the right wrist to the right eye and the lines from the right wrist to each of the four right-hand fingertips;
during right-eye testing the tester is guided to cover the left eye with the left hand; the right eye, right shoulder, right elbow, right wrist, left elbow, left wrist and left-hand four-finger fingertip coordinates are extracted from the posture image corresponding to the judgment result, and the left eye and left shoulder coordinates are computed from the distance and angle between the right eye and the display screen together with the bilateral symmetry of the human body; the limb angle between the lines joining the right eye, right shoulder and right elbow coordinates and the limb angle between the lines joining the right shoulder, right elbow and right wrist coordinates are calculated; the limb angle between the lines joining the left shoulder, left elbow and left wrist coordinates is calculated, as well as the limb angles between the line from the left wrist to the left eye and the lines from the left wrist to each of the four left-hand fingertips.
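A sketch of one limb-angle computation from three joint coordinates (names and sample values are illustrative):

```python
import math

def limb_angle_deg(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c,
    e.g. the elbow angle given shoulder, elbow and wrist coordinates."""
    v1 = tuple(p - q for p, q in zip(a, b))
    v2 = tuple(p - q for p, q in zip(c, b))
    dot = sum(p * q for p, q in zip(v1, v2))
    norm = math.sqrt(sum(p * p for p in v1)) * math.sqrt(sum(p * p for p in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# left shoulder, left elbow, left wrist in world coordinates (mm, hypothetical)
elbow = limb_angle_deg((150, 0, 400), (180, -120, 420), (120, -220, 380))
```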
Preferably, the cheating posture detection unit is specifically configured to perform:
arm-extension judgment: when the distance between either elbow joint and the display screen is smaller than a preset second distance threshold, the arm is judged to be extended forward;
held-foreign-object judgment: foreign objects are detected through image recognition to decide whether one is held in the hand;
optotype-position-following judgment: during the vision test of one eye, the position coordinates of each displayed E-shaped optotype on the display screen are acquired and assembled, in display order, into an optotype transformation vector set; the elbow joint coordinates in the upper-body joint coordinates corresponding to each group's judgment result are acquired and assembled, in judgment order, into an elbow joint motion vector set; the similarity between the two vector sets is calculated, and when it is greater than or equal to a preset second similarity threshold the arm position is judged to follow the E-shaped optotype, i.e. the optotype-position-following condition exists;
cheating judgment: if, during any group's direction judgment, the posture standard degree is below the first similarity threshold and any of the arm-extension, held-foreign-object or optotype-position-following conditions exists, cheating is judged and a cheating alert is issued.
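A sketch of the final decision rule described above (the threshold value is hypothetical):

```python
def is_cheating(posture_standard_degree, arm_extended, foreign_object_held,
                optotype_following, first_similarity_threshold=0.85):
    """Cheating is flagged only when the posture standard degree is below
    the first similarity threshold AND at least one suspicion holds."""
    suspicion = arm_extended or foreign_object_held or optotype_following
    return posture_standard_degree < first_similarity_threshold and suspicion
```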
Preferably, let the position coordinates of each displayed E-shaped optotype center in the display screen coordinate system be $(x_{1,i}, y_{1,i})$ and let the elbow joint coordinates, in the X-Y plane of the world coordinate system, corresponding to each group's judgment result be $(x_{2,i}, y_{2,i})$, with $i = 0, 1, 2, \dots, m$; then the optotype transformation vector set is $T_1 = \{(x_{1,i} - x_{1,i-1},\ y_{1,i} - y_{1,i-1}) \mid i = 1, 2, \dots, m\}$ and the elbow joint motion vector set is $T_2 = \{(x_{2,i} - x_{2,i-1},\ y_{2,i} - y_{2,i-1}) \mid i = 1, 2, \dots, m\}$;
The similarity between the optotype transformation vector set and the elbow joint motion vector set is calculated as
$$\mathrm{Sim}(T_1, T_2) = \frac{1}{m} \sum_{i=1}^{m} \cos\langle T_{1,i}, T_{2,i} \rangle, \qquad \cos\langle T_{1,i}, T_{2,i} \rangle = \frac{T_{1,i} \cdot T_{2,i}}{\lVert T_{1,i} \rVert\, \lVert T_{2,i} \rVert}$$
where m is the total number of E-shaped optotypes displayed in the monocular vision test minus 1.
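A sketch of the following check; averaging the per-pair cosines matches the reconstruction of the formula above and is an assumption, as are the sample coordinates.

```python
import math

def following_similarity(optotype_centres, elbow_positions):
    """Mean cosine similarity between successive displacement vectors of
    the optotype centres (screen coordinates) and of the elbow joint
    (world X-Y plane); uses m = len(optotype_centres) - 1 vector pairs."""
    def deltas(pts):
        return [(x1 - x0, y1 - y0) for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    def cosine(u, v):
        nu, nv = math.hypot(*u), math.hypot(*v)
        return (u[0] * v[0] + u[1] * v[1]) / (nu * nv) if nu and nv else 0.0
    t1, t2 = deltas(optotype_centres), deltas(elbow_positions)
    return sum(cosine(u, v) for u, v in zip(t1, t2)) / len(t1)

# suspicious: elbow displacements mirror the optotype displacements (sim ~ 1.0)
sim = following_similarity([(100, 200), (400, 250), (220, 600)],
                           [(-60, 10), (-30, 15), (-48, 50)])
```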
in a second aspect of the present invention, a mobile terminal is disclosed, which includes: at least one processing unit, at least one memory unit, a communication interface and a bus; the processing unit, the storage unit and the communication interface complete mutual communication through the bus; the storage unit stores program instructions executable by the processing unit, and the processing unit calls the program instructions to implement the system of the first aspect of the invention; the types of the mobile terminal include: smart mobile phone, portable computer, panel computer, PDA intelligent terminal, on-vehicle intelligent terminal, wearable equipment includes: intelligence wrist-watch, intelligent bracelet, intelligent ring.
In a third aspect of the invention, a computer-readable storage medium is disclosed, which stores computer instructions for causing a computer to implement the system of the first aspect of the invention.
Compared with the prior art, the invention has the following beneficial effects:
1) The invention enables online self-service vision detection, allowing accurate self-testing without constraints of time or place and without assistance from auxiliary personnel, which improves the convenience of vision testing;
2) The distance between the eye node and the E-shaped optotype center is calculated in real time and the size of the optotype displayed on the display screen is adjusted accordingly; the size and the positions of the four vertexes are further adjusted in real time according to the angle and direction from the eyes to the display screen, so that an optotype that always appears square is virtually displayed along the viewing direction of the eyes; this reduces measurement error and fully meets the vision test standard required by national standard GB 11533-2011;
3) The invention randomly changes the display position and opening direction of each group of E-shaped optotypes on the display screen to prevent cheating by testers; during each group's direction judgment it also performs cheating detection on the human posture images collected by the front camera module, judging the posture standard degree by comparison with the standard vision-test posture prompted on the display screen; by combining the randomly transformed optotype positions with the corresponding human postures, it judges whether one or more cheating suspicions exist, including an arm extended forward, a foreign object held in the hand, and following of the optotype position, and finally decides whether cheating has occurred, preventing testers from cheating with foreign objects such as magnifying glasses and guaranteeing authentic, valid vision test results;
4) The invention suits not only ordinary users' self-service vision tests but also groups, such as motor vehicle drivers, whose vision values must be measured accurately and authentically at the request of a third party; while preserving convenience, it further guarantees the authenticity and objectivity of the test results and effectively prevents cheating.
Drawings
FIG. 1 is a schematic structural diagram of an on-line self-help vision inspection system according to the present invention;
FIG. 2 is a block diagram of the operation flow of the on-line self-help physical examination of a motor vehicle driver according to the present invention;
FIG. 3 is a schematic diagram of spherical panoramic shooting, taking the rear camera of a mobile terminal as an example;
FIG. 4 is a block diagram of the working flow of the spherical panoramic photo stitching method of the present invention;
FIG. 5 is a schematic diagram of the distance between the vertex of the cornea and the center of the E-shaped optotype on the optical axis of the eyeball in the world coordinate system;
FIG. 6 is a schematic diagram of the real-time dynamic adjustment of the size of the E-shaped optotype with changes in eye distance;
FIG. 7 is a schematic diagram of left eye vision testing;
FIG. 8 is a diagram illustrating the relationship between the eye distance d and the stroke width h of the optotype in the definition of the viewing angle α;
FIG. 9 is a schematic view of an isosceles triangle formed by the viewing angle α, the eye distance d, and the optotype stroke width h;
FIG. 10 is a diagram illustrating coordinate conversion of the E-shaped optotype center position, taking the iPhone 12 Pro Max as an example;
FIG. 11 is a schematic diagram of the comparison calculation of the non-standard eye distance and the standard eye distance;
FIG. 12 is a comparison graph of the actual effect of the traditional visual chart light box and the mobile terminal display screen seen by eyes at different distances;
FIG. 13 is a comparison graph of the display effect before and after the adjustment of the size of the E-shaped optotype and the positions of four vertexes;
FIG. 14 is a schematic diagram illustrating the effect of randomly changing the display position of each group of E-shaped optotypes on the display screen;
FIG. 15 is a flow chart of the preparation steps of the present invention;
FIG. 16 is a flow chart of a portion of the steps of the environmental test of the present invention;
FIG. 17 is a flow chart of a portion of the steps of a hearing test according to the present invention;
FIG. 18 is a flow chart of some of the steps of the limb detection of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to those embodiments; obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art, without inventive effort, on the basis of the embodiments of the present invention fall within the scope of the invention.
Referring to FIG. 1, the present invention discloses an online self-service vision detection system comprising a distance calculation module 10, a size calculation module 20, a display adjustment module 30, a vision judgment module 40 and a cheating detection module 50, and further comprising an environment detection module 60. Before the distance calculation in a vision test, the environment detection module 60 calls the front and rear cameras of the handheld mobile terminal to photograph the indoor environment in a specified shooting mode: a first spherical panoramic photo is generated from the environment images collected by the rear camera, a second spherical panoramic photo is generated from the environment images collected by the front camera, and audio and video files of the indoor environment are recorded at the same time. The first and second spherical panoramic photos are fused to obtain a third spherical panoramic photo, and environment detection is performed on the third photo and the audio/video files: the environment is qualified if it contains no image or audio data of any other person and has only one closed room entrance. The mobile terminal is then placed on the stand with the front camera facing the tester's face and upper body and the rear camera facing the room's only entrance, and the vision test begins.
The distance calculation module 10 is configured to obtain a center position of an E-shaped visual target randomly generated on a display screen, and calculate a distance between a human eye node and the center position of the E-shaped visual target in real time; the distance calculation module 10 specifically includes:
a depth extraction unit: used for extracting depth structure information of the upper half of the human body through the front camera module, establishing a three-dimensional world coordinate system with the front 3D depth-sensing camera as its origin, and drawing a 3D face mesh in real time;
a position acquisition unit: used for establishing a display screen coordinate system with the first pixel in the upper-left corner of the display screen as its origin, and acquiring the coordinate position on the display screen of the E-shaped optotype randomly generated by the cheating detection module, i.e. the position of the center point of the E-shaped optotype to be displayed;
a distance calculation unit: used for calculating, from the position coordinates of the corneal vertex on the optical axis of the eyeball acquired from the 3D face mesh, the distance between the human eye node and the E-shaped optotype center, i.e. the eye distance, by the formula:
$$d = d_0 + \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$$
where $d_0$ is the distance from the optical center of the eye's refractive system to the corneal vertex on the optical axis of the eyeball, usually taken as a fixed value of 7 mm; $(x_1, y_1, z_1)$ are the position coordinates of the corneal vertex on the optical axis of the eyeball in the world coordinate system; and $(x_2, y_2, z_2)$ are the position coordinates of the randomly generated E-shaped optotype center in the world coordinate system.
The size calculation module 20 is configured to calculate in real time, based on the PPI pixel density of the display screen and the distance between the eye node and the E-shaped optotype center, the number of pixels that the side length of the square E-shaped optotype must occupy on the mobile terminal's display screen; the calculation formula is:
$$n = \frac{5 \cdot 2\,d \tan(\alpha/2)}{25.4} \cdot \mathrm{PPI}, \qquad \alpha = 10^{\,5-L}\ \text{arcminutes}$$
where n is the number of pixels that the side length of the square E-shaped optotype must occupy on the mobile terminal's display screen; d is the distance between the human eye node and the E-shaped optotype center, i.e. the eye distance, in mm; PPI is the pixel density of the display screen; and L is the 5-point (five-minute) record value of the row being displayed.
The display adjusting module 30 is configured to adjust the size of the E-shaped sighting target and the positions of four vertexes in real time based on the number of pixels occupied by the side length of the E-shaped sighting target, the angle from the eye to the display screen, and the direction of the eye, so that the E-shaped sighting target which is always square is virtually displayed in the visual angle direction of the eye;
Specifically, please refer to FIGS. 5, 6 and 13: FIG. 5 is a schematic diagram of the distance between the corneal vertex on the optical axis of the eyeball and the E-shaped optotype center position in the world coordinate system; FIG. 6 is a schematic diagram of the real-time dynamic adjustment of the E-shaped optotype size as the eye distance changes; FIG. 13 is a comparison of the display effect before and after adjusting the E-shaped optotype size and the positions of its four vertexes;
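The patent does not disclose the vertex-adjustment math; the sketch below is one plausible geometric realization: place a virtual square of the required physical size in the plane facing the eye, then project its corners from the eye node onto the display plane (assumed here to be z = 0 in world coordinates; conversion of the result to screen pixels via the PPI is omitted).

```python
import numpy as np

def adjusted_vertices(eye_mm, centre_mm, side_mm):
    """Four screen-plane points that appear as a square of side side_mm
    when viewed from eye_mm along the line of sight to centre_mm."""
    eye, centre = np.asarray(eye_mm, float), np.asarray(centre_mm, float)
    view = centre - eye
    view /= np.linalg.norm(view)
    up = np.array([0.0, 1.0, 0.0])                # assumes the view is not vertical
    right = np.cross(up, view); right /= np.linalg.norm(right)
    true_up = np.cross(view, right)
    h = side_mm / 2.0
    vertices = []
    for sx, sy in ((-1, -1), (-1, 1), (1, 1), (1, -1)):
        corner = centre + sx * h * right + sy * h * true_up  # virtual square corner
        ray = corner - eye
        t = -eye[2] / ray[2]                      # intersect ray with plane z = 0
        vertices.append(tuple((eye + t * ray)[:2]))
    return vertices
```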
the vision judging module 40 is configured to obtain a direction judging result corresponding to each group of E-shaped sighting marks when displayed, and perform vision judgment;
the cheating detection module 50 is used for randomly changing the display position and the opening direction of each group of E-shaped sighting marks on the display screen and carrying out cheating detection through human body image data collected by the front camera module.
The cheating detection module 50 specifically includes: a sighting target transformation unit 501, a data acquisition unit 502, a standard posture detection unit 503 and a cheating posture detection unit 504;
the sighting mark transformation unit 501 is configured to randomly transform the display position and the opening direction of the next group of E-shaped sighting marks on the display screen after the direction of each group of E-shaped sighting marks is determined, and sequentially record the position coordinates of the center of each group of E-shaped sighting marks in the display screen coordinate system;
the data acquisition unit 502 is configured to acquire a distance d between a human eye node and a center position of the E-shaped sighting target in real time, and send a prompt of an excessively short distance if the distance is smaller than a preset first distance threshold; if the distance is larger than or equal to the first distance threshold value, acquiring a human body posture image in the process of judging the direction of each group of E-shaped sighting mark through a front camera module, extracting the depth structure information of the upper half body of the human body in real time according to the human body posture image, and extracting the coordinates of the joint points of the upper half body of the human body according to the depth structure information of the upper half body of the human body; the coordinates of the upper body joint points of the human body comprise coordinates of eyes, left shoulder coordinates, left elbow joint coordinates, left wrist joint coordinates, coordinates of fingertips of four fingers of the left hand except thumbs, coordinates of right shoulder coordinates, coordinates of right elbow joint coordinates, coordinates of right wrist joint coordinates and coordinates of fingertips of four fingers of the right hand except thumbs;
the standard posture detection unit 503 is configured to identify a posture of the human body according to coordinates of the upper body joint point of the human body, compare the posture with a standard vision detection posture prompted on the display screen, and determine a standard degree of the posture; the standard posture detection unit is specifically configured to: acquiring a human body posture image and the upper half body joint point coordinates of the human body corresponding to each group of E-shaped sighting mark judgment results, and calculating a limb included angle formed by the joint point coordinates;
Specifically, during left-eye testing the tester is guided to cover the right eye with the right hand; the left eye, left shoulder, left elbow, left wrist, right elbow, right wrist and right-hand four-finger fingertip coordinates are extracted from the posture image corresponding to the judgment result, and the right eye and right shoulder coordinates are computed from the distance and angle between the left eye and the display screen together with the bilateral symmetry of the human body; the limb angle between the lines joining the left eye, left shoulder and left elbow coordinates and the limb angle between the lines joining the left shoulder, left elbow and left wrist coordinates are calculated; the limb angle between the lines joining the right shoulder, right elbow and right wrist coordinates is calculated, as well as the limb angles between the line from the right wrist to the right eye and the lines from the right wrist to each of the four right-hand fingertips;
during right-eye testing the tester is guided to cover the left eye with the left hand; the right eye, right shoulder, right elbow, right wrist, left elbow, left wrist and left-hand four-finger fingertip coordinates are extracted from the posture image corresponding to the judgment result, and the left eye and left shoulder coordinates are computed from the distance and angle between the right eye and the display screen together with the bilateral symmetry of the human body; the limb angle between the lines joining the right eye, right shoulder and right elbow coordinates and the limb angle between the lines joining the right shoulder, right elbow and right wrist coordinates are calculated; the limb angle between the lines joining the left shoulder, left elbow and left wrist coordinates is calculated, as well as the limb angles between the line from the left wrist to the left eye and the lines from the left wrist to each of the four left-hand fingertips.
The similarity between each limb angle and the corresponding limb angle of the standard posture image is then calculated, and the weighted sum of these similarities gives the final posture similarity; the posture similarity serves as the posture standard degree, and if it is greater than or equal to the first similarity threshold the posture is deemed standard.
The cheating posture detection unit 504 is configured to judge, by combining the randomly changed E-shaped optotype positions with the corresponding human postures, whether one or more cheating suspicions exist, including an arm extended forward, a foreign object held in the hand, and following of the E-shaped optotype position, and to decide whether cheating has occurred. The cheating posture detection unit 504 specifically includes:
the arm-extension judgment subunit, which judges that an arm is extended forward when the distance between either elbow joint and the display screen is smaller than a preset second distance threshold;
the held-foreign-object judgment subunit, which detects foreign objects through image recognition and judges whether one is held in the hand;
the optotype-position-following judgment subunit, which, during the vision test of one eye, acquires the position coordinates of each displayed E-shaped optotype on the display screen, forms an optotype coordinate set in display order, calculates the transformation vectors between adjacent coordinates in that set, and generates the optotype transformation vector set; it likewise acquires the elbow joint coordinates in the upper-body joint coordinates corresponding to each group's judgment result, forms an elbow joint coordinate set in judgment order, calculates the transformation vectors between adjacent coordinates, and generates the elbow joint motion vector set; it then calculates the similarity between the two vector sets, and when the similarity is greater than or equal to a preset second similarity threshold it judges that the arm position follows the E-shaped optotype, i.e. the optotype-position-following condition exists;
Specifically, let the position coordinates of each displayed E-shaped optotype center in the display screen coordinate system be $(x_{1,i}, y_{1,i})$ and let the elbow joint coordinates, in the X-Y plane of the world coordinate system, corresponding to each group's judgment result be $(x_{2,i}, y_{2,i})$, with $i = 0, 1, 2, \dots, m$; then the optotype transformation vector set is $T_1 = \{(x_{1,i} - x_{1,i-1},\ y_{1,i} - y_{1,i-1}) \mid i = 1, 2, \dots, m\}$ and the elbow joint motion vector set is $T_2 = \{(x_{2,i} - x_{2,i-1},\ y_{2,i} - y_{2,i-1}) \mid i = 1, 2, \dots, m\}$;
The similarity between the optotype transformation vector set and the elbow joint motion vector set is calculated as
$$\mathrm{Sim}(T_1, T_2) = \frac{1}{m} \sum_{i=1}^{m} \cos\langle T_{1,i}, T_{2,i} \rangle, \qquad \cos\langle T_{1,i}, T_{2,i} \rangle = \frac{T_{1,i} \cdot T_{2,i}}{\lVert T_{1,i} \rVert\, \lVert T_{2,i} \rVert}$$
where m is the total number of E-shaped optotypes displayed in the monocular vision test minus 1.
The cheating judgment subunit judges cheating and issues a cheating alert if, during any group's E-shaped optotype direction judgment, the posture standard degree is below the first similarity threshold and any of the arm-extension, held-foreign-object or optotype-position-following conditions exists.
During the vision test, the display position and opening direction of each group of E-shaped optotypes on the display screen are changed randomly to prevent cheating: if each group were displayed at a fixed position, a tester could cheat by, for example, fixing a magnifying glass in front of the display screen, so the display position is randomized. To further ensure the test is authentic and valid, the human posture images captured by the front camera module during each group's direction judgment are also checked for cheating, and the posture standard degree is judged by comparison with the standard vision-test posture prompted on the display screen. When the posture standard degree is high, the tester's direction judgment for that group is accepted directly; otherwise further judgment is made, since cheating is likely when an arm is extended forward, a foreign object is held, or the arm moves with the E-shaped optotype position. The invention therefore combines the posture standard degree, the randomly changed optotype positions and the corresponding human postures to judge whether one or more cheating suspicions exist, and finally decides whether cheating has occurred, preventing testers from cheating with foreign objects such as magnifying glasses while keeping misjudgments to a minimum, and guaranteeing authentic, valid vision test results.
The invention suits not only ordinary users' online self-service vision tests but also groups, such as motor vehicle drivers, whose vision values must be measured accurately and authentically at the request of a third party; while preserving convenience, it further guarantees the authenticity and objectivity of the test results and effectively prevents cheating.
Based on the above system, applying the online self-service vision detection system to the physical-condition examination items for motor vehicle drivers further enables online self-service physical examination of drivers. This embodiment provides an online self-service physical examination method for motor vehicle drivers, implemented by an application preset in the mobile terminal working with a background system; the examination covers height detection, identity verification, environment detection, vision detection, color discrimination detection, hearing detection, trunk and neck detection, upper limb detection and lower limb detection. FIG. 2 is a block diagram of the operation flow of the online self-service physical examination of a motor vehicle driver according to the present invention.
First, early preparation
S1, the user logs in;
S2, preparation; FIG. 15 is a flow chart of the preparation steps of the present invention;
S21, the driver is guided to choose a bright, spacious room with a table and chair, an area of more than 6 square meters and only one entrance; to arrange the table and chair so the seat faces the room entrance; to prepare a stand and a pair of earphones and enlist a relative or friend; to place the stand and earphones on the desktop; to sit facing the room entrance; and to place the mobile terminal on the stand and adjust the stand's angle so that the front camera faces the driver's face and upper body and the rear camera faces the room's only entrance;
S3, information entry:
S31, by calling the rear camera, the application captures photos of the driver's relevant certificates and of the driver holding his or her identity card, and obtains the examinee's name, sex, certificate type, certificate number, mobile phone number, driving licence file number and currently licensed vehicle class.
Second, height detection
S41, according to the detected sensor configuration of the mobile terminal, the application automatically adopts a suitable AR measurement scheme, displays the camera picture in a viewfinder on the display screen, guides the relative or friend operating the terminal to mark the driver's soles and the top of the head manually in the viewfinder (or identifies them automatically by AI) and to take a full-body photo, thereby completing height detection.
Third, identity authentication
S51, the application performs liveness detection and face comparison on the driver through the front camera to complete identity verification.
Fourth, environment detection
The environment detection step simultaneously calls the front and rear cameras of the handheld mobile terminal to collect indoor environment images in a specified shooting mode for physical-examination environment detection. The indoor environment images are acquired either by guiding the driver to hold the terminal in both hands and rotate horizontally around the vertical line between the terminal and the ground while turning both wrists so that the terminal also pitches through ±90°, or by using a support device capable of simultaneous horizontal rotation and vertical flipping to carry the terminal through a 360° spherical panoramic shoot at set rotation and flip speeds. This embodiment illustrates the specific environment detection steps using the first acquisition manner.
S61, an environment detection guide page is entered in the application; the application guides the driver to stand up and move back with the seat, leaving a specified distance between the seat and the desktop edge to reserve enough room for the subsequent standing and turning;
S62, the application guides the driver to pick up the mobile terminal, hold it horizontally in both hands and raise it to eye level at a distance of 15-20 cm, keeping the display screen level with the line of sight and aiming the rear camera at the room entrance;
S63, the application guides the driver to turn left in place, using the vertical line between the handheld terminal and the ground as the axis and the distance between face and terminal as the radius;
S64, each time the driver moves one step sideways, the application guides the wrists to rotate the terminal up and down through one round of ±90° pitch, until the live view completely covers the blank area of the viewfinder; when the panoramic shooting progress reaches 100%, an arrow prompt on the display screen guides the driver to aim the rear camera at the room entrance to finish the panoramic shooting; the rear camera then starts continuous video recording, which stops when step S68 ends and step S71 begins, and the collected video is uploaded to the background system for review. FIG. 3 is a schematic diagram of spherical panoramic shooting, taking the rear camera of a mobile terminal as an example; FIG. 16 is a flow chart of a portion of the environment detection steps of the present invention.
S65, during the shooting of steps S62-S64 the mobile terminal simultaneously calls the front camera, the microphone (MIC), the rear camera and the IMU inertial measurement unit: the rear camera and the IMU collect indoor environment images, while the front camera, the IMU and the microphone also collect indoor environment images and automatically record audio and video throughout the handheld shoot. The front and rear cameras start shooting at the same time; when the shooting progress reaches 100%, i.e. every spatial angle in the room has been covered, the spherical panoramic shooting is complete and the first and second spherical panoramic photos of the examination environment are stitched automatically. Specifically, each camera serves as its own coordinate origin; while a camera collects environment images, the IMU is called synchronously to obtain the terminal's attitude angles continuously, yielding a group of consecutive photos tagged with attitude angles; feature matching and image stitching on this attitude-angle-tagged photo sequence synthesize a spherical panoramic photo of the examination environment. FIG. 4 is a block diagram of the working flow of the spherical panoramic photo stitching method of the present invention. Before generating the first and second spherical panoramic photos, pre-stored multi-angle full-body photos of the driver are acquired and feature-matched against the environment images collected by the front and/or rear cameras, and the region occupied by the driver's body is removed from those environment images.
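For the feature-matching-and-stitching step, OpenCV's generic stitcher gives a working baseline, as sketched below; the patent's method additionally orders and constrains the matching with the IMU attitude angles, which this plain stitcher does not do.

```python
import cv2

def stitch_panorama(frames):
    """Feature-match and stitch an ordered list of BGR frames into one
    panorama with OpenCV's built-in pipeline."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return pano
```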
S66, the collected environment data are sent to the background system for automatic review and the environment detection result is displayed; if the result is qualified, the process moves to the next step; if not, the environment data are collected again and re-checked;
The first and second spherical panoramic photos are fused in the background to obtain the third spherical panoramic photo, specifically:
the indoor positional correspondence between the first and second spherical panoramic photos is determined from the positional relationship between the front and rear cameras; all stitching seams produced while synthesizing the first spherical panoramic photo are extracted and the positional mapping between each seam and the second spherical panoramic photo is determined; the two photos are first compared by a difference method, the seams with larger differences are screened out and numbered, and the first photo is divided into grid regions sized to the screened seams so that each seam lies within a grid region; the second photo is divided into the same grid at the corresponding positions, image fusion is performed on each pair of corresponding grid regions, and the fused regions are stitched into the third spherical panoramic photo.
Fusing each pair of corresponding grid regions of the first and second spherical panoramic photos specifically comprises: the size of the two grid regions to be fused is examined; if it exceeds a preset threshold, each region is divided at equal intervals into several sub-regions and the information entropy of each sub-region is calculated; the fusion weight of each sub-region is then optimized with the golden sine algorithm, the optimization targets being that, for every sub-region whose information entropy is below a preset entropy threshold, the rate of change of its entropy before and after fusion stays below a preset change-rate threshold, and that the total information entropy of the fused grid region is minimal; if the grid region size is at or below the preset threshold, the golden sine algorithm directly optimizes the fusion weight of each pixel; where a blank area exists in either grid region at the corresponding position, its fusion weight is 0; the two corresponding grid regions are then fused by weighted image fusion. Taking one pair of grid regions to be fused as an example, the golden sine optimization first initializes the population positions randomly and sets the golden-sine parameters, takes the optimization target as the fitness function, computes and updates the best individual position, and iterates until the best position $x_{best}$ is found; $x_{best} = (w_1, w_2, \dots, w_d)$ serves as the fusion weights of the sub-regions of one grid region, the corresponding sub-regions of the other grid region take the weights $(1 - w_1, 1 - w_2, \dots, 1 - w_d)$, where d is the number of sub-regions, and each corresponding pair of sub-regions is fused according to these weights. The same principle applies when the golden sine algorithm directly optimizes per-pixel fusion weights.
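A sketch of the per-grid weighted fusion and the entropy quantity the golden sine optimizer would score; the optimizer loop itself is omitted, and the equal-width column split is a simplification of the equidistant sub-region division.

```python
import numpy as np

def region_entropy(gray):
    """Shannon information entropy of an 8-bit grayscale region."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_grid_regions(region_a, region_b, weights_a):
    """Weighted fusion of two corresponding grid regions: weights_a holds
    one weight per sub-region of region_a (the vector the golden sine
    algorithm searches for); region_b's sub-regions get 1 - w."""
    subs_a = np.array_split(region_a.astype(float), len(weights_a), axis=1)
    subs_b = np.array_split(region_b.astype(float), len(weights_a), axis=1)
    fused = [w * sa + (1.0 - w) * sb
             for w, sa, sb in zip(weights_a, subs_a, subs_b)]
    return np.concatenate(fused, axis=1).astype(region_a.dtype)
```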
Finally, a convolutional neural network performs target recognition on the third spherical panoramic photo to determine whether it contains image traces of any person and whether there is only one closed room entrance, and audio analysis of the recorded audio/video files determines whether any conversation can be heard. If the examination environment contains no image or audio data of another person and has only one closed room entrance, the environment detection is qualified.
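The patent names only "a convolutional neural network"; as one concrete stand-in, an off-the-shelf torchvision detector can flag human figures in the fused panorama (COCO label 1 is "person"); doorway recognition and audio analysis are separate steps not shown.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

def other_person_present(panorama_rgb, score_thresh=0.8):
    """True if a person is detected in the panorama above the threshold."""
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    with torch.no_grad():
        out = model([to_tensor(panorama_rgb)])[0]
    return any(label == 1 and score >= score_thresh
               for label, score in zip(out["labels"].tolist(),
                                       out["scores"].tolist()))
```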
S67, once the environment detection is qualified, the application prompts the driver to keep the rear camera aimed at the room entrance at all times and to complete the following strictly within 30 seconds: place the mobile terminal back on the stand in the previous position and at the previous angle (with the rear camera aimed at the room entrance both during and after placement), return the seat and sit facing the room entrance, confirm the left and right earphone channels, put on the earphones, connect them to the terminal and set the earphone volume above 50%; after 30 seconds the terminal automatically proceeds to the next step.
S68, the driver is reminded to take note: do not pick up the mobile terminal or change the position or angle of the stand before the upper-limb detection item is finished; do not move the desktop; follow the page prompts through the subsequent examination items; use voice operation wherever possible and do not touch areas of the display screen outside the buttons; after 10 seconds the vision detection step starts automatically.
Fifth, vision testing
The online self-service vision detection system described above enables online self-service vision testing of motor vehicle drivers.
S71, the application starts the rear camera to monitor the scene for movement;
S72, before the vision test, the application matches the PPI value of the display screen according to the detected brand, model, parameters and built-in sensor configuration of the mobile terminal; using the front camera module it establishes a three-dimensional world coordinate system with the front 3D depth-sensing camera as origin, extracts the depth structure information of the face, draws the 3D face mesh in real time, and obtains the position coordinates of the corneal vertex on the optical axis of the eyeball by tracking and locating the driver's eyes; it automatically adopts the eye-distance measurement scheme matching the front camera configuration and, combining the currently licensed vehicle class submitted during information entry, determines the vision standard for this test, loads the display parameters of the E-shaped optotype for the corresponding 5-point record value L, and dynamically adjusts the optotype size on the display screen to meet the vision test standard.
S73, the application program guides the driver to state whether glasses are worn: it calls the microphone (MIC) to acquire the voice instruction spoken by the driver according to the prompt on the display screen, and obtains the driver's vision correction status through voice recognition.
S74, the application program guides the driver to keep the line of sight as perpendicular to the display screen as possible, to cover the right eye with the right palm during left-eye vision detection, and to cover the left eye with the left palm during right-eye vision detection (see FIG. 7, a schematic diagram of left-eye vision detection). The eye-covering posture is kept throughout the vision detection, which starts automatically after the countdown ends.
S75, through the distance calculation module and the size calculation module, the application program calculates in real time the number of square pixels that the side length of the square E-shaped optotype must occupy on the display screen of the mobile terminal, and dynamically adjusts the size of the E-shaped optotype accordingly. First, the distance calculation module automatically adopts one of several eye-distance measurement schemes (3D time-of-flight, 3D structured light, binocular stereo vision, or iris-tracking range measurement) according to the detected configuration of the front camera module, and measures in real time the distance between the human eye node and the center position of the E-shaped optotype. For example, when the face extraction unit extracts the depth structure information of the human face through a front camera module using 3D structured light, a three-dimensional world coordinate system is established with the front 3D depth-sensing camera as the origin and a 3D face mesh is drawn in real time; by tracking and positioning the human eyes, the position acquisition unit obtains the position coordinates of the corneal vertex on the eyeball optical axis in the world coordinate system, expressed as (x1, y1, z1), in mm. The position coordinates of the randomly generated E-shaped optotype center point in the display screen coordinate system, expressed as (m, n), in pixels, are acquired and converted by coordinate transformation into the world coordinate system, expressed as (x2, y2, z2), in mm.
Referring to FIG. 5, which schematically shows the distance between the corneal vertex on the eyeball optical axis and the E-shaped optotype center in the world coordinate system: from the position coordinates of the corneal vertex acquired from the 3D face mesh, the distance calculation unit calculates the distance between the human eye node and the E-optotype center position, i.e. the eye distance, by the formula:
$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} + d_0$$

where d0 is the distance from the optical center of the eyeball dioptric system to the corneal vertex on the eyeball optical axis, usually taken as the fixed value 7 mm; (x1, y1, z1) are the position coordinates of the corneal vertex on the eyeball optical axis in the world coordinate system; and (x2, y2, z2) are the position coordinates of the randomly generated E-shaped optotype center in the world coordinate system.
When the application program displays the E-shaped optotype on the display screen, positioning and display are usually carried out with the position coordinates of the E-optotype center point in the display screen coordinate system, so these coordinates must be converted into the world coordinate system by coordinate transformation. Display screens of different mobile terminals differ in PPI pixel density, i.e. the number of pixels per inch along the diagonal. Let k be the side length of a square pixel on the display screen, in mm; then, from the definition of pixel density and 1 inch = 25.4 mm:
$$k = \frac{25.4}{\mathrm{PPI}}$$
Referring to FIG. 10, let the distance between the front 3D depth-sensing camera (the infrared lens in FIG. 10) and the Y′ axis of the display screen coordinate system be a, in mm, and its distance from the X′ axis of the display screen coordinate system be b, in mm. The E-optotype center has position coordinates (x2, y2, z2) in the three-dimensional X, Y, Z world coordinate system established with the front 3D depth-sensing camera as the origin, and position coordinates (m, n) in the X′, Y′ display screen coordinate system established with the first pixel point at the upper left corner of the display screen as the origin. Coordinate transformation gives:
$$x_2 = \frac{25.4\,m}{\mathrm{PPI}} - a,\qquad y_2 = \frac{25.4\,n}{\mathrm{PPI}} - b$$
Since the front 3D depth-sensing camera usually lies in the same plane as the display screen,
Figure 19013DEST_PATH_IMAGE012
Substituting into the eye distance formula gives:
$$d = \sqrt{\left(x_1 - \left(\frac{25.4\,m}{\mathrm{PPI}} - a\right)\right)^2 + \left(y_1 - \left(\frac{25.4\,n}{\mathrm{PPI}} - b\right)\right)^2 + z_1^2} + 7$$
where d is the distance between the human eye node and the E-shaped optotype center, in mm; (x1, y1, z1) are the position coordinates of the corneal vertex on the eyeball optical axis in the world coordinate system, in mm; (m, n) are the position coordinates of the E-optotype center in the display screen coordinate system, in pixels; a is the distance between the front 3D depth-sensing camera and the Y′ axis of the display screen coordinate system, in mm; and b is its distance from the X′ axis of the display screen coordinate system, in mm.
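Collecting the above, a short Python sketch of the standard eye distance computation might read as follows; the function name and argument order are illustrative.

```python
import math

def standard_eye_distance(x1, y1, z1, m, n, ppi, a, b, d0=7.0):
    """Standard eye distance d in mm. (x1, y1, z1): corneal vertex in the
    camera-origin world frame (mm); (m, n): optotype centre in screen pixels;
    a, b: camera offsets from the screen axes (mm); d0: eye-node depth behind
    the corneal vertex (mm)."""
    k = 25.4 / ppi            # side length of one square pixel, mm
    x2 = m * k - a            # screen coordinates -> world coordinates (z2 = 0)
    y2 = n * k - b
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + z1 ** 2) + d0

# Worked example using the iPhone 12 Pro Max figures given later in the text:
# standard_eye_distance(45.05, 147.74, 144.44, 1200, 2700, 458, 21.5, 2) ~= 151.44
```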
Secondly, the size calculation module calculates in real time, from the PPI pixel density of the display screen and the distance between the human eye node and the E-optotype center position, the number of pixels that the side length of the square E-shaped optotype must occupy on the display screen. Referring to FIG. 8, which shows the relationship between the eye distance d and the optotype stroke width h in the definition of the visual angle α, and FIG. 9, which shows the isosceles triangle formed by the visual angle α, the eye distance d and the stroke width h: in the visual chart design standard required by national standard GB11533-2011, the side length H of the square E-shaped optotype is 5 times the stroke width h, i.e. H = 5h. Combining this with basic trigonometry, the E-optotype side length can be computed dynamically in real time as:
$$H = 5h = 10\,d\,\tan\frac{\alpha}{2}$$
where H is the E-optotype side length, in mm; d is the distance between the human eye node and the E-optotype center, in mm; and α is the angle subtended at the eye node by the upper and lower edges of the middle stroke of the E-shaped optotype, in minutes of arc (′). Let n be the number of square pixels that the side length of the square E-shaped optotype must occupy on the mobile terminal display screen; then:
$$n = \frac{H}{k} = \frac{10\,d\,\tan\dfrac{\alpha}{2}\cdot\mathrm{PPI}}{25.4}$$
where k is the side length of a square pixel on the display screen, in mm, and PPI is the pixel density of the mobile terminal's display screen, i.e. the number of pixels per inch.
Referring to FIG. 6, which illustrates real-time dynamic adjustment of the E-optotype size as the eye distance changes: combining the above formula with the 5-point recording expression of national standard GB11533-2011,

$$L = 5 - \lg\alpha$$

rearranged as

$$\alpha = 10^{\,5-L}$$

in minutes of arc (′), the formula for the number of square pixels that the E-optotype side length must occupy on the mobile terminal display screen can be derived as:
$$n = \frac{10\,d\cdot\mathrm{PPI}}{25.4}\,\tan\!\left(\frac{10^{\,5-L}}{2}\right)$$
where n is the number of square pixels that the side length of the square E-shaped optotype must occupy on the mobile terminal display screen; d is the distance between the human eye node and the E-optotype center position, i.e. the eye distance, in mm; PPI is the pixel density of the mobile terminal display screen; and L is the 5-point recording value, the corresponding visual angle 10^(5−L) being in minutes of arc.
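A Python sketch of this formula and of its inverse (used below to relate Table 1's pixel counts to eye distances) might read as follows; the names are illustrative.

```python
import math

def optotype_side_pixels(d_mm, ppi, L):
    """Number of square pixels occupied by the E-optotype side length at eye
    distance d (mm) for 5-point recording value L on a screen of given PPI."""
    alpha_arcmin = 10 ** (5 - L)                  # visual angle, minutes of arc
    alpha_rad = math.radians(alpha_arcmin / 60)   # arcmin -> degrees -> radians
    return 10 * d_mm * math.tan(alpha_rad / 2) * ppi / 25.4

def eye_distance_for_pixels(n_px, ppi, L):
    """Inverse: eye distance d (mm) at which the side length occupies n pixels."""
    alpha_rad = math.radians(10 ** (5 - L) / 60)
    return n_px * 25.4 / (10 * math.tan(alpha_rad / 2) * ppi)

# e.g. at 458 PPI and L = 4.9, n = 5 pixels corresponds to d ~= 151.44 mm,
# matching the Table 1 row cited later in the text.
```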
Taking vision detection of a motor vehicle driver on Apple's iPhone 12 Pro Max as an example: since the pixel density of its display screen is 458 PPI, and L takes the value 4.9 or 5.0 depending on the class of vehicle the driver is licensed to drive, the correspondence in Table 1 below between the number n of square pixels occupied by the optotype side length and the eye distance d can be derived:
Table 1. Correspondence between the optotype side-length pixel count n and the eye distance d on the iPhone 12 Pro Max
[Table 1 appears as an image in the original; it tabulates, for L = 4.9 and L = 5.0, the eye distance d corresponding to each integer pixel count n, e.g. for L = 4.9, n = 5 corresponds to d = 151.44 mm.]
The above eye distance calculation is carried out entirely in accordance with the requirements of national standard GB11533-2011, i.e. the eye distance is the standard eye distance. To show that the standard eye distance calculation of the present invention yields higher accuracy and smaller error in vision detection than some non-standard eye distance calculations in the prior art, refer to FIG. 10, an example of coordinate transformation of the E-optotype center position on the iPhone 12 Pro Max. From engineering documents published by the manufacturer and from actual measurement, the display resolution of the iPhone 12 Pro Max is 1284 × 2778, the pixel density is 458 PPI, the distance between the front 3D depth-sensing camera (the infrared lens in FIG. 10) and the Y′ axis of the display screen coordinate system is 21.5 mm, and its distance from the X′ axis is 2 mm. The standard eye distance is then:
$$d = \sqrt{\left(x_1 - \left(\frac{25.4\,m}{458} - 21.5\right)\right)^2 + \left(y_1 - \left(\frac{25.4\,n}{458} - 2\right)\right)^2 + z_1^2} + 7$$
Referring to FIG. 11, which schematically compares the non-standard eye distance with the standard eye distance: if a non-standard eye distance, such as the distance from the front 3D depth-sensing camera to the corneal vertex on the eyeball optical axis, is used directly during vision detection, the non-standard eye distance S is calculated as:
Figure 55233DEST_PATH_IMAGE021
the ratio of the non-standard eye distance to the standard eye distance is calculated by the formula:
$$\frac{S}{d} = \frac{\sqrt{x_1^2 + y_1^2 + z_1^2}}{\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + z_1^2} + 7}$$
In general, the projections onto the plane of the mobile terminal display screen of the corneal vertex on the eyeball optical axis and of the E-optotype center differ by a distance error of

$$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$

For ease of calculation, suppose the E-optotype center is displayed at position coordinates (1200, 2700) in the display screen coordinate system, and suppose the line of sight is exactly perpendicular to the E-optotype center on the display screen, so that x1 = x2 and y1 = y2. Then:
$$x_1 = x_2 = \frac{25.4 \times 1200}{458} - 21.5 \approx 45.05\ \text{mm},\qquad y_1 = y_2 = \frac{25.4 \times 2700}{458} - 2 \approx 147.74\ \text{mm},\qquad d = z_1 + 7$$
From the ratio formula, when x1 and y1 are fixed, the smaller z1 is, the larger the ratio of the non-standard eye distance to the standard eye distance. According to row n = 5 of Table 1, when L = 4.9 the eye distance d is 151.44 mm, so z1 = d − 7 = 144.44 mm, and the ratio of the non-standard eye distance to the standard eye distance is:
$$\frac{S}{d} = \frac{\sqrt{45.05^2 + 147.74^2 + 144.44^2}}{144.44 + 7} \approx \frac{211.47}{151.44} \approx 1.3964$$
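This arithmetic can be checked with a few lines of Python:

```python
import math

# Reproducing the worked example: x1 = 45.05, y1 = 147.74, z1 = 144.44 (all mm)
S = math.sqrt(45.05**2 + 147.74**2 + 144.44**2)  # non-standard distance ~= 211.47
d = 144.44 + 7                                   # standard eye distance = 151.44
print(round(S / d, 4))                           # ~= 1.3964, i.e. a 39.64% error
```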
the method comprises the following steps of obtaining according to a formula of the number of square pixel points needing to be occupied by the side length of the E-shaped sighting mark on a display screen of the mobile terminal: the number n of the pixels and the eye distance d are in a linear relation. It can be concluded that if the non-standard eye distance is used to calculate and dynamically adjust the size of the E-shaped optotype during the vision test, the error of the vision test result can reach 39.64%.
In summary, the present invention uses the standard eye distance, i.e. the distance between the human eye node (the optical center of the eyeball dioptric system, about 7 mm behind the corneal vertex on the eyeball optical axis) and the E-optotype center position, to calculate the number of square pixels that the E-optotype side length must occupy, thereby dynamically adjusting the E-optotype size on the mobile terminal display screen so as to meet the vision detection standard of national standard GB11533-2011.
S76, based on the number of pixels occupied by the E-optotype side length and the angle and direction from the eye to the display screen, the application program adjusts the E-optotype size and the positions of its four vertices in real time through the display adjustment module, so that an E-optotype that is always square is virtually displayed in the viewing direction of the eye. Since the driver can hardly be required to keep the tested eye exactly perpendicular to the E-optotype center on the mobile terminal display screen during vision detection, the invention measures the position coordinates of the corneal vertex on the eyeball optical axis in the world coordinate system through the front camera module, calculates the distance, angle and direction from the eye to the display screen and, through projection transformation, virtually displays an always-square E-optotype in the viewing direction of the eye, so that the visual effect does not change with the angle of the eye.
Referring to FIG. 12, which compares the appearance, to eyes at different distances, of a traditional visual chart light box and a mobile terminal display screen: in a conventional test with a visual chart light box, the standard testing distance is a full 5 m, so after the eye moves 10 cm sideways the difference in what the eye actually sees is small. When vision is tested with an E-optotype displayed on a mobile terminal screen, however, the eye distance is generally only about 15 to 110 cm, and after the eye moves 10 cm sideways the difference in what the eye actually sees is obvious. It is therefore necessary for the invention to adjust the E-optotype displayed on the mobile terminal screen according to angle, distance and direction.
Referring to FIG. 13, which compares the display effect before and after adjusting the E-optotype size and four vertex positions: the figure contrasts the conventional display scheme with the display scheme of the invention. The three images in the first row on the left show the E-optotype as actually seen by the eye, at varying eye angles, under the conventional display scheme. With the display scheme of the invention, the E-optotype as actually seen by the eye remains square at every viewing angle, because its size and four vertex positions are dynamically adjusted in real time. The three images in the second row on the right show, under the display scheme of the invention, the E-optotype as actually rendered on the mobile terminal display screen while the eye angle changes (its size and four vertex positions change with the position and angle of the eye).
In summary, the invention performs projection transformation based on the number of pixels occupied by the E-optotype side length and the angle, distance and direction from the eye to the display screen, adjusting the E-optotype size and four vertex positions so that an always-square E-optotype is virtually displayed in the viewing direction of the eye. The measured vision value thus fully meets the vision detection standard of national standard GB11533-2011 and more accurately reflects the tester's true vision.
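The patent does not give the exact construction of this projection transformation. One plausible sketch places the desired square in the plane perpendicular to the gaze through the optotype center and projects its corners from the eye onto the screen plane; all names and the basis construction below are assumptions of this sketch.

```python
import numpy as np

def square_optotype_vertices(eye, center, side):
    """On-screen positions (screen plane z = 0, camera-origin world frame, mm)
    of the four corners of an optotype of side `side` that appears square when
    viewed from `eye` toward `center`."""
    eye = np.asarray(eye, dtype=float)
    center = np.asarray(center, dtype=float)
    g = center - eye
    g /= np.linalg.norm(g)                        # unit gaze direction
    u = np.cross(g, [0.0, 0.0, 1.0])              # basis of the plane normal to gaze
    if np.linalg.norm(u) < 1e-9:                  # gaze along z: fall back to x axis
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(g, u)
    corners = [center + 0.5 * side * (su * u + sv * v)
               for su, sv in ((-1, -1), (-1, 1), (1, 1), (1, -1))]
    verts = []
    for c in corners:                             # intersect ray eye->corner with z=0
        t = -eye[2] / (c[2] - eye[2])
        verts.append((eye + t * (c - eye))[:2])
    return np.array(verts)                        # four on-screen (x, y) points, mm
```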
S77, through the cheating detection module, the application program randomly changes the display position and opening direction of each group of E-optotypes on the display screen (see FIG. 14, a schematic diagram of randomly changing the display position of each group of E-optotypes) and performs cheating detection on the human body image data collected by the front camera module. The distance d between the human eye node and the E-optotype center position is obtained in real time; if it is smaller than a preset first distance threshold, a too-close prompt is issued. If the distance is greater than or equal to the first distance threshold, a human posture image is acquired through the front camera module while the driver judges the direction of each group of E-optotypes; the depth structure information of the upper body is extracted from the posture image, and the coordinates of the upper-body joint points are extracted from that depth structure information.
The application program recognizes the human posture from the upper-body joint point coordinates, compares it with the standard vision detection posture prompted on the display screen, and judges the degree to which the posture conforms. Combining the randomly changed E-optotype positions with the corresponding human postures, it judges whether one or more cheating suspicions exist (arm extension, holding a foreign object, or following the E-optotype positions) and thus whether cheating has occurred. See the cheating detection module 50 of the online self-service vision detection system of the present invention, not repeated here; a sketch of the position-following check appears below.
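As a hedged illustration of the position-following check, formulated to be consistent with the vector-set similarity of claim 8 below, a Python sketch might read:

```python
import numpy as np

def follow_similarity(optotype_xy, elbow_xy):
    """Mean cosine similarity between successive optotype displacement vectors
    and elbow displacement vectors; a value at or above a preset second
    similarity threshold suggests the arm is following the optotype."""
    t1 = np.diff(np.asarray(optotype_xy, dtype=float), axis=0)
    t2 = np.diff(np.asarray(elbow_xy, dtype=float), axis=0)
    dots = (t1 * t2).sum(axis=1)
    norms = np.linalg.norm(t1, axis=1) * np.linalg.norm(t2, axis=1)
    cos = np.divide(dots, norms, out=np.zeros_like(dots), where=norms > 0)
    return float(cos.mean())
```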
In addition, by calling the display screen touch panel, the cheating detection module continuously monitors touch operations and touch point coordinates (pointer positions). When a touch is detected in an area of the display screen outside the buttons, the application program calls the front camera to take a photo, dynamic image or audio/video recording and uploads it to the background system for review, pops up a warning window and temporarily suspends detection. This effectively screens out and prevents cheating in which a tester uses the mobile terminal system's on-screen magnifier function during vision detection.
S78, through the vision judgment module, the application program guides the driver to judge with the left and right eyes the opening direction of each group of E-optotypes (see FIG. 7, a schematic diagram of left-eye vision detection). By calling the microphone (MIC), it acquires the voice instruction 'up', 'down', 'left', 'right' or 'unclear' spoken by the driver according to the prompt on the display screen, and through voice recognition obtains the driver's direction judgment for each displayed group of E-optotypes in order to perform vision judgment. Photos, dynamic images or audio/video recordings of the driver's eyes, taken by calling the front camera, are uploaded to the background system for review; after the cheating detection module randomly changes the display position and opening direction, the next group of E-optotypes is judged. This cycle repeats until 8 groups have been tested for each eye; an eye's vision is judged qualified when no fewer than 4 groups are judged correctly (a sketch of this loop follows).
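A minimal Python sketch of this per-eye loop, assuming that "qualified" means at least 4 of the 8 groups answered correctly (the translation leaves this implicit), might read:

```python
import random

DIRECTIONS = ("up", "down", "left", "right")

def test_one_eye(ask, groups=8, required_correct=4):
    """Per-eye loop: `ask(direction)` displays an E optotype opening toward
    `direction` at a random screen position and returns the driver's spoken
    answer ('up'/'down'/'left'/'right'/'unclear')."""
    correct = 0
    for _ in range(groups):
        shown = random.choice(DIRECTIONS)   # random opening direction per group
        correct += (ask(shown) == shown)
    return correct >= required_correct      # assumed pass rule: >= 4 of 8 correct
```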
Sixth, color discrimination detection
S81, the application program displays multiple groups of color blindness check maps on the display screen of the mobile terminal at random, guides the driver to recognize the patterns in the color blindness check maps, and obtains multiple groups of recognition results of the driver to realize color discrimination detection.
Seventh, hearing test
S91, the application program guides the driver to state whether a hearing aid is worn, then randomly plays music through the left or right channel of the earphone connected to the mobile terminal and obtains the driver's identification of the sound source direction, thereby realizing hearing detection. FIG. 17 is a flow chart of some steps of the hearing test of the present invention.
Eighth, limb detection
S101, through the front camera, the application program acquires images, dynamic images or audio/video recordings of the driver's movements made according to the action prompts on the display screen, so as to detect the trunk, neck and upper limbs; after upper-limb detection the rear camera is closed, ending the picture movement detection. For lower-limb detection, the front and rear cameras are called simultaneously to acquire images, dynamic images or audio/video recordings of the driver's movements according to the action prompts on the display screen. FIG. 18 is a flow chart of some steps of the limb detection.
Ninth, anti-cheating measures
During identity verification, environment detection, vision detection, color discrimination detection, hearing detection, trunk and neck detection, upper-limb detection and lower-limb detection, the application program monitors its own process running in the foreground of the mobile terminal system; if it detects that the running application has been minimized, switched to the background, locked, exited, returned to the system desktop, interrupted by answering an incoming call, interrupted by replying to an incoming SMS message, or interrupted by any other abnormality, it pops up an anti-cheating warning window and terminates the detection;
during vision detection, color discrimination detection, hearing detection, trunk and neck detection and upper-limb detection, the application program calls the rear camera to perform picture movement detection on the room entrance and simultaneously calls the IMU inertial measurement unit to continuously monitor the vibration, displacement and attitude of the mobile terminal; when an object moves in the monitored picture, or the mobile terminal is moved or taken from the support, the front and rear cameras are called simultaneously to automatically take photos, dynamic images or audio/video recordings and upload them to the background system for review, and the corresponding anti-cheating warning window pops up and detection is suspended;
in the processes of vision detection, color discrimination detection and hearing detection, the driver needs to wear the earphone to prevent cheating risk caused by external interference;
during vision detection, by recording the randomly changed position of each group of E-optotypes on the display screen and acquiring, through the front camera module, the human posture image while each group of optotypes is identified, the application program can effectively screen out and prevent a driver from cheating with a magnifier or similar device during the vision test.
This embodiment adopts an online self-service physical examination mode, for which identifying and preventing cheating is particularly important. Through requirements on the physical examination environment and tools, the design of the physical examination procedure, and technical detection of abnormal behavior that interferes with the examination, the invention ensures that the driver's online self-service physical examination process is real and effective and that its review standard is objective and fair.
Tenth, follow-up processing
S131, physical examination information preview: the application program displays part of the collected physical examination information to the driver. S141, disease declaration confirmation: the application program guides the driver to truthfully declare any disease conditions. S151, physical examination information submission: the application program uploads all collected physical examination data to the background system over the Internet. S161, physical examination review: the background system automatically reviews the driver's physical examination information, with manual re-review by a doctor as needed; the re-review result is the final physical examination result. S162, the background system automatically pushes the final physical examination result to the application program on the mobile terminal for the driver to consult; physical examination information that passes review is automatically uploaded by the background system to the Internet traffic safety comprehensive service management platform, completing the closed loop of the motor vehicle driver's online self-service physical examination.
Compared with the offline physical examination mode, the invention is more convenient, faster and more intelligent: the driver can complete the online self-service physical examination simply by following the system's standard procedure and requirements, without leaving home. On the premise of realizing online driver physical examination, anti-cheating measures are built into the important detection items to ensure that the examination process is real and effective and the result objective and fair, greatly improving efficiency.
The invention also discloses a mobile terminal, comprising at least one processing unit, at least one storage unit, a communication interface and a bus, where the processing unit, the storage unit and the communication interface communicate with one another through the bus. The processing unit includes a CPU, a GPU graphics processor, an ISP image signal processor, a 4G/5G baseband, a sensor controller and a cache memory. The user input unit includes a display screen touch panel and physical keys; the audio output unit includes earphones and a speaker; the sensing unit includes an IMU inertial measurement unit, with built-in three-axis gyroscope and acceleration sensor, and an electronic compass; the A/V input unit includes a front camera module (front camera and front 3D depth-sensing camera), a rear camera module (rear camera and rear lidar scanner or rear TOF depth camera) and a microphone (MIC). The storage unit stores program instructions executable by the processing unit, and the processing unit calls these program instructions to realize the online self-service vision detection system. Types of mobile terminal include, but are not limited to: smartphones, portable computers, tablet computers, PDA intelligent terminals, vehicle-mounted intelligent terminals, and wearable devices such as smart watches, smart bands and smart rings.
The invention also discloses a computer-readable storage medium storing computer instructions that cause a computer to realize all or part of the functional modules or units of the online self-service vision detection system. Such storage media include, but are not limited to: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media capable of storing program code.
The system embodiments described above are merely illustrative; units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e. they may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment's solution, and those of ordinary skill in the art can understand and implement them without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. An online self-service vision testing system, the system comprising:
a distance calculation module: for calculating the distance between the human eye node and the center position of an E-shaped optotype randomly generated on the display screen;
a size calculation module: for calculating in real time, based on the PPI pixel density of the display screen and the distance between the human eye node and the E-optotype center position, the number of pixels that the side length of the square E-shaped optotype must occupy on the display screen;
a display adjustment module: for adjusting in real time the E-optotype size and the positions of its four vertices based on the number of pixels occupied by the E-optotype side length and the angle and direction from the eye to the display screen, so that an always-square E-optotype is virtually displayed in the viewing direction of the eye;
a vision judgment module: for acquiring the corresponding direction judgment result when each group of E-shaped optotypes is displayed, and performing vision judgment; and
a cheating detection module: for performing cheating detection on the human body image data collected by the front camera module.
2. The online self-help vision detection system according to claim 1, wherein the distance calculation module specifically comprises:
a depth extraction unit: for extracting the depth structure information of the human face through the front camera module, establishing a three-dimensional world coordinate system with the front 3D depth-sensing camera as the origin, and drawing a 3D face mesh in real time;
a position acquisition unit: for establishing a display screen coordinate system with the first pixel point at the upper left corner of the display screen as the origin, and acquiring the coordinate position on the display screen of the E-shaped optotype randomly generated by the cheating detection module, that coordinate position being the center point at which the E-optotype is to be displayed;
a distance calculation unit: for calculating, from the position coordinates of the corneal vertex on the eyeball optical axis acquired from the 3D face mesh, the distance between the human eye node and the E-optotype center position, i.e. the eye distance, by the formula:
$$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} + d_0$$
where d0 is the distance from the optical center of the eyeball dioptric system to the corneal vertex on the eyeball optical axis; (x1, y1, z1) are the position coordinates of the corneal vertex on the eyeball optical axis in the world coordinate system; and (x2, y2, z2) are the position coordinates of the randomly generated E-shaped optotype center in the world coordinate system.
3. The online self-help vision testing system of claim 2, wherein the calculation formula of the size calculation module is:
$$n = \frac{10\,d\cdot\mathrm{PPI}}{25.4}\,\tan\!\left(\frac{10^{\,5-L}}{2}\right)$$
where n is the number of square pixels that the side length of the square E-shaped optotype must occupy on the display screen; d is the distance between the human eye node and the E-optotype center position, i.e. the eye distance, in mm; PPI is the pixel density of the display screen; and L is the 5-point recording value, the corresponding visual angle 10^(5−L) being in minutes of arc.
4. The online self-help vision detection system of claim 2, wherein the cheating detection module specifically comprises:
a visual target transformation unit: for randomly changing the display position and opening direction of the next group of E-shaped optotypes on the display screen after the direction of each group has been judged, and recording in sequence the position coordinates of each group's E-optotype center in the display screen coordinate system;
a data acquisition unit: for obtaining in real time the distance d between the human eye node and the E-optotype center position, and issuing a too-close prompt if the distance is smaller than a preset first distance threshold; if the distance is greater than or equal to the first distance threshold, acquiring through the front camera module a human posture image while the direction of each group of E-optotypes is judged, extracting in real time the depth structure information of the upper body from the posture image, and extracting the coordinates of the upper-body joint points from that depth structure information;
a standard posture detection unit: for recognizing the human posture from the upper-body joint point coordinates, comparing it with the standard vision detection posture prompted on the display screen, and judging the degree to which the posture conforms;
a cheating posture detection unit: for judging, in combination with the randomly changed E-optotype positions and the corresponding human postures, whether one or more cheating suspicions exist, including arm extension, holding a foreign object, and following the E-optotype positions, and thereby judging whether cheating has occurred.
5. The online self-help vision detection system of claim 4, wherein the standard gesture detection unit is specifically configured to:
acquiring the human posture image and the upper-body joint point coordinates corresponding to each group's E-optotype judgment result, calculating the limb angles formed by the joint point coordinates, calculating the similarity between each limb angle and the corresponding limb angle of the standard posture image, and weighting and summing the similarities to obtain the final posture similarity;
and taking the attitude similarity as an attitude standard degree, and if the attitude similarity is greater than or equal to a first similarity threshold, determining the attitude standard.
6. The online self-help vision detection system according to claim 5, wherein acquiring the human posture image and the upper-body joint point coordinates corresponding to each group's E-optotype judgment result and calculating the limb angles formed by the joint point coordinates specifically comprise:
when the left eye vision detection is carried out, a tester is guided to cover the right eye with the right hand, the left eye coordinate, the left shoulder coordinate, the left elbow joint coordinate, the left wrist joint coordinate, the right elbow joint coordinate, the right wrist joint coordinate and the right hand four-finger fingertip coordinate are extracted according to the human body posture image corresponding to the judgment result, and the right eye coordinate and the right shoulder coordinate are calculated according to the distance and the angle between the left eye and the display screen and the human body bilateral symmetry principle; calculating a limb included angle among connecting lines of coordinates of a left eye, coordinates of a left shoulder and coordinates of a left elbow joint in the human body posture image, and a limb included angle among connecting lines of coordinates of the left shoulder, coordinates of the left elbow joint and coordinates of the left wrist joint; calculating a limb included angle between connecting lines of a right shoulder coordinate, a right elbow joint coordinate and a right wrist joint coordinate in the human body posture image and a limb included angle between connecting lines of the right wrist joint coordinate and a right eye coordinate and coordinates of four fingertips of a right hand respectively;
when right-eye vision detection is carried out, a tester is guided to cover the left eye with the left hand, the coordinate of the right eye, the coordinate of the right shoulder, the coordinate of the right elbow joint, the coordinate of the right wrist joint, the coordinate of the left elbow joint, the coordinate of the left wrist joint and the coordinate of the left four-finger fingertip are extracted according to the human posture image corresponding to the judgment result, and the coordinate of the left eye and the coordinate of the left shoulder are calculated according to the distance and the angle between the right eye and the display screen and the left-right symmetry principle of the human body; calculating a limb included angle among connecting lines of coordinates of a right eye, coordinates of a right shoulder and coordinates of a right elbow joint in the human body posture image, and a limb included angle among connecting lines of coordinates of the right shoulder, coordinates of the right elbow joint and coordinates of a right wrist joint; and calculating the limb included angle between the left shoulder coordinate, the left elbow joint coordinate and the left wrist joint coordinate connecting line in the human body posture image and the limb included angle between the left wrist joint coordinate and the left eye coordinate and the connecting line between the left hand four-finger fingertip coordinates respectively.
7. The online self-help vision detection system of claim 4, wherein the cheating gesture detection unit is specifically configured to:
arm extension judgment: when the distance between either elbow joint and the display screen is smaller than a preset second distance threshold, it is judged that the arm extends forward;
foreign object judgment: detecting foreign objects through image recognition and judging whether a foreign object is held in the hand;
optotype position-following judgment: during the vision detection of the same eye, obtaining the position coordinates on the display screen of each displayed E-shaped optotype and forming a set of optotype transformation vectors in display order; obtaining the elbow joint coordinates among the upper-body joint point coordinates corresponding to each group's E-optotype judgment result and generating a set of elbow motion vectors in judgment order; calculating the similarity between the optotype transformation vector set and the elbow motion vector set, and judging, when the similarity is greater than or equal to a preset second similarity threshold, that the arm position follows the E-optotype, i.e. that optotype position following exists;
and, during the direction judgment of each group of E-optotypes, if the posture conformity is smaller than the first similarity threshold and any of arm extension, holding a foreign object, or optotype position following exists, it is judged as cheating and a cheating alert is issued.
8. The online self-help vision detection system according to claim 7, wherein the position coordinates of the center of each displayed E-shaped optotype in the display screen coordinate system are $(x_{1,i}, y_{1,i})$, and the coordinates of the elbow joint, among the upper-body joint point coordinates corresponding to each group's E-optotype judgment result, in the X, Y plane of the world coordinate system are $(x_{2,i}, y_{2,i})$, $i = 0, 1, 2, \ldots, m$; the optotype transformation vector set is $T_1 = \{(x_{1,i} - x_{1,i-1},\; y_{1,i} - y_{1,i-1}) \mid i = 1, 2, \ldots, m\}$ and the elbow motion vector set is $T_2 = \{(x_{2,i} - x_{2,i-1},\; y_{2,i} - y_{2,i-1}) \mid i = 1, 2, \ldots, m\}$;
the formula for calculating the similarity between the optotype transformation vector set and the elbow motion vector set is:
$$\mathrm{Sim}(T_1, T_2) = \frac{1}{m}\sum_{i=1}^{m} \cos\theta_i$$
where m is the total number of E-shaped optotypes displayed in the monocular vision test minus 1, and
$$\cos\theta_i = \frac{(x_{1,i} - x_{1,i-1})(x_{2,i} - x_{2,i-1}) + (y_{1,i} - y_{1,i-1})(y_{2,i} - y_{2,i-1})}{\sqrt{(x_{1,i} - x_{1,i-1})^2 + (y_{1,i} - y_{1,i-1})^2}\,\sqrt{(x_{2,i} - x_{2,i-1})^2 + (y_{2,i} - y_{2,i-1})^2}}$$
9. a mobile terminal, comprising: at least one processing unit, at least one memory unit, a communication interface and a bus;
the processing unit, the storage unit and the communication interface complete mutual communication through the bus;
the storage unit stores program instructions executable by the processing unit, and the processing unit calls the program instructions to realize the system according to any one of claims 1 to 8;
the types of mobile terminal including, but not limited to: smartphones, portable computers, tablet computers, PDA intelligent terminals, vehicle-mounted intelligent terminals, and wearable devices such as smart watches, smart bands and smart rings.
10. A computer readable storage medium storing computer instructions which cause a computer to implement the system of any one of claims 1 to 8.
CN202110483570.0A 2021-04-30 2021-04-30 Online self-service vision detection system, mobile terminal and storage medium Active CN113197542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110483570.0A CN113197542B (en) 2021-04-30 2021-04-30 Online self-service vision detection system, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110483570.0A CN113197542B (en) 2021-04-30 2021-04-30 Online self-service vision detection system, mobile terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113197542A true CN113197542A (en) 2021-08-03
CN113197542B CN113197542B (en) 2024-01-30

Family

ID=77029940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110483570.0A Active CN113197542B (en) 2021-04-30 2021-04-30 Online self-service vision detection system, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113197542B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004337348A (en) * 2003-05-15 2004-12-02 New Opto Corp Eye movement analysis system
CN201612597U (en) * 2010-02-02 2010-10-27 陈献聪 Vision inspection tester
FR3001118A1 (en) * 2013-01-24 2014-07-25 Jlm Medical Method for assisting measurement of visual acuity of patient, involves determining distance between display support and eye of patient, and adjusting dimension of optotype display according to determined distance
CN105996975A (en) * 2016-05-31 2016-10-12 乐视控股(北京)有限公司 Method, device and terminal for testing vision
US20170127934A1 (en) * 2015-06-05 2017-05-11 Boe Technology Group Co., Ltd. Automatic unattended-monitoring visual acuity inspection device
CN107198505A (en) * 2017-04-07 2017-09-26 天津市天中依脉科技开发有限公司 Visual function detecting system and method based on smart mobile phone
JP2017176302A (en) * 2016-03-29 2017-10-05 ヤンマー株式会社 Eyeball movement measuring apparatus, eyeball movement measuring method, and eyeball movement measuring program
CN109431763A (en) * 2018-12-12 2019-03-08 广州视景医疗软件有限公司 The method and its system of vision accommodation functional training based on VR
CN110032992A (en) * 2019-04-25 2019-07-19 沈阳航空航天大学 A kind of detection method that cheats at one's exam based on posture
CN110123257A (en) * 2019-03-29 2019-08-16 深圳和而泰家居在线网络科技有限公司 A kind of vision testing method, device, sight tester and computer storage medium
CN110226913A (en) * 2019-05-24 2019-09-13 山东海博科技信息系统股份有限公司 A kind of self-service examination machine eyesight detection intelligent processing method and device
CN111248851A (en) * 2019-10-11 2020-06-09 中山大学中山眼科中心 Visual function self-testing method
CN111543934A (en) * 2020-04-29 2020-08-18 深圳创维-Rgb电子有限公司 Vision detection method and device, electronic product and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116898704A (en) * 2023-07-13 2023-10-20 广州视景医疗软件有限公司 VR-based visual target adjusting method and device
CN116898704B (en) * 2023-07-13 2023-12-26 广州视景医疗软件有限公司 VR-based visual target adjusting method and device

Also Published As

Publication number Publication date
CN113197542B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN107609383B (en) 3D face identity authentication method and device
CN104364733B (en) Watch position detecting device attentively, watch method for detecting position attentively and watch position detection program attentively
CN107545302B (en) Eye direction calculation method for combination of left eye image and right eye image of human eye
CN103106401B (en) Mobile terminal iris recognition device with human-computer interaction mechanism
CN113303791A (en) Online self-service physical examination system for motor vehicle driver, mobile terminal and storage medium
US20160225164A1 (en) Automatic generation of virtual materials from real-world materials
US20050175218A1 (en) Method and apparatus for calibration-free eye tracking using multiple glints or surface reflections
CN112102389A (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
WO2012142202A1 (en) Apparatus, systems and methods for providing motion tracking using a personal viewing device
WO2020042542A1 (en) Method and apparatus for acquiring eye movement control calibration data
CN109799900A (en) The wireless wrist connected for three-dimensional imaging, mapping, networking and interface calculates and controls device and method
WO2020020022A1 (en) Method for visual recognition and system thereof
CN109643152A (en) Use face and the eyes tracking of the face sensor in head-mounted display and FA Facial Animation
CN108537103B (en) Living body face detection method and device based on pupil axis measurement
KR20190119212A (en) System for performing virtual fitting using artificial neural network, method thereof and computer recordable medium storing program to perform the method
CN108124150A (en) Virtual reality wears display device and observes the method for real scene by it
US10402996B2 (en) Distance measuring device for human body features and method thereof
JP2020140630A (en) Fixation point estimation system, fixation point estimation method, fixation point estimation program, and information recording medium for recording the same
CN113197542B (en) Online self-service vision detection system, mobile terminal and storage medium
CN112995523B (en) Online self-service environment detection method and system
CN112801038B (en) Multi-view face in-vivo detection method and system
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
US11589001B2 (en) Information processing apparatus, information processing method, and program
CN112155509A (en) Physical examination method for VR of driver
JP2021018729A (en) Personal identification apparatus, head-mounted display, content distribution server, and personal identification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant