CN111613314A - Student eyesight detection system based on big data - Google Patents


Info

Publication number
CN111613314A
CN111613314A
Authority
CN
China
Prior art keywords
data
student
test
value
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010469166.3A
Other languages
Chinese (zh)
Inventor
班俊超 (Ban Junchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010469166.3A
Publication of CN111613314A
Legal status: Withdrawn


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the operation of medical equipment or devices
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30: ICT specially adapted for calculating health indices; for individual health risk assessment

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a student vision detection system based on big data, comprising a camera, an identification unit, a database, an analysis module, a display screen, a judgment module, a monitoring module and a sending module. The camera captures the student's vision test and automatically acquires image information, which includes student name data and image data, and transmits it to the identification unit. The identification unit identifies the image information and performs an identification operation on it. Through the judgment module, the X-axis and Y-axis difference values produced by the analysis module are used to judge the pointing direction, the corresponding degree data are extracted, and the actual measured degree is calculated from the influence factors of the distance difference and the time difference, yielding the student's vision degree. This reduces the influence of variable factors, improves the accuracy of the data, saves the time consumed in measuring the degree, and improves working efficiency.

Description

Student eyesight detection system based on big data
Technical Field
The invention relates to the technical field of vision detection, in particular to a student vision detection system based on big data.
Background
Vision refers to the ability of the retina to resolve images. With the development of society and the spread of electronic products, the eyesight of primary and secondary school students has declined under their influence, and parents usually have their children's eyesight tested at intervals.
In the existing eyesight test, an eye chart is usually hung on a wall and a line is drawn on the ground; the child stands on the line and then takes the test. Some advanced optical shops magnify the chart with a tester. Either way, errors from viewing at an angle appear to some degree when the child is measured, so the child's vision degree cannot be accurately analyzed.
Disclosure of Invention
The invention aims to provide a student vision detection system based on big data. Through the identification unit, the student's vision test captured by the camera is identified: the acquired image information is accurately recognized and converted into data information, which is then transmitted to the analysis module for analysis of the related data. This solves the problem that the related data could not be accurately analyzed in the prior art, avoids large data errors, increases the accuracy and persuasiveness of the data, saves analysis time and improves working efficiency. Through the judgment module, the X-axis and Y-axis difference values produced by the analysis module are used to judge the pointing direction, the corresponding degree data are extracted, and the actual measured degree is calculated from the influence factors of the distance difference and the time difference, yielding the student's vision degree. This solves the problem that a student's degree could not be accurately calculated in the prior art, reduces the influence of variable factors, improves the accuracy of the data, saves the time consumed in measuring the degree and improves working efficiency.
The purpose of the invention can be realized by the following technical scheme: a student vision detection system based on big data comprises a camera, an identification unit, a database, an analysis module, a display screen, a judgment module, a monitoring module and a sending module;
the camera is used for acquiring the vision test condition of the student, automatically acquiring image information, transmitting the image information to the identification unit, wherein the image information comprises student name data and image data;
the identification unit is used for identifying the image information and carrying out identification operation on the image information to obtain hand information and transmitting the hand information to the database for storage;
the database stores test-chart letter record picture data and the corresponding direction data; the monitoring module monitors the vision test chart data, the test distance data and the gesture reaction time data while a student tests his or her vision; it obtains the letter record picture data and direction data from the database, monitors the vision test according to them, obtains the direction data, degree data, reaction time data and test distance data, and transmits these to the database for storage;
the analysis module acquires hand information from the database, performs data analysis operation according to the hand information to obtain an X-axis difference value and a Y-axis difference value among the fingers, the palm and the wrist, and transmits the X-axis difference value and the Y-axis difference value to the judgment module;
the judging module acquires degree data, testing distance data, reaction time data and direction data from the database, and judges the degree data, the testing distance data, the reaction time data and the direction data together with an X-axis difference value and a Y-axis difference value among the fingers, the palm and the wrist:
firstly, the X-axis and Y-axis difference values among the fingers, palm and wrist are marked as positive or negative, and the direction is judged from these signs;
the judged degree data are then extracted and substituted, together with the distance difference and the time difference, into the corresponding calculation formula to obtain the test degree, which is transmitted to the database and the sending module respectively;
the database receives and stores the test degrees, the sending module is used for receiving and sending the test degrees to the display screen, and the display screen is used for displaying the test degrees.
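The data flow among the modules above can be sketched with a couple of record types. This is a minimal illustration only; the class and function names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical records mirroring the data the modules exchange.

@dataclass
class ImageInfo:
    student_name: str   # student name data
    image: bytes        # raw image data from the camera

@dataclass
class TestRecord:
    direction: str          # direction data from the chart
    degree: float           # degree data of the chart row
    reaction_time_s: float  # gesture reaction time data
    test_distance_m: float  # test distance data

def send_to_display(test_degree: float) -> str:
    """Sending module: format the final test degree for the display screen."""
    return f"Measured vision degree: {test_degree:.2f}"

print(send_to_display(4.8))
```

The camera produces `ImageInfo`, the monitoring module fills a `TestRecord`, and the sending module forwards the computed test degree to the display.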
As a further improvement of the invention: the specific operation process of the identification operation comprises the following steps:
the method comprises the following steps: acquiring student name data in the image information, and marking the student name data as Ai, i 1,2,3.
step two: acquiring the image data in the image information and calibrating it as Bi, i = 1, 2, 3, …, n1;
step three: identifying the image data: the picture data are extracted, the student's face data are identified from them, and the face data are matched against the corresponding student name data. When the matching results are inconsistent, it is judged that the student and the face do not correspond, the vision test is not performed and a prohibition signal is automatically generated; when the matching results are consistent, it is judged that the right student is taking the test, the vision test continues and a permission signal is generated. The student's face data are identified by a face identification unit arranged inside the identification unit;
step four: identifying the two eye data in the image data and judging whether the student has closed an eye, specifically:
s1: setting a preset value for the eye data and comparing it with the two eye data, specifically:
s2: when the preset value is larger than the eye data, the eye is judged to be in a closed state;
s3: when the preset value is less than or equal to the eye data, the eye is judged to be in an open state;
s4: judging the test from the comparison results: when both eye data are in the open state, or both are in the closed state, the student's vision test is judged not to conform to the specification; when one eye datum is open and the other closed, the test is judged to conform to the specification;
step five: the hands of the student in the image data are identified, and hand information is automatically acquired.
As a further improvement of the invention: the specific monitoring process is as follows:
k1: acquiring the picture data of the vision test chart and identifying the test bar on it, the test bar being identified through picture comparison;
k2: acquiring the chart letter pattern data pointed at by the test bar, matching it with the letter record picture data to find the record consistent with the letter pattern, and extracting the direction data corresponding to that letter record picture;
k3: acquiring the vision degree of the same row on the vision test chart and calibrating it as degree data, the vision degree being acquired through its identification number.
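The K2 matching step amounts to a lookup from the pointed letter pattern to its recorded direction. A minimal sketch follows; the record keys are illustrative stand-ins for the database's letter record pictures, not values from the patent.

```python
# Sketch of the monitoring lookup (K2): match the letter pattern the test
# bar points at against recorded chart pictures to recover its direction.

letter_records = {
    "E_left": "left",
    "E_right": "right",
    "E_up": "up",
    "E_down": "down",
}

def lookup_direction(pointed_pattern: str) -> str:
    """Return the direction data recorded for the matched letter picture."""
    if pointed_pattern not in letter_records:
        raise ValueError("no matching letter record picture")
    return letter_records[pointed_pattern]

assert lookup_direction("E_up") == "up"
```

In the real system the match would be done by image comparison rather than string keys; the dictionary stands in for that comparison's result.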
As a further improvement of the invention: the specific operation process of the data analysis operation comprises the following steps:
h1: acquiring the hand information; marking the position connected to the arm as wrist data SYi, i = 1, 2, 3, …, n1; marking the five fingertips as finger data SZi, i = 1, 2, 3, …, n1; and marking the part between the fingers and the wrist as palm data SWi, i = 1, 2, 3, …, n1;
h2: establishing a virtual rectangular spatial coordinate system and calibrating positions in it from the same positions in the image data acquired by the monitoring module, thereby obtaining the wrist positions KLv, v = 1, 2, 3, …, n2, with coordinates (LXv, LYv); the palm positions KCv, v = 1, 2, 3, …, n2, with coordinates (CXv, CYv); and the finger positions KZv, v = 1, 2, 3, …, n2, with coordinates (ZXv, ZYv). Several positions are set for the fingers, palm and wrist because none of them is a single point;
h3: the coordinates KLv (LXv, LYv), KCv (CXv, CYv) and KZv (ZXv, ZYv) of the finger, palm and wrist in H2 are acquired and substituted into the calculation formulas CX_LCv = LXv − CXv, CX_LZv = LXv − ZXv, CY_LCv = LYv − CYv, CY_LZv = LYv − ZYv, wherein CX_LCv denotes the X-axis difference between the finger and the palm, CX_LZv the X-axis difference between the finger and the wrist, CY_LCv the Y-axis difference between the finger and the palm, and CY_LZv the Y-axis difference between the finger and the wrist.
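The H2/H3 computation can be sketched as follows. The landmarks are assumed to be 2-D points already calibrated into one coordinate system; the function simply applies the four difference formulas as written (with the L coordinates as the minuend in each), which the judgment step later interprets by sign.

```python
# Sketch of H2/H3: take pairwise axis differences of wrist (L), palm (C)
# and finger (Z) landmarks, per CX_LC = LX - CX, CX_LZ = LX - ZX, etc.

def axis_differences(wrist, palm, finger):
    """Return (CX_LC, CX_LZ, CY_LC, CY_LZ) for one landmark triple."""
    lx, ly = wrist
    cx, cy = palm
    zx, zy = finger
    return (lx - cx, lx - zx, ly - cy, ly - zy)

diffs = axis_differences(wrist=(5.0, 1.0), palm=(4.0, 2.0), finger=(1.0, 2.0))
print(diffs)
```

Since the landmarks are not single points (H2 defines several positions each), a full implementation would compute these differences for every index v.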
As a further improvement of the invention: the specific operation process of the judgment operation is as follows:
g1: acquiring the X-axis and Y-axis difference values among the fingers, palm and wrist, re-marking each as a positive or negative value, and judging the pointing direction of the finger from these signs, specifically:
when the X-axis difference between the fingers and the palm is negative and the X-axis difference between the fingers and the wrist is negative, the student is judged to be pointing left;
when the X-axis difference between the fingers and the palm is positive and the X-axis difference between the fingers and the wrist is positive, the student is judged to be pointing right;
when the Y-axis difference between the fingers and the palm is positive and the Y-axis difference between the fingers and the wrist is positive, the student is judged to be pointing up;
when the Y-axis difference between the fingers and the palm is negative and the Y-axis difference between the fingers and the wrist is negative, the student is judged to be pointing down;
when any other result appears, a recognition error is judged;
g2: matching the judgment result in G1 with the direction data: when the results are consistent, the student's indication is judged correct and the corresponding degree data are extracted; when inconsistent, the indication is judged wrong and the degree is measured one line lower;
g3: acquiring the test distance data and reaction time data, setting a distance preset value and a reaction time preset value, and substituting them, together with the test distance data and reaction time data, into the difference calculation formula to obtain the distance difference and the time difference;
g4: the distance difference and time difference in G3 and the degree data in G2 are acquired and substituted into the calculation formula: test degree = degree data + (distance difference × distance influence factor) + (time difference × time influence factor), wherein the value of the distance influence factor is 0.15283 and the value of the time influence factor is 0.3926178.
The invention has the beneficial effects that:
(1) The camera captures the student's vision test, automatically acquires the image information, which includes student name data and image data, and transmits it to the identification unit. The identification unit identifies the image information, performs the identification operation on it to obtain hand information, and transmits the hand information to the database for storage. The monitoring module monitors the vision test chart data, the test distance data and the gesture reaction time data while the student tests his or her vision, acquiring the letter record picture data and direction data from the database and monitoring the test accordingly. The analysis module acquires the hand information from the database and performs the data analysis operation on it, yielding the X-axis and Y-axis difference values among the fingers, palm and wrist. Through the identification unit, the vision test captured by the camera is accurately recognized, and the image is converted into data information before being passed to the analysis module. This avoids large data errors, increases the accuracy and persuasiveness of the data, saves analysis time and improves working efficiency.
(2) The judgment module acquires the degree data, test distance data, reaction time data and direction data from the database and judges them together with the X-axis and Y-axis difference values among the fingers, palm and wrist: firstly, the difference values are marked as positive or negative and the direction is judged from these signs; the judged degree data are then extracted and substituted, together with the distance difference and the time difference, into the corresponding calculation formula to obtain the test degree, which is transmitted to the database and the sending module respectively. The database receives and stores the test degree, the sending module sends it to the display screen, and the display screen displays it. Through the judgment module, the X-axis and Y-axis difference values from the analysis module are used to judge the direction, the corresponding degree data are extracted, and the actual measured degree is calculated from the influence factors of the distance difference and the time difference, yielding the student's vision degree. This reduces the influence of variable factors, increases the accuracy of the data, saves the time consumed in measuring the degree and improves working efficiency.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a system block diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the invention relates to a student vision detection system based on big data, which comprises a camera, an identification unit, a database, an analysis module, a display screen, a judgment module, a monitoring module and a sending module;
the camera is used for acquiring the vision test condition of the student, automatically acquiring image information, transmitting the image information to the identification unit, wherein the image information comprises student name data and image data;
the identification unit is used for identifying the image information and carrying out identification operation on the image information, and the specific operation process of the identification operation is as follows:
the method comprises the following steps: acquiring student name data in the image information, and marking the student name data as Ai, i 1,2,3.
step two: acquiring the image data in the image information and calibrating it as Bi, i = 1, 2, 3, …, n1;
step three: identifying the image data: the picture data are extracted, the student's face data are identified from them, and the face data are matched against the corresponding student name data. When the matching results are inconsistent, it is judged that the student and the face do not correspond, the vision test is not performed and a prohibition signal is automatically generated; when the matching results are consistent, it is judged that the right student is taking the test, the vision test continues and a permission signal is generated. The student's face data are identified by a face identification unit arranged inside the identification unit;
step four: identifying the two eye data in the image data and judging whether the student has closed an eye, specifically:
s1: setting a preset value for the eye data and comparing it with the two eye data, specifically:
s2: when the preset value is larger than the eye data, the eye is judged to be in a closed state;
s3: when the preset value is less than or equal to the eye data, the eye is judged to be in an open state;
s4: judging the test from the comparison results: when both eye data are in the open state, or both are in the closed state, the student's vision test is judged not to conform to the specification; when one eye datum is open and the other closed, the test is judged to conform to the specification. The eye data refer to the distance between the upper and lower eyelids, and the preset value of the eye data is zero;
step five: identifying the student's hands in the image data, automatically acquiring the hand information and transmitting it to the database for storage. The hands are identified through limb identification or comparison among image data, and "hands" refers to the student's palms and fingers;
the database stores test-chart letter record picture data and the corresponding direction data; the monitoring module monitors the vision test chart data, the test distance data and the gesture reaction time data while a student tests his or her vision; it acquires the letter record picture data and direction data from the database and monitors the vision test according to them; the specific monitoring process is as follows:
k1: acquiring the picture data of the vision test chart and identifying the test bar on it, the test bar being identified through picture comparison;
k2: acquiring the chart letter pattern data pointed at by the test bar, matching it with the letter record picture data to find the record consistent with the letter pattern, and extracting the direction data corresponding to that letter record picture;
k3: acquiring the vision degree of the same row on the vision test chart and calibrating it as degree data, the vision degree being acquired through its identification number;
k4: transmitting the direction data, degree data, reaction time data and test distance data together to the database for storage;
the analysis module acquires hand information from the database and performs data analysis operation according to the hand information, and the specific operation process of the data analysis operation is as follows:
h1: acquiring the hand information; marking the position connected to the arm as wrist data SYi, i = 1, 2, 3, …, n1; marking the five fingertips as finger data SZi, i = 1, 2, 3, …, n1; and marking the part between the fingers and the wrist as palm data SWi, i = 1, 2, 3, …, n1;
h2: establishing a virtual rectangular spatial coordinate system and calibrating positions in it from the same positions in the image data acquired by the monitoring module, thereby obtaining the wrist positions KLv, v = 1, 2, 3, …, n2, with coordinates (LXv, LYv); the palm positions KCv, v = 1, 2, 3, …, n2, with coordinates (CXv, CYv); and the finger positions KZv, v = 1, 2, 3, …, n2, with coordinates (ZXv, ZYv). Several positions are set for the fingers, palm and wrist because none of them is a single point;
h3: the coordinates KLv (LXv, LYv), KCv (CXv, CYv) and KZv (ZXv, ZYv) of the finger, palm and wrist in H2 are acquired and substituted into the calculation formulas CX_LCv = LXv − CXv, CX_LZv = LXv − ZXv, CY_LCv = LYv − CYv, CY_LZv = LYv − ZYv, wherein CX_LCv denotes the X-axis difference between the finger and the palm, CX_LZv the X-axis difference between the finger and the wrist, CY_LCv the Y-axis difference between the finger and the palm, and CY_LZv the Y-axis difference between the finger and the wrist;
h4: transmitting the X-axis difference and the Y-axis difference among the fingers, the palm and the wrist in H3 to a judgment module;
the judgment module acquires the degree data, test distance data, reaction time data and direction data from the database and judges them together with the X-axis and Y-axis difference values among the fingers, palm and wrist; the specific operation process of the judgment operation is as follows:
g1: acquiring the X-axis and Y-axis difference values among the fingers, palm and wrist and re-marking each as a positive or negative value, where a positive or negative value indicates that the difference is greater than or less than zero, and judging the pointing direction of the finger from these signs, specifically:
when the X-axis difference between the fingers and the palm is negative and the X-axis difference between the fingers and the wrist is negative, the student is judged to be pointing left;
when the X-axis difference between the fingers and the palm is positive and the X-axis difference between the fingers and the wrist is positive, the student is judged to be pointing right;
when the Y-axis difference between the fingers and the palm is positive and the Y-axis difference between the fingers and the wrist is positive, the student is judged to be pointing up;
when the Y-axis difference between the fingers and the palm is negative and the Y-axis difference between the fingers and the wrist is negative, the student is judged to be pointing down;
when any other result appears, a recognition error is judged;
g2: matching the judgment result in G1 with the direction data: when the results are consistent, the student's indication is judged correct and the corresponding degree data are extracted; when inconsistent, the indication is judged wrong and the degree is measured one line lower;
g3: acquiring the test distance data and reaction time data, setting a distance preset value and a reaction time preset value, and substituting them, together with the test distance data and reaction time data, into the difference calculation formula to obtain the distance difference and the time difference;
g4: the distance difference and time difference in G3 and the degree data in G2 are acquired and substituted into the calculation formula: test degree = degree data + (distance difference × distance influence factor) + (time difference × time influence factor), wherein the value of the distance influence factor is 0.15283 and the value of the time influence factor is 0.3926178;
g5: respectively transmitting the test degrees to a database and a sending module;
the database receives and stores the test degrees, the sending module is used for receiving and sending the test degrees to the display screen, and the display screen is used for displaying the test degrees.
In operation, the camera captures the student's vision test, automatically acquires the image information, which includes student name data and image data, and transmits it to the identification unit. The identification unit identifies the image information, performs the identification operation on it to obtain hand information, and transmits the hand information to the database for storage. The monitoring module monitors the vision test chart data, the test distance data and the gesture reaction time data: it acquires the letter record picture data and direction data from the database; acquires the picture data of the vision test chart and identifies the test bar on it through picture comparison; acquires the chart letter pattern data pointed at by the test bar, matches it with the letter record picture data to find the consistent record, and extracts the corresponding direction data; and acquires the vision degree of the same row on the chart, calibrates it as degree data via its identification number, and transmits the data to the database for storage. The analysis module acquires the hand information from the database, performs the data analysis operation to obtain the X-axis and Y-axis difference values among the fingers, palm and wrist, and transmits them to the judgment module. The judgment module acquires the degree data, test distance data, reaction time data and direction data from the database and judges them together with these difference values: firstly, the difference values are marked as positive or negative and the direction is judged from the signs; the judged degree data are then extracted and substituted, together with the distance difference and the time difference, into the corresponding calculation formula to obtain the test degree, which is transmitted to the database and the sending module respectively. The database receives and stores the test degree, the sending module sends it to the display screen, and the display screen displays it.
The foregoing is merely exemplary and illustrative of the present invention; those skilled in the art may make various modifications, additions and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (5)

1. A student vision detection system based on big data is characterized by comprising a camera, an identification unit, a database, an analysis module, a display screen, a judgment module, a monitoring module and a sending module;
the camera is used for acquiring the vision test condition of the student, automatically acquiring image information, transmitting the image information to the identification unit, wherein the image information comprises student name data and image data;
the identification unit is used for identifying the image information and carrying out identification operation on the image information to obtain hand information and transmitting the hand information to the database for storage;
the database stores the test-chart letter record picture data and the corresponding direction data; the monitoring module is used for monitoring the vision test chart data, the test distance data and the reaction time data for making the gesture while the student tests his or her vision; the monitoring module acquires the letter record picture data and the direction data from the database, carries out vision-test monitoring according to them, obtains the direction data, the degree data, the reaction time data and the test distance data, and transmits them to the database for storage;
the analysis module acquires hand information from the database, performs data analysis operation according to the hand information to obtain an X-axis difference value and a Y-axis difference value among the fingers, the palm and the wrist, and transmits the X-axis difference value and the Y-axis difference value to the judgment module;
the judging module acquires degree data, testing distance data, reaction time data and direction data from the database, and judges the degree data, the testing distance data, the reaction time data and the direction data together with an X-axis difference value and a Y-axis difference value among the fingers, the palm and the wrist:
firstly, marking positive and negative values of an X-axis difference value and a Y-axis difference value among fingers, a palm and a wrist, and judging the direction according to the positive and negative values;
extracting the judged degree data, substituting the degree data together with the distance difference and the time difference into the corresponding calculation formulas, calculating the value of the test degree, and transmitting the test degree to the database and the sending module respectively;
the database receives and stores the test degree; the sending module is used for receiving the test degree and sending it to the display screen, and the display screen is used for displaying the test degree.
2. The student vision inspection system based on big data as claimed in claim 1, wherein the specific process of the identification operation is:
the method comprises the following steps:
Step one: acquiring the student name data in the image information and marking it as Ai, i = 1, 2, 3, ..., n1;
Step two: acquiring the image data in the image information and marking it as Bi, i = 1, 2, 3, ..., n1;
Step three: identifying the image data: extracting the picture data in the image data and identifying the student's face data from it, the face data being identified by a face identification unit arranged inside the identification unit; matching the student face data in the image data with the corresponding student name data; when the matching results are inconsistent, judging that the student does not match the face, not performing the vision test, and automatically generating a prohibition signal; when the matching results are consistent, judging that the student is the one being tested, continuing the vision test, and generating a permission signal;
Step four: identifying the data of both eyes in the image data and judging whether the student has closed an eye, specifically:
S1: setting a preset value for the eye data and comparing the preset value with the data of each eye, specifically:
s2: when the preset value is larger than the eye data, judging that the eye is in an eye closing state;
s3: when the preset value is less than or equal to the eye data, judging that the eyes are in an open state;
S4: judging according to the comparison results of the eye data with the preset value: when both eye data are in the open state, judging that the student's vision test does not comply with the specification; when both eye data are in the closed state, judging that the student's vision test does not comply with the specification; and when one eye datum is in the open state and the other is in the closed state, judging that the student's vision test complies with the specification;
step five: the hands of the student in the image data are identified, and hand information is automatically acquired.
3. The student vision inspection system based on big data as claimed in claim 1, wherein the specific monitoring process is:
k1: acquiring picture data of the vision test chart, and identifying a test bar on the vision test chart, wherein the test bar is identified through picture comparison;
k2: acquiring test meter letter pattern data pointed by a test bar, matching the test meter letter pattern data with letter record picture data to obtain letter record picture data consistent with the letter pattern data, and extracting direction data corresponding to the letter record picture data;
K3: acquiring the vision degree of that row on the vision test chart, marking it as degree data, and acquiring the vision degree through its identification number.
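The matching step K2 can be illustrated with a sketch in which the letter record pictures are reduced to pattern identifiers; the identifiers and the dictionary lookup are assumptions for illustration, since the patent matches actual picture data:

```python
# Simplified sketch of K2: matching the letter pattern pointed to by the
# test bar against the recorded letter pictures to extract the
# corresponding direction data. Pattern IDs stand in for picture data.
letter_records = {
    "E_up": "up",
    "E_down": "down",
    "E_left": "left",
    "E_right": "right",
}

def direction_for(pattern_id):
    # Return the direction data recorded for a consistent letter record,
    # or None when no record matches the pattern.
    return letter_records.get(pattern_id)

print(direction_for("E_left"))    # left
print(direction_for("E_blurry"))  # None (no consistent record)
```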
4. The big data based student vision inspection system as claimed in claim 1, wherein the specific operation process of the data analysis operation is:
H1: acquiring the hand information and marking the position connected to the arm as wrist data SYi, i = 1, 2, 3, ..., n1; marking the five fingertips as finger data SZi, i = 1, 2, 3, ..., n1; and marking the part connecting the fingers and the wrist as palm data SWi, i = 1, 2, 3, ..., n1;
H2: establishing a virtual rectangular spatial coordinate system and calibrating its position with the same positions in the image data acquired by the monitoring module, thereby obtaining the wrist positions KLv, v = 1, 2, 3, ..., n2, with coordinates (LXv, LYv); the palm positions KCv, v = 1, 2, 3, ..., n2, with coordinates (CXv, CYv); and the finger positions KZv, v = 1, 2, 3, ..., n2, with coordinates (ZXv, ZYv); a plurality of positions are set for the finger, palm and wrist because each of them is not a single point;
H3: acquiring the coordinates KLv (LXv, LYv), KCv (CXv, CYv) and KZv (ZXv, ZYv) from H2 and substituting them into the calculation formulas: CXLCv = LXv - CXv, CXLZv = LXv - ZXv, CYLCv = LYv - CYv, CYLZv = LYv - ZYv; wherein CXLCv is expressed as the X-axis difference value between the finger and the palm, CXLZv as the X-axis difference value between the finger and the wrist, CYLCv as the Y-axis difference value between the finger and the palm, and CYLZv as the Y-axis difference value between the finger and the wrist.
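The difference calculation of H3 can be sketched directly from the formulas as written in the claim; the sample landmark coordinates below are assumptions for illustration:

```python
# Sketch of H2/H3: coordinate difference values among the hand landmarks,
# using the formulas exactly as written in claim 4. Sample coordinates
# are assumed illustrative values.
def hand_differences(wrist, palm, finger):
    """wrist = (LXv, LYv), palm = (CXv, CYv), finger = (ZXv, ZYv)."""
    LXv, LYv = wrist
    CXv, CYv = palm
    ZXv, ZYv = finger
    return {
        "CXLC": LXv - CXv,  # per CXLCv = LXv - CXv
        "CXLZ": LXv - ZXv,  # per CXLZv = LXv - ZXv
        "CYLC": LYv - CYv,  # per CYLCv = LYv - CYv
        "CYLZ": LYv - ZYv,  # per CYLZv = LYv - ZYv
    }

diffs = hand_differences(wrist=(10, 20), palm=(12, 25), finger=(18, 40))
print(diffs)  # {'CXLC': -2, 'CXLZ': -8, 'CYLC': -5, 'CYLZ': -20}
```

The signs of these differences, not their magnitudes, drive the direction judgment in claim 5.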
5. The student vision inspection system based on big data as claimed in claim 1, wherein the specific operation process of the decision operation is:
g1: acquiring an X-axis difference value and a Y-axis difference value among the fingers, the palm and the wrist, respectively re-marking the X-axis difference value and the Y-axis difference value according to a positive value and a negative value, and judging the finger pointing direction according to the positive value and the negative value, specifically:
when the difference value of the X axes of the fingers and the palm is a negative value and the difference value of the X axes of the fingers and the wrist is a negative value, judging that the direction of the student is the left direction;
when the difference value of the X axes of the fingers and the palm is a positive value and the difference value of the X axes of the fingers and the wrist is a positive value, judging that the direction of the student is the right direction;
when the Y-axis difference value of the fingers and the palm is a positive value and the Y-axis difference value of the fingers and the wrist is a positive value, judging that the direction of the student is upward;
when the Y-axis difference value of the fingers and the palm is a negative value and the Y-axis difference value of the fingers and the wrist is a negative value, judging that the direction of the student is downward;
when other results appear, judging that the identification is wrong;
G2: matching the judgment result in G1 with the direction data; when the matching result is consistent, judging that the student's indication is correct and extracting the corresponding degree data; when the matching result is inconsistent, judging that the student's indication is wrong and moving the test one degree lower;
G3: acquiring the test distance data and the reaction time data, setting a distance preset value and a reaction time preset value, and substituting them together with the test distance data and the reaction time data into a difference calculation formula to calculate the distance difference and the time difference;
G4: acquiring the distance difference and the time difference in G3 and the degree data in G2 and substituting them into the calculation formula: test degree = degree data + (distance difference × distance influence factor) + (time difference × time influence factor), wherein the value of the distance influence factor is 0.15283 and the value of the time influence factor is 0.3926178.
CN202010469166.3A 2020-05-28 2020-05-28 Student eyesight detection system based on big data Withdrawn CN111613314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010469166.3A CN111613314A (en) 2020-05-28 2020-05-28 Student eyesight detection system based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010469166.3A CN111613314A (en) 2020-05-28 2020-05-28 Student eyesight detection system based on big data

Publications (1)

Publication Number Publication Date
CN111613314A true CN111613314A (en) 2020-09-01

Family

ID=72196607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010469166.3A Withdrawn CN111613314A (en) 2020-05-28 2020-05-28 Student eyesight detection system based on big data

Country Status (1)

Country Link
CN (1) CN111613314A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232172A (en) * 2020-10-12 2021-01-15 上海大学 Multi-person cooperation simulation system for electronic warfare equipment
CN112232172B (en) * 2020-10-12 2021-12-21 上海大学 Multi-person cooperation simulation system for electronic warfare equipment

Similar Documents

Publication Publication Date Title
CN105996975A (en) Method, device and terminal for testing vision
CN108665687B (en) Sitting posture monitoring method and device
CN102078180A (en) Handheld electronic equipment with vision detection function and detection method thereof
CN109620124A (en) A kind of campus vision monitoring system
CN111613314A (en) Student eyesight detection system based on big data
CN110414101B (en) Simulation scene measurement method, accuracy measurement method and system
CN107525652A (en) Lens distortion method of testing, apparatus and system
CN114973090A (en) Experiment scoring method and device, electronic equipment and storage medium
CN212365287U (en) Lever principle experimental instrument and lever principle experimental instrument teaching data acquisition system
CN111428577B (en) Face living body judgment method based on deep learning and video amplification technology
CN106963384A (en) A kind of Compensatory Head Posture detection method and device
CN115861977A (en) Evaluation method for simulated driving posture and simulated driving device
CN114862960A (en) Multi-camera calibrated image ground leveling method and device, electronic equipment and medium
CN109710664B (en) Information display system for data measurement of spectrum analyzer
CN113869112A (en) Instrument automatic reading method and device based on machine vision
CN110181511B (en) Robot zero loss detection and zero calibration assisting method and system
CN113128881A (en) Operation evaluation method and device for measuring instrument and storage medium
CN217723475U (en) Glasses for vision detection
CN214856575U (en) Vision detection device
CN113397471B (en) Vision data acquisition system based on Internet of things
CN110796922A (en) Novel drum learning system
CN115553709A (en) Automatic intelligent vision detection system and method
CN112729551B (en) Human body infrared temperature measurement method, distance compensation method and device
CN113457110B (en) Counting method, system and device in intelligent playground
CN115153416B (en) Human eye contrast sensitivity inspection system based on virtual reality scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200901